--- abstract: 'Turán’s Theorem says that an extremal $K_{r+1}$-free graph is $r$-partite. The Stability Theorem of Erdős and Simonovits shows that if a $K_{r+1}$-free graph with $n$ vertices has close to the maximal $t_r(n)$ edges, then it is close to being $r$-partite. In this paper we determine exactly the $K_{r+1}$-free graphs with at least $m$ edges that are farthest from being $r$-partite, for any $m\ge t_r(n) - \delta_r n^2$. This extends work by Erdős, Győri and Simonovits, and proves a conjecture of Balogh, Clemen, Lavrov, Lidický and Pfender.' author: - 'Dániel Korándi[^1] [^2]' - Alexander Roberts - Alex Scott date: 'April 22, 2020' title: 'Exact stability for Turán’s Theorem' --- Introduction ============ Turán’s classical theorem [@T41] from 1941 says that a $K_{r+1}$-free $n$-vertex graph maximizing the number of edges (an *extremal graph*) is $r$-partite; the $r=2$ case was established earlier by Mantel [@M07], in 1907. The only extremal $n$-vertex graph is the *Turán graph* $T_r(n)$, the complete $r$-partite graph with parts of size ${\left\lfloor n/r \right \rfloor}$ or ${\left\lceil n/r \right \rceil}$, which has $t_r(n) = \left(1-\frac{1}{r} + o(1)\right)\binom{n}{2}$ edges. Turán’s Theorem laid the foundations of extremal graph theory, and has been highly influential in the field ever since. One of the early discoveries related to Turán’s Theorem was that if a $K_{r+1}$-free graph is “close” to extremal in the number of edges, then it must be “close” to the Turán graph in its structure. Indeed, the famous Stability Theorem of Erdős and Simonovits [@E66; @S66] from the 1960s implies the following: if $G$ is a $K_{r+1}$-free $n$-vertex graph with $t_r(n)-o(n^2)$ edges, then it can be made into the Turán graph $T_r(n)$ by changing only $o(n^2)$ edges. It is of little surprise that this powerful structural description of near-extremal graphs has seen many important applications and consequences over the past decades (e.g.
[@ABKS04; @BBS04; @S14; @S09]). An alternative form of stability for Turán’s Theorem is to look at the distance from being $r$-partite (rather than the distance to a specific $r$-partite graph, namely the Turán graph). Thus we are looking for a large $r$-partite subgraph, which is what is wanted for most applications. The two problems are equivalent if we are only looking for an $o(n^2)$ bound on the distance. However, for graphs that are closer to extremal, we can obtain more structural information by measuring the distance from being $r$-partite. For example, if we move a constant number of vertices from a smallest vertex class to a largest vertex class of $T_r(n)$ then the resulting graph has $t_r(n)-O(1)$ edges but distance $\Omega(n)$ from the Turán graph. In contrast, a $K_{r+1}$-free graph on $n$ vertices with at least $t_r(n)-c_rn$ edges must already be $r$-partite. This phenomenon was first studied by Simonovits [@S69] and later by many other authors [@B81; @HT91; @KP05; @AFGS13; @TU15]. A tight result was proved by Brouwer [@B81]: \[thm:rpartstab\] Let $r\ge 2$ and $n\ge 2r+1$ be integers. Every $K_{r+1}$-free graph with at least $t_r(n) - {\left\lfloor n/r \right \rfloor} +2$ edges is $r$-partite. Let $f_r(n,t)$ be the smallest number such that any $K_{r+1}$-free graph $G$ with at least $t_r(n)-t$ edges can be made $r$-partite by deleting at most $f_r(n,t)$ edges. For fixed $r$, Theorem \[thm:rpartstab\] tells us that $f_r(n,t)=0$ for $t\le n/r+O(1)$, while the Stability Theorem tells us that $f_r(n,t)=o(n^2)$ if $t=o(n^2)$. But what happens in between? Better estimates of this function have only been obtained fairly recently. In a short and elegant paper, Füredi [@F15] proved that $f_r(n,t)\le t$. Later, Roberts and Scott [@RS18] showed that $f_r(n,t) = O(t^{3/2}/n)$ when $t\le\delta n^2$, and that this bound is tight up to a constant factor (in fact, they proved much more general results for $H$-free graphs, where $H$ is edge-critical).
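The quantities $t_r(n)$ are easy to experiment with numerically. The following snippet is our own sanity check, not part of the paper: the helper name `turan_edges` is ours, and the script verifies Mantel’s value $\lfloor n^2/4\rfloor$ for $r=2$ together with the recursion and sandwich bounds for $t_r(n)$ that are used repeatedly below.

```python
from math import comb

def turan_edges(r, n):
    """t_r(n): edges of the complete r-partite graph with balanced parts."""
    q, rem = divmod(n, r)
    return comb(n, 2) - rem * comb(q + 1, 2) - (r - rem) * comb(q, 2)

for n in range(1, 200):
    # Mantel: t_2(n) = floor(n^2 / 4)
    assert turan_edges(2, n) == n * n // 4
    for r in range(2, 8):
        # recursion: t_r(n) = t_r(n-1) + floor((r-1)n/r)
        assert turan_edges(r, n) == turan_edges(r, n - 1) + (r - 1) * n // r
        # sandwich: (1 - 1/r) C(n,2) <= t_r(n) <= (1 - 1/r) C(n+1,2)
        assert (r - 1) * comb(n, 2) <= r * turan_edges(r, n) <= (r - 1) * comb(n + 1, 2)
print("all checks passed")
```

The integer comparisons are done after multiplying through by $r$, so no floating-point rounding is involved.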
Very recently, Balogh, Clemen, Lavrov, Lidický and Pfender [@BCLLP] determined $f_r(n,t)$ asymptotically, and made a conjecture on its exact value. The main aim of this paper is to prove their conjecture. When $r=2$, the exact stability problem was already solved by Erdős, Győri and Simonovits [@erdoscan'tmaths]: they proved that for $t\le n^2/20$ the worst triangle-free graph, defining $f_2(n,t)$, is a blowup of $C_5$. One can generalize this construction to obtain a family of $K_{r+1}$-free graphs with many edges as follows. Consider a complete $(r-1)$-partite graph with parts $Z,Z_3,\dots,Z_r$, and insert a blowup of $C_5$ on $Z$ with independent sets $X, Y_1, Y_2, Z_1, Z_2$ as in the figure below (so $Z= X\cup Y_1\cup Y_2\cup Z_1\cup Z_2$). We will call this a *[pentagonal Turán]{} graph* if it further satisfies $|X| \le |Y_1| = |Y_2| \le |Z_i|$ for every $i\in[r]$, and each of the sets $X\cup Y_1\cup Z_1,X\cup Y_2\cup Z_2, Z_3,\dots,Z_r$ has size ${\left\lfloor \frac{n+|X|}{r} \right \rfloor}$ or ${\left\lceil \frac{n+|X|}{r} \right \rceil}$.
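To make the $r=2$ case concrete, here is a small self-contained check of our own (not from the paper). It builds a blowup of $C_5$ with part sizes $x=1$, $y=2$, $z=3$, using one cyclic order of the five independent sets consistent with the construction, confirms by brute force that the graph is triangle-free, and compares its edge count with $t_2(11)=30$.

```python
from itertools import combinations

# Parts of the C5 blowup in cyclic order X, Y1, Z2, Z1, Y2, so the pentagon
# edges are X-Y1, Y1-Z2, Z2-Z1, Z1-Y2, Y2-X (an assumed but consistent order).
x, y, z = 1, 2, 3
sizes = [x, y, z, z, y]
part = [i for i, s in enumerate(sizes) for _ in range(s)]
n = len(part)                                   # n = x + 2y + 2z = 11

def adjacent(u, v):
    # consecutive parts on the 5-cycle are completely joined
    return (part[u] - part[v]) % 5 in (1, 4)

edges = [(u, v) for u, v in combinations(range(n), 2) if adjacent(u, v)]

# brute-force triangle-freeness (a blowup of a triangle-free graph stays so)
assert not any(adjacent(a, b) and adjacent(b, c) and adjacent(a, c)
               for a, b, c in combinations(range(n), 3))

print(n, len(edges), n * n // 4)                # 11 25 30
```

Deleting the $xy=2$ edges between $X$ and $Y_2$ leaves a bipartite graph with sides $Y_1\cup Z_1$ and $X\cup Y_2\cup Z_2$, so this graph has distance at most $2$ from being bipartite while missing only $5$ of the $t_2(11)=30$ Turán edges.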
*(Figure: a [pentagonal Turán]{} graph for $r=4$. The blowup of $C_5$ sits on $Z = X\cup Y_1\cup Y_2\cup Z_1\cup Z_2$ with $|X|=x$, $|Y_1|=|Y_2|=y$ and $|Z_1|=|Z_2|=z$, while $|Z_3|=|Z_4|=x+y+z$.)* Balogh, Clemen, Lavrov, Lidický and Pfender [@BCLLP] conjectured that $f_r(n,t)$ is witnessed by a [pentagonal Turán]{} graph if $t$ is small enough. Our main result is a proof of their conjecture. For a graph $G$ and integer $r\ge2$, let $D_r(G)$ be the minimum number of edges that must be removed from $G$ to make it $r$-partite. We prove the following theorem. \[pentstab\] For every $r \ge 2$ there is a $\delta_r > 0$ such that the following holds: If $G$ is a $K_{r+1}$-free graph on $n$ vertices with $e(G) \ge t_r(n) - \delta_r n^2$ edges, then there is a [pentagonal Turán]{} graph ${{G}^*}$ on $n$ vertices with $e({{G}^*}) \ge e(G)$ and $D_r({{G}^*}) \ge D_r(G)$. The rest of the paper is organized as follows. In Section \[sec:tools\], we present a brief overview of the proof, and collect some necessary tools. We need a special argument when the number of edges in $G$ is very close to $t_r(n)$, and the short proof of this case is presented in Section \[sec:dense\]. Section \[sec:mainproof\] contains the general argument of the proof of Theorem \[pentstab\]. We finish the paper with some discussion and open problems in the final section. We follow standard notation throughout.
$G$ is always a simple graph with vertex set $V(G)$ and edge set $E(G)$. The number of edges is denoted by $e(G)=|E(G)|$. We write $\Gamma_G(v){\subseteq}V(G)$ to denote the neighborhood of a vertex $v\in V(G)$, and $d_G(v)=|\Gamma_G(v)|$ to denote its degree. When the graph in question is clear, we may omit the subscript. For a set of vertices $S{\subseteq}V(G)$, we write $G-S$ for the subgraph induced on $V(G)\setminus S$. When $S=\{v\}$, we simply write $G-v$. Overview and tools {#sec:tools} ================== Given an $r$-partition of the vertices of a graph $G$, we say that an edge connecting different parts is *crossing*, and an edge connecting vertices in the same part is *internal*. So $D_r(G)$ is the minimum number of internal edges in an $r$-partition of the vertices of $G$. In their proof of the triangle-free ($r=2$) case, Erdős, Győri and Simonovits [@erdoscan'tmaths] start with a close to optimal bipartition of $G$, and construct a [pentagonal Turán]{} graph (in this case, a blowup of $C_5$) with the same number of internal edges, but more crossing edges. An important idea in their proof is to find a large matching of internal edges: as $G$ is triangle-free, this can be used to show that many crossing edges are missing from $G$. Our proof for the general case follows a similar spirit, although we need to work harder to find the necessary missing edges when $K_{r+1}$ is forbidden instead of $K_3$. We will need several estimates comparing Turán numbers $t_r(n)$ for various $r$ and $n$. Recall that $t_r(n)$ is the number of edges in the Turán graph $T_r(n)$, which is the complete $r$-partite graph on an *$r$-equipartitioned* vertex set, i.e., when each part has size ${\left\lfloor n/r \right \rfloor}$ or ${\left\lceil n/r \right \rceil}$. It is easy to see that $t_r(n) \ge t_r(n-1) + \frac{r-1}{r}(n-1)$, by adding a vertex to a smallest part of $T_r(n-1)$.
Similarly, $t_r(n-1) \ge t_r(n) - \frac{r-1}{r}n$ can be obtained by deleting a vertex from a largest part of $T_r(n)$. The next lemma follows from these inequalities by iterating them, and by noting that $t_r(n)$ is the unique integer between $t_r(n-1) + \frac{r-1}{r}(n-1)$ and $t_r(n-1) + \frac{r-1}{r}n$. \[lem:turan\] Let $r\ge 2$ and $n$ be integers. Then: 1. $t_r(n) = t_r(n-1) + {\left\lceil \frac{r-1}{r}(n-1) \right \rceil} = t_r(n-1) + {\left\lfloor \frac{r-1}{r}n \right \rfloor}$, 2. $t_r(n') + \frac{r-1}{r}n(n-n') \ge t_r(n) \ge t_r(n') + \frac{r-1}{r}n'(n-n')$, for every $n'\le n$, 3. $\frac{r-1}{r}\binom{n+1}{2} \ge t_r(n) \ge \frac{r-1}{r}\binom{n}{2}$. To find a large matching among the internal edges, we will use the following lemma, which follows easily from the Tutte-Berge formula (and is a special case of a theorem of Chvátal and Hanson [@CH76]). We include a sketch of the argument for completeness. \[korandi\] Let $G$ be a graph on $n$ vertices with maximum degree $\Delta$ and let $k\ge 1$ be an integer. If $e(G) > (k-1) \Delta$ and $\Delta \ge 2k-1$, then $G$ contains a matching of size $k$. If $G$ has no $k$-matching, then it contains a set $S$ such that $G-S$ has at least $n-2(k-1)+|S|$ odd components (note that perforce $|S| \le k-1$). The number of edges in this setup is maximized when $G-S$ has $n-2(k-1)+|S|-1$ singletons and a $(2(k-1-|S|)+1)$-clique. Then $G-S$ induces $(k-1-|S|)(2(k-1-|S|)+1)\le (k-1-|S|)\Delta$ edges, and $S$ touches at most $|S|\Delta$ edges, so $G$ has at most $(k-1)\Delta$ edges, contradicting our assumption. For an integer vector $\mathbf{n} = (n_1,\dots,n_r) \in {\mathbb{N}}^r$, let $K_{\mathbf{n}}$ be the complete $r$-partite graph with parts of size $n_1,\dots,n_r$. The next lemma will be our main tool for bounding the number of missing crossing edges using the $K_{r+1}$-freeness of our graph. We will generally apply it to the neighborhood of a vertex.
This is a folklore result (see, for example, [@BMSW]), but we include a short proof for completeness. \[folklore\] Let $r \ge 2$ and let $\mathbf{n} = (n_1,\dots,n_r) \in {\mathbb{N}}^r$ be such that $n_1 \le n_2 \le \dots \le n_r$. Then any $K_r$-free subgraph of $K_{\mathbf{n}}$ contains at most $e\left(K_{\mathbf{n}}\right) - n_1n_2$ edges. There are exactly $\prod_{i=1}^{r} n_i$ copies of $K_r$ in $K_{\mathbf{n}}$. Each edge is contained in at most $\prod_{i=3}^{r} n_i$ of these copies, so a $K_r$-free subgraph must have at least $n_1n_2$ missing edges. We will also make use of the following classical result of Andrásfai, Erdős and Sós, saying that every $K_{r+1}$-free graph with relatively large minimum degree is $r$-partite. \[thm:aesos\] Let $r \ge 2$ and let $G$ be a $K_{r+1}$-free graph on $n$ vertices. If the minimum degree $\delta$ of $G$ is strictly greater than $\frac{3r-4}{3r-1}n$, then $G$ is $r$-partite. A *blowup* $H=G[n_1,\dots, n_k]$ of a graph $G$ with vertices $v_1,\dots,v_k$ is defined on vertex set $\bigcup_{i \in [k]} W_i$ with $|W_i|=n_i$, where the $W_i$ are disjoint, and $w\in W_i$ and $w'\in W_j$ are adjacent in $H$ if and only if $v_i$ and $v_j$ are adjacent in $G$. Note that every [pentagonal Turán]{} graph is a blowup $L_r[x,y,y,n_1,\dots,n_r]$, where $L_r$ is the graph whose first five vertices induce the pentagon $v_1v_2v_5v_4v_3$, and all other edges are present. Accordingly, let us call such a blowup a *complete pentagon-$r$-partite* (or *[CPR]{}*) *graph* if $x\le y\le n_i$ for every $i\in [r]$. A [pentagonal Turán]{} graph is then a [CPR]{} graph such that the numbers $x+y+n_1,x+y+n_2,n_3,\dots,n_r$ do not differ by more than 1 (i.e., each of them is equal to ${\left\lfloor \frac{n+x}{r} \right \rfloor}$ or ${\left\lceil \frac{n+x}{r} \right \rceil}$). The following statement tells us how to make blowups $r$-partite. We sketch the proof for completeness. Let $H=G[n_1,\dots,n_k]$.
Then one can delete $D_r(H)$ edges from $H$ to obtain $G'[n_1,\dots,n_k]$ for some $r$-partite subgraph $G'$ of $G$. Take an $r$-partite subgraph of $H$ obtained by deleting $D_r(H)$ edges from $H$, and “symmetrize” it, i.e., for $i = 1,\dots,k$, carry out the following: Pick some $v \in W_i$ with $d(v)$ largest. Then for each $w\in W_i \setminus \{v\}$, change the edges touching $w$ so that its neighborhood $\Gamma(w)$ becomes the same as $\Gamma(v)$. Through this process, the graph remains an $r$-partite subgraph of $H$, and the number of edges in it does not decrease (thus stays equal to $e(H)-D_r(H)$). At the end, we have $\Gamma(v) = \Gamma(w)$ whenever $v$ and $w$ belong to the same blowup part $W_i$, so the resulting graph is itself the blowup of some $G'{\subseteq}G$. Deleting any edge of $L_r$ makes it $r$-partite, so we get the following. \[lem:ptgcount\] If $G= L_r[x,y,y,n_1,\dots,n_r]$ is a [CPR]{} graph with $x\le y\le n_i$ for every $i\in[r]$, then $D_r(G)=xy$. This means that an optimal $r$-partition of a [CPR]{} graph (minimizing the number of internal edges) can be obtained by putting $Y_1\cup Z_1$ in the first part, $X\cup Y_2\cup Z_2$ in the second, and $Z_i$ in the $i$th part for every $i\ge 3$. Let us call this the *standard $r$-partition* of such a graph. As a benchmark, it will be helpful to understand roughly how many internal edges there are in the conjectured extremal graphs, so that we can cut short some degenerate cases in our analysis. \[lem:sampleptg\] For any integers $r\ge 2$, $n$ and $0\le s\le \frac{n}{r^4}$, there is a [CPR]{} graph $G$ with $n$ vertices and at least $t_r(n)-\frac{sn}{r}(1+1/r^3)$ edges such that $D_r(G) \ge \frac{\sqrt{s^3n}}{r^2}$. If $s=0$, then $G=T_r(n)$ satisfies the conditions, so we may assume that $s\ge 1$. Let $t= {\left\lceil \frac{\sqrt{sn}}{r^2} \right \rceil}$. As $\sqrt{s} \le \frac{\sqrt{n}}{r^2}$, we have $s \le \frac{\sqrt{sn}}{r^2} \le t \le \frac{2\sqrt{sn}}{r^2} \le \frac{2n}{r^4}$.
We claim that the graph $G=L_r[s,t,t,n_1,\dots,n_r]$ works if each of the numbers $n_1+t+{\left\lceil s/2 \right \rceil}, n_2+t+{\left\lfloor s/2 \right \rfloor}, n_3, n_4,\dots, n_r$ is equal to ${\left\lceil n/r \right \rceil}$ or ${\left\lfloor n/r \right \rfloor}$, in a non-increasing order. This graph is well-defined because, using $s\le t \le \frac{2n}{r^4}$ and $2 \le r$, $$t+{\left\lceil s/2 \right \rceil} \le t+s \le 2t\le \frac{4n}{r^4} \le \frac{n}{2r}.$$ Moreover, since $\lfloor 2x \rfloor \ge 2 \lfloor x\rfloor$ for any $x>0$, this shows that $s\le t\le n_i$ for every $i\in [r]$, so by Lemma \[lem:ptgcount\], $D_r(G) = st \ge \frac{\sqrt{s^3n}}{r^2}$. To count the edges in $G$, let us split $X$ into two sets $X_1$ and $X_2$ of size ${\left\lceil s/2 \right \rceil}$ and ${\left\lfloor s/2 \right \rfloor}$, respectively, and note that $(X_1\cup Y_1 \cup Z_1, X_2\cup Y_2\cup Z_2, Z_3, Z_4, \dots, Z_r)$ is an $r$-equipartition of the vertex set with exactly $st$ internal edges. There are $t_r(n)$ potential crossing edges, but $|X_1|(|X_2|+|Z_2|) + |X_2|(|X_1|+|Z_1|) - |X_1||X_2| + |Y_1||Y_2|$ of them are missing. Here $|Y_1||Y_2| = t^2 = {\left\lceil \frac{\sqrt{sn}}{r^2} \right \rceil}^2 \le \frac{sn}{r^4} + 2t - 1$ because $({\left\lceil x \right \rceil}-1)^2 \le x^2$, and therefore ${\left\lceil x \right \rceil}^2 \le x^2 + 2{\left\lceil x \right \rceil} - 1$ for every $x\ge 1$. Also, $|X_1||X_2| = {\left\lfloor s/2 \right \rfloor}{\left\lceil s/2 \right \rceil} = {\left\lfloor s^2/4 \right \rfloor}$. Finally, $|X_1|+|Z_1|$ and $|X_2|+|Z_2|$ are both at most ${\left\lceil n/r \right \rceil}-t \le \frac{n}{r}+1-t$, so we get $|X_1|(|X_2|+|Z_2|) + |X_2|(|X_1|+|Z_1|) \le s(\frac{n}{r}+1-t)$. In total, this gives at least $$st + t_r(n) - s \left( \frac{n}{r}+1-t \right) + {\left\lfloor \frac{s^2}{4} \right \rfloor} - \left( \frac{sn}{r^4} + 2t - 1 \right) = t_r(n) - \frac{sn}{r} - \frac{sn}{r^4} + 2st-2t + {\left\lfloor \frac{s^2}{4} \right \rfloor} -s +1$$ edges in $G$.
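This computation is easy to reproduce numerically. The sketch below is our own verification, not part of the paper; the helper names `turan_edges` and `cpr_edges` are ours. It instantiates the construction for the sample values $r=3$, $n=3000$, $s=10$ and checks both conclusions of the lemma.

```python
from math import comb, ceil, sqrt

def turan_edges(r, n):
    """t_r(n): edges of the complete r-partite graph with balanced parts."""
    q, rem = divmod(n, r)
    return comb(n, 2) - rem * comb(q + 1, 2) - (r - rem) * comb(q, 2)

def cpr_edges(x, y, ns):
    """Edges of L_r[x, y, y, n_1..n_r]: complete multipartite minus the five
    non-adjacent pentagon pairs X-Z1, X-Z2, Y1-Y2, Y1-Z1, Y2-Z2."""
    parts = [x, y, y] + ns
    full = comb(sum(parts), 2) - sum(comb(p, 2) for p in parts)
    return full - (x * ns[0] + x * ns[1] + y * y + y * ns[0] + y * ns[1])

r, n, s = 3, 3000, 10                      # sample values with 1 <= s <= n/r^4
t = ceil(sqrt(s * n) / r**2)               # t = ceil(sqrt(sn)/r^2) as in the proof
base, rem = divmod(n, r)
targets = sorted([base + 1] * rem + [base] * (r - rem), reverse=True)
ns = [targets[0] - t - (s + 1) // 2, targets[1] - t - s // 2] + targets[2:]

e, d = cpr_edges(s, t, ns), s * t          # D_r of this CPR graph is s*t
assert sum(ns) + s + 2 * t == n            # correct number of vertices
assert all(s <= t <= ni for ni in ns)      # CPR conditions hold
assert e >= turan_edges(r, n) - s * n / r * (1 + 1 / r**3)
assert d * d * r**4 >= s**3 * n            # i.e. D_r >= sqrt(s^3 n)/r^2
print(turan_edges(r, n) - e, d)
```

For these parameters the graph falls $9975$ edges short of $t_3(3000)$, comfortably within the allowed $\frac{sn}{r}(1+1/r^3)\approx 10370$, while $D_3 = st = 200$.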
We can see that this is at least $t_r(n)-\frac{sn}{r}(1+1/r^3)$ using the fact that $2st\ge 2t$ and ${\left\lfloor s^2/4 \right \rfloor} + 1 \ge s$ hold for every integer $s\ge 1$. Very dense graphs {#sec:dense} ================= Theorem \[thm:rpartstab\] says that every $K_{r+1}$-free graph $G$ with very close to $t_r(n)$ edges is $r$-partite. The next lemma shows that $G$ is at most one vertex away from being $r$-partite, even if we allow slightly fewer edges. \[lem:almostpartite\] Let $r\ge 2$, and suppose $G$ is a $K_{r+1}$-free graph on $n\ge 9r^4$ vertices with at least $t_r(n) - \frac{n}{r}(1+1/r^3)$ edges. Then there is a vertex $v\in V(G)$ such that $G-v$ is $r$-partite. If the minimum degree of $G$ is greater than $\frac{3r-4}{3r-1}n$, then by Theorem \[thm:aesos\], $G$ itself is $r$-partite. Otherwise, there is a vertex $v$ of degree at most $\frac{3r-4}{3r-1}n$, and hence $G-v$ has $$\begin{aligned} e(G-v) &\ge t_r(n) - \frac{n}{r}(1+1/r^3) - \frac{3r-4}{3r-1}n \\ &\ge t_r(n-1) + \frac{r-1}{r}n - \frac{r-1}{r} - \frac{n-1}{r} - \frac{1}{r} - \frac{n}{r^4} - \frac{3r-4}{3r-1}n \\ &= t_r(n-1) - \frac{n-1}{r} + \frac{1}{r(3r-1)}n - \frac{n}{r^4} - 1 \\ &\ge t_r(n-1) - \frac{n-1}{r} + \frac{n}{3r^4} -1 \\ &\ge t_r(n-1) - \frac{n-1}{r} + 2 \end{aligned}$$ edges, where we used $t_r(n) \ge t_r(n-1) + \frac{r-1}{r}(n-1)$ from Lemma \[lem:turan\] in the second line, $3r^2 \le 3r^4/4$ in the fourth, and $n\ge 9r^4$ in the fifth. But then $G-v$ is $r$-partite by Theorem \[thm:rpartstab\]. This structural lemma allows us to establish our main result when the number of edges is very close to extremal. \[thm:verydense\] Let $r\ge 2$ and $n\ge 2^8r^4$, and suppose $G$ is a $K_{r+1}$-free graph with $n$ vertices and at least $t_r(n)-\frac{n}{r}(1+1/r^3)$ edges. Then there is a [CPR]{} graph ${{G}^*}$ such that $D_r({{G}^*}) \ge D_r(G)$ and $e({{G}^*}) \ge e(G)$. If $G$ is $r$-partite, then we can just take ${{G}^*}=T_r(n)$, so let us assume that $G$ is not $r$-partite.
By Lemma \[lem:almostpartite\], there is a vertex $v$ such that $G-v$ is $r$-partite, say with parts $U_1, \dots, U_r$ of size $n_1,\dots, n_r$. Let $a_i$ be the number of neighbors of $v$ in $U_i$. We may assume that $a_1\le \dots\le a_r$. Then clearly, $1\le D_r(G)\le a_1$. We claim that ${{G}^*}= L_r[1,a_1,a_1,n_1-a_1,n_2-a_1,n_3,n_4\dots,n_r]$ works. To show this, note that $G$ has $$\label{eq:denseedges} e(G) \le \sum_{i<j} n_in_j - a_1a_2 + \sum_{i\in [r]} a_i$$ edges. This is because there are $\sum_{i<j} n_in_j$ potential edges in the $r$-partite graph induced by $U = U_1\cup\dots \cup U_r$, but the neighborhood of $v$ is $K_r$-free, so by Lemma \[folklore\], at least $a_1a_2$ of these edges are missing. The number of edges in $G$ not induced by $U$ is precisely $\sum_{i\in [r]} a_i$. On the other hand, $$e({{G}^*}) = \sum_{i<j} n_in_j - a_1^2 + 2a_1 + \sum_{i=3}^r n_i \ge \sum_{i<j} n_in_j - a_1^2 + a_1-a_2 + \sum_{i\in [r]} a_i.$$ As $a_1a_2 \ge a_1^2-a_1+a_2$ for any positive integers $a_2\ge a_1$, we get $e({{G}^*})\ge e(G)$. To conclude the argument, it is enough to prove that $n_i\ge 2a_1$ for every $i\in[r]$. Indeed, this will establish that ${{G}^*}$ is a [CPR]{} graph, and, using Lemma \[lem:ptgcount\], imply that $D_r({{G}^*}) = a_1$. We can show this through a fairly straightforward calculation. As the number of edges in an $r$-partite graph is maximized by the Turán graph, we have $\sum_{i<j} n_in_j \le t_r(n)$. Combining this with \eqref{eq:denseedges}, we get $e(G) \le t_r(n) - a_1a_2 + n$. But we assumed that $e(G) > t_r(n) - n$, so $a_1\le \sqrt{2n}$. On the other hand, suppose that $n_{i'} \le 3\sqrt{n}$ for some $i'\in [r]$. Let $\mathbf{n} = (n_1,\dots,n_r)$ and $\mathbf{n}'=(n_1,\dots,n_{i'-1},n_{i'+1},\dots,n_r)$.
Once again, the maximality of Turán graphs gives $$\sum_{i<j} n_in_j = e\left(K_{\mathbf{n}}\right) \le e\left(K_{\mathbf{n}'}\right) + n_{i'}n \le t_{r-1}(n-n_{i'}) + n_{i'}n \le t_{r-1}(n) + n_{i'}n.$$ We can therefore further bound \eqref{eq:denseedges} as $$e(G) \le t_{r-1}(n) + 3n^{3/2} + n \le \frac{r-2}{r-1}\cdot \frac{n^2}{2} + 4n^{3/2} \le \frac{r-1}{r}\cdot \frac{n^2}{2} - \frac{n^2}{2r^2} + \frac{n^2}{4r^2} \le t_r(n) - n,$$ using $n\ge 2^8r^4$ and $\frac{r-1}{r}\cdot \frac{n^2}{2}+n \ge t_r(n)\ge \frac{r-1}{r}\cdot \frac{n^2}{2} - n$ from Lemma \[lem:turan\]. But this contradicts our assumption on $e(G)$, so indeed, $n_i\ge 3\sqrt{n} \ge 2a_1$ for every $i\in[r]$. Proof of Theorem \[pentstab\] {#sec:mainproof} ========= It will be more convenient for us to prove the following, slightly weaker analog of Theorem \[pentstab\]. \[thm:weakpentstab\] For every $r \ge 2$ there is a $\delta_r > 0$ such that the following holds: If $G$ is a $K_{r+1}$-free graph on $n$ vertices with $e(G) \ge t_r(n) - \delta_r n^2$ edges, then there is a [CPR]{} graph ${{G}^*}$ on $n$ vertices with $e({{G}^*}) \ge e(G)$ and $D_r({{G}^*}) \ge D_r(G)$. This statement easily implies the full theorem: Theorem \[thm:weakpentstab\] shows the existence of a [CPR]{} graph ${{G}^*}$, such that $e({{G}^*}) \ge e(G)$ and $D_r({{G}^*}) \ge D_r(G)$. Let us choose such a ${{G}^*}$ so that $e({{G}^*})$ is maximum. We claim that this ${{G}^*}$ is in fact a [pentagonal Turán]{} graph. We know that ${{G}^*}=L_r[x,y,y,n_1,\dots,n_r]$ such that $x\le y\le n_i$ for every $i\in [r]$. Note that $e({{G}^*})\le t_r(n)-y^2$, so if $\delta_r<r^{-10}$, then $y\le \frac{n}{4r}$. To show that ${{G}^*}$ is a [pentagonal Turán]{} graph, we just need to check that the numbers $x+y+n_1,x+y+n_2,n_3,\dots,n_r$ do not differ by more than 1. Suppose that the $i$-th of these quantities is the largest among them, and the $j$-th is the smallest. If their difference were at least 2, then the graph ${\widetilde{G}}=L_r[x,y,y,n_1,\dots,n_i-1,\dots,n_j+1,\dots,n_r]$ would have more edges than ${{G}^*}$.
Also, $x+y+n_i\ge \frac{n}{r}$ and $x\le y\le \frac{n}{4r}$, so ${\widetilde{G}}$ is a [CPR]{} graph with $D_r({\widetilde{G}})=xy = D_r({{G}^*})$. This contradicts the maximality of ${{G}^*}$ and establishes the theorem. Our proof of Theorem \[thm:weakpentstab\] divides into two main parts: defining a [CPR]{} graph ${{G}^*}$ based on our $G$, and comparing the number of edges in $G$ and ${{G}^*}$. In the first part of the proof, we find an appropriate $r$-partition of $G$, with a large enough matching of internal edges, and use structural considerations to construct a ${{G}^*}$ that has at least as many *internal* edges in its standard $r$-partition as $G$. Then in the second part, we use the $K_{r+1}$-freeness of $G$ to prove that it misses many of its *crossing* edges, and ultimately show that ${{G}^*}$ has more crossing edges in its standard $r$-partition. We start by defining an $r$-partition of $G$. Assume $\delta_r \le r^{-60}$, and suppose our $K_{r+1}$-free graph $G=(V,E)$ has $t_r(n)-\delta n^2$ edges for some $\delta \in (0,r^{-60})$. We may assume that $\delta n^2\ge 1$, and hence $n\ge \delta^{-1/2} \ge r^{20}$. Now if $\delta n^2 \le \frac{n}{r}(1+1/r^3)$, then we can apply Theorem \[thm:verydense\], noting that $n\ge r^{20} \ge 2^8r^4$, to obtain the desired ${{G}^*}$. So we may also assume that $\delta n^2 > \frac{n}{r}(1+1/r^3)$, and in particular, $n\ge \frac{1}{\delta r}$. We first show that $G$ contains a large induced subgraph with high minimum degree. \[weeding\] There is a vertex subset $S {\subseteq}V$ with $|S| \le 2\delta r^{10}n$ such that for all $v \in V\setminus S$, $$d_{G - S}(v) \ge n\left(\tfrac{r-1}{r} - r^{-10}\right).$$ Let us iteratively remove vertices of degree less than $n(\frac{r-1}{r} - r^{-10})$. If this procedure stops with at most $2\delta r^{10}n$ removals, then we are done by choosing $S$ to be the set of removed vertices. So suppose otherwise, and let $B$ be the set of the first ${\left\lceil 2\delta r^{10} n \right \rceil}$ vertices deleted.
Then the number of edges in the graph $J = G - B$ can be bounded by $$e(J) \ge e(G) - n\left(\tfrac{r-1}{r} - r^{-10}\right)|B| = t_r(n) - \delta n^2 - n\left(\tfrac{r-1}{r} - r^{-10}\right)|B|.$$ By Lemma \[lem:turan\], we have $t_r(n) \ge t_r(n-|B|) + \frac{r-1}{r}(n-|B|)|B|$, and hence $$e(J) \ge t_r(|J|) - \delta n^2 + r^{-10} n |B| - \tfrac{r-1}{r}|B|^2.$$ Note that $|B|\ge 2$ (as $2\delta r^{10} n \ge 2r^9>1$), so $2\delta r^{10} n \le |B| \le 4\delta r^{10}n$. Using $1 > \delta r^{60}> 16\delta r^{20}$, this yields $$r^{-10} n|B| \ge 2\delta n^2 > \delta n^2 + 16 \delta^2 r^{20}n^2 \ge \delta n^2 + |B|^2.$$ But then $e(J) > t_r(|J|)$, contradicting the fact that $J$ is $K_{r+1}$-free. Theorem \[thm:aesos\] now implies that $G - S$ is $r$-partite. Let $U_1\cup \dots\cup U_r$ be an $r$-partition of $G-S$. By the minimum degree condition of $G - S$, every vertex $x\in U_i$ has at least $n(\frac{r-1}{r}-r^{-10})$ neighbors in $G - S - U_i$, so $|U_i|\le n(\frac{1}{r} + r^{-10}) - |S|$ for each $i$. On the other hand, $|U_i| \ge n - |S| - \sum_{j\ne i} |U_j|$, so we get that for every $i$, $$\label{usizebound} |U_i| \ge n\left(\tfrac{1}{r} - (r-1)r^{-10}\right).$$ This also means that the neighborhood of each vertex in $U_i$ misses at most $r^{-9}n$ vertices in $\bigcup_{j\neq i}U_j$ and so the number of crossing edges missing between the $U_i$ is at most $r^{-9}n^2$. Now let us extend this partition into an $r$-partition $V = V_1\cup \dots\cup V_r$ of the entire vertex set of $G$ that maximizes the number of crossing edges, assuming $U_i {\subseteq}V_i$.
In particular, each vertex of $S$ has at most as many neighbors in its own part as in any other part, i.e., for $s\in S\cap V_i$, $$|\Gamma(s) \cap V_i| = \min_{j \in [r]}|\Gamma(s) \cap V_j|.$$ Let us define $D$ to be the maximum internal degree of $G$ in this partition, i.e., $$D = \max_{i\in [r]} \max_{v\in V_i} |\Gamma(v) \cap V_i|.$$ \[Ddef\] We may assume that $D$ is the internal degree of some vertex $u\in S$, and that $$6|S|\le D \le 2r^{-4.5} n.$$ Note that all internal edges are incident with $S$ and so $D_r(G) \le |S| D$. If $D$ is smaller than $6|S|$, then $D_r(G)\le 6|S|^2\le 24\delta^2 r^{20}n^2$. We claim that there is a [CPR]{} graph ${{G}^*}$ with at least $t_r(n)-\delta n^2$ edges such that $D_r({{G}^*})$ is larger than this. Indeed, apply Lemma \[lem:sampleptg\] with $s={\left\lfloor \frac{\delta rn}{1+1/r^3} \right \rfloor}$ to obtain the graph ${{G}^*}$ with at least $t_r(n)-\delta n^2$ edges and $D_r({{G}^*})\ge \frac{\sqrt{s^3n}}{r^2}$. Our previous assumption that $\delta n^2 >\frac{n}{r}(1+1/r^3)$ implies that $s\ge 1$, and therefore $s\ge \frac{\delta rn}{4}$. This means that $$D_r({{G}^*}) \ge \frac{\delta^{3/2}r^{3/2}n^2}{8r^2} > \frac{\delta^2 r^{29} n^2}{8} > 24\delta^2 r^{20}n^2 \ge D_r(G),$$ as required (we used $1>\sqrt{\delta}r^{30}$ and $r\ge 2$). So we may assume that $D\ge 6|S|$. In particular, as the internal degree of each vertex in $V\setminus S$ is at most $|S|$, a vertex of maximum internal degree $D$ must lie in $S$. Let $u$ be any such vertex. Now from the definition of our $r$-partition, we have $|\Gamma(u) \cap U_i| \ge D -|S| \ge \frac{5D}{6}$ for each $i \in [r]$. Since $\Gamma(u)$ is $K_r$-free, Lemma \[folklore\] tells us that there are at least $\left(\tfrac{5D}{6}\right)^2 \ge D^2/2$ crossing edges missing between the $U_i$. On the other hand, we have seen that there are at most $r^{-9}n^2$ such edges missing, so $D \le 2r^{-4.5}n$. Let $u \in S$ be the vertex from Claim \[Ddef\], so by the definition of our $r$-partition, it has at least $D$ neighbors in each $V_i$.
For each $i \in [r]$, fix a set $P_i {\subseteq}\Gamma(u) \cap V_i$ with $|P_i| = D$. We now come to finding a suitable matching consisting of internal edges. Let $H = \bigcup_{i \in [r]}G[V_i]$ be the subgraph of $G$ containing only the internal edges. Then $H$ has at most $D|S|$ edges and maximum degree $D$. Let $k={\left\lceil \frac{e(H)}{D} \right \rceil}$ and note that $k \le |S|$, so $D \ge 6|S| \ge 2k$. Therefore, by Lemma \[korandi\], we can find a matching $M$ of size $k$ in $H$. For each $i \in [r]$, let $M_i = M[V_i]$ be the set of matching edges in $V_i$. Further split each $M_i$ into three sets $M_i = A_i \cup B_i \cup C_i$ according to the matching pairs’ interaction with $P_i$: $$\begin{aligned} A_i &= \left\{vv' \in M_i : v,v' \notin P_i\right\}, \\ B_i &= \left\{vv' \in M_i : v \in P_i, v' \notin P_i\right\}, \\ C_i &= \left\{vv' \in M_i : v,v' \in P_i\right\}.\end{aligned}$$ Then define $a_i = |A_i|$, $b_i = |B_i|$ and $c_i = |C_i|$, and set $a = \sum_{i\in [r]}a_i$, $b = \sum_{i\in [r]}b_i$, and $c = \sum_{i\in [r]}c_i$ (so we have $k=a+b+c$). Note that if $V^A_i, V^B_i, V^C_i$ and $V^M_i$ denote the vertex sets of the matchings $A_i,B_i,C_i$ and $M_i$ respectively, then $|V^A_i| = 2a_i$, $|V^B_i| = 2b_i$ and $|V^C_i| = 2c_i$. We denote the unions over $i\in [r]$ by $V^A, V^B, V^C$ and $V^M$, so $|V^M| = 2k$. Finally, we set $R_i = V_i \setminus (P_i \cup V^M_i)$ and $K_i = |R_i|$. With this notation at hand, we note that $|V_i| = K_i + D + 2a_i+b_i$ for each $i \in [r]$. To bound $K_i$ from below, recall that $U_i{\subseteq}V_i$ is an independent set, so at most $k\le |S|$ of its vertices are covered by $M$.
So by \eqref{usizebound}, Lemma \[weeding\], $\delta < r^{-60}$, and Claim \[Ddef\], we have $$\begin{aligned} K_i \ge |U_i| - |S| - D &\ge n\left(\frac{1}{r} - (r-1)r^{-10}\right) - 2\delta r^{10} n - 2r^{-4.5}n \nonumber \\ &\ge n\left(r^{-1} - r^{-9} - 2r^{-50} - 2r^{-4.5}\right) \nonumber \\ &\ge r^{-4.5}n\left(r^{3.5} - 3\right) \ge 8r^{-4.5}n \ge 4D.\end{aligned}$$ We may assume without loss of generality that $K_1 \le K_2 \le \dots \le K_r$. Together with Claim \[Ddef\], we get the following relationship between our quantities, which we will use throughout the proof: $$\label{eq:consts} K_r \ge \dots \ge K_2\ge K_1 \ge 4D \ge 24k.$$ *(Figure: the decomposition of each part $V_i$ into $R_i$, $Q_i = P_i\cup V^B_i$ and $V^A_i$, where $|R_i| = K_i$, $|P_i| = D$ and $|V_i| = K_i + D + 2a_i + b_i$; the matching vertices $V^M$ meet $V^A$ and $Q = P\cup V^B$.)* We are now ready to introduce an appropriate [CPR]{} graph ${{G}^*}$ that will satisfy both $e({{G}^*}) \ge e(G)$ and $D_r({{G}^*}) \ge D_r(G)$.
Let ${{G}^*}= L_r[k,D,D,n_1,\dots,n_r]$ be the graph on vertex set $X\cup Y_1 \cup Y_2\cup Z_1\cup \dots \cup Z_r$ as defined in the introduction, where $|X|=k$, $|Y_1|=|Y_2|=D$, $|Z_j|=n_j = K_j+D+2a_j+b_j$ for $j\ge 3$, and $$\begin{aligned} |Z_1| &= n_1 = K_1 + a_1 - c_1\\ |Z_2| &= n_2 = K_2 + a_1+b_1+c_1+2a_2+b_2- k.\end{aligned}$$ Note that $|V_j|=|Z_j|$ for $j\ge 3$, and $|V_1|+|V_2| = |X|+|Y_1|+|Y_2|+|Z_1|+|Z_2|$, so $G$ and ${{G}^*}$ have an equal number of vertices. Now \eqref{eq:consts} and Lemma \[lem:ptgcount\] give $D_r({{G}^*}) = kD$. As $kD\ge e(H) \ge D_r(G)$, we get $D_r({{G}^*})\ge D_r(G)$, and that ${{G}^*}$ has at least as many internal edges in its standard $r$-partition as $G$. It is therefore enough to show that ${{G}^*}$ also has at least as many crossing edges as $G$. We start with a lower bound for ${{G}^*}$. \[yaycounting\] The number of crossing edges in ${{G}^*}$ is at least $$\sum_{i<j} |V_i||V_j| - \left(D^2 + b_1b_2 + (a_1+b_1+c_1)K_2 + (k-a_1-b_1-c_1)K_1 + (a_1+a_2)\frac{D}{2}\right).$$ First of all, as $|V_1\cup V_2| = |Z_1 \cup Z_2\cup Y_1\cup Y_2\cup X|$, and $|V_i|=|Z_i|$ for every $i\ge 3$, there are exactly $\sum_{i<j} |V_i||V_j| - |V_1||V_2|$ crossing edges in ${{G}^*}$ incident to $\bigcup_{i=3}^r Z_i$. As for the edges induced by $Z = Z_1 \cup Z_2\cup Y_1\cup Y_2\cup X$, there are $$\big(|V_1| - (a_1+b_1+c_1)\big)\big(|V_2|+ (a_1+b_1+c_1)\big) = |V_1||V_2| - (|V_2|-|V_1| + a_1+b_1+c_1) (a_1+b_1+c_1)$$ potential crossing edges in the standard $r$-partition of ${{G}^*}$, out of which $$|Y_1||Y_2| + |X||Z_1| = D^2 + k(K_1+a_1-c_1)$$ are missing. Here $|V_2|-|V_1| + a_1+b_1+c_1 = K_2-K_1 +2a_2 + b_2 - a_1+c_1$, so by rearranging, we get that the number of crossing edges in ${{G}^*}$ is $$\sum_{i<j} |V_i||V_j| - \big( D^2 + b_1b_2 + (a_1+b_1+c_1)K_2 + (k-a_1-b_1-c_1)K_1 + \Lambda \big),$$ where $$\Lambda = a_1(k+2a_2+b_2-a_1-b_1) + a_2(2b_1+2c_1) - c_1 (k-b_1-b_2-c_1) \le (a_1+a_2)\cdot 3k.$$ The result then follows from $D\ge 6k$.
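The bookkeeping in this claim can be checked mechanically for concrete parameter values. The following sketch is our own verification, not part of the paper: it builds the part sizes of ${{G}^*}$ from sample values of $k$, $D$, $a_i$, $b_i$, $c_i$ and $K_i$ satisfying the constraints above ($r=3$, $k=a+b+c=2$), computes the exact number of crossing edges in the standard $r$-partition, and confirms it is at least the claimed lower bound.

```python
from math import comb

def cpr_crossing(k, D, ns):
    """Crossing edges of L_r[k, D, D, n_1..n_r] in its standard r-partition:
    all edges of the blowup minus the k*D internal X-Y2 edges. The blowup has
    all crossing pairs except the pentagon non-edges X-Z1, X-Z2, Y1-Y2,
    Y1-Z1, Y2-Z2."""
    parts = [k, D, D] + ns
    full = comb(sum(parts), 2) - sum(comb(p, 2) for p in parts)
    absent = k * ns[0] + k * ns[1] + D * D + D * ns[0] + D * ns[1]
    return full - absent - k * D

# sample values with K_3 >= K_2 >= K_1 >= 4D >= 24k
k, D = 2, 12
a1, a2, a3 = 1, 0, 0
b1, b2, b3 = 0, 1, 0
c1, c2, c3 = 0, 0, 0
K1, K2, K3 = 48, 50, 60

ns = [K1 + a1 - c1,
      K2 + a1 + b1 + c1 + 2 * a2 + b2 - k,
      K3 + D + 2 * a3 + b3]
V = [K1 + D + 2 * a1 + b1, K2 + D + 2 * a2 + b2, K3 + D + 2 * a3 + b3]
assert sum(V) == sum(ns) + k + 2 * D        # same number of vertices

sumprod = V[0] * V[1] + V[0] * V[2] + V[1] * V[2]
bound = sumprod - (D * D + b1 * b2 + (a1 + b1 + c1) * K2
                   + (k - a1 - b1 - c1) * K1 + (a1 + a2) * D // 2)
crossing = cpr_crossing(k, D, ns)
assert crossing >= bound                    # the claim's inequality
print(crossing, bound)
```

For these values the exact crossing count exceeds the claimed bound by the slack $(a_1+a_2)D/2 - \Lambda$ from the proof above.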
Recall that there are exactly $\sum_{i<j} |V_i||V_j|$ potential crossing edges in $G$. It therefore suffices to show that at least $$\begin{aligned} D^2+b_1b_2 + (a_1+b_1+c_1)K_2 + (k-a_1-b_1-c_1)K_1 + (a_1+a_2)\frac{D}{2} \label{wanted}\end{aligned}$$ of them are missing from $G$. It will be easier to split the graph into two, and bound the number of missing edges separately. Let $Q_i = P_i \cup V^B_i$ be the set obtained by extending $P_i$ with the vertices of the matching $B_i$ for every $i\in[r]$, so that $V^A_i, Q_i$ and $R_i$ partition $V_i$, and let $Q=\bigcup_{i\in [r]} Q_i$. We first count the number of crossing edges with both endpoints in $Q$, and then the ones with at most one end in $Q$. \[doubleup\] $G$ misses at least $D^2 + b_1b_2$ of the crossing edges induced by $Q$. We use a similar argument to the proof of . Let ${\mathcal{F}}$ be the family of all $r$-sets $\{v_1,\dots,v_r\}$ such that $v_i\in P_i$ for every $i=1,\dots,r$, but $v_1\notin V^B_1$ or $v_2\notin V^B_2$. Then $|{\mathcal{F}}| = D^r-b_1b_2D^{r-2}$. Similarly, let ${\mathcal{G}}$ be the family of all $(r+2)$ sets $\{v_1,\dots,v_r, v'_1,v'_2\}$ such that $v_1v_1'\in B_1$, $v_2v_2'\in B_2$, and $v_i\in P_i$ for every $i=3,\dots,r$. Then $|{\mathcal{G}}| = b_1b_2D^{r-2}$. Recall that $P_1,\dots, P_r$ were all in the neighborhood of some vertex $u$. This means that there must be a (crossing) edge missing in $G[X]$ for every $X\in {\mathcal{F}}$. Also, for $Y\in{\mathcal{G}}$, $G[Y]$ is a $K_{r+1}$-free graph on $r+2$ vertices and so must be missing at least two edges. As $v_1v'_1$ and $v_2v'_2$ are both present in $G$, the missing edges in $G[Y]$ are also crossing. Summing over the sets in ${\mathcal{F}}\cup {\mathcal{G}}$ gives at least $D^r + b_1b_2D^{r-2}$ missing crossing edges in total. 
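The totals in the double count above are easy to check mechanically: the $|{\mathcal{F}}| = D^r - b_1b_2D^{r-2}$ sets each force at least one missing edge and the $|{\mathcal{G}}| = b_1b_2D^{r-2}$ sets each force at least two. A sketch with illustrative values:

```python
# Arithmetic of the double count (a sketch): |F| sets force >= 1 missing
# edge each, |G| sets force >= 2 each, giving D^r + b1*b2*D^(r-2) in total.
def total_forced(r, D, b1, b2):
    F = D ** r - b1 * b2 * D ** (r - 2)
    G = b1 * b2 * D ** (r - 2)
    return F + 2 * G

assert total_forced(4, 10, 3, 5) == 10 ** 4 + 3 * 5 * 10 ** 2
```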
It is easy to check that each missing edge $v_iv_j$ (or $v'_iv_j$ or $v'_iv'_j$) in $G$ is contained in exactly $D^{r-2}$ sets from ${\mathcal{F}}\cup{\mathcal{G}}$, so $G[Q]$ misses at least $D^2+b_1b_2$ crossing edges. \[restmissing\] $G$ misses at least $$\label{eq:remain} (a_1+b_1+c_1)K_2 + (k-a_1-b_1-c_1)K_1 + (a_1+a_2)D/2$$ crossing edges with at most one endvertex in $Q$. As a first attempt, we try to find a set of missing crossing edges for each matching edge in $M$ so that they are all disjoint and not induced by $Q$. More specifically, we want to show that for every edge $e\in M_1$, there are $K_2$ missing edges between $e$ and $R=\bigcup_{i\in [r]} R_i$, and for every remaining edge $e\in M\setminus M_1$, there are $K_1$ missing edges between $e$ and $R$. Moreover, for every $e\in A_1\cup A_2$, we want $D/2$ additional missing edges between $e$ and $Q$. This would be exactly the amount we need.[^3] Of course, it may well be that some edge in $M$ is incident to fewer missing edges. Let $M'_1 = M_1$ and $M'_2 = M\setminus M_1$. To first bound the number of crossing edges between $M$ and $R$, we define $T$ to be the largest “deficit” in the above counting, i.e., the smallest *nonnegative* integer such that for each $i=1,2$ and every edge $vv'\in M'_i$, there are at least $K_{3-i}-T$ missing edges between $\{v,v'\}$ and $R\setminus R_i$. To count the missing edges between $A_1\cup A_2$ and $Q$, we split $A_i$ into $A^g_i \cup A^b_i$ for each $i=1,2$ as follows. $A^g_i$ is the set of “good” edges $vv'$, such that there are at least $K_{3-i}-T+D/2$ edges missing between $\{v,v'\}$ and $(Q\cup R) \setminus (Q_i\cup R_i)$, and $A^b_i$ is the set of “bad” edges, where this is not the case. So far this gives at least $$|A^g_1|(K_2-T) + |A^g_2|(K_1-T) + (|A^g_1| + |A^g_2|)D/2$$ missing crossing edges between the good edges of $A$ and $Q\cup R$, and another $$(|M_1'|-|A^g_1|)(K_2-T) + (|M_2'|-|A^g_2|)(K_1-T)$$ between all other edges of $M$ and $R$.
This is a total of $$\label{eq:stdedges} |M_1'|(K_2-T) + |M_2'|(K_1-T) + (|A^g_1| + |A^g_2|)D/2$$ missing edges between $V^M$ and $R$. To get , we need to analyze the structure a bit. Suppose $vv'\in A^b_i$ is a bad edge for some $i$. Then there are at most $K_{3-i}-T+D/2$ missing edges from $\{v,v'\}$ to $(R\cup Q)\setminus (R_i\cup Q_i)$, at least $K_{3-i}-T$ of which are incident with $R\setminus R_i$. So $vv'$ must have at least $D/2$ common neighbors in each $P_j$ with $j\ne i$. In particular, as $k \le D/6$, for every $j\ne i$ there is a set $N_j{\subseteq}P_j\setminus (V^B_j\cup V^C_j)$ of at least $D/6$ common neighbors in $P_j$ that is disjoint from $M$. Choose $i'\ne i$ so that $\Gamma(v)\cap \Gamma(v')\cap R_{i'}$ is smallest. Then for every $j\ne i,i'$, $$|\Gamma(v)\cap \Gamma(v')\cap R_j| \ge \frac{(K_{i'} + K_j) - (K_{3-i}+D/2)}{2} \ge \frac{K_2}{2} - \frac{D}{4} \ge \frac{7K_2}{16}$$ because there are at most $K_{3-i}+D/2$ missing edges from $\{v,v'\}$ to $R_{i'}\cup R_j$, and $D\le K_2/4$. We may assume that every triangle induced by $\{v,v'\}\cup N_{i'}$ has at most $K_2/4$ common neighbors in some $R_j$ with $j\ne i,i'$. Indeed, the common neighborhood of this triangle is $K_{r-2}$-free. The case $r\le 3$ is then vacuously true, so suppose $r \ge 4$. Then if the triangle has at least $K_2/4$ common neighbors in every $R_j$ with $j\ne i,i'$, then by , $G[R]$ misses at least $K_2^2/16$ crossing edges. But $K_2^2/16 \ge k(K_2+D) \ge \eqref{eq:remain}$, so we are done. This means that we can assume that every triangle $vv'w$ with $w\in N_{i'}$ has at most $K_2/4$ common neighbors in some $R_j$ with $j\ne i,i'$, so there are at least $\frac{7K_2}{16}-\frac{K_2}{4} = \frac{3K_2}{16} > 4k$ missing edges between $w$ and $R\setminus (R_i\cup R_{i'})$. Summing over all $w\in N_{i'}$, we find at least $$\label{eq:badedges} 4kD/6 \ge kD/2 \ge (|A^b_1| + |A^b_2|)D/2$$ missing edges between $Q\setminus V^M$ and $R$.
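The small numeric inequalities used in the argument above can be sanity-checked under the standing assumptions $K_2 \ge K_1 \ge 4D \ge 24k$ and $D \le K_2/4$; the sample values below are illustrative, not from the paper.

```python
# Sanity checks (a sketch) of the numeric steps above, under the standing
# assumptions K2 >= K1 >= 4D >= 24k and D <= K2/4.
def check_bad_edge_bounds(K1, K2, D, k):
    assert K2 >= K1 >= 4 * D >= 24 * k and D <= K2 / 4
    # K2/2 - D/4 >= 7*K2/16 whenever D <= K2/4
    assert K2 / 2 - D / 4 >= 7 * K2 / 16
    # removing at most K2/4 common neighbours leaves 3*K2/16 > 4k of them
    assert 7 * K2 / 16 - K2 / 4 == 3 * K2 / 16 > 4 * k
    # summing > 4k missing edges over |N_{i'}| >= D/6 vertices
    assert 4 * k * D / 6 >= k * D / 2

check_bad_edge_bounds(K1=96, K2=120, D=24, k=4)
```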
If $T=0$, then we are already done: and together give enough edges for . So let us assume that $T>0$, i.e., there is an edge $vv'\in M'_i$ for some $i$ such that there are exactly $K_{3-i}-T$ missing edges between $\{v,v'\}$ and $R\setminus R_i$. Once again, choose $i'\ne i$ so that $\Gamma(v)\cap \Gamma(v')\cap R_{i'}$ is smallest. Then $$|\Gamma(v)\cap \Gamma(v')\cap R_{i'}| \ge K_{i'} - (K_{3-i} - T) \ge T$$ and for every $j\ne i,i'$, $$|\Gamma(v)\cap \Gamma(v')\cap R_j| \ge \frac{K_{i'} + K_j - (K_{3-i} - T)}{2} \ge \frac{K_2}{2}.$$ By , there must be at least $$\label{eq:Redges} \frac{K_2}{2} \cdot T \ge kT$$ missing edges induced by $R$. Adding , and together, we get . Putting and together, we obtain the theorem. Concluding remarks {#sec:conclusion} ================== With in hand, finding the exact [pentagonal Turán]{} graph $G$ that maximizes $D_r(G)$ assuming $e(G)\ge t_r(n) - \delta n^2$ is a matter of calculation. The result of Balogh, Clemen, Lavrov, Lidický and Pfender [@BCLLP] shows that among [pentagonal Turán]{} graphs with $t_r(n)-\delta n^2$ edges, $D_r(G)$ is maximized when $x\approx \frac{2r}{3}\delta n$, $y\approx \sqrt{\frac{\delta}{3}}n$, $n_j\approx (\frac{1}{r}+\frac{2}{3}\delta)n$ for $j\ge 3$, and $n_i\approx (\frac{1}{r}-\frac{2(r-1)}{3}\delta - \sqrt{\frac{\delta}{3}})n$ for $i=1,2$, and the maximum is $D_r(G)\approx \frac{2r}{3\sqrt{3}}\delta^{3/2}n^2$. It would be very interesting to find exact stability results for other classes of graphs. Of course, this is generally a harder problem than determining the exact extremal graphs, which is often already a difficult task on its own. A natural next step is to consider $H$-free graphs where $H$ is a graph with a critical edge, that is, there is an edge $e \in E(H)$ such that the deletion of $e$ from $H$ reduces the chromatic number. Examples of such graphs include cliques and odd cycles.
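The approximate maximum quoted above is consistent with the product of the optimal $x$ and $y$; this sketch assumes (based on the $D_r({{G}^*}) = kD$ formula) that $x$ and $y$ play the roles of $k$ and $D$, so that $D_r \approx xy$.

```python
import math

# Consistency check (a sketch): with x ~ (2r/3)*delta*n and
# y ~ sqrt(delta/3)*n, the product x*y equals (2r/(3*sqrt(3)))*delta^1.5*n^2.
# The identification D_r ~ x*y is an assumption based on D_r = k*D.
def approx_optimum(r, delta, n):
    x = (2 * r / 3) * delta * n
    y = math.sqrt(delta / 3) * n
    return x * y

r, delta, n = 3, 1e-4, 1e6
assert math.isclose(approx_optimum(r, delta, n),
                    (2 * r / (3 * math.sqrt(3))) * delta ** 1.5 * n ** 2)
```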
An old theorem of Simonovits [@S74] says that when $H$ is an $(r+1)$-chromatic graph with a critical edge, the Turán graph $T_r(n)$ is the unique $H$-free graph maximizing the number of edges, provided $n$ is large enough. But even in this case, it seems unclear what the right conjecture should be for the set of $H$-free graphs $G$ that maximize $D_r(G)$ when $e(G)\ge t_r(n)-t$. For odd cycles, we propose the following conjecture. Let $k\ge 2$, and suppose $G$ is a $C_{2k-1}$-free graph with $n$ vertices and at least $(\frac{1}{4}-\delta)n^2$ edges. Then some blowup ${{G}^*}$ of $C_{2k+1}$ satisfies $e({{G}^*})\ge e(G)$ and $D_2({{G}^*}) \ge D_2(G)$. Blowups of $C_{2k+1}$ might also be optimal for every 3-chromatic graph $H$ with a critical edge, whose shortest odd cycle has length $2k-1$. Such graphs are certainly $H$-free, and results of Roberts and Scott [@RS18] imply that the bound they give on $D_2(G)$ (with $e(G)$ fixed) is tight up to a constant factor. It is tempting to guess that when $H$ is a general $(r+1)$-chromatic graph with a critical edge, then the optimum $D_r(G)$ is attained by complete $C_{2k+1}$-Turán graphs (defined analogously to [pentagonal Turán]{} graphs by inserting a blowup of $C_{2k+1}$ into a part of a complete $(r-1)$-partite graph), where $k$ is some parameter depending only on $H$. A closely related problem, which served as the main motivation for the paper of Erdős, Győri and Simonovits [@erdoscan'tmaths], is the old conjecture of Erdős [@E76] claiming $D_2(G) \le \frac{n^2}{25}$ for every $K_3$-free graph $G$ on $n$ vertices. This trivially holds when $e(G)\le \frac{2n^2}{25}$, and was proved for $e(G)\ge \frac{n^2}{5}$ by Erdős, Faudree, Pach and Spencer [@EFPS88]. If true, the conjecture is tight for a balanced blowup of $C_5$. This problem led to further research into how far $K_{r+1}$-free graphs can be from being bipartite.
Sudakov [@S07] proved a variant of the conjecture for 4-cliques, showing that $D_2(G)$ is maximized by $G=T_3(n)$ among $K_4$-free graphs. Sudakov conjectured that this generalizes to larger cliques (i.e., among $K_{r+1}$-free graphs, $D_2(G)$ is maximum when $G=T_r(n)$). A proof of this for $K_6$ has been announced by Hu, Lidický, Martins, Norin and Volec [@HLMNV]. The other cases remain wide open. N. Alon, J. Balogh, P. Keevash and B. Sudakov, The number of edge colorings with no monochromatic cliques, *J. Lond. Math. Soc.* 70 (2004), 273–288. K. Amin, J. Faudree, R. J. Gould and E. Sidorowicz, On the non-$(p-1)$-partite $K_p$-free graphs, *Discuss. Math. Graph Theory* 33 (2013), 9–23. B. Andrásfai, P. Erdős and V.T. Sós, On the connection between chromatic number, maximal clique and minimal degree of a graph, *Discrete Math.* 8 (1974), 205–218. J. Balogh, B. Bollobás and M. Simonovits, The number of graphs without forbidden subgraphs, *J. Combin. Theory Ser. B* 91 (2004), 1–24. J. Balogh, F.C. Clemen, M. Lavrov, B. Lidický and F. Pfender, Making $K_{r+1}$-free graphs $r$-partite, *arXiv:1910.00028* preprint. J. Balogh, R. Morris, W. Samotij and L. Warnke, The typical structure of sparse $K_{r+1}$-free graphs, *Trans. Amer. Math. Soc.* 368 (2016), 6439–6485. A.E. Brouwer, Some lotto numbers from an extension of Turán’s theorem, *Math. Centr. report* ZW152, Amsterdam (1981), 6pp. V. Chvátal and D. Hanson, Degrees and matchings, *J. Combin. Theory Ser. B* 20 (1976), 128–138. P. Erdős, Some recent results on extremal problems in graph theory (Results), *Theory of Graphs (Internl. Symp. Rome)* (1966), 118–123. P. Erdős, Problems and results in graph theory and combinatorial analysis, in: *Proc. Fifth British Combinatorial Conference (Univ. Aberdeen, Aberdeen, 1975),* Congress. Numer. XV Utilitas Math., Winnipeg, Man. (1976) 169–192. P. Erdős, R. Faudree, J. Pach and J. Spencer, How to make a graph bipartite, *J. Combin. Theory Ser.
B* 45 (1988), 86–98. P. Erdős, E. Győri and M. Simonovits, How many edges should be deleted to make a triangle-free graph bipartite?, in: *Sets, graphs and numbers (Budapest, 1991),* Colloq. Math. Soc. János Bolyai, North-Holland, Amsterdam, 60 (1992) 239–263. Z. Füredi, A proof of the stability of extremal graphs, Simonovits’ stability from Szemer[é]{}di’s regularity, *J. Combin. Theory Ser. B* 115 (2015), 66–71. D. Hanson and B. Toft, $k$-saturated graphs of chromatic number at least $k$, *Ars Combin.* 31 (1991), 159–164. P. Hu, B. Lidický, T. Martins, S. Norin and J. Volec, Large multipartite subgraphs in $H$-free graphs, manuscript. M. Kang and O. Pikhurko, Maximum $K_{r+1}$-free graphs which are not $r$-partite, *Mat. Stud.* 24 (2005), 12–20. W. Mantel, Problem 28, *Wiskundige Opgaven* 10 (1907), 60–61. A. Roberts and A. Scott, Stability results for graphs with a critical edge, *European J. Combin.* 94 (2018), 27–38. W. Samotij, Stability results for random discrete structures, *Random Structures Algorithms* 44 (2014), 269–289. M. Simonovits, Extrém gráfok struktúrájáról (On the structure of extremal graphs, in Hungarian), *CSc Thesis,* Eötvös Loránd University, Budapest (1969), 112pp. M. Simonovits, Extremal graph problems with symmetrical extremal graphs. Additional chromatic conditions, *Discrete Math.* 7 (1974), 349–376. M. Simonovits, A method for solving extremal problems in graph theory, stability problems, in: *Theory of Graphs (Proc. Colloq. Tihany, 1966),* Academic Press, New York (1968), 279–319. B. Sudakov, Making a $K_4$-free graph bipartite, *Combinatorica* 27 (2007), 509–518. K. J. Swanepoel, Unit distances and diameters in Euclidean spaces, *Discrete Comput. Geom.* 41 (2009), 1–27. P. Turán, Eine Extremalaufgabe aus der Graphentheorie, *Mat. Fiz. Lapok* 48 (1941), 436–452. M. Tyomkyn and A.J. Uzzel, Strong Turán stability, *Electron. J. Combin.* 22 (2015), P3.9, 24pp. 
[^1]: Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, United Kingdom. E-mail: `{korandi, robertsa, scott}@maths.ox.ac.uk`. [^2]: Supported by an SNSF Postdoc.Mobility Fellowship [^3]: The reader might find it helpful to check what the bound means when $G$ is a [CPR]{} graph: the $r$-partition $V_1\cup\dots\cup V_r$ is much like the standard $r$-partition, except the set $X$ might be split between $V_1$ and $V_2$. In any case, we always have $M = B_1\cup B_2$ (in particular, $a_1=a_2=0$), and every edge in $B_i$ contributes exactly $K_{3-i}$ missing edges: one to each vertex of $R_{3-i}$.
--- abstract: | We have obtained multi-epoch, high-resolution spectroscopy of [218]{}candidate low-mass stars and brown dwarfs in the young clusters around and . We find that [196]{} targets are cluster members based on their radial velocity, the equivalent width of their NaI8200 lines and the spectral type from their TiO band strength. We have identified [11]{} new binary stars among the cluster members based on their variable radial velocity and an additional binary from the variation in its line width and shape. Of these, 6 are double-lined spectroscopic binaries (SB2) where the components of the binary are of comparable brightness. The others are single-lined binaries (SB1) in which the companion is faint or the spectra of the stars are blended. There are 3 narrow-lined SB1 binaries in our sample for which the companion is more than 2.5 magnitudes fainter than the primary. This suggests that the mass ratio distribution for the spectroscopic binaries in our sample is broad but that there may be a peak in the distribution near $q=1$. The sample covers the magnitude range ${\mbox{${\rm I}_{\rm C}$}}= 14$–18.9 (mass $\approx 0.55 \-- 0.03{\mbox{${M}_{\sun}$}}$), but all of the binary stars are brighter than ${\mbox{${\rm I}_{\rm C}$}}=16.6 $ (mass $\approx 0.12{\mbox{${M}_{\sun}$}}$) and 10 are brighter than ${\mbox{${\rm I}_{\rm C}$}}= 15.5$ (mass $\approx 0.23{\mbox{${M}_{\sun}$}}$). There is a significant lack of spectroscopic binaries in our sample at faint magnitudes even when we account for the decrease in sensitivity with increasing magnitude. We can reject the hypothesis that the fraction of spectroscopic binaries is a uniform function of  magnitude with more than 99 percent confidence. The spectroscopic binary fraction for stars more massive than about $0.1{\mbox{${M}_{\sun}$}}$ (${\mbox{${\rm I}_{\rm C}$}}< 16.9$) is $f_{\rm bright} = 0.095^{+0.012}_{-0.028}$. 
The 90 percent confidence upper limit to the spectroscopic binary fraction for very low mass (VLM) stars (mass $< 0.1{\mbox{${M}_{\sun}$}}$) and brown dwarfs (BDs) is $f_{\rm faint} < 7.5$ percent. The hypothesis that $f_{\rm bright}$ and $f_{\rm faint}$ are equal can be rejected with 90 percent confidence. The average detection probability for our survey is 50 percent or more for binaries with separations up to 0.28au for stars with ${\mbox{${\rm I}_{\rm C}$}}< 16.9$ and 0.033au for the fainter stars in our sample. We conclude that we have found strong evidence for a change in the fraction of spectroscopic binaries among young VLM stars and brown dwarfs when compared to more massive stars in the same star-forming region. This implies a difference in the total binary fraction between VLM stars and BDs compared to more massive stars or a difference in the distribution of semi-major axes, or both. bibliography: - 'mybib.bib' date: Submitted 2007 title: 'A survey for low mass spectroscopic binary stars in the young clusters around $\sigma$ Orionis and $\lambda$ Orionis.' --- \[firstpage\] binaries: spectroscopic – stars: low-mass, brown dwarfs. Introduction ============ The origins of very low-mass stars (VLMS) and brown dwarfs (BD) are proving difficult to understand, despite being more common than stars of higher mass. Ideas include ejection from protostellar aggregates [@2001AJ....122..432R], formation within convergent flows generated by turbulence [@2004ApJ...617..559P], the photo-erosion of pre-stellar cores or fragmentation within the outer parts of circumstellar discs. The frequency and separation distribution of binary systems is an important constraint on the likely formation process. There is strong evidence that the binary properties of the lowest mass stars and brown dwarfs are quite different to those of higher mass objects (see the review of @2007prpl.conf..427B).
Resolved imaging of nearby VLMS and BDs in the field shows that about 15–20 percent of systems are binaries with separations greater than 1–2au, but that very few have separations greater than 20au (e.g. @2003ApJ...587..407C; @2003AJ....126.1526B). This contrasts with higher mass stars where overall binary frequencies are 30–60 percent, with a much broader spread of possible separations (@1992ApJ...396..178F, ). A missing part of the picture is how many VLMS and BDs are binary systems with separations less than about 1au, where the imaging observations cannot reach. @2005MNRAS.362L..45M used previously published radial velocity (RV) results to show that an overall binary frequency (at all separations) of 32–45 percent was needed to explain the presence of several binaries detected by RV variations, with most of them at small separations. This high frequency found support from a small RV survey by @2006MNRAS.372.1879K who found a frequency of 11–40 percent for separations less than 0.1au in a young association. On the other hand @2006AJ....132..663B found few RV variables in their survey of field VLMS/BDs and concluded that the overall binary frequency was 16–36 percent, with very few binaries at separations below 1au. The situation is unresolved, and it is quite likely that the apparent discrepancies between these various authors arise from biases within the samples considered, differences in analysis technique, and the fact that, in terms of absolute numbers, very few short-period VLMS/BD binary systems have yet been found, thus limiting the statistical precision possible. In this paper we present the results of an RV survey of a large number of low-mass stars and brown dwarfs in the $\sigma$ Ori and $\lambda$ Ori clusters. These clusters are young and nearby (3–5 Myr, 330–450pc) and contain large populations of VLMS and BDs (@2004AN....325..705B; @2004ApJ...610.1064B; @2005MNRAS.356...89K).
We have observed more than 200 objects using the [flames]{} multi-fibre spectrograph on the VLT-Kueyen telescope to measure radial velocities at several epochs and searched for close binary systems at a range of masses. Observations and data reduction {#ObsAndRed} =============================== Observations {#Observations} ------------ We have used the [flames]{} multi-object spectrograph [@2002Msngr.110....1P] on ESO’s VLT UT2 (Kueyen) telescope to obtain multi-epoch, high-resolution spectroscopy of [218]{} faint stars in the clusters around $\sigma$ Ori and $\lambda$ Ori. The instrument is capable of providing spectra of up to 130 targets in one setting over a field of view 25 arcmin in diameter. We selected targets around $\sigma$ Ori from the photometric catalogue of @2005MNRAS.356...89K. We included all stars in the magnitude range $14 < {\mbox{${\rm I}_{\rm C}$}}< 19$ with the correct ${\mbox{${\rm I}_{\rm C}$}}$ magnitude and $({\mbox{${\rm R}_{\rm C}$}}- {\mbox{${\rm I}_{\rm C}$}})$ colour to be a cluster member in the input catalogue for the fibre allocation process, irrespective of any other membership information that may have been available. For $\lambda$ Ori all the candidate members identified in the catalogue of @2004ApJ...610.1064B were included in the input catalogue for the fibre allocation process. Fibres were allocated in 6 fields, 2 fields near $\lambda$ Ori and 4 fields near $\sigma$ Ori. The central position of each field and the number of targets for which useful spectra were obtained are given in Table \[ObsDateTable\]. The position of the targets in the ${\mbox{${\rm I}_{\rm C}$}}$ v. $({\mbox{${\rm R}_{\rm C}$}}- {\mbox{${\rm I}_{\rm C}$}})$ colour-magnitude diagram is shown in Fig. \[RIFig\]. Those fibres that could not be allocated to targets were used to obtain simultaneous background spectra for sky subtraction. The two fields around $\lambda$ Ori overlap so 10 stars were observed in 2 fields. For most stars we have obtained 3–4 spectra with the exception of 10 stars observed in 2 fields for which we typically have 6–7 spectra.
The baseline of the observations is at least 24 days for all stars and is more typically 60–70 days. The sensitivity of our survey drops rapidly for binaries with semi-major axes $a\goa 0.25$au as a result of the baseline of our observations. ![image](sori_ri_arxiv.eps){width="47.00000%"} ![image](lori_ri.eps){width="47.00000%"} ![image](SOri_1_R1_tellfit_small.eps){width="98.00000%"} ![image](example_spectra_arxiv.ps){width="95.00000%"}

  ---------------- ---------- ------------ ---- ---------- --------------------
  Field            RA         Dec          N    Date       $\sigma_{\rm sky}$
  $\lambda$ Ori1   053558.5   $+$095121    34   20051014   0.66
                                           34   20051117   0.59
                                           35   20051230   0.67
  $\lambda$ Ori2   053444.1   $+$095144    49   20051014   0.64
                                           49   20051127   0.99
                                           50   20051128   0.63
                                           50   20051230   0.58
  $\sigma$ Ori1    054028.0   $-$021535    26   20051113   0.60
                                           22   20051117   0.60
                                           26   20060108   0.64
                                           26   20060113   0.92
  $\sigma$ Ori2    054001.6   $-$024002    17   20051113   0.67
                                           29   20051207   0.65
                                           28   20060113   0.74
  $\sigma$ Ori3    053826.9   $-$024121    56   20051113   0.64
                                           55   20051124   0.67
                                           56   20051207   0.56
                                           55   20060113   0.60
  $\sigma$ Ori4    053823.8   $-$021459    33   20051111   0.59
                                           34   20051111   0.58
                                           34   20051207   0.62
                                           32   20060113   0.60
  ---------------- ---------- ------------ ---- ---------- --------------------

\[resultstable\] Light from the fibres was fed to the [giraffe]{} spectrograph operated in a high resolution mode with the H836.6 echelle grating. A filter was used to select light from the 6$^{th}$ echelle order which covers the wavelength region 8073–8632Å. The resolving power of the spectra is $R\approx 16000$. Spectra were obtained in service mode by ESO staff on the dates given in Table \[ObsDateTable\]. The exposure time in each case was 2750s. The seeing during the exposures was typically 0.92arcsec but varied from 0.48arcsec to 1.45arcsec.
We also took advantage of the possibility to use up to 8 fibres from [flames]{} to obtain spectra with the [uves]{} echelle spectrograph [@2000SPIE.4008..534D] at the same time as the [giraffe]{} observations. We used the CD\#4 echelle grating to obtain spectra at a resolution of $R\approx 47000$ covering the same spectral range as the [giraffe]{} spectra for one or two bright, early-type stars in the field. These spectra were used to calibrate the telluric absorption in the [giraffe]{} spectra. Fibres not allocated to bright stars were used to obtain simultaneous spectra of the night sky. Reduction of the spectra ------------------------ There is strong fringing in the images produced by the [giraffe]{} spectrograph when operated at the wavelengths we have used for our observations. This makes the extraction of the spectra problematic. For brighter stars it is possible to use the spectra and their associated errors extracted from the data automatically by the ESO pipeline provided by the observatory. These spectra are extracted from the images by summing the pixels in each row within a given range around the central position of each fibre. The disadvantages of using these spectra are that there is no cosmic-ray rejection included in the extraction process and that the signal-to-noise ratio drops rapidly as the spectra become fainter. For fainter stars we tried using the [giraffe]{} Base-Line Data Reduction Software (girBLDRS) version 1.13.1 [@2000SPIE.4008..467B] to perform optimal extraction of the spectra. This maximizes the signal-to-noise in the resulting spectra by weighting the pixels according to the variance of each pixel and a model of the spatial profile [@1986PASP...98..609H]. Some sort of weighted extraction is essential to produce usable spectra for the faintest stars in our dataset. 
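The variance-weighted ("optimal") extraction in the style of @1986PASP...98..609H can be sketched per wavelength column as follows; the profile, data and variances here are illustrative, not the girBLDRS profile model.

```python
# A minimal sketch of optimal (variance-weighted) extraction: at each
# wavelength, pixels are weighted by the spatial profile P and the inverse
# pixel variance V, giving flux = sum(P*D/V) / sum(P^2/V).
def optimal_extract(column, profile, variance):
    num = sum(p * d / v for p, d, v in zip(profile, column, variance))
    den = sum(p * p / v for p, v in zip(profile, variance))
    return num / den

profile = [0.1, 0.25, 0.3, 0.25, 0.1]      # normalized spatial profile
flux_true = 100.0
column = [flux_true * p for p in profile]   # noiseless data column
variance = [4.0] * 5                        # constant read-noise variance
assert abs(optimal_extract(column, profile, variance) - flux_true) < 1e-9
```

On noiseless data the estimator recovers the input flux exactly; its advantage over a plain sum appears when the per-pixel variances differ.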
It is also possible to identify pixels affected by cosmic-rays in the images and exclude them from the extracted spectra by comparing the spatial profile of the spectra with a model. Unfortunately, we were not able to achieve the same level of stability in the radial velocities for data extracted with girBLDRS as the normal pipeline extraction. We believe that this is due to girBLDRS separately optimizing the profile used to extract the object spectra, the flat-field spectra and the thorium-argon arc spectra. The result is that the profiles used to extract the object, arc and flat spectra are slightly different. This optimization is used to account for small shifts in the positions of the spectra on the detector. The fringe-pattern can change the effective detector efficiency by about 20 percent over a spatial scale of only a few pixels and it does not move relative to the detector. Thus, using different profiles for the object, arc and flat spectra results in inaccurate flat-fielding and the introduction of spurious high-frequency noise into the spectra. This appears to be enough to reduce the precision of the radial velocities that can be measured from these spectra to 2–3${\mbox{${\rm km\,s}^{-1}$}}$. For these reasons we decided to develop our own method for extracting the spectra. The key feature of the method is to use the flat-field images to create an empirical model of the spatial profile for the spectra. This spatial profile can then be used to perform a weighted extraction of the spectra and to identify pixels affected by cosmic-rays in the images. The same weights can be used for the object frames, flat-field frames and arc frames, so the flat-fielding process does not introduce high-frequency noise into the spectra, unlike optimal extraction with girBLDRS. One disadvantage of this method is that small shifts in position between the flat-field images and the object frames mean that the weights applied in the extraction are not quite optimal.
This results in a small reduction in the signal-to-noise ratio of the extracted spectra. A more important effect is that the flux at each wavelength is slightly underestimated and that this flux-deficit varies with wavelength and between spectra. For our [giraffe]{} spectra we find that the flux-deficit is a few percent and that it varies smoothly with wavelength. This only becomes a problem during the sky-subtraction phase of the data reduction. The first step in sky subtraction is to form an average sky spectrum from those fibres that were pointing at blank areas of the sky during the exposure. It is normally possible to simply subtract this average sky spectrum from the spectra of targets obtained in the same pointing. In our case, we first had to calculate an optimum scaling factor for each spectrum to be applied to the average sky spectrum prior to subtraction. This was done by finding the scaling factor that minimized the root-mean square (RMS) difference between the sky-subtracted spectrum and a smoothed version of this spectrum. We calculated our own dispersion solution for the spectra from the ThAr spectra obtained on the same day as the actual observations. Typical shifts over a 12 hour timescale for the [giraffe]{} spectrograph are less than 0.2 pixels [@2004SPIE.5492..136P]. We used a 6th-order polynomial fit to the positions of 22 unsaturated arc lines in the arc spectra. The worst RMS residual was 0.092Å, the median RMS residual was 0.0045Å. The mean dispersion of the spectra is approximately 4.9${\mbox{${\rm km\,s}^{-1}$}}$pixel$^{-1}$. In order to correct the spectra for telluric absorption we used synthetic absorption spectra from a 6-layer model of the Earth’s atmosphere [@1988JQSRT..40..275N] and the [hitran]{} molecular database [@2005JQSRT..96..139R]. The parameters of the model were optimized by fitting a [uves]{} spectrum obtained at the same time as the [giraffe]{} observations.
The optimum fit was achieved by minimizing the mean absolute deviation from a low-order polynomial fit to the [uves]{} spectrum after dividing through by the model spectrum. The fit to the telluric absorption was very good (Fig. \[TelluricFig\]). This synthetic telluric spectrum was convolved with a Gaussian profile to match the resolution of the [giraffe]{} spectra. The synthetic telluric spectrum was then divided into the target spectra. This removed all visible traces of telluric contamination. All [giraffe]{} spectra were interpolated onto a uniform velocity scale of 4.93${\mbox{${\rm km\,s}^{-1}$}}$pixel$^{-1}$ using 3200 pixels covering the wavelength range 8061.1–8496.6Å. We excluded spectra with a mean signal-to-noise ratio less than 5 from our analysis and also excluded stars with fewer than two such spectra. Examples of the resulting spectra for several targets are shown in Fig. \[ExampleSpectraFig\]. Analysis {#Analysis} ======== Stars are identified in this paper by the J2000 coordinates as listed in @2005MNRAS.356...89K or truncated to one decimal place in right ascension and truncated to the nearest arcsecond in declination. Radial velocity measurements ---------------------------- Radial velocities for all targets were measured using cross-correlation against a template spectrum of the brown dwarf star UScoCTIO055. This star is a visual binary with two similar components separated by only 0.12arcsec and a combined spectral type of M5.5 [@2005ApJ...633..452K]. We obtained 5 [uves]{} spectra of this star from the ESO archive and formed the median average spectrum. We re-binned this spectrum onto the same wavelength scale as the [giraffe]{} spectra and convolved this spectrum with a Gaussian with full-width at half-maximum (FWHM) of 3 pixels to match approximately the resolution of these spectra. Regions of the spectra affected by strong sky line emission were excluded from the cross-correlation.
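Because the spectra are on a uniform velocity grid, a cross-correlation lag of one pixel corresponds directly to 4.93 km/s. The following sketch illustrates the idea on toy single-line spectra (not real data); it finds the integer-pixel lag only, before any sub-pixel refinement.

```python
# A sketch of cross-correlation on the uniform velocity grid: the lag of
# the CCF maximum, in pixels, maps to a velocity shift of lag * 4.93 km/s.
# The spectra below are toy arrays with a single absorption line.
PIX_KMS = 4.93

def ccf_peak_velocity(spectrum, template, max_lag=20):
    # mean-subtract so the continuum level does not bias the correlation
    s = [x - sum(spectrum) / len(spectrum) for x in spectrum]
    t = [x - sum(template) / len(template) for x in template]
    def ccf(lag):
        return sum(s[i + lag] * t[i] for i in range(max_lag, len(t) - max_lag))
    best = max(range(-max_lag, max_lag + 1), key=ccf)
    return best * PIX_KMS

template = [0.0] * 200
template[100] = -1.0                 # a single absorption line
spectrum = [0.0] * 200
spectrum[103] = -1.0                 # the same line shifted by 3 pixels
assert ccf_peak_velocity(spectrum, template) == 3 * PIX_KMS
```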
The radial velocity (RV) and its error were derived from the position of the peak of the cross-correlation function (CCF), measured using a parabolic fit to the three points at the top of the CCF. The RV of UScoCTIO055 was taken to be $-6.38{\mbox{${\rm km\,s}^{-1}$}}$ [@2006MNRAS.372.1879K].

  ------------------ ----------- ------------------ -----
  J053557.0+094652   53657.309   28.89 $\pm$ 0.67    63
  J053557.0+094652   53691.200   28.52 $\pm$ 0.60    63
  J053557.0+094652   53734.145   28.74 $\pm$ 0.68    63
  J053539.4+095032   53657.309   27.22 $\pm$ 0.68    69
  J053539.4+095032   53691.200   27.31 $\pm$ 0.61    70
  J053539.4+095032   53734.145   27.82 $\pm$ 0.69    68
  J053530.4+095034   53657.309   28.70 $\pm$ 0.71   116
  J053530.4+095034   53691.200   28.09 $\pm$ 0.66   116
  J053530.4+095034   53734.145   28.77 $\pm$ 0.72   113
  ------------------ ----------- ------------------ -----

  : Radial velocities of stars in $\sigma$ Ori and $\lambda$ Ori. The date of observation is given as modified heliocentric Julian date (MHJD). The standard error of the RV given here includes the “external error” and zero-point correction measured from the sky lines described in the text. The full-width at half-maximum of the cross-correlation function is given in the final column.
[*The full version of this table is only available in the on-line version of this paper.*]{}

\[RVTable\]

------------------ ------- --- ------------------- ---------- ------------------- ------------------------
J053557.0+094652   14.02   3   $ 28.70 \pm 0.37$   $-0.04 $   $ 2.34 \pm 0.01 $   $ 0.677 \pm 0.001 $
J053539.4+095032   14.06   3   $ 27.44 \pm 0.38$   $-0.10 $   $ 2.34 \pm 0.02 $   $ 0.667 \pm 0.002 $
J053530.4+095034   14.10   7   $ 28.98 \pm 0.44$   $-1.76 $   $ 2.33 \pm 0.01 $   $ 0.668 \pm 0.001 $
J053502.7+095647   14.16   4   $ 30.06 \pm 1.01$   $-4.57 $   $ 2.33 \pm 0.02 $   $ 0.663 \pm 0.001 $
J053408.4+095125   14.17   4   $ 6.52 \pm 0.34$    $-0.05 $   $ 2.61 \pm 0.02 $   $ 0.637 \pm 0.001 $
J053426.0+095149   14.36   4   $ -0.65 \pm 0.34$   $-0.08 $   $ 2.85 \pm 0.01 $   $ 0.657 \pm 0.001 $
J053555.6+095053   14.38   3   $ 27.26 \pm 0.39$   $-0.08 $   $ 2.53 \pm 0.02 $   $ 0.682 \pm 0.002 $
------------------ ------- --- ------------------- ---------- ------------------- ------------------------

The precision with which the peak of the CCF can be measured is unlikely to be a true reflection of the accuracy with which we can measure the radial velocities from our spectra. In order to quantify the accuracy of our radial velocities we measured the radial velocities of the night sky emission lines in our spectra. We used the night sky emission line spectrum observed with the [uves]{} spectrograph as a template in the cross-correlation. We measured all the spectra observed at a given pointing, including object spectra prior to sky-subtraction, and calculated the mean and standard deviation of the resulting radial velocities. The standard deviation, $\sigma_{\rm sky}$, is given in Table \[ObsDateTable\]. The mean RV shift of the sky-line spectra for a given pointing ranges from $-1.28$ to $-0.25$. We have subtracted this mean radial velocity shift from the measured radial velocities of our targets, i.e., the night sky emission line spectrum is used to define the zero-point of the stellar radial velocities at each pointing.
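The parabolic refinement of the CCF peak used above amounts to taking the highest interior point of the CCF and the vertex of the parabola through it and its two neighbours. A minimal sketch (illustrative; production code would also propagate an RV error from the fit):

```python
def parabolic_peak(v, ccf):
    """RV from the CCF: locate the highest interior point, then return the
    vertex of the parabola through it and its two neighbours.
    v: velocity lags (uniform spacing), ccf: cross-correlation values."""
    i = max(range(1, len(ccf) - 1), key=lambda k: ccf[k])
    y0, y1, y2 = ccf[i - 1], ccf[i], ccf[i + 1]
    shift = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # in pixels, |shift| <= 0.5
    return v[i] + shift * (v[1] - v[0])
```

For a symmetric peak the vertex coincides with the grid maximum; an asymmetric peak shifts the RV towards the higher shoulder.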
We have also added the value of $\sigma_{\rm sky}$ for each frame in quadrature to the precision of the stellar RV measured from the cross-correlation function. For 85 percent of our RV measurements $\sigma_{\rm sky}$ is the dominant term in the uncertainty. The resulting RV measurements with their standard deviations are given in Table \[RVTable\].

![\[RVPlotFig\]Example radial velocity measurements as a function of time. The star identifier and value of $\log(p)$ are indicated in each panel. The weighted mean radial velocity is indicated by a dotted line. ](rvplot.eps){width="47.00000%"}

![\[logp\_cdf\]Left panel: Cumulative distribution of $p$. The difference between this distribution and a uniform distribution (dashed line) is not significant. ](logp_cdf.eps){width="47.00000%"}

![\[ZFig\] The ratio of the standard errors of the mean radial velocities, $\sigma_{\rm ext}$ and $\sigma_{\rm int}$, as a function of   magnitude. The width of the CCFs is indicated as follows: crosses, FWHM $<$ 80; diamonds, 80$<$ FWHM $<$ 100; asterisks, FWHM $>$ 100. Stars with significantly variable radial velocities are highlighted (squares). ](z.eps){width="47.00000%"}

Variability criterion for radial velocities
-------------------------------------------

For each star we have $N_{\rm rv}$ RV measurements $V_{{\rm r},i}$ each with standard error $\sigma_i$. We calculate the weighted mean RV for each star, ${\mbox{$\bar{{\rm V}}_{\rm r}$}}$, and then calculate the chi-squared statistic for ${\mbox{$\bar{{\rm V}}_{\rm r}$}}$ as a model for the observed radial velocities, i.e., $$\chi^2 = \sum^{N_{\rm rv}}_{i=1} \frac{(V_{{\rm r},i} - {\mbox{$\bar{{\rm V}}_{\rm r}$}})^2} {\sigma_i^2}$$ In order to identify stars with variable radial velocities we calculate the probability $p$ of observing this value of $\chi^2$ or greater from a sample of normally distributed random observations with mean  and standard errors $\sigma_i$.
Our criterion for identifying stars with variable radial velocities is $\log(p)< -4$. The probability that one or more stars are incorrectly identified as having variable radial velocities by chance due to statistical fluctuations (assumed to be normally distributed) in our sample of [218]{} stars is about 2 percent. The values of $N_{\rm rv}$, $\log(p)$ and  for each target are given in Table \[SummaryTable\]. There are two ways to calculate the standard error of the weighted mean, the external error based on the scatter of the data, $$\sigma_{\rm ext} = \sqrt{\frac{\chi^2}{(N_{\rm rv}-1)\sum 1/\sigma_i^2}},$$ and the internal error based on the standard errors only, $$\sigma_{\rm int} = \sqrt{\frac{1}{\sum 1/\sigma_i^2}}.$$ The value given in Table \[SummaryTable\] is the larger of these two values. There are [12]{} stars in our sample that have variable radial velocities according to our criterion $\log(p) < -4$. The properties of these stars are listed in Table \[BinaryTable\]. Stars in which both components are visible in the spectra or CCFs are identified as SB2, those showing only a single spectrum are identified as SB1. Examples of radial velocities for stars with $\log(p)< -4$ and $\log(p)> -4$ are shown in Fig. \[RVPlotFig\]. If our values of $\sigma_i$ are good estimates of the true uncertainty on each RV measurement and the majority of stars in our sample are non-variable, then we would expect the distribution of $p$ to be uniform. The cumulative distribution of our measured $p$ values is compared to a uniform distribution in Fig. \[logp\_cdf\]. We have tested the hypothesis that these two distributions are equal using the Kolmogorov-Smirnov test and find that there is no evidence for any significant difference between them. We have also tested the reliability of our $\sigma_i$ values by considering the ratio $Z=\sigma_{\rm ext}/\sigma_{\rm int}$. If the values of $\sigma_i$ are reliable then we expect $Z\approx 1$.
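The statistics above can be computed directly; a minimal sketch (not the original analysis code; `chi2_sf` uses the closed-form chi-squared survival function for integer degrees of freedom, so no external statistics library is needed):

```python
import math

def chi2_sf(x, k):
    """P(chi-squared >= x) for k (integer) degrees of freedom."""
    m, odd = divmod(k, 2)
    if odd:
        q = math.erfc(math.sqrt(x / 2.0))
        term = math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)
        for j in range(1, m + 1):
            q += term
            term *= x / (2.0 * j + 1.0)
        return q
    q, term = 0.0, math.exp(-x / 2.0)
    for j in range(m):
        q += term
        term *= (x / 2.0) / (j + 1.0)
    return q

def rv_statistics(v, sigma):
    """Weighted mean RV, its adopted standard error (the larger of the
    external and internal errors), and log10(p) for the variability test."""
    w = [1.0 / s ** 2 for s in sigma]
    vbar = sum(wi * vi for wi, vi in zip(w, v)) / sum(w)
    chi2 = sum(wi * (vi - vbar) ** 2 for wi, vi in zip(w, v))
    sigma_int = math.sqrt(1.0 / sum(w))                    # from the errors alone
    sigma_ext = math.sqrt(chi2 / ((len(v) - 1) * sum(w)))  # from the scatter
    p = chi2_sf(chi2, len(v) - 1)
    logp = math.log10(p) if p > 0 else -math.inf
    return vbar, max(sigma_ext, sigma_int), logp
```

A star is flagged as an RV variable when the returned $\log_{10}(p) < -4$, and $Z = \sigma_{\rm ext}/\sigma_{\rm int}$ follows from the two intermediate quantities.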
More precisely, for samples drawn from normal distributions the mean value of $Z$ is 1 with standard error $1/\sqrt{2(N_{\rm rv}-1)}$ [@Topping]. The values of $Z$ for our RV measurements are shown as a function of  magnitude in Fig. \[ZFig\]. It can be seen that the values of $Z$ are indeed close to 1 (with the exception of the spectroscopic binaries, of course) and that there is no significant trend of $Z$ with . Stars with broad spectral lines are highlighted in this figure so that it can also be seen that the values of $Z$ are not significantly different for rapidly rotating stars compared to other stars in the sample.

Binary stars identified from variable line width\[fwhmsec\]
-----------------------------------------------------------

In Fig. \[FWHMFig\] we show the range in the FWHM of the CCFs against magnitude. We have used this plot to identify potential SB2 binary stars in which blending of the spectra from two similar components results in variations in the width of the spectral lines with little change in the radial velocity measured from the peak of the CCF. Several of the SB2 binaries identified above are recovered by this method. The star J054001.0$-$021959 has a large range in FWHM compared to other stars of the same  magnitude. The four spectra of this star all have high signal-to-noise. There is a definite broadening and asymmetry in one spectrum (Fig. \[SpecFig\]). In addition, the value of $\log(p)$ for this star is close to our criterion for variable radial velocities. This star appears to be an SB2 binary. One other star (J053522.5+094501) has a range in FWHM $> 20$, but this is a result of two spectra of the seven obtained having low signal-to-noise. There is no sign of asymmetry in the CCF and the radial velocities of this star show no hint of variation ($\log(p) = -0.09$). Despite the large range of FWHM measured for this star, there is no strong evidence that it is an SB2 binary star.
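The line-width criterion reduces to flagging stars whose CCF FWHM varies strongly between epochs. A sketch (the threshold of 20 follows the text; the function name and data layout are ours for illustration):

```python
def fwhm_range_flags(fwhm_by_star, threshold=20.0):
    """Flag stars whose per-epoch CCF FWHM range (max - min) exceeds the
    threshold, the signature used to pick out blended SB2 candidates.
    fwhm_by_star: dict mapping star id -> list of per-epoch FWHM values."""
    return {star: (max(vals) - min(vals) > threshold)
            for star, vals in fwhm_by_star.items() if len(vals) >= 2}
```

As the text notes, a large FWHM range alone is not conclusive: low signal-to-noise epochs can inflate it, so flagged stars are inspected individually.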
![\[FWHMFig\]The range in full-width at half maximum (FWHM) of the cross-correlation function (CCF) for each target as a function of  magnitude. Stars with variable radial velocities are highlighted (squares). The SB2 binary J054001.0$-$021959 discussed in the text is marked with an asterisk.](fwhmrange.eps){width="47.00000%"}

-------------------- ------- ------ --- -------- ------ --------- ----------------- ------------------- ----------
J053502.7$+$095649   14.16   1.30   4   30.06    1.01   $-4.6$    2.33 $\pm$ 0.02   0.663 $\pm$ 0.001   SB2
J053456.3$+$095503   14.54   1.36   4   26.20    2.32   $-28.8$   2.21 $\pm$ 0.01   0.656 $\pm$ 0.001   SB1
J053612.1$+$100056   14.60   1.31   3   28.66    2.68   $-8.9$    2.23 $\pm$ 0.10   0.666 $\pm$ 0.008   SB1
J053443.9$+$094835   15.20   1.69   4   22.56    3.48   $<-42$    2.66 $\pm$ 0.02   0.732 $\pm$ 0.002   SB2
J053455.2$+$100034   15.23   1.72   4   27.92    2.20   $-15.6$   2.37 $\pm$ 0.02   0.710 $\pm$ 0.002   SB1 [^1]
J053845.6$-$021157   14.48   1.18   4   4.11     3.52   $<-42$    1.99 $\pm$ 0.02   0.633 $\pm$ 0.001   SB1
J053801.0$-$024537   14.46   1.57   4   46.12    9.20   $<-42$    2.09 $\pm$ 0.01   0.653 $\pm$ 0.001   SB2
J054001.0$-$021959   15.02   1.90   4   30.80    0.90   $-3.5 $   2.38 $\pm$ 0.02   0.726 $\pm$ 0.003   SB2 [^2]
J053823.5$-$024131   15.15   1.69   4   30.23    4.26   $<-42$    2.71 $\pm$ 0.02   0.690 $\pm$ 0.002   SB2
J054052.5$-$021653   14.30   1.13   4   152.28   8.25   $<-42$    0.52 $\pm$ 0.02   0.628 $\pm$ 0.001   SB1, nm
J053743.5$-$020905   14.48   1.36   4   51.02    4.76   $<-42$    2.65 $\pm$ 0.02   0.630 $\pm$ 0.001   SB2
J053838.1$-$023202   16.59   1.73   4   30.79    2.65   $-19.5$   2.05 $\pm$ 0.07   0.659 $\pm$ 0.006   SB1
J053805.6$-$024019   14.13   1.21   4   7.68     2.53   $-41.1$   2.43 $\pm$ 0.01   0.644 $\pm$ 0.001   SB1
-------------------- ------- ------ --- -------- ------ --------- ----------------- ------------------- ----------

Membership criteria {#membership}
-------------------

\[RVSelection\]

![\[RVDistFig\]The distributions of weighted mean radial velocity for our targets in  and .
Dashed lines show the selection criteria for assigning stars to Group 1 or Group 2 for  and for selecting non-members for both clusters, as described in section \[RVSelection\].](RVDistFig.eps){width="47.00000%"}

![\[EWTiOFig\] The value of EW(NaI) versus the TiO(8442) spectral index for our targets. The plotting symbols used indicate binarity or membership of the  or  clusters or non-membership of these clusters based on the radial velocity (if non-variable), as indicated in the legend. Stars below the dashed line are considered to be non-members. Error bars are only shown in cases where they are larger than the plotting symbol used.](ewtio.eps){width="47.00000%"}

![\[TiOMagFig\] The TiO(8442) spectral index versus I-band magnitude for our targets. The plotting symbols used indicate binarity or membership of the  or  clusters or non-membership of these clusters based on the radial velocity or spectral type (see Fig. \[EWTiOFig\]), as indicated in the legend. Error bars are only shown in cases where they are larger than the plotting symbol used.](magtio.eps){width="47.00000%"}

![\[EWFig\]The equivalent width of the NaI doublet for our targets as a function of the I-band magnitude. The plotting symbols used indicate binarity or membership of the  or  clusters or non-membership of these clusters based on the radial velocity or spectral type (see Fig. \[EWTiOFig\]), as indicated in the legend. Error bars are only shown in cases where they are larger than the plotting symbol used.](ew.eps){width="47.00000%"}

In this section we describe the criteria we have used to identify members of the  and  associations. The principal means of identifying members of these clusters is the mean RV of the star, but we have also used the equivalent width of the NaI doublet, EW(NaI), and the strength of the TiO band at 8442Å, TiO(8442), as additional membership criteria. The distributions of weighted mean radial velocities, , are shown separately for stars near   and  in Fig.
\[RVDistFig\]. The bi-modal distribution of radial velocities for stars near discussed by @2006MNRAS.371L...6J is apparent. We follow the convention in that paper of assigning stars with ${\mbox{$\bar{{\rm V}}_{\rm r}$}}< 27$ to Group 1 and stars with ${\mbox{$\bar{{\rm V}}_{\rm r}$}}\ge 27$ to Group 2. The interpretation of these groups favoured by @2006MNRAS.371L...6J is that Group 1 are members of either the Orion OB1a or OB1b association while Group 2 are a separate cluster of stars associated with the star . Group 2 are concentrated spatially around the star  and have similar mean radial velocity to it. Group 2 are younger on average than Group 1, although considerable overlap is possible. For stars near  we identify non-members using the criterion ${\mbox{$\bar{{\rm V}}_{\rm r}$}}> 35{\mbox{${\rm km\,s}^{-1}$}}$ or ${\mbox{$\bar{{\rm V}}_{\rm r}$}}< 20{\mbox{${\rm km\,s}^{-1}$}}$. For stars near $\lambda$ Ori we identify non-members using the criterion ${\mbox{$\bar{{\rm V}}_{\rm r}$}}> 32{\mbox{${\rm km\,s}^{-1}$}}$ or ${\mbox{$\bar{{\rm V}}_{\rm r}$}}< 22{\mbox{${\rm km\,s}^{-1}$}}$. These limits are indicated in Fig. \[RVDistFig\]. The application of these criteria to stars with variable radial velocities is discussed in more detail in section \[BinaryNotes\]. The TiO bands around 8450Å are good indicators of effective temperature for M-dwarfs in the sense that they increase in strength for cooler stars [@2004ApJ...609..854M] and are insensitive to reddening. We have measured the strength of the band at 8442Å in our spectra using the ratio of the counts detected in the wavelength ranges 8437–8442Å and 8442–8450Å. The measurements were made on the median average spectrum of each star. We denote the value of this ratio as TiO(8442). The equivalent width of the NaI doublet at 8190Å is sensitive to the surface gravity in M-type stars [@1997ApJ...479..902S] in that stars with higher surface gravities show stronger NaI absorption.
The surface gravity of members of the  and  clusters is expected to be ${\mbox{$\log\,{\rm g}$}}= 3$–4, whereas a typical M-type giant will have ${\mbox{$\log\,{\rm g}$}}\approx 1$. Thus, the equivalent width of the NaI doublet, EW(NaI), can be used to identify background giants in our sample. It is also possible to identify contamination of the sample by dwarf stars with ${\mbox{$\log\,{\rm g}$}}\goa 4.5$. We measured the value of EW(NaI) from the median average spectrum of each star by numerically integrating the area under our spectra in two regions $\pm 120{\mbox{${\rm km\,s}^{-1}$}}$ wide around the centre of each NaI line after normalizing the spectra by the clipped mean value of the spectrum in the region 8188–8192Å. The value of TiO(8442) is plotted against EW(NaI) in Fig. \[EWTiOFig\]. There is a clear division between the bulk of our targets and the small group of stars with low EW(NaI) and TiO(8442) values. Most of these stars are non-members based on their mean RV. We therefore identify all stars below the dashed line in this figure as “spectral-type non-members”. From Figs. \[TiOMagFig\] and \[EWFig\] we see that very few stars that satisfy the RV and spectral type criteria for membership have discrepant values of EW(NaI) or TiO(8442) for their  magnitude. There are two faint stars in the sample that have large values of TiO(8442), but there is a large scatter in this index among the faint stars in our sample so we do not consider this to be a reason to exclude these stars as members of the  cluster. The binary star with a discrepant value of EW(NaI) is J054052.5$-$021653. This SB1 binary star has weak, narrow NaI absorption lines characteristic of a giant star. The mean RV has a rather large uncertainty but is also clearly inconsistent with membership of the  cluster. We conclude that this is a background giant star. 
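The membership tests described in this section can be combined into a simple classifier. A sketch (illustrative only: the RV windows are the values quoted in the text, with the first field's cluster name not reproduced here; the dashed-line cut in the EW(NaI)–TiO(8442) plane is passed in as a function because its coefficients are not quoted):

```python
RV_WINDOWS_KMS = {
    # Membership windows quoted in the text; non-members lie outside them.
    "field_1": (20.0, 35.0),      # first field (cluster name not quoted here)
    "lambda_ori": (22.0, 32.0),   # stars near lambda Ori
}

def rv_member(vbar, field):
    """True if the weighted mean RV (km/s) lies inside the field's window."""
    lo, hi = RV_WINDOWS_KMS[field]
    return lo <= vbar <= hi

def spectral_member(ew_nai, tio8442, cut):
    """Spectral-type test: cut(tio) returns the EW(NaI) value of the dashed
    line at that TiO(8442) index; stars below the line are non-members."""
    return ew_nai >= cut(tio8442)
```

A star is retained as a cluster member only if it passes both the RV window and the spectral-type cut (and is not an RV variable, which is handled separately).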
The spectra of the star J054034.5$-$020606 extracted using our empirical weighting scheme do not satisfy our criterion for inclusion in the sample because the signal-to-noise is less than 5, but we did notice that this is an SB2 spectroscopic binary from an analysis of the spectra extracted using girBLDRS. These spectra are shown in Fig. \[SpecFig\]. This is the faintest binary detected in our survey (= 18.38) so the spectra are quite noisy, but the SB2 nature of this star can be seen and is very obvious in the CCF. The values of EW(NaI)$=3.42 \pm 0.23$ and TiO(8442)$ = 0.593 \pm 0.026$ show that this star has the wrong spectral type to be a member of the cluster. We have also inspected the position of this star in the V v. V$-{\rm I}_{\rm C}$ colour-magnitude diagram of the  region using the data of @2007MNRAS.375.1220M, where it is clearly below the sequence of cluster members. We conclude that this is a background dwarf binary star for which the combination of spectral type, distance and reddening places the star within the band of cluster members in the  v. ${\mbox{${\rm R}_{\rm C}$}}-{\mbox{${\rm I}_{\rm C}$}}$ colour-magnitude diagram. Having applied these membership selection criteria we find that in our sample there are 64 members of , 34 members of  Group 1 and 86 members of  Group 2, excluding stars with variable radial velocities that are discussed separately below.

![\[MagMassFig\]The I-band mass-magnitude relations interpolated from the models of for the following distances and ages: d=330pc, age=10Myr ( 1); d=440pc, age=3Myr ( 2); d=400pc, age=5Myr (). ](imag2mass.eps){width="47.00000%"}

Binary stars in our sample {#BinaryNotes}
==========================

In this section we discuss the stars in our sample with variable radial velocities or variable line profiles that are members of the  or  clusters. The values of , TiO(8442), etc. for these stars are given in Table \[BinaryTable\].
The spectra of these stars in the region of the NaI doublet are shown in Fig. \[SpecFig\]. We also note in Table \[BinaryTable\] whether only one set of spectral lines is visible (SB1) or the spectra indicate the presence of two stars (SB2). A star may be an SB1 binary either because the companion is too faint to be detectable or because the lines of the stars are blended, or both. We created a set of synthetic binary star spectra using combinations of observed single star spectra over a range of luminosity ratio and radial velocity difference to estimate the lower limit to the magnitude difference in the SB1 stars. If we assume that neither star is rapidly rotating then we estimate that any companion to these stars is more than 2.5 magnitudes fainter than the primary. From the magnitude-mass relation shown in Fig. \[MagMassFig\] we estimate that the corresponding limit to the mass ratio is $q\loa 0.25$. In general, the  photometry, the values of TiO(8442), EW(NaI) and the mean RV of these stars are all consistent with membership of either the  cluster or one of the  clusters. We discuss exceptions to this general rule and any other points of interest for these stars below. [ @2004ApJ...610.1064B consider this star (LOri-CFHT 043) to be a member of the  cluster on the basis of the available R$_{\rm C}$I$_{\rm C}$JHK photometry. ]{} [ @2004ApJ...610.1064B note that the position of this star (LOri-CFHT 069) in the I$_{\rm C}$ v. I$_{\rm C}-$K$_{\rm S}$ colour-magnitude diagram is inconsistent with cluster membership. However, this is simply a result of the contribution of two stars of similar brightness to the flux at I-band in this SB2 binary (Fig. \[SpecFig\]). Correcting the I-band magnitude by 0.75 magnitudes places this star in a position entirely consistent with cluster membership in this colour-magnitude diagram.]{} [ @2004ApJ...610.1064B note the presence of H$\alpha$ emission in their low resolution spectrum of this star (LOri-CFHT 075).
The position of this star in the I$_{\rm C}-$K$_{\rm S}$ v. H$-$K$_{\rm S}$ colour-colour diagram and the I$_{\rm C}$ v. I$_{\rm C}-$K$_{\rm S}$ colour-magnitude diagram are also consistent with cluster membership and with the spectral type of M5 assigned by @2004ApJ...610.1064B so it is unclear why they note it as being a non-member on the basis of this information in their Table 2. The spectral lines for this star show rotational broadening. We compared the spectra of this star to those of a narrow-lined star of similar spectral type to which we had applied a rotational broadening function for various values of the projected rotational velocity, ${\mbox{${\rm V}_{\rm rot}\sin i$}}$. From this comparison we estimate that the projected rotational velocity of J053455.2$+$100035 is ${\mbox{${\rm V}_{\rm rot}\sin i$}}\approx 65{\mbox{${\rm km\,s}^{-1}$}}$. Only one set of spectral lines is visible in our spectra but there is an asymmetry in the CCF in the form of a blue-wing, particularly when the measured RV corresponds to a red-shift. This suggests that the fainter component in this binary is detected but unresolved in the I-band spectra. ]{} [The range in radial velocities we have observed for this star is $-6.9$ to $11.2{\mbox{${\rm km\,s}^{-1}$}}$. This is consistent with this star being a member of  Group 1 if it has a semi-amplitude $\approx 30{\mbox{${\rm km\,s}^{-1}$}}$. Members of Group 1 are approximately twice as common as members of Group 2 in the 25arcmin [giraffe]{} field used for the observations of this star. The orbital period is required to be rather short ($P\loa 1$day) to reconcile the observed radial velocities of this star with a mean velocity consistent with cluster membership and the expected mass ratio for an SB1 binary ($q \loa 0.25$). ]{} [The mean RV estimated from the spectra is consistent with membership of either  Group 1 or  Group 2.
There are approximately 10 times as many members of Group 2 as Group 1 in this field close to so we conclude that it is approximately 10 times more likely that this star is a member of Group 2 than Group 1. The values of EW(NaI) and TiO(8442) for this star were measured from the spectrum observed near conjunction. ]{} [ @2005MNRAS.356...89K detected strong lithium absorption in the spectrum of this star (KJN2005 6), which indicates that this star is younger than 20Myr, as expected for a member of the cluster. The mean RV of this star suggests it is more likely to be a member of  Group 2 than Group 1.]{} [ The mean RV measured from the three spectra in which the two components are unresolved is $26.5\pm0.9{\mbox{${\rm km\,s}^{-1}$}}$, which is close to the dividing line at 27 between Group 1 and Group 2. The proximity of this star to  makes it approximately 10 times more likely that this is a member of Group 2 than Group 1. This star (BMZ2001 SOri J053823.6$-$024132) was listed as a candidate member of the  cluster on the basis of IJHK photometry by @2004AN....325..705B. ]{} [We measured the value of EW(NaI) and TiO(8442) for this SB2 binary from the spectrum taken near conjunction, so the values represent an average value for the two stars, which are of comparable brightness. The RV measured from this spectrum is 42, which is outside the range we have defined for membership of the cluster, but it is difficult to establish whether this is an accurate estimate of the mean RV of this star. Further observations will be required to establish the true mean RV of this binary star in order to check that it is consistent with this star being a cluster member. Members of Group 1 are approximately twice as common in this field as Group 2 [@2006MNRAS.371L...6J]. For the purposes of this paper we assume that this star is a cluster member. ]{} [The mean RV of this SB1 binary and its proximity to  suggest that it is a member of Group 2.
]{} [ The range of radial velocities observed in this SB1 binary star is 4.4 to 14.5, which is consistent with membership of  if the semi-amplitude of the spectroscopic orbit is $K\approx 20{\mbox{${\rm km\,s}^{-1}$}}$. The proximity of this star to  makes it approximately 10 times more likely that this is a member of Group 2 than Group 1. ]{} In summary, we have identified 5 spectroscopic binary members of the cluster and 7 spectroscopic binary members of the  cluster. Among the  spectroscopic binaries, 5 stars are likely to be members of Group 2, 1 is likely to be a member of Group 1 and 1 star cannot be assigned to either group. Of the [12]{}  spectroscopic binary cluster members, 6 are SB2 binaries with stars of comparable brightness in the I-band. There is 1 SB1 binary with broad lines due to rotation so it is not clear whether this is a genuine SB1 binary or an unresolved SB2 binary, although there is a hint of the companion in the CCF for this star. There are 3 SB1 binaries with narrow lines for which we can say the companion is likely to be more than 2.5 magnitudes fainter than the primary so that the mass ratio is less than about 0.25. We have also identified two spectroscopic binaries that are not members of either or .

The distribution of binaries with magnitude
-------------------------------------------

It is notable that all 12 spectroscopic binary stars we have identified among the [196]{} cluster members are brighter than ${\mbox{${\rm I}_{\rm C}$}}=16.6$, and 11 are brighter than ${\mbox{${\rm I}_{\rm C}$}}= 15.25$. There are 68 cluster members brighter than ${\mbox{${\rm I}_{\rm C}$}}=15.25$ including these binary stars. At face value, this suggests a close binary fraction of about 16 percent above this limit and less than a few percent below this limit.
Of course, the sensitivity of our survey decreases for fainter stars because the signal-to-noise of the spectra is less and the orbital speeds decrease with mass for a given semi-major axis or orbital period. We have used the RV data for the 11 binary cluster members we detected from their RV variations to estimate the sensitivity of our survey to binaries of this type as a function of I-band magnitude, $p_{\rm empirical}$. We exclude J054001.0$-$021959 from this analysis because it is unclear how to simulate the method by which we detected this binary, i.e., from the variation of the width of its CCF. For every combination of variable and non-variable star in our survey we have created a synthetic RV dataset using the RV errors in the non-variable star dataset and the radial velocities of the binary star scaled as follows. We first subtract our best estimate of the systemic radial velocity for the binary. We then estimate the total mass of the binary, $m_{\rm T}$, assuming that the stars are identical for the SB2 binaries or that the companion is 2.5 magnitudes fainter than the primary for the SB1 binaries. The masses are estimated using the models of based on the  magnitude of the target. The relations between  and mass are shown in Fig. \[MagMassFig\]. We have assumed the following distances and ages for the clusters: d=330pc, age=10Myr ( 1); d=440pc, age=3Myr ( 2); d=400pc, age=5Myr (). We then repeat the calculation to find the total mass, $m_{\rm S}$, of a similar binary star with the same magnitude as the single star. We then multiply the radial velocities by $\sqrt{m_{\rm S}/m_{\rm T}}$. This is equivalent to assuming that the distribution of semi-major axis is independent of mass. We then apply the same detection criterion as before ($\log(p) < -4$) to the synthetic datasets.
In cases where $N_{\rm rv}$ is less for the single star than the binary star we use the average number of detections for all combinations of $N_{\rm rv}$ synthetic radial velocities from the combinations available. For cases where $N_{\rm rv}$ is larger for the single star than the binary star we use the $N_{\rm rv}$ measurements with the lowest RV errors to create the synthetic dataset. The number of synthetic RV data sets which satisfy our variability criterion then gives an estimate of the detection efficiency for each star, e.g., if 6 of the 11 synthetic RV data sets for a star satisfy our variability criterion, then the same observations of a binary star similar to those discovered in this survey with the same  magnitude would, on average, have detected a significant RV shift about 55 percent of the time. The values of $p_{\rm empirical}$ calculated in this way are shown in Fig. \[pdetect\] as a function of the  magnitude. The normalized cumulative distribution function for these detection efficiencies is shown as a function of  magnitude in Fig. \[KSFig\]. Also shown is the cumulative distribution function for the  magnitude of the binary cluster members excluding J054001.0$-$021959. If the binary fraction and semi-major axis distribution for binaries is independent of mass then these two distributions should be the same. It is clear that the two distributions are not the same. The Kolmogorov-Smirnov test applied to these distributions gives a 99.7 percent significance level to the difference in these distributions. There is a chance that a few of the stars we have identified as SB1 binary stars are the result of spurious RV shifts due to errors in the analysis or instrumental effects, or the result of intrinsic variability of a single star. If we take a very cautious approach and repeat this analysis using the detections of SB2 binaries only we find that the Kolmogorov-Smirnov test gives a significance level of 97.6 percent.
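The amplitude scaling at the heart of this procedure can be sketched as follows (illustrative names; the mass values would come from the adopted mass–magnitude relations, and the scaled velocities are then paired with the single star's per-epoch errors and passed through the same $\log(p) < -4$ test described in the text):

```python
import math

def scaled_synthetic_rvs(binary_rvs, v_sys, m_t, m_s):
    """Map a detected binary's RVs onto a star of different magnitude:
    subtract the systemic velocity v_sys, then scale the amplitudes by
    sqrt(m_S / m_T), which assumes the semi-major axis distribution is
    independent of mass (K scales as sqrt(M_total / a) at fixed q).
    m_t: total mass of the detected binary;
    m_s: total mass of a similar binary at the target magnitude."""
    scale = math.sqrt(m_s / m_t)
    return [(v - v_sys) * scale for v in binary_rvs]
```

For example, halving the total mass scales the RV amplitudes by $1/\sqrt{2}$, which is why the survey is less sensitive to binaries among the fainter, lower-mass stars.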
The binary fraction\[BinFracSec\]
---------------------------------

We used a Monte-Carlo simulation to calculate the probability distribution for the binary fraction given the 11 binaries we have discovered from their variable radial velocities and the detection efficiency for each star, $p_{\rm empirical}$, calculated above. The results are shown in Fig. \[BinFracFig\] for the whole sample, a ‘bright’ sample (${\mbox{${\rm I}_{\rm C}$}}< 16.9$) and a ‘faint’ sample (${\mbox{${\rm I}_{\rm C}$}}\ge 16.9$). The division here between the bright and faint samples has been chosen to correspond to the widely accepted division between low mass stars and VLM stars at a mass of 0.1[@2007prpl.conf..427B]. The mean value of $p_{\rm empirical}$ for the 145 stars in the bright sample is 0.89. For the 51 stars in the faint sample the mean value of $p_{\rm empirical}$ is 0.53. The spectroscopic binary fraction for the bright sample is $ 9.5^{+1.2}_{-2.8}$ percent. The 90 percent confidence upper limit to the spectroscopic binary fraction for the faint sample, given the assumptions above, is 7.5 percent. These figures apply to binary stars of the type discovered by our survey only, i.e., we have not attempted to apply a correction to this spectroscopic binary fraction for the binaries at longer orbital periods that are not detected by our survey. The hypothesis that $f_{\rm bright}$ and $f_{\rm faint}$ are equal can be rejected with 90 percent confidence. The value of ${\mbox{${\rm I}_{\rm C}$}}=16.9$ that is used as the dividing line between the faint and bright samples is arbitrary. It would be possible to achieve a higher level of significance by setting the limit between our bright and faint samples closer to the magnitude of the faintest binary in our sample (${\mbox{${\rm I}_{\rm C}$}}=16.5$), but this would be an equally arbitrary value and so it would be hard to justify the apparent increase in statistical significance.
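Under the model in which each star is detected as a binary with probability $f\,p_{\rm empirical}$, the probability distribution for the binary fraction $f$ can also be obtained by direct likelihood evaluation on a grid; the paper derives it by Monte-Carlo simulation, but the two are equivalent for this simple model (a sketch with a flat prior; names are illustrative):

```python
def binary_fraction_pdf(detected, p_emp, n_grid=1000):
    """Normalized posterior (flat prior) for the binary fraction f, given
    a boolean 'detected' flag and a detection efficiency p_i per star.
    Likelihood: product of f*p_i for detections, (1 - f*p_i) otherwise."""
    fs = [i / n_grid for i in range(n_grid + 1)]
    pdf = []
    for f in fs:
        like = 1.0
        for det, p in zip(detected, p_emp):
            like *= f * p if det else (1.0 - f * p)
        pdf.append(like)
    total = sum(pdf)
    return fs, [x / total for x in pdf]
```

Quantiles of this distribution give the quoted central values and confidence limits for the bright and faint samples.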
![\[pdetect\] The detection efficiency as a function of magnitude for our survey based on the radial velocities of the binaries identified from their radial velocity variations.](pdetect.eps){width="47.00000%"}

![\[KSFig\] The normalized cumulative distribution of the detection efficiency of our sample as a function of  magnitude based on the measured radial velocities of all binaries detected (solid line) or the SB2 binaries only (dashed-dotted line) compared to the normalized cumulative distribution of  magnitude for all the binaries discovered by our survey (dashed line) and the SB2 binaries discovered by our survey (dotted line).](kstest.eps){width="47.00000%"}

![\[BinFracFig\] The probability distribution for the binary fraction in our whole sample (dashed line), the ‘bright’ sample (solid lines, mass $> 0.1{\mbox{${M}_{\sun}$}}$) and the ‘faint’ sample (dashed-dotted line, mass $<0.1{\mbox{${M}_{\sun}$}}$). ](binfrac.eps){width="47.00000%"}

The detection efficiency as a function of binary separation {#method}
-----------------------------------------------------------

We have used another Monte Carlo simulation to estimate the range of semi-major axis, $a$, over which binary stars can be detected by our survey. If binarity is the only cause of variable RVs, the probability that a given target is flagged as an RV variable is given by $\epsilon_b p_{\rm detect}+(1-\epsilon_b)10^{-4}$, where $\epsilon_b$ is the overall binary fraction and $p_{\rm detect}$ is the probability that $\log p < -4$ for the object assuming that it is a binary. We have used a Monte Carlo simulation to calculate the value of $p_{\rm detect}$ as a function of semi-major axis, $a$, for every star in our sample given various assumptions about the distribution of binary properties. The simulation generates 65536 virtual binaries and predicts the RV of the more massive component at the same times of observation as the actual observations.
The eccentricity, $e$, mass ratio, $q$, and other properties of the binary star are randomly selected from the following distributions. [We have used a ‘flat’ distribution which is uniform in the range $q=0.2$–1. We did not consider the peaked mass ratio distribution we investigated in @2005MNRAS.362L..45M to be appropriate for the binaries we have discovered in this survey because that distribution is zero for $q<0.7$ whereas some of the SB1 binaries we have found must have mass ratios $q\loa 0.25$. The value of the mass ratio makes little difference to the value of $p_{\rm detect}$ in this range.]{} [We have assumed that all binaries with periods less than 10 d have circular orbits [@2005ApJ...620..970M]. Above this period, we assume that the value of $e$ is uniformly distributed in the range $e=0$–$e_{\rm max}$ where $e_{\rm max}= 0.6$. We have also performed one set of simulations with $e_{\rm max}=0$ in order to investigate the effect of assuming that all orbits are circular.]{} [We have calculated the mass of the primary star based on its  magnitude and the models of . The mass-magnitude relations shown in Fig. \[MagMassFig\] are used to find a primary star mass consistent with the observed magnitude and the mass ratio of the synthetic binary. We allow for an assumed error of 0.03 magnitudes in the observed  magnitude. We find that the choice of mass-magnitude relation for each star has a negligible effect on our results. For simplicity, we present the results assuming a distance of 330 parsec and an age of 10 Myr for all stars.]{} [The orbital phase of the binary at the date of the first observation is randomly selected from a uniform distribution in the range 0 to 1.]{} [For eccentric binaries, $\omega$ is selected from a uniform distribution in the range 0 to $2\pi$.]{} We have calibrated the extent to which blending between the components reduces the apparent amplitude of the RV variation in a spectroscopic binary. 
We selected 6 stars with a range of  magnitudes that had typical spectra for stars of that magnitude. For each pair of stars we created simulated binary star spectra in which the single star spectra were combined with the appropriate flux ratio for their magnitude difference and a range of velocity offsets between the stars. We then measured the radial velocity of the brighter star in the spectrum in the same way as we did for our actual observations. We used these results to calibrate the difference between the true RV and the apparent RV of the brighter star caused by blending with the spectrum of the fainter star. We used interpolation within the resulting table to adjust the RV of the more massive star in each simulated binary star to account for this blending. The radial velocities predicted by each trial of the simulation are each perturbed by a random value from a Gaussian distribution with the same standard deviation as the random error of the actual observations. We then estimated the range of inclinations over which the binary would be detected using the same criterion that we applied to our actual data. In the absence of blending this is a trivial calculation. In the presence of blending the value of $\chi^2$ may not be a monotonic function of inclination, $i$. We approximate the true shape of the relation between $\chi^2$ and $\sin i$ using a parabolic fit to the values at $\sin i = 0, 0.5$ and 1 and use this parabolic fit to estimate the range of $\sin i$ values over which the binary would be detected. The average values of $p_{\rm detect}$ calculated in this way for all the stars in the bright and faint samples are shown as a function of $\log a$ in Fig. \[efficiency\]. From that figure we see that assuming circular orbits does not make a large difference to the value of $p_{\rm detect}$, but that neglecting blending can lead to an overestimate of the detection efficiency by as much as 15 percent. 
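The machinery described in this and the preceding section can be condensed into a short Monte-Carlo sketch. The Python fragment below is deliberately simplified: circular orbits only, a single fixed period, no blending correction, and a brute-force draw over inclination instead of the parabolic $\chi^2(\sin i)$ fit; the epochs, noise level and primary mass are placeholder values, not our survey parameters. A trial binary is flagged when the $\chi^2$ probability of a constant-RV fit satisfies $\log p < -4$, the same criterion as our survey.

```python
import numpy as np
from scipy.stats import chi2

G = 6.674e-11          # SI
MSUN = 1.989e30        # kg
DAY = 86400.0          # s

def p_detect_circular(times_d, sigma_kms, m1_msun=0.3, period_d=10.0,
                      n_trials=2000, seed=0):
    """Fraction of simulated binaries flagged as RV variables, i.e. with
    chi-square probability of a constant-RV fit below 1e-4.  Simplified
    sketch: circular orbits, one fixed period, no blending correction."""
    rng = np.random.default_rng(seed)
    t = np.asarray(times_d, dtype=float) * DAY
    P = period_d * DAY
    n_det = 0
    for _ in range(n_trials):
        q = rng.uniform(0.2, 1.0)          # flat mass-ratio distribution
        cosi = rng.uniform(0.0, 1.0)       # isotropic orbit orientation
        sini = np.sqrt(1.0 - cosi**2)
        phase = rng.uniform(0.0, 1.0)      # orbital phase at first epoch
        m1, m2 = m1_msun * MSUN, q * m1_msun * MSUN
        # RV semi-amplitude of the primary (m/s)
        K = (2.0*np.pi*G/P)**(1.0/3.0) * m2 * sini / (m1 + m2)**(2.0/3.0)
        rv = K/1e3 * np.sin(2.0*np.pi*(t/P + phase))       # km/s
        rv += rng.normal(0.0, sigma_kms, size=t.size)      # measurement noise
        chisq = np.sum((rv - rv.mean())**2) / sigma_kms**2
        if chi2.sf(chisq, df=t.size - 1) < 1e-4:           # log p < -4
            n_det += 1
    return n_det / n_trials
```

For a short-period binary observed at a few well-spaced epochs the recovered efficiency is close to unity, falling only as the RV semi-amplitude approaches the measurement noise.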
We can also see that the detection efficiencies calculated above using the RV data of the binaries themselves give a result consistent with the values of $p_{\rm detect}$ in Fig. \[efficiency\] if the semi-major axes of these binaries are in the range $-2.5 \loa \log (a/au) \loa -1.0$. The corresponding range in orbital periods is $ 0.1\,{\rm d} \loa P \loa 30\,{\rm d}$, which is in good agreement with the likely orbital periods of the binaries we have detected. ![\[efficiency\] The average detection efficiency for our survey as a function of semi-major axis ($a$) for the bright sample (upper curves) and the faint sample (lower curves). The solid line corresponds to $e_{\rm max} = 0.6$, the dashed line to $e_{\rm max} = 0$ and the dashed-dotted line shows the effect of neglecting blending in the case $e_{\rm max} = 0.6$ for the bright sample.](efactor.eps){width="47.00000%"} Comparison with @2005MNRAS.356...89K \[kenyon\] ----------------------------------------------- We combined our radial velocity data with those of @2005MNRAS.356...89K to see if the combined data sets would yield any further spectroscopic binaries. We did not find any new binaries among the 45 stars in common between the two surveys. This is, perhaps, not surprising given the much higher radial velocity accuracy of our data compared to @2005MNRAS.356...89K. This is a result of the higher signal-to-noise, higher resolution and superior telluric subtraction achievable with the [giraffe]{} spectrograph compared with the [wyffos]{} spectra available to Kenyon et al. The mean difference in the measured radial velocity between the two sets of data is $0.72\pm 0.44\,{\mbox{${\rm km\,s}^{-1}$}}$. The star KJN2005 72 (J053739.6$-$021826) was identified as a possible spectroscopic binary by @2005MNRAS.356...89K based on an RV shift of $35 \pm 4$ between two spectra obtained on consecutive nights. 
The radial velocities for this star measured from our data are constant to within 2  over a baseline of 63 days. We conclude that the radial velocity shift measured by @2005MNRAS.356...89K for this star is likely to be spurious. Similarly, @2005MNRAS.356...89K claim an RV shift of $13\pm 6$ for the star KJN2005 74 (J053926.8$-$023656) between two spectra obtained on consecutive nights. We find the RV for this star is constant to within 2 from 3 spectra with a baseline of 61 days. @2005MNRAS.356...89K note that the star KJN2005 46 (J054000.1$-$025159) appears to be a member based on the presence of the LiI 6707Å line in the spectrum and the equivalent width of the NaI doublet, but the RV they measure ($17\pm 3$) is inconsistent with cluster membership. They suggest that this may be due to this star being a spectroscopic binary. However, we find that this star has a mean radial velocity of 30.5 which is consistent with cluster membership and is constant to within 1 from 3 spectra with a baseline of 61 days. In summary, it appears that the radial velocities measured by @2005MNRAS.356...89K sometimes show spurious shifts of approximately 10. Discussion ========== There are several examples of VLMS and BDs that are clearly spectroscopic binaries. @2006Natur.440..311S measured accurate masses and radii for the brown dwarf pair 2MASS J05352184$-$0546085 which is an eclipsing spectroscopic binary with an orbital period of 9.8 days and a total mass of 0.088 in the Orion Nebula cluster (ONC). PPl 15 in the Pleiades was the first brown dwarf confirmed by the detection of lithium and is an SB2 spectroscopic binary containing two brown dwarfs with masses of 60–70 Jupiter masses, an orbital period of 5.8 days and an eccentricity of 0.4 [@1999AJ....118.2460B]. There is no large RV survey for spectroscopic binary VLMS and BDs in the Pleiades so we do not know whether PPl 15 is representative of the binary population in this cluster. 
Similar arguments apply to 2MASS J05352184$-$0546085 and the ONC. Intriguingly, there is a very well populated binary sequence in the colour-magnitude diagram for the Pleiades [@2007arXiv0706.2234L], which suggests a binary frequency of 28–44 percent in the 0.075–0.030 mass range. Surveys for spectroscopic binaries among late-M and T dwarfs in the field and in nearby clusters have discovered a few other SB2 binaries and several stars that show RV shifts of a few  or less. @2006AJ....132..663B summarize the results of these surveys and present the results of their own survey of 53 VLMS and BDs. From their own sample they estimate a binary frequency of 11 percent in the separation range 0–6au. This binary frequency may be consistent with our results for VLM binaries (binary fraction $<7$ percent for mass $\loa 0.1$) if the distribution of $a$ is not strongly biased towards small $a$. This is reasonable given that none of the binaries detected by @2006AJ....132..663B were SB2 binaries and that the typical radial velocity shifts detected were small (few ). It is much harder to interpret the results of previous surveys summarized by @2006AJ....132..663B, not only because of the small number of stars observed but also because there is a bias in these surveys due to preferentially observing brighter stars or imposing a magnitude limit. This tends to increase the number of binaries in these samples, particularly SB2 binaries. Our survey is much less strongly affected by this type of bias. Some SB2 stars will be missing from our sample because they exceed our bright magnitude cut-off. The lack of faint binaries in our sample means there is no compensating gain in SB2 binaries at the faint end of the sample. This bias can only increase the statistical significance of the change in binary properties near ${\mbox{${\rm I}_{\rm C}$}}\approx 16.9$ that we have discovered. 
It is difficult to estimate the compensating bias introduced by excluding stars below the cluster sequence in the  v. ${\mbox{${\rm R}_{\rm C}$}}-{\mbox{${\rm I}_{\rm C}$}}$ colour-magnitude diagram but this effect is expected to be small. It is also harder to characterize the sources of noise in small surveys. This is apparent from the results of @2005MNRAS.356...89K in which the few RV shifts they measured of about 10  were ascribed to binary motion, both by @2005MNRAS.356...89K and by @2005MNRAS.362L..45M. In that case it appears that there is some problem with obtaining reliable RV measurements from spectra affected by telluric absorption with low resolution and low signal-to-noise. A similar problem applies to the interpretation of radial velocity shifts $\loa 1\,{\mbox{${\rm km\,s}^{-1}$}}$, even for spectra of the highest quality. At this level, line profile variations due to chromospheric activity and star spots (“jitter”) may become important, but the extent of this effect has not been well characterized. Our survey may also suffer from this problem in the case of some of the SB1 binaries we have detected. For stars with narrow lines and large RV shifts such as J053805.6$-$024019 this is unlikely to be an issue. For example, @2007arXiv0710.2437J have shown that the star   is a binary star with an eccentric orbit ($e=0.49$) and a low mass companion in a wide orbit ($a\approx$ 1 au). The semi-amplitude of the orbit is low ($K=1.6{\mbox{${\rm km\,s}^{-1}$}}$) but can be measured reliably because the level of jitter in this young, narrow-lined VLM star is much less than $ 1\,{\mbox{${\rm km\,s}^{-1}$}}$. It is less certain that stars like J053455.2$+$100035 with broad spectral lines and small RV shifts are genuine spectroscopic binaries. 
One mitigating factor is that there are several other stars in our survey with broad lines that are not detected as binary stars, so it does appear that we can measure radial velocities accurate to about 1 in this type of star. Nevertheless, it remains to be established that any of the single-lined stars showing RV shifts from this survey, the survey by @2006AJ....132..663B or the survey by are genuine SB1 spectroscopic binaries with Keplerian orbits. Note, however, that the statistical significance of the change in binary properties for very low mass stars that we have noted in this survey remains high ($>97$ percent) even if we consider only the SB2 binaries. It is harder to detect binary stars among rapidly rotating stars because the RV measurements from broader CCFs are less precise than for narrow CCFs. This does not affect our conclusion that there is a change in binary properties for very low mass stars compared to low-mass stars because rapidly rotating stars are evenly distributed in magnitude within our sample and the standard errors of the radial velocities we assign to these stars accurately account for the effects of line broadening. A rapid change in close binary fraction at a given magnitude is only an approximation to what is, in reality, likely to be a gradual change in binary properties with mass. We do not have enough data to be able to say with any accuracy at which mass this change occurs or whether the change occurs over a small or large range of masses. This is why we have presented results with the sample divided into bright and faint sub-samples at ${\mbox{${\rm I}_{\rm C}$}}=16.9$, a magnitude that corresponds to the commonly accepted dividing line between VLMS/BDs and low-mass stars at 0.1. With only 11 binaries and 3 or 4 RV measurements per star we cannot say a great deal about the distribution of the binary properties in low mass stars with the data available so far. 
However, we do have estimates of the mass ratio in these binaries. The mass ratio is clearly $q\approx 1$ in the case of the SB2 binaries with nearly equal components (“twins”). For the stars where no companion is visible and the spectral lines are narrow we estimate that the companion is likely to be at least 2.5 magnitudes fainter than the primary and so $q\loa 0.25$. The star J053455.2$+$100035 lies somewhere between these extremes. The overall picture, then, is of a peak in the mass ratio distribution near $q=1$ due to twins overlaid on a broader distribution increasing towards small values of $q$. This is qualitatively similar to the mass ratio distribution found for low mass Population I stars by @2003ApJ...591..397G and the mass ratio distribution measured by for solar-type stars in the solar neighbourhood, Pleiades and Praesepe. @2003ApJ...591..397G find that the peak in the $q$ distribution due to twins is not present for Pop I binaries more massive than 0.67 or for halo stars. find that the peak for $q > 0.8$ gradually decreases when long-period binaries are considered. We have estimated the number of binaries we would detect if the stars brighter than =16.9 (mass $\approx 0.1{\mbox{${M}_{\sun}$}}$) in our sample have the same binary frequency and orbital period distribution as nearby M-dwarfs. We used the detection efficiency for each star as a function of $\log P$ convolved with the Gaussian distribution for $\log P$ from together with the binary fraction for M-dwarfs of 42 percent from @1992ApJ...396..178F to find that we would have detected 6.5 spectroscopic binaries on average in our survey given these assumptions. At face value, this suggests that the frequency of short period binaries among low mass stars in  and  is slightly larger than that for nearby M-dwarfs. However, the assumption that the distribution of $\log P$ established by for solar-type stars is appropriate for low mass stars may not be correct. 
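The expected-number estimate above can be sketched in a few lines of Python. The Gaussian $\log P$ parameters used below are the commonly quoted solar-type values (mean $\overline{\log P}=4.8$, $\sigma_{\log P}=2.3$, with $P$ in days) and 42 percent is the M-dwarf binary fraction quoted above; the efficiency grid passed in is a made-up placeholder, so the number returned is illustrative only.

```python
import numpy as np

def expected_detections(p_detect_grid, logP_grid, f_bin=0.42,
                        mu_logP=4.8, sigma_logP=2.3):
    """Expected number of detected spectroscopic binaries: convolve each
    star's detection efficiency as a function of log10(P/days) with a
    Gaussian log-period distribution, then sum f_bin * <p_detect> over
    stars.  p_detect_grid has shape (n_stars, n_logP)."""
    dlogP = logP_grid[1] - logP_grid[0]
    phi = np.exp(-0.5 * ((logP_grid - mu_logP) / sigma_logP)**2)
    phi /= phi.sum() * dlogP                          # normalize on the grid
    mean_p = (p_detect_grid * phi).sum(axis=1) * dlogP  # <p_detect> per star
    return f_bin * mean_p.sum()
```

For example, 145 stars that are perfectly efficient below $\log P \approx 1.5$ and blind above it would yield a few expected detections under these assumptions, the same order as the 6.5 quoted above.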
@1992ApJ...396..178F find that the period distribution for M-dwarf binaries is similar to that for solar type stars but there is only one short period M-dwarf binary in the sample so the period distribution is poorly characterized at the short period end. note that @1985ib...proc....1G measured a much higher frequency of short period binaries for solar type stars in the Hyades than is seen in their field star sample. The same effect was seen to a lesser extent by , so both environment and primary mass may be important factors in determining the frequency of short period binaries in our sample. We do not have sufficient data in our survey to measure the orbital periods and semi-amplitudes of the spectroscopic binaries we have detected. Given the uncertainty in the orbital period distribution of low mass stars it would clearly be worthwhile to obtain complete orbits for the binary stars we have identified. The maximum separations of the lines observed in the SB2 binaries and the timescale and amplitude of the RV variations for the SB1 binary stars suggest that the orbital periods of these binaries are likely to be in the range from several hours to several days. An anonymous referee has raised the possibility that the difference in binary fraction we have observed in the bright and faint samples is due to a distribution of $\log a$ strongly biased towards $\log(a/au) \goa -0.5$. It can be seen from Fig. \[efficiency\] that we have almost no sensitivity to binaries within the faint sample but good sensitivity to binaries within the bright sample in this $\log a$ range. In principle, it may be possible to find a distribution of $\log a$ that can explain the numbers of binaries detected in the bright and faint samples for a single value of the binary fraction. However, such a distribution would be incompatible with the observation that several of the binaries we have discovered have short orbital periods corresponding to values of $\log (a/au) \ll -0.5$. 
If it were the case that the binaries we have detected are biased towards $\log(a/au) \goa -0.5$ then the values of $p_{\rm empirical}$ we calculated for stars in the faint sample would have been very small. In this case, the probability distribution for the binary fraction calculated in section \[BinFracSec\] would have been consistent with the hypothesis $f_{\rm faint} = f_{\rm bright}$. In fact, this hypothesis can be rejected with 90 percent confidence, so the binaries we have detected are not biased towards $\log(a/au) \goa -0.5$. The lack of binaries in our sample with masses $\loa 0.1$ is consistent with the overall picture established from existing surveys that the binary fraction for VLMS and BDs across the full range of orbital separations is 20–25 percent (@2007ApJ...668..492A; @2006AJ....132..663B). However, the binary properties of VLMS and BDs are poorly characterized in the range $a \loa 2\,{\rm au}$ that is below the detection limits of high angular resolution imaging. Some VLMS and BDs show apparent RV shifts comparable to the orbital speeds expected at these separations, and a complete spectroscopic orbit has now been published for one such star (, @2007arXiv0710.2437J). It is still possible given all the observations to date that there is a large population of VLMS/BD binaries with $a\approx 1$ au. If such a population exists then it may be possible to reconcile the binary fraction in existing surveys with the high binary fraction inferred for the Pleiades by @2007arXiv0706.2234L. The spectra we have used for this study were obtained with the aim of measuring the binary properties of the low mass stars and brown dwarfs in the  and  clusters but there is clearly useful information to be obtained about the properties of the clusters themselves from the spectra we have obtained. For example, the distribution of EW(NaI) shown in Fig. 
\[EWFig\] clearly shows that the stars in  have higher surface gravities on average than stars in Group 2 of the  cluster. This shows that the  cluster is older than Group 2 of the  cluster and, perhaps, has a similar age to Group 1 of the  cluster. This is in general agreement with the ages we have adopted for these populations. A more detailed analysis is beyond the scope of this paper. Conclusions =========== We have conducted a large radial velocity survey for spectroscopic binary stars among low mass stars and brown dwarfs in the young clusters around  and . We have identified [196]{} members of these clusters based on their radial velocity and spectral type. Of these, 6 are SB2 binaries and 6 are SB1 binaries. All the spectroscopic binaries we have detected are brighter than =16.6 (mass $\approx$ 0.12). We conclude that the frequency of spectroscopic binaries in these clusters among very low mass stars (mass $<0.1{\mbox{${M}_{\sun}$}}$) and brown dwarfs is significantly lower ($<7.5$ percent) than that for more massive stars ($9\pm 2$ percent). The change in binary properties with mass that we have discovered may be due to a change in the total binary frequency with mass or a change in the period distribution of the binaries with mass or both. The number of SB2 binaries in this sample suggests there may be a peak in the mass ratio distribution for spectroscopic binaries in these clusters near $q\approx 1$. There is also clear evidence from the properties of the SB1 binaries that the mass ratio distribution for spectroscopic binaries in these clusters is broad, extending down to $q\approx 0.25$. Acknowledgments {#acknowledgments .unnumbered} =============== Based on observations collected at the European Southern Observatory, Chile (Programme ID: 076.C\_145). RJJ was supported by a Nuffield Undergraduate Research Bursary. This research is partially funded by a Science & Technology Facilities Council research grant (formerly PPARC). 
We thank the referee for comments that helped to improve the clarity of this paper. \[lastpage\] [^1]: Hint of companion in CCF. [^2]: Identified from variation in width of CCF.
--- abstract: 'We consider the N-body Schrödinger dynamics of bosons in the mean field limit with a bounded pair-interaction potential. According to the previous work [@AmNi], the mean field limit is translated into a semiclassical problem with a small parameter $\varepsilon\to 0$, after introducing an $\varepsilon$-dependent bosonic quantization. The limit is expressed as a push-forward by a nonlinear flow (e.g. Hartree) of the associated Wigner measures. These objects and their basic properties were introduced in [@AmNi] in the infinite dimensional setting. The additional result presented here states that the transport by the nonlinear flow holds for a rather general class of quantum states in their mean field limit.' author: - 'Z. Ammari[^1] F. Nier [^2]' title: Mean field limit for bosons and propagation of Wigner measures --- [: 81S30, 81S05, 81T10, 35Q55 ]{} Introduction ============ The mathematical analysis of the mean field limit of the $N$-body quantum dynamics of bosons started with the work of [@Hep] and [@GiVe]. Since then, the problem has been intensively investigated, mainly using the so-called BBGKY hierarchy method explained in [@Spo]. Interest was focused on studying the cases of singular interaction potentials (see for example [@BGM], [@EY], [@BEGMY], [@ESY]). Recently, a new method was given in [@FGS] (see also [@FKP]) for a scalar bounded potential, which inspires this work. The convergence of the quantum dynamics is typically tested, in the above quoted articles, either on coherent states or on Hermite states. Even when such specific choices are avoided, the convergence on arbitrary states still has to be studied. In the work [@AmNi], Wigner measures were extended to the infinite dimensional setting, as Borel probability measures under general assumptions. 
It was also explained how previous weak formulations of the mean field limit are contained in the definition of these asymptotic Wigner measures, after a reformulation of the $N$-body problem as a semiclassical problem with the small parameter $\varepsilon=\frac{1}{N}\to 0$. The basic properties of these Wigner measures were considered and they were used to check that the mean field dynamics for the coherent states and Hermite states are essentially equivalent. In this paper, the problem of the mean field dynamics is considered under some restrictive assumptions on the initial data. The convergence of the N-body Schrödinger dynamics of bosons in the mean field limit will be proved for a class of density operator sequences, which contains all the common examples. Remember that, contrary to the finite dimensional case, no natural pseudodifferential calculus can be deformed by arbitrary nonlinear flows, and the propagation of Wigner measures as dual objects cannot be straightforward in the infinite dimensional case. The limit is expressed as a push-forward by a nonlinear flow (e.g. Hartree) of Wigner measures associated with the sequence of density operators. The result holds here when the pair interaction potential is bounded on $L^{2}(\mathbb{R}^{2d}_{x,y})$. This can be considered as a regular case and subsequent work will be devoted to more singular cases like in [@FKS] with a Coulombic interaction $V(x-y)=\frac{1}{|x-y|}$ or in the derivation of cubic nonlinear Schrödinger equations with $V(x-y)=\delta(x-y)$ like in [@ESY]. Since in the literature the non-relativistic and the semi-relativistic dynamics of bosons were both studied (see [@ElSc]), an abstract setting for the linear part of the flow seems relevant. Examples are reviewed in the last section. We keep the same notations as in [@AmNi]. The phase-space, a complex separable Hilbert space, is denoted by $\mathcal{Z}$ with the scalar product $\la .,.\ra$. 
The symmetric Fock space on $\Z$ is denoted by $\H$ and $\bigvee^n \mathcal{Z}$ is the $n$-fold symmetric (Hilbert) tensor product, so that $\H=\oplus_{n\in \mathbb{N}} \bigvee^n \mathcal{Z}$ as a Hilbert direct sum. Algebraic direct sums or tensor products are denoted with an ${\rm alg}$ superscript. Hence $\H_0=\mathop{\oplus}_{n\in \mathbb{N}}^{alg}\bigvee^n \mathcal{Z}$ denotes the subspace of vectors with a finite number of particles. For any $p,q\in \mathbb{N}$, the space $\P_{p,q}(\Z)$ of complex-valued polynomials on $\Z$ is defined with the following continuity condition: $b\in\P_{p,q}(\Z)$ iff there exists a unique $\tilde b\in\L(\bigvee^p\Z,\bigvee^q\Z)$ such that: $$b(z)=\la z^{\otimes q}, \tilde{b} z^{\otimes p}\ra\,.$$ The subspace of $\P_{p,q}(\Z)$ made of polynomials $b$ such that $\tilde{b}$ is a compact operator is denoted by $\mathcal{P}^{\infty}_{p,q}(\Z)$. The [*Wick monomial*]{} of symbol $b\in \P_{p,q}(\Z)$ is the linear operator $b^{Wick}:\H_0\to\H_0$ defined as follows: $$\begin{aligned} b^{Wick}_{|\bigvee^n \Z}=1_{[p,+\infty)}(n)\frac{\sqrt{n! (n+q-p)!}}{(n-p)!} \;\hbarr^{\frac{p+q}{2}} \;\S_{n-p+q}\left(\tilde{b}\otimes I_{\bigvee^{n-p} \Z}\right)\,,\end{aligned}$$ where $\S_{n}$ is the symmetrization orthogonal projection from $\otimes^{n}\Z$ onto $\bigvee^n\Z$. Remark that $b^{Wick}$ depends on the scaling parameter $\hbarr$. Consider a polynomial $Q\in\P_{2,2}(\Z)$ such that $\tilde{Q}\in\L(\bigvee^2\Z)$ is bounded symmetric. The many-body quantum Hamiltonian of bosons is a self-adjoint operator on $\H$ having the general shape: $$\begin{aligned} \label{hamiltonian} H_\hbarr=\d\Gamma(A)+ Q^{Wick},\end{aligned}$$ where $A$ is a given self-adjoint operator on $\Z$. The time evolution of the quantum system is given by $U_{\hbarr}(t)=e^{-i\frac{t}{\hbarr} H_{\hbarr}}$ and $U^0_{\hbarr}(t)= e^{-i\frac{t}{\hbarr} \d\Gamma(A)}$ for the free motion. 
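As a quick consistency check of the $\hbarr$-scaling (our verification, not a statement taken from [@AmNi]), specialize the Wick formula to $p=q=1$: $$b(z)=\la z,\tilde{b} z\ra \;\Longrightarrow\; b^{Wick}_{|\bigvee^n \Z}=\frac{\sqrt{n!\, n!}}{(n-1)!}\;\hbarr\;\S_{n}\left(\tilde{b}\otimes I_{\bigvee^{n-1} \Z}\right) =\hbarr\, n\;\S_{n}\left(\tilde{b}\otimes I_{\bigvee^{n-1} \Z}\right)\,,$$ i.e. the $\hbarr$-dependent second quantization of $\tilde{b}$. In particular $b(z)=|z|^{2}$ gives an operator acting as $\hbarr\, n$ on $\bigvee^{n}\Z$, which is precisely the normalization that turns the $N$-body problem with $\varepsilon=\frac{1}{N}$ into a semiclassical one. 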
The commutation $[Q^{Wick}, N]=0$ with the number operator $N=\d\Gamma(1)=\left(|z|^{2}\right)^{Wick}$, ensures the essential self-adjointness of $H_{\hbarr}$ on $\D(\d\Gamma(A))\cap\H_0$ and the fact that both dynamics preserve the number. Now we turn to the description of the nonlinear classical dynamics analogues of (\[hamiltonian\]).\ Let us first recall some notations from [@AmNi]. Polynomials in $\P_{p,q}(\Z)$ admit Fréchet differentials. For $b\in\P_{p,q}(\Z)$, set $$\begin{aligned} \partial_{\overline{z}} b(z)[u]=\bar\partial_r b(z+ r u)_{|r=0} ,&& \partial_{z} b(z)[u]= \partial_{r} b(z+ r u)_{|r=0}\,,\end{aligned}$$ where $\bar\partial_r, \partial_r$ are the usual derivatives over $\mathbb{C}$. Moreover, $\partial_{z}^{k}b(z)$ naturally belongs to $(\bigvee^{k}\Z)^{*}$ (i.e.: $k$-linear symmetric functionals) while $ \partial_{\overline{z}}^{j}b(z)$ is identified via the scalar product with an element of $\bigvee^{j}\Z$, for any fixed $z\in \Z$. For $b_{i}\in \P_{p_{i},q_{i}}(\Z)$, $i=1,2$ and $k\in\mathbb{N}$, set $$\partial_z^k b_1 \; .\;\partial_{\bar z}^k b_2 (z) =\la \partial_z^k b_1(z), \partial_{\bar z}^k b_2(z)\ra_{(\bigvee^k \Z)^{*},\bigvee^{k}\Z}\; \in\P_{p_1+p_2-k, q_1+q_2-k}(\Z)\quad.$$ The multiple [*Poisson brackets*]{} are defined by $$\begin{aligned} \{b_1,b_2\}^{(k)}=\partial^k_z b_1 .\partial^k_{\bar z} b_2 -\; \partial^k_z b_2 .\partial^k_{\bar z} b_1, && \{b_1,b_2\}=\{b_1,b_2\}^{(1)}.\end{aligned}$$ The energy functional $$\label{eq.enfunct} h(z)=\la z,Az\ra+ Q(z)\,,\;\;\;z\in\D(A),$$ has the associated vector field $X:\D(A)\to \Z$, $ X(z)=Az+\partial_{\bar z} Q(z)$ and the nonlinear field equation $$\begin{aligned} \label{hartree} i \partial_t z_t=X(z_t)\end{aligned}$$ with initial condition $z_0=z\in \D(A)$. 
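For orientation, we record the standard bounded-potential Hartree example announced in the abstract (examples are reviewed in the last section; the identities below are a routine computation under the assumption that $V$ is even and bounded): $$\Z=L^{2}(\mathbb{R}^{d})\,,\qquad A=-\Delta\,,\qquad Q(z)=\frac{1}{2}\int_{\mathbb{R}^{2d}} V(x-y)\,|z(x)|^{2}\,|z(y)|^{2}\, dx\, dy\,,$$ for which $\partial_{\bar z} Q(z)=(V*|z|^{2})\,z$, so that (\[hartree\]) becomes the Hartree equation $i\partial_{t} z_{t}=-\Delta z_{t}+(V*|z_{t}|^{2})\, z_{t}$. 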
For our purpose, we only need the integral form of the latter equation $$\begin{aligned} \label{hartree.int} z_t=e^{-i t A} z-i\int_0^t e^{-i (t-s) A} \,\; \partial_{\bar z}Q(z_s) \, ds, \;\mbox{ for } \;z\in\Z.\end{aligned}$$ The standard fixed point argument implies that (\[hartree.int\]) admits a unique global $C^0$-flow on $\Z$ which is denoted by $\mathbf{F}:{{\mathbb{R}}}\times \Z\to \Z$ (i.e.: $\mathbf{F}$ is a $C^0$-map satisfying $\mathbf{F}_{t+s}(z)= \mathbf{F}_{t}\circ\mathbf{F}_{s}(z)$ and $\mathbf{F}_t(z)$ solves (\[hartree.int\]) for any $z\in\Z$). While considering the evolution of the Wick symbols, the action of the free flow $e^{-itA}$ will be summarized by the following notation: $$\label{eq.wickfree} b_{t}=b\circ e^{-itA}~:~\Z\ni z \mapsto b_{t}(z)=b(e^{-itA}z)\,, \qquad b_{t}\in\oplus^{\rm alg}_{p,q\in{{\mathbb{N}}}} \P_{p,q}(\Z)\,,$$ for any $b\in\oplus^{\rm alg}_{p,q\in{{\mathbb{N}}}} \P_{p,q}(\Z)$ and any $t\in {{\mathbb{R}}}$.\ Moreover, if $z_t$ solves $(\ref{hartree.int})$, and $Q_{t}$ is defined according to (\[eq.wickfree\]), then $w_t=e^{i tA} z_t$ solves the differential equation $$\frac{d}{dt}\, w_t=-i \partial_{\bar z} Q_t(w_t)\,.$$ Therefore for any $b\in\P_{p,q}(\Z)$, the following identity holds $$\begin{aligned} \frac{d}{dt}\, b(w_t)&=&\partial_{\bar z} b(w_t)[-i \partial_{\bar z}Q_t(w_t)]+ \partial_{z} b(w_t)[-i \partial_{\bar z}Q_t(w_t)]\\ &=&i \{Q_t,b\}(w_t)\,.\end{aligned}$$ This yields for any $z\in \Z$ and $b\in\oplus^{\rm alg}_{p,q\in{{\mathbb{N}}}} \P_{p,q}(\Z)$, the Duhamel formula $$\begin{aligned} \label{class-integ-form} b\circ\mathbf{F}_{t}(z)=b_t(z)+i\int_{0}^{t}\{Q,b_{t-t_1}\}\circ\mathbf{F}_{t_1}(z)\; dt_{1}\,,\end{aligned}$$ by observing that $\{Q_{t_1},b\}(w_{t_1})=\{Q,b_{-t_1}\}(z_{t_1})$. 
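The contraction argument behind the global flow can be illustrated numerically. The Python sketch below runs the Picard iteration for the integral equation in a finite-dimensional toy model ($\Z=\mathbb{C}^{3}$, a toy energy $Q(z)=\frac{1}{2}\la z,Bz\ra^{2}$ rather than a Hartree nonlinearity; all matrices and the initial datum are made up), and the converged iterate conserves $\|z_t\|$, as the true flow must.

```python
import numpy as np

def picard_flow(A, B, z0, T=0.5, n_t=100, n_iter=15):
    """Picard iteration for z_t = e^{-itA} z0 - i int_0^t e^{-i(t-s)A} f(z_s) ds
    with the toy energy Q(z) = (1/2) <z, Bz>^2, so f(z) = <z, Bz> B z.
    A, B Hermitian; the linear part is treated exactly via the spectral
    theorem, the Duhamel integral by the trapezoid rule."""
    w, V = np.linalg.eigh(A)
    U = lambda t: (V * np.exp(-1j * t * w)) @ V.conj().T   # e^{-itA}
    ts = np.linspace(0.0, T, n_t)
    dt = ts[1] - ts[0]
    z = np.array([U(t) @ z0 for t in ts])      # 0th iterate: free evolution
    f = lambda zs: np.real(np.vdot(zs, B @ zs)) * (B @ zs)
    for _ in range(n_iter):
        fz = np.array([f(zs) for zs in z])
        z_new = np.empty_like(z)
        for k, t in enumerate(ts):
            g = np.array([U(t - s) @ fz[j] for j, s in enumerate(ts[:k+1])])
            integral = (np.zeros_like(z0) if k == 0 else
                        dt * (g[0]/2 + g[1:-1].sum(axis=0) + g[-1]/2))
            z_new[k] = U(t) @ z0 - 1j * integral
        z = z_new
    return ts, z

# toy data (illustrative only)
rng = np.random.default_rng(0)
A = np.diag([0.0, 1.0, 2.0])
B = 0.2 * np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
z0 = rng.normal(size=3) + 1j * rng.normal(size=3)
z0 /= np.linalg.norm(z0)
ts, zt = picard_flow(A, B, z0)
```

Since $T\cdot{\rm Lip}(f)$ is small here, a handful of iterations already reproduce the fixed point to numerical accuracy.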
Results ======= While introducing or using Wigner measures, all the arguments are carried out with extracted sequences (or subsequences) $(\varepsilon_{n})_{n\in {{\mathbb{N}}}}$ such that $\lim_{n\to \infty}\varepsilon_{n}=0$, instead of considering a non countable range $(0, \overline\hbarr)$, $\overline\hbarr>0$, of values for the small parameter $\varepsilon$. Without loss of generality (see [@AmNi]) one can consider a countable family $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$ of density matrices, $\varrho_{\varepsilon_{n}}\geq 0$, $\Tr\left[\varrho_{\varepsilon_{n}}\right]=1$, and test them with $\varepsilon_{n}$-quantized (Wick, Weyl or anti-Wick) observables before taking the limit $\varepsilon_{n}\to 0$. For the sake of conciseness, the $\varepsilon$ or $\varepsilon_{n}$ parameter does not appear in the notations of quantized observables. The first condition which characterizes our class of $\varepsilon_n$-dependent density matrices reads: $$\exists \lambda >0 \;:\; \forall k\in{{\mathbb{N}}}, \;\;{\rm Tr}[ N^k \rho_{\hbarr_n}]\leq \lambda^k \; \; \mbox{ uniformly in }\, n\in{{\mathbb{N}}}\,, (N=N_{\varepsilon_{n}})\,. \hspace{.4in} (H0)$$ Wigner measures were constructed in [@AmNi Corollary 6.14] for the sequence $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$. Possibly extracting a subsequence still denoted $(\varepsilon_{n})_{n\in{{\mathbb{N}}}}$, there exists a Borel probability measure $\mu$ called [*Wigner measure*]{} such that: $$\begin{aligned} \label{wigner} \lim_{\hbarr_{n}\to 0} \Tr[\rho_{\hbarr_{n}} \, b^{Wick}]=\int_{\Z} b(z) \; d\mu(z)\,, \mbox{ for any } \,b\in \oplus_{\alpha,\beta\in{{\mathbb{N}}}}^{\rm alg}\P^\infty_{\alpha,\beta}(\Z)\,,\end{aligned}$$ with again $b^{Wick}=b^{Wick}_{\varepsilon_{n}}$.\ The statement (\[wigner\]) does not hold in general for all $b\in \oplus_{\alpha,\beta\in{{\mathbb{N}}}}^{\rm alg}\P_{\alpha,\beta}(\Z)$ and counterexamples exhibiting the phenomenon of dimensional defect of compactness were given in [@AmNi]. 
The extension of (\[wigner\]) to the larger class of symbols $\oplus_{\alpha,\beta\in{{\mathbb{N}}}}^{\rm alg}\P_{\alpha,\beta}(\Z)$ depends on the sequence $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$, and it turns out to be an important fact when studying the mean field limit. In the following, a sequence $(\rho_{\hbarr_{n}})_{n\in{{\mathbb{N}}}}$ with a single Wigner measure $\mu$ is said to have the property $(P)$ when: $$\begin{aligned} \lim_{\hbarr_n\to 0} \Tr[\rho_{\hbarr_{n}} \, b^{Wick}]=\int_{\Z} b(z) \; d\mu(z)\,, \mbox{ for any } \;b\in \oplus_{\alpha,\beta\in{{\mathbb{N}}}}^{\rm alg}\P_{\alpha,\beta}(\Z)\,. \hspace{.2in} (P)\end{aligned}$$ Here is the main theorem. \[main-1\] Let the sequence $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$ of density matrices, $\varrho_{\varepsilon_{n}}\geq 0$, $\Tr\left[\varrho_{\varepsilon_{n}}\right]=1$, $\lim_{n\to\infty}\varepsilon_{n}=0$, satisfy $(H0)$ and $(P)$. Then the limit $$\label{formula-m} \lim_{n\to \infty} {\rm Tr}\left[\rho_{\varepsilon_{n}}\, e^{i\frac{t}{\varepsilon_{n}} H_{\varepsilon_n}}\, b^{Wick}\, e^{-i\frac{t}{\varepsilon_{n}} H_{\varepsilon_n}}\right]= \int_{\Z}(b\circ \mathbf{F}_t)(z)\; d\mu\,,$$ holds for any $t\in{{\mathbb{R}}}$ and any $\,b\in\oplus_{\alpha,\beta\in{{\mathbb{N}}}}^{\rm alg}\P_{\alpha,\beta}(\Z)$ with $b^{Wick}=b^{Wick}_{\varepsilon_{n}}$. \[trans-measure\] Since $\mathbf{F}$ is a $C^0$-map, the r.h.s. of (\[formula-m\]) can be written as $$\int_{\Z}(b\circ \mathbf{F}_t)(z)\; d\mu=\int_{\Z}b(z)\; d\mu_t\,,$$ where $\mu_t$ is a push-forward measure defined by $\mu_t(B)=\mu(\mathbf{F}_{-t}(B))$, for any Borel set $B$. We refer the reader to [@AmNi] for the definition of Weyl observables and the Schwartz class of cylindrical functions $\S_{cyl}(\Z)$. \[wigner-measure-id\] Let the sequence $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$ of density matrices, $\varrho_{\varepsilon_{n}}\geq 0$, $\Tr\left[\varrho_{\varepsilon_{n}}\right]=1$, $\lim_{n\to\infty}\varepsilon_{n}=0$, satisfy $(H0)$ and $(P)$.
Then the limit $$\label{formula-weyl} \lim_{\hbarr_n \to 0} {\rm Tr}\left[\rho_{\hbarr_{n}}\, e^{i\frac{t}{\varepsilon_{n}} H_{\varepsilon_n}}\, b^{Weyl}\, e^{-i\frac{t}{\varepsilon_{n}} H_{\varepsilon_n}}\right]= \int_{\Z}b\circ\mathbf{F}_t(z)\; d\mu\,,$$ holds for any $b\in\S_{cyl}(\Z)$ and any $t\in{{\mathbb{R}}}$. A consequence of Thm. \[main-1\] and [@AmNi Prop. 6.15] is that the sequence $$\rho_{\hbarr_{n}}(t) =U_{\varepsilon_n}(t) \rho_{\hbarr_{n}} U_{\varepsilon_n}(t)^*$$ admits a single Wigner measure given by $\mu_t$. Hence, by definition, $$\begin{aligned} \lim_{\hbarr_n \to 0} {\rm Tr}[\rho_{\hbarr_{n}}(t)\, b^{Weyl}]&=& \lim_{\hbarr_n \to 0} \int_{p\Z}\mathcal{F}[b](\xi)\;{\rm Tr}\left[\rho_{\hbarr_{n}}(t)\, W(\sqrt{2}\pi\xi)\right]\, L_p(d\xi)\\ &=& \int_{p\Z}\mathcal{F}[b](\xi)\int_{\Z}e^{2i\pi{\rm Re}\la z,\xi\ra}\, d\mu_t(z)\,L_p(d\xi)\,.\end{aligned}$$  \ Another formulation states that the Wigner measure $\mu_t$ satisfies a transport equation in an integral form. Let $(\rho_{\hbarr_n}(t))_{n\in{{\mathbb{N}}}}$ be as above and let $\mu_t$ denote its Wigner measure. Then $t\in\mathbb{R}\mapsto\mu_t$ is a solution to the transport equation: $$\label{transport.int} \mu_t(b)=\mu^0_t(b)+ i\int_0^t \mu_s(\{Q,b_{t-s}\})\,ds\,,$$ for any $b\in\oplus_{p,q\in{{\mathbb{N}}}}^{\rm alg}\P_{p,q}(\Z)$ and where $\mu_t^0(B)=\mu(e^{-it A} B)$ for any Borel set $B\,$. The relation (\[transport.int\]) is obtained by testing (\[class-integ-form\]) against $\mu=\mu_{0}$. Criteria for the property $(P)$ =============================== In the following, two conditions which ensure the property $(P)$ are formulated. Recall that for any $P\in {\cal L}(\Z)$ the operator $\Gamma(P)$ acting on $\H$ is defined by $$\Gamma(P)_{|\bigvee^{n}\Z}=P\otimes P\cdots\otimes P\,$$ and $\Gamma(P)$ is an orthogonal projector if $P$ is too.
The first criterion is a ’tightness’ assumption with respect to the trace norm of the state $$\forall \eta>0, \exists P\in\L(\Z) \mbox{ finite rank orthogonal projector }, \; \forall n\in{{\mathbb{N}}}: \;{\rm Tr}[ (1-\Gamma(P)) \rho_{\hbarr_n}] < \eta \; \;\;(T)\,.$$ The dual version is formulated as an equicontinuity assumption with respect to the Wick symbols: $$\forall p,q\in{{\mathbb{N}}},\, \forall\eta>0, \exists \W_0\subset\L(\bigvee^p\Z,\bigvee^q\Z) \;\; \forall \tilde b\in\W_0, \forall n\in{{\mathbb{N}}}\;:\; \left|{\rm Tr}[\rho_{\hbarr_n} b^{Wick}]\right|<\eta \,, \;\;\; (E)$$ where $\W_0$ is a neighborhood of zero in $\L(\bigvee^p\Z,\bigvee^q \Z)$ endowed with the $\sigma$-weak topology. Assume that $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$ satisfies $(H0)$. Then\ (i) $(T)\Rightarrow (P)$,\ (ii) $(E)\Rightarrow (P)$. We aim to prove $(P)$ for $b\in\P_{p,q}(\Z)$.\ (i) Start with $$\begin{aligned} {\rm Tr}[\rho_{\hbarr_n}\, b^{Wick}]&=&{\rm Tr}[\rho_{\hbarr_n}\, \Gamma(P)b^{Wick}\Gamma(P)]+{\rm Tr}[\rho_{\hbarr_n}\, (1-\Gamma(P))\, b^{Wick}\Gamma(P)]\\ &+&{\rm Tr}[\rho_{\hbarr_n}\, \Gamma(P)\, b^{Wick}(1-\Gamma(P))]+{\rm Tr}[\rho_{\hbarr_n}\, (1-\Gamma(P))\, b^{Wick}(1-\Gamma(P))]\,.\end{aligned}$$ All the terms containing $(1-\Gamma(P))$ are estimated in a similar way. For example, we have $$\begin{aligned} \left|{\rm Tr}[\rho_{\hbarr_n}\, (1-\Gamma(P))\, b^{Wick}\, \Gamma(P)]\right| &= & \left|{\rm Tr}[\la N\ra^{\frac{p+q}{2}}\rho_{\hbarr_n}\, (1-\Gamma(P))\, b^{Wick}\la N\ra^{-\frac{p+q}{2}}\, \Gamma(P)]\right|\label{b1}\\ &\leq& C_{p,q}(b)\, \left\|\la N\ra^{\frac{p+q}{2}}\rho_{\hbarr_n}^{1/2}\,\rho_{\hbarr_n}^{1/2}(1-\Gamma(P))\right\|_{1}\label{b2}\\ &\leq& C_{p,q}(b)\,\left\|\la N\ra^{\frac{p+q}{2}}\rho_{\hbarr_n}\la N\ra^{\frac{p+q}{2}}\right\|_{1}^{1/2}\, \left\|(1-\Gamma(P))\rho_{\hbarr_n}(1-\Gamma(P))\right\|_{1}^{1/2}\label{b3}\\ &\leq& \tilde\lambda_{p,q}(b)\, {\rm Tr}[\rho_{\hbarr_n}\, (1-\Gamma(P))]^{1/2}\,.\label{b4}\end{aligned}$$ First, (\[b2\]) comes from the number estimate $\left\|b^{Wick}\la N\ra^{-\frac{p+q}{2}}\right\|\leq C_{p,q}(b)$; the Cauchy-Schwarz inequality then yields (\[b3\]), and the last estimate (\[b4\]) follows from $(H0)$. Remark that $\Gamma(P) b^{Wick} \Gamma(P)=\Gamma(P) b(Pz)^{Wick} \Gamma(P)$ and that the polynomial $b(Pz)\in\P_{p,q}^{\infty}(\Z)$ when $P$ is a finite rank orthogonal projector.
The hypothesis $(T)$ and the above argument allow us to approximate ${\rm Tr}[\rho_{\hbarr_n} \, b^{Wick} ]$ by the quantity ${\rm Tr}[\rho_{\hbarr_n} \, b(Pz)^{Wick} ]$ through an $\eta/3$ argument.\ Now, write $$\begin{aligned} \left|{\rm Tr}[\rho_{\hbarr_n}\, b^{Wick}]-\int_{\Z}b(z)\, d\mu\right|&\leq& \left|{\rm Tr}[\rho_{\hbarr_n}\, (b^{Wick} -b(Pz)^{Wick})]\right|+\left|{\rm Tr}[\rho_{\hbarr_n}\, b(Pz)^{Wick}] -\int_{\Z}b(Pz)\, d\mu\right|\\ & &+\left|\int_{\Z}\left(b(Pz)-b(z)\right)d\mu\right|\,.\end{aligned}$$ So, the properties $(T)$ and $(H0)$ imply $(P)$.\ (ii) There exists a sequence $b_\kappa\in\P_{p,q}^\infty(\Z)$ such that $\tilde b_\kappa$ converges in the $\sigma$-weak topology to $\tilde b$. We have $$\begin{aligned} \label{p-e} \left|{\rm Tr}[\rho_{\hbarr_n}\, b^{Wick}]-\int_{\Z}b(z)\, d\mu\right|&\leq& \left|{\rm Tr}[\rho_{\hbarr_n}\, (b^{Wick} -b_\kappa^{Wick})]\right|+\left|{\rm Tr}[\rho_{\hbarr_n}\, b_\kappa(z)^{Wick}] -\int_{\Z}b_\kappa(z)\, d\mu\right|\\ & &+\left|\int_{\Z}\left(b_\kappa(z)-b(z)\right)d\mu\right|\,.\end{aligned}$$ So, $(P)$ holds by an $\eta/3$ argument, using respectively $(E)$, (\[wigner\]) and dominated convergence for each term on the r.h.s. of (\[p-e\]).  \ 1) The space of bounded operators $\L(\bigvee^p\Z,\bigvee^q \Z)$ endowed with the $\sigma$-weak topology is not a Baire space when $\Z$ is infinite dimensional. Otherwise, $(E)$ and hence $(P)$ would be fulfilled by any sequence $(\rho_{\hbarr_n})_{n\in {{\mathbb{N}}}}$ satisfying $(H0)$, according to the Banach-Steinhaus Theorem (Uniform Boundedness Principle).\ 2) The hypothesis $(H0)$ in the above lemma can be replaced by the weaker statement (see [@AmNi Prop.6.15]) $$\exists C>0 : \; \forall k\in{{\mathbb{N}}}, \; {\rm Tr}[N^{k} \rho_{\hbarr_n}N^{k}]\leq C(Ck)^{k}$$ uniformly in $\hbarr_n$. This can be interpreted as an analyticity property of $t\to {\rm Tr}[e^{itN^{2}}\varrho_{\varepsilon_{n}}e^{itN^{2}}]$ in $\left\{\left|t\right|<1/C\right\}$, uniformly w.r.t. $\varepsilon_{n}$. Proof of Thm.
\[main-1\] ======================== For $m\in {{\mathbb{N}}}$, $r\in\{0,\cdots,m\}$ and $t_1,\cdots,t_m,t\in\mathbb{R}$, associate with any $b\in \P_{p,q}(\Z)$ the polynomial: $$\begin{aligned} \label{cnr} C^{(m)}_r(t_m,\cdots,t_1,t)=\frac{1}{2^r}\; \sum_{\sharp \{i:\; \gamma_i=2\}=r}\; \{Q_{t_m}, \cdots, \{Q_{t_1},b_t \underbrace{\}^{(\gamma_1)}\cdots\}^{(\gamma_m)}}_{ \gamma_i\in\{1,2\}} \in\P_{p-r+m,q-r+m}(\Z)\,.\end{aligned}$$ Note that for brevity the dependence of $C_r^{(m)}(t_m,\cdots,t_1,t)$ on $b$ is not made explicit in the notation, and we will sometimes simply write $C_r^{(m)}$. By convention we set $C^{(0)}_0(t)=b_t$. We collect some statements from [@AmNi]. Remember that $\tilde{b}$ denotes the operator $\tilde{b}=\frac{\partial_{\bar z}^{q}\partial_{z}^p}{q!p!} b(z)\in \mathcal{L}(\bigvee^{p}\Z,\bigvee^{q}\Z)$ associated with $b\in \P_{p,q}(\Z)$. \[tech.lem.1\] Let $b\in\P_{p,q}(\Z)$.\ (i) The following inequality holds true $$\begin{aligned} \left|\widetilde{\{Q_s,b_t\}^{(2)}}\right|_{\mathcal{L}(\bigvee^{p}\Z,\bigvee^{q}\Z)} \leq \; 2[p(p-1)+q(q-1)] \;|\tilde Q| \;|\tilde b|_{\mathcal{L}( \bigvee^p\Z, \bigvee^q\Z)}\,.\end{aligned}$$ (ii) For any $m\in {{\mathbb{N}}}$ and $r\in \left\{0,1,\ldots,m\right\}$, we have $$\begin{aligned} \left|\widetilde{C^{(m)}_r}\right|_{\mathcal{L}(\bigvee^{p+m-r}\Z, \bigvee^{q+m-r}\Z)}\leq 2^{2m-r} \;\ds\left(^m_r\right) \; (p+m-r)^{2r} \;\frac{(p+m-r-1)!}{(p-1)!} \;|\tilde Q|^m \;|\tilde b|_{\mathcal{L}(\bigvee^p\Z, \bigvee^q\Z)}\,,\end{aligned}$$ when $p\geq q$, with a similar expression when $q\geq p$ (replace $(p+m-r, p-1)$ with $(q+m-r, q-1)$). See [@AmNi Lemma 5.8, 5.9].
\[tech.lem.2\] For any $\delta>0$ there exists $ T>0$ such that for all $0<t<T\,$: $$\begin{aligned} \label{poisson-brack-conv} \sum_{m=0}^{\infty}\delta^m \;\int_{0}^{t}d{t_{1}}\cdots\int_{0}^{t_{m-1}}dt_{m}\;\; \left|\widetilde{C^{(m)}_{0}}(t_{m},\ldots,t_{1},t)\right|_{\L(\bigvee^{p+m}\Z,\bigvee^{q+m}\Z)}\,< \infty\end{aligned}$$ It is enough to bound (\[poisson-brack-conv\]) in the case $p\geq q$. Using Lemma \[tech.lem.1\] (ii) with $r=0$, we obtain $$\sum_{m=0}^{\infty}\delta^m \int_{0}^{t}d{t_{1}}\cdots\int_{0}^{t_{m-1}}dt_{m}\; \left|\widetilde{C^{(m)}_{0}}(t_{m},\ldots,t_{1},t)\right|\leq 2^{p-1}\, |\tilde b|\, \sum_{m=0}^{\infty} (2^3\,\delta\, t\, |\tilde Q|)^m\,.$$ The r.h.s. is finite whenever $ \; 0<t<T=(2^3 \;\delta\; |\tilde Q|)^{-1}$. [**Proof of Thm. \[main-1\]**]{} \ First consider the following expansion, proved in [@AmNi (50)-(52)], for any positive integer $M$: $$\begin{aligned} U_\hbarr (t)^* b^{Wick}U_\hbarr (t) &=&\sum_{m=0}^{M-1} i^m \;\; \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \; \left[C_0^{(m)}(t_m,\cdots,t_1,t)\right]^{Wick} \\ &&\hspace{-1.7in} + \frac{\hbarr}{2} \sum_{m=1}^{M} i^{m} \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m\; U_\varepsilon(t_m)^* U^0_\varepsilon(t_m) \left[\{Q_{t_{m}},C^{(m-1)}_{0}(t_{m-1},\cdots,t_1,t) \}^{(2)}\right]^{Wick} U_\varepsilon^0(t_m)^* U_\varepsilon(t_m) \\ && \hspace{-1.7in} + i^{M} \int_0^t dt_1\cdots\int_0^{t_{M-1}} dt_M \; U_\varepsilon(t_M)^* U^0_\varepsilon(t_M) \left[C^{(M)}_0(t_M,\cdots,t_1,t)\right]^{Wick} U_\varepsilon^0(t_M)^* U_\varepsilon(t_M)\,,\end{aligned}$$ where the equality holds in $\L(\bigvee^s\Z,\bigvee^{s+q-p}\Z)$ for any $s\in{{\mathbb{N}}}$, $s\geq q-p$. Multiplying the above identity on the left by $\rho_{\hbarr_n}$ and using number estimates with the help of $(H0)$ yields an identity in $\L_1(\H)$, of which we take the trace.
This leads to $$\begin{aligned} {\rm Tr}[\rho_{\hbarr_{n}}\, U_{\hbarr_n} (t)^* b^{Wick}U_{\hbarr_n} (t)]&=& \sum_{m=0}^{M-1} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m\; {\rm Tr}\left[\rho_{\hbarr_n} \left(C_0^{(m)}(t_m,\cdots,t_1,t)\right)^{Wick}\right]\label{p.1}\\ && + \frac{\hbarr_{n}}{2}\sum_{m=1}^{M} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m\nonumber\\ && {\rm Tr}\left[\rho_{\hbarr_n}\, U_\varepsilon(t_m)^* U^0_\varepsilon(t_m) \left[\{Q_{t_{m}},C^{(m-1)}_{0}(t_{m-1},\cdots,t_1,t) \}^{(2)}\right]^{Wick} U_\varepsilon^0(t_m)^* U_\varepsilon(t_m)\right]\label{p.2}\\ && + i^M \int_0^t dt_1\cdots\int_0^{t_{M-1}} dt_M\; {\rm Tr}\left[\rho_{\hbarr_n}\, U_\varepsilon(t_M)^* U^0_\varepsilon(t_M) \left[C^{(M)}_0(t_M,\cdots,t_1,t)\right]^{Wick} U_\varepsilon^0(t_M)^* U_\varepsilon(t_M)\right]\,.\label{p.3}\end{aligned}$$ The interchange of trace and integrals on the r.h.s. is justified by the bounds of Lemma \[tech.lem.1\]. Lemma \[tech.lem.1\] implies that the terms of (\[p.1\]) and (\[p.2\]) are bounded respectively by $$\begin{aligned} A_{m}&=& \lambda^{m+\frac{p+q}{2}}\,{\rm sign}(t)^m\int_0^{t} dt_1\cdots\int_0^{t_{m-1}} dt_m\; \left|\widetilde{C_0^{(m)}}\right|\\ B_{m}&=& \frac{\hbarr_{n}}{2}\,|\tilde Q|\, (p+q+m-1)^2\, \lambda^{m-1+\frac{p+q}{2}}\,{\rm sign}(t)^m \int_0^{t} dt_1\cdots\int_0^{t_{m-1}} dt_m\; \left|\widetilde{C_0^{(m-1)}}\right|\end{aligned}$$ while the remainder (\[p.3\]) is estimated by $$\left|(\ref{p.3})\right|\leq {\rm sign}(t)^M \int_0^{t} dt_1\cdots\int_0^{t_{M-1}} dt_M \; \left|\widetilde{C_0^{(M)}}\right|=C_{M}.$$ By Lemma \[tech.lem.2\], the series $\sum_{m=0}^{\infty}A_{m}$ and $\sum_{m=0}^{\infty}B_{m}$ converge as soon as $|t|<T_0=(2^3 \lambda |\tilde Q|)^{-1}$, while $\lim_{M\to \infty}C_{M}=0$. Hence the relation (\[p.1\])-(\[p.3\]) holds with $M=\infty$, with a vanishing third term and a second term bounded by $\sum_{m=0}^{\infty}B_{m}=\mathcal{O}(\hbarr_{n})$. Therefore, we obtain $$\lim_{\hbarr_{n}\to 0}{\rm Tr}[\rho_{\hbarr_n} U_{\hbarr_n} (t)^* b^{Wick}U_{\hbarr_n} (t)]-\sum_{m=0}^{\infty} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m {\rm Tr}\left[ \rho_{\hbarr_n} \left(C_0^{(m)}(t_m,\cdots,t_1,t)\right)^{Wick} \right]=0.$$ Owing to the condition $(P)$, which provides the pointwise convergence, and the uniform bound $\sum_{m=0}^{\infty}A_{m}$, Lebesgue's dominated convergence theorem implies $$\begin{aligned} \label{convergence} \lim_{\hbarr_n\to0} \sum_{m=0}^{\infty} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m\; {\rm Tr}\left[\rho_{\hbarr_n}\left(C_0^{(m)}(t_m,\cdots,t_1,t)\right)^{Wick}\right] &&=\nonumber\\ \sum_{m=0}^{\infty} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \int_{\Z} C_0^{(m)}(t_m,\cdots,t_1,t;z)\, d\mu\,.&&\end{aligned}$$
Now, we interchange the sum over $m$ and the integrals over $(t_1,\cdots,t_m,t)$ with the integral over $\Z$ in (\[convergence\]), by a simple Fubini argument based on the absolute convergence (written here for $t>0$): $$\begin{aligned} \sum_{m=0}^{\infty} \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \int_{\Z}\left| C_0^{(m)}(t_m,\cdots,t_1,t;z)\right| d\mu&\leq&\nonumber\\ \sum_{m=0}^{\infty} \left(\int_{\Z}|z|^{p+q+2m}\, d\mu\right) \int_0^t dt_1\cdots\int_0^{t_{m-1}}dt_m\; \left|\widetilde{C_0^{(m)}}(t_m,\cdots,t_1,t)\right|\,.&&\end{aligned}$$ Again $(H0)$ and $(P)$ imply that for all $k\in{{\mathbb{N}}}$ there exists $\lambda >0$ such that $$\ds\int_\Z \;|z|^{2k}\;d\mu=\lim_{\hbarr_n \to 0} {\rm Tr}[\rho_{\hbarr_n} \;(|z|^{2k})^{Wick}] =\lim_{\hbarr_n \to 0} {\rm Tr}[\rho_{\hbarr_n} N^{k}] \leq \lambda^k.$$ Hence, Lemma \[tech.lem.2\] yields for $|t|<T_0$: $$\begin{aligned} \lim_{\hbarr_n\to0}{\rm Tr}[\rho_{\hbarr_n} U_{\hbarr_n} (t)^* b^{Wick}U_{\hbarr_n} (t)]&=& \sum_{m=0}^{\infty} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \int_{\Z} C_0^{(m)}(t_m,\cdots,t_1,t;z)\, d\mu\\ &=&\int_{\Z} \sum_{m=0}^{\infty} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m\; C_0^{(m)}(t_m,\cdots,t_1,t;z)\, d\mu\,,\end{aligned}$$ where the integrand $\ds\sum_{m=0}^{\infty} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \; C_0^{(m)}(z)$ is a convergent series in $L^1(\mu)$.\ The last step is the identification of the limit with the r.h.s. of (\[formula-m\]). Indeed, an iteration of (\[class-integ-form\]) reads $$\begin{aligned} b(z_{t}) =b_t(z)+ i \int_0^t \; \{Q_{t_1},b_t\}(z)\;dt_1+ i^2 \int_0^t dt_1\int_0^{t_{1}} dt_2 \; \{Q_{t_2},\{Q_{t_1},b_t\}\}(e^{i t_2 A} z_{t_2})\,,\end{aligned}$$ after setting $z_{t}=\mathbf{F}_{t}(z)$ and defining the Wick symbols $b_{t}$ and $Q_{t}$ according to (\[eq.wickfree\]).
By induction we obtain for any $M>1$: $$\begin{aligned} b\circ{\bf F}_t(z) &=&b_t(z)+\sum_{m=1}^{M-1} i^m \;\; \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \; \;C_0^{(m)}(t_m,\cdots,t_1,t;z) \label{idf.1}\\ &+& i^{M} \int_0^t dt_1\cdots\int_0^{t_{M-1}} dt_M \; \;C^{(M)}_0(t_M,\cdots,t_1,t;e^{i t_M A} z_{t_M}) \,.\label{idf.2}\end{aligned}$$ An integration with respect to the measure $\mu$ leads to $$\begin{aligned} \int_{\Z} b\circ{\bf F}_t(z)\, d\mu&=& \sum_{m=0}^{M-1} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \int_{\Z} C_0^{(m)}(t_m,\cdots,t_1,t;z)\, d\mu\\ &+& i^{M} \int_0^t dt_1\cdots\int_0^{t_{M-1}} dt_M \int_{\Z} C^{(M)}_0(t_M,\cdots,t_1,t;e^{i t_M A} z_{t_M})\, d\mu\,.\end{aligned}$$ Again, the uniform estimate $\sum_{m=0}^{\infty}A_{m}$ for $|t|<T_{0}$ and $\lim_{M\to \infty}C_{M}=0$ allow us to take the limit as $M\to\infty$. This implies for $|t|<T_0$ $$\int_{\Z} \,b\circ{\bf F}_t(z) \;d\mu= \sum_{m=0}^{\infty} i^m \;\; \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m \;\int_{\Z} C_0^{(m)}(t_m,\cdots,t_1,t;z) \;d\mu.$$ This proves the result for $|t|<T_0$, and it is extended to arbitrary times by the following iteration argument. Indeed, it is clear that $\rho_{\hbarr_{n}}(t)=U_{\hbarr_n} (t)\rho_{\hbarr_{n}} U_{\hbarr_n} (t)^*$ satisfies $(H0)$, since $U_{\hbarr_n} (t)$ commutes with $N$. The property $(P)$ holds for $\rho_{\hbarr_{n}}(t)$ when $|t|<T_0$ by Remark \[trans-measure\] and Corollary \[wigner-measure-id\]. For $t,s$ such that $|t|,|s|<T_0$, the sequence $(\rho_{\hbarr_{n}}(t))_{n\in{{\mathbb{N}}}}$ satisfies $(H0)$ and $(P)$. Therefore, the result for short times yields $$\lim_{\hbarr_n\to0}{\rm Tr}[\rho_{\hbarr_n}(t)\, U_{\hbarr_n} (s)^* b^{Wick}U_{\hbarr_n} (s)] = \int_{\Z} b\circ{\bf F}_s(z)\,d\mu_t=\int_{\Z} b\circ{\bf F}_{t+s}(z)\,d\mu\,.$$ As a by-product, we have for any $b\in \oplus_{\alpha,\beta\in{{\mathbb{N}}}}^{\rm alg}\P_{\alpha,\beta}(\Z)$ $$\label{conv-lmu} b\circ{\bf F}_t(z)=L^1(\mu)\mbox{-}\sum_{m=0}^{\infty} i^m \int_0^t dt_1\cdots\int_0^{t_{m-1}} dt_m\; C_0^{(m)}(t_m,\cdots,t_1,t;z)\,.$$ Moreover, the arguments used in the proof of Thm. \[main-1\] cannot ensure the pointwise absolute convergence of the r.h.s.
(\[conv-lmu\]) for all $z\in\Z$. Examples ======== M1) Let $\Z=L^2({{\mathbb{R}}}^d,dx)$, $A=D_x^2+U(x)$ self-adjoint and $Q$ the multiplication operator by $\frac{1}{2} V(x-y)$ with $V\in L^\infty({{\mathbb{R}}}^d)$.\ M2) Let $\Z=L^2({{\mathbb{R}}}^d,dx)$, $A=\sqrt{D_x^2+m^2}+U(x)$ self-adjoint and $Q$ as above.\ M3) When $\Z={{\mathbb{C}}}^{d}\sim {{\mathbb{R}}}^{2d}_{x,\xi}$, one recovers the standard semiclassical limit problem, and the condition $(P)$ is always satisfied if $(H0)$ is satisfied. We refer the reader, for example, to [@CRR] [@Ger] [@GMMP] [@HMR] [@LiPa] [@Mar] [@Rob] for various results about this topic. 1\) Every sequence $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$ valued in a compact set of the Banach space of trace class operators has the Wigner measure $\delta_0$. If in addition $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$ satisfies $(H0)$ then $(P)$ holds true.\ 2) Let $(\rho_{\hbarr_n})_{n\in{{\mathbb{N}}}}$ be as in 1), satisfying $(H0)$, and let $(z_n)_{n\in{{\mathbb{N}}}}$ be a sequence of $\Z$ such that $\lim_{n\to \infty}$ $\left|z_{n}-z\right|=0$. Then $\tilde\rho_{\hbarr_n}=W(\frac{\sqrt{2}}{{i\hbarr}}z_{n}) \rho_{\hbarr_n} W(-\frac{\sqrt{2}}{{i\hbarr}}z_{n})$ admits the unique Wigner measure $\mu=\delta_{z}$, and $(P)$ holds true. The push-forward measure is $\mu_t=\delta_{z_t}$.\ 3) Let $(z_n)_{n\in{{\mathbb{N}}}}$ be a sequence valued in a compact set of $\Z$. Then $\rho_{\hbarr_n}=|z_n^{\otimes [\hbarr_n^{-1}]}\ra \la z_n^{\otimes [\hbarr_n^{-1}]}| $ satisfies $(H0)$ and the property $(P)$, and admits the Wigner measures $\frac{1}{2\pi} \int_0^{2\pi} \delta_{e^{i\theta}z} d\theta$, where $z$ is any cluster point of $(z_n)_{n\in{{\mathbb{N}}}}$. Several other examples can be obtained by superposition, see [@AmNi].\ 4) Let $(z_n)_{n\in{{\mathbb{N}}}}$ be a sequence in $\Z$ with $|z_n|=1$, converging weakly to $0$.
Then $(P)$ fails for $\rho_{\hbarr_n}= |E(z_{n})\ra\la E(z_{n})|$ with $E(z_{n})=W(\frac{\sqrt{2}}{{i\hbarr}}z_n)|\Omega\ra $, although $(H0)$ holds. [999]{} Z. Ammari, F. Nier. Mean field limit for bosons and infinite dimensional phase-space analysis. http://arxiv.org/abs/0711.4128 C. Bardos, F. Golse, N. Mauser. Weak coupling limit of the n-particle Schrödinger equation. Methods Appl. Anal. 2, 275–293 (2000). C. Bardos, L. Erdös, F. Golse, N. Mauser, H-T. Yau. Derivation of the Schrödinger-Poisson equation from the quantum N-body problem. C.R. Acad. Sci. Paris 334, 515–520 (2002). M. Combescure, J. Ralston, D. Robert. A proof of the Gutzwiller semiclassical trace formula using coherent states decomposition. Comm. Math. Phys. 202 (1999), no. 2, 463–480. A. Elgart, B. Schlein. Mean field dynamics of boson stars. Comm. Pure and Appl. Math. Vol. 60, (2005) 500–545. L. Erd[ö]{}s, H.T. Yau. Derivation of the nonlinear Schrödinger equation from a many body Coulomb system. Adv. Theor. Math. Phys. 5, 1169–1205 (2001). L. Erd[ö]{}s, B. Schlein, H.T. Yau. Derivation of the cubic non-linear Schr[ö]{}dinger equation from quantum dynamics of many-body systems. Invent. Math. 167 (2007), no. 3, 515–614. J. Fröhlich, S. Graffi, S. Schwarz. Mean-field- and classical limit of many-body Schrödinger dynamics for bosons. Comm. Math. Phys. 271, No. 3 (2007), 681–697. J. Fr[ö]{}hlich, A. Knowles, A. Pizzo. Atomism and quantization. J. Phys. A: Math. Theor. 40 (2007) 3033–3045. J. Fröhlich, A. Knowles, S. Schwarz. On the mean-field limit of bosons with Coulomb two-body interaction. http://arxiv.org/abs/0805.4299 P. G[é]{}rard. Mesures semi-classiques et ondes de Bloch. S[é]{}minaire sur les [É]{}quations aux D[é]{}riv[é]{}es Partielles, 1990–1991, Exp. No. XVI, 19 pp., [É]{}cole Polytech., Palaiseau, 1991. P. G[é]{}rard, P.A. Markowich, N.J. Mauser, F. Poupaud. Homogenization limits and Wigner transforms. Comm. Pure Appl. Math. 50 (1997), no. 4, 323–379. J. Ginibre, G. Velo.
The classical field limit of scattering theory for nonrelativistic many-boson systems. I. Comm. Math. Phys. 66, No. 1 (1979), 37–76 B. Helffer, A. Martinez, D. Robert. Ergodicit[é]{} et limite semi-classique. Comm. Math. Phys. 109 (1987), no. 2, 313–326. K. Hepp. The classical limit for quantum mechanical correlation functions. Comm. Math. Phys. 35 (1974), 265–277 P.L. Lions, T. Paul. Sur les mesures de Wigner. Rev. Mat. Iberoamericana 9 (1993), no. 3, 553–618. A. Martinez. An introduction to semiclassical analysis and microlocal analysis. Universitext, Springer-Verlag, (2002). D. Robert. Autour de l’approximation semi-classique. Progress in Mathematics, 68. Birkh[ä]{}user Boston, 1987. H. Spohn. Kinetic equations from Hamiltonian dynamics. Rev. Mod. Phys. 52, No. 3 (1980), 569–615 [^1]: Département de Mathématiques, Universit[é]{} de Cergy-Pontoise UMR-CNRS 8088, 2, avenue Adolphe Chauvin 95302 Cergy-Pontoise Cedex France. Email: zied.ammari@u-cergy.fr [^2]: IRMAR, UMR-CNRS 6625, Université de Rennes I, campus de Beaulieu, 35042 Rennes Cedex, France. Email: francis.nier@univ-rennes1.fr
[**Damping signatures in future neutrino oscillation experiments** ]{} Abstract We discuss the phenomenology of damping signatures in the neutrino oscillation probabilities, where either the oscillating terms or the probabilities themselves can be damped. This approach offers a possibility to test non-oscillation effects in future neutrino oscillation experiments, where we mainly focus on reactor and long-baseline experiments. We extensively motivate different damping signatures arising from small corrections due to neutrino decoherence, neutrino decay, oscillations into sterile neutrinos, or other mechanisms, and classify these signatures according to their energy (spectral) dependencies. We demonstrate, using the example of short-baseline reactor experiments, that damping can severely alter the interpretation of results, [[*e.g.*]{}]{}, it could fake a value of ${\sin^2(2 \theta_{13})}$ smaller than the one provided by Nature. In addition, we demonstrate how a neutrino factory could constrain different damping models, with emphasis on how these different models could be distinguished, [[*i.e.*]{}]{}, how easily the actual non-oscillation effects could be identified. We find that the damping models cluster in different categories, which can be much better distinguished from each other than models within the same cluster. Introduction ============ Neutrino oscillations are by far the most plausible description of transitions among different neutrino flavor eigenstates [@Fukuda:1998mi; @Ahmad:2002jz; @Ahmed:2003kj; @Ahn:2002up; @Eguchi:2002dm; @Araki:2004mb; @Ashie:2004mr]. However, there have historically been other attempts in the literature to describe these transitions with other mechanisms, as well as with neutrino oscillations combined with such other mechanisms.
These scenarios include neutrino wave packet decoherence [@Giunti:1998wq; @Giunti:2003ax; @Giunti:1992sx; @Grimus:1998uh; @Cardall:1999ze], neutrino decay [@Bahcall:1972my; @Barger:1982vd; @Valle:1983ua; @Barger:1998xk; @Pakvasa:1999ta; @Barger:1999bg; @Lindner:2001fx; @Lindner:2001th], oscillations into sterile neutrinos [@Strumia:2002fw; @Maltoni:2004ei], neutrino absorption (see, [[*e.g.*]{}]{}, [Ref.]{} [@DeRujula:1983ya]), and neutrino quantum decoherence [@Lisi:2000zt; @Benatti:2000ph; @Adler:2000vf; @Ohlsson:2000mj; @Benatti:2001fa; @Gago:2002na; @Barenboim:2004wu; @Barenboim:2004ev; @Morgan:2004vv]. A combined scenario is, for example, the combination of neutrino oscillations and neutrino decay (see, [[*e.g.*]{}]{}, [Refs.]{} [@Lindner:2001fx; @Lindner:2001th]). Although these other mechanisms, leading to “non-standard effects”, are not as successful in describing flavor transitions as neutrino oscillations (in fact, they are strongly disfavored [@Ashie:2004mr; @Araki:2004mb]), they could still give rise to small corrections to the neutrino oscillations. These non-standard effects need to be described in a framework together with neutrino oscillations and can be constrained by current and future experiments (see, [[*e.g.*]{}]{}, [Ref.]{} [@Valle:2003uv] for a recent review). Thus, we will assume that the leading order effect in neutrino flavor transitions is due to neutrino oscillations, whereas the next-to-leading order effects are described by different “damping mechanisms” of the neutrino oscillations. Since any non-standard effect may point towards new interesting physics beyond the standard model, the test of small corrections due to these effects should be one of the main objectives in future high-precision neutrino oscillation physics. If such non-standard effects are present, the assumption of standard three-flavor neutrino oscillations will inevitably lead to an erroneous derivation of the elements of the mixing matrix $U$ or the mass squared differences.
We therefore define “non-oscillation effects” as any modification of the three-flavor neutrino oscillation probabilities in vacuum as well as in matter. For example, the LSND anomaly [@Aguilar:2001ty] could be an indication of non-oscillation effects according to this definition. Since future reactor and long-baseline neutrino oscillation experiments are expected to achieve high precision for the subleading neutrino oscillation parameters ${\sin^2(2 \theta_{13})}$ and ${\delta_{\mathrm{CP}}}$, we mainly discuss the impact of non-oscillation effects, or possible constraints on the non-oscillation effects, in the context of these experiments. In principle, one could think of many different approaches to test non-oscillation effects with future long-baseline experiments: Neutral-currents : can be used to test the conservation of probability, [[*i.e.*]{}]{}, $P_{\alpha e} + P_{\alpha \mu} + P_{\alpha \tau} =1$ (see, [[*e.g.*]{}]{}, [Ref.]{} [@Barger:2004db]). However, at long-baseline experiments, uncertainties in the neutral-current cross-sections and the charged-current contamination lead to a precision of only about $10~\%-15~\%$ [@Barger:2004db]. In addition, even if some non-oscillation effects are found, there will be no information on the nature of the effects, whereas effects conserving the overall probability cannot be detected at all. The detection of $\boldsymbol{\nu_\tau}$ : can complement the information on $P_{\alpha e}$ and $P_{\alpha \mu}$ to test the conservation of probability (see, [[*e.g.*]{}]{}, [Ref.]{} [@Donini:2002rm]). Since $\nu_\tau$ detection is much more sophisticated and less efficient than the detection of $\nu_e$ and $\nu_\mu$ due to the higher $\tau$ production threshold, this is also a non-trivial test. If there are non-oscillation effects, then the information will be better than in the preceding case, since one will know which neutrino oscillation probabilities are affected.
Unitarity triangles : for the lepton sector can be constructed [@Farzan:2002ct; @Zhang:2004hf]. However, since there is no simple relationship between the quantities of the unitarity triangles and the neutrino oscillation observables, this approach may not be the most feasible one for the lepton sector. Tests of distinctive signatures : [[*i.e.*]{}]{}, spectral (energy) dependent effects, could directly identify certain classes of non-oscillation effects [@Valle:2003uv; @Huber:2001zw; @Huber:2001de; @Huber:2002bi]. The advantage of such tests is that the effect could be directly identified if it produces a unique signature in the energy spectrum. In addition, this test does not depend upon normalization errors of the event rates, which are likely to constrain the first two measurements. However, there might be strong correlations with the neutrino oscillation parameters. In addition, in the future, it may be possible to resolve the line width and shape of the ${}^7$Be solar neutrino line [@Bahcall:1993ej; @Bahcall:1994cf] and extract the temperature distribution as well as the modulation of this line, which could be caused by next-to-leading order effects. Thus, performing measurements of the ${}^7$Be line with very high energy resolution may be one way to determine these next-to-leading order effects. Such possible precision neutrino experiments include, for example, a bromine cryogenic thermal detector proposed in [Refs.]{} [@Fiorini:1991; @Alessandrello:1995ih]. In this study, we will focus on the tests of distinctive signatures, in which we introduce “damping signatures” as an abstract concept for a class of possible effects entering at the probability level.[^1] In general, small Hamiltonian effects, see, [[*e.g.*]{}]{}, [Ref.]{} [@Benatti:2001fa], may be as important as the kind of damping effects that we will describe in this study. Such Hamiltonian effects could lead to direct changes in the effective neutrino oscillation parameters.
Nevertheless, those effects cannot be treated in the framework presented here. We will use the observation that mechanisms, such as decoherence or decay, lead to exponential damping in the neutrino oscillation probabilities. However, the effect might be stronger for low or high energies, [[*i.e.*]{}]{}, the spectral (energy) dependence of the damping might be different. A common feature of many of the discussed models is that they will lead to fewer neutrinos (of all active flavors) being detected than expected from three-flavor neutrino oscillations. For all other models, only the oscillating terms of the neutrino oscillation probabilities will be damped, while the total number of active neutrinos remains constant. Note that the damping signature approach does not cover all possible models, but many models can, at least in the limit of small corrections, lead to some exponential damping effect. Our study is organized as follows. In [Sec.]{} \[sec:phenomenology\], we will present and classify different forms of the damping signatures. For the reader who is not interested in different models for damping signatures, at least [Sec.]{} \[sec:gendescription\] and the examples in [Table]{} \[tab:models\] should be read in order to follow the rest of the study. Next, in [Sec.]{} \[sec:dampedprob\], we will give and discuss the damped neutrino oscillation probabilities arising from the effects described by their signatures. For the reader who is most interested in possible experimental implications, [Sec.]{} \[sec:dampedtwoflavor\] summarizes the most relevant features, whereas the rest of this section deals with the more technical three-flavor cases. Then, in [Secs.]{} \[sec:appl1\] and \[sec:appl2\], we will discuss the physics of these damping signatures and give two different applications in the framework of a complete experiment simulation.
In particular, in [Sec.]{} \[sec:appl1\], we demonstrate how such damping signatures can modify the interpretation of physical results for future reactor experiments, whereas in [Sec.]{} \[sec:appl2\], we discuss how a neutrino factory could constrain different damping signatures and how these different signatures could be distinguished. Finally, in [Sec.]{} \[sec:summary\], we will summarize our work and present our conclusions. Phenomenology of damping signatures {#sec:phenomenology} =================================== In this section, we motivate, in a phenomenological manner, the form of the damping signatures used for the rest of this study. General description of damped neutrino oscillations in vacuum {#sec:gendescription} ------------------------------------------------------------- We start with three-flavor neutrino oscillations in vacuum, which can be described by the (undamped) vacuum oscillation probabilities $$\begin{aligned} P_{\alpha \beta} \equiv P(\nu_\alpha \rightarrow \nu_\beta) & = & \left| \langle \nu_\beta | U \, \operatorname{diag}\left( 1 , \exp \left( -{\rm i}\frac{{\Delta m_{21}^2}L}{2 E} \right), \exp \left( -{\rm i}\frac{{\Delta m_{31}^2}L}{2 E} \right) \right) \, U^\dagger | \nu_\alpha \rangle \right|^2 \nonumber \\ & = & \sum\limits_{i,j=1}^{3} U_{\alpha j} \, U_{\beta j}^* \, U_{\alpha i}^* \, U_{\beta i} \, \exp(- {\rm i} \Phi_{ij} ).\end{aligned}$$ Here $U$ is the leptonic mixing matrix in vacuum, $\Delta m_{ij}^2 \equiv m_i^2 - m_j^2$ the mass squared difference, and $\Phi_{ij} \equiv \Delta m_{ij}^2 L/(2 E)$ the oscillation phase.
By defining $$J_{ij}^{\alpha\beta} \equiv U_{\alpha j} U_{\beta j}^* U_{\alpha i}^* U_{\beta i} \quad {\rm and} \quad \Delta_{ij} \equiv \frac{\Delta m_{ij}^2L}{4E} \equiv \frac{m_i^2-m_j^2}{4E}L = \frac{\Phi_{ij}}2,$$ the oscillation probabilities may be written as $$\begin{aligned} P_{\alpha\beta} &=& \sum_{i,j = 1}^3 \operatorname{Re}(J_{ij}^{\alpha\beta}) - 4 \sum_{1\leq i<j \leq 3} \operatorname{Re}(J_{ij}^{\alpha\beta})\sin^2 (\Delta_{ij}) + 2 \sum_{1\leq i<j \leq 3} \operatorname{Im}(J_{ij}^{\alpha\beta})\sin (2\Delta_{ij}) \nonumber \\ \label{equ:vacprob} &=&\sum_{i=1}^3 J_{ii}^{\alpha\beta} + 2 \sum_{1\leq i < j \leq 3} |J_{ij}^{\alpha\beta}| \cos(2\Delta_{ij}-\arg J_{ij}^{\alpha\beta}),\end{aligned}$$ where, in the first line of the equation, the first two terms are [*CP*]{}-conserving and the third term is the source of any [*CP*]{} violation; in the second line, this corresponds to $\arg J_{ij}^{\alpha\beta}$ being the source of any [*CP*]{} violation. As will be discussed, there may be reasons to assume that [[Eq.]{} (\[equ:vacprob\])]{} does not give the correct neutrino oscillation probabilities. Effects that might spoil this approach of calculating neutrino oscillation probabilities include loss of wave packet coherence and neutrino decay. The effective result of such processes is to introduce damping factors to the oscillating terms of the neutrino oscillation probabilities.
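The equivalence of the exponential double sum and the squared-amplitude definition can be checked numerically. The sketch below uses purely illustrative mixing angles, a CP phase and oscillation phases $\phi_j$ (placeholder numbers, not fitted values) in the standard PMNS parametrization, and also verifies the unitarity relation $\sum_\beta P_{\alpha\beta}=1$ that the damping models of the next section may violate.

```python
import numpy as np

# Sketch: check P_ab = sum_ij J_ij exp(-i Phi_ij) against the squared amplitude
# |<nu_b| U diag(e^{-i phi_j}) U^+ |nu_a>|^2 and against unitarity.  The mixing
# parameters and phases below are illustrative placeholders only.
t12, t13, t23, dcp = 0.59, 0.15, 0.79, 1.2
s12, c12 = np.sin(t12), np.cos(t12)
s13, c13 = np.sin(t13), np.cos(t13)
s23, c23 = np.sin(t23), np.cos(t23)
ed = np.exp(1j * dcp)
U = np.array([  # standard PMNS parametrization
    [c12 * c13, s12 * c13, s13 * np.conj(ed)],
    [-s12 * c23 - c12 * s23 * s13 * ed, c12 * c23 - s12 * s23 * s13 * ed, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * ed, -c12 * s23 - s12 * c23 * s13 * ed, c23 * c13]])

phi = np.array([0.0, 0.8, 2.1])            # phi_j = m_j^2 L / (2E), toy values
Phi = phi[:, None] - phi[None, :]          # Phi_ij = phi_i - phi_j

def prob(a, b):
    """P_ab = sum_ij U_aj U*_bj U*_ai U_bi exp(-i Phi_ij)."""
    J = np.einsum('j,j,i,i->ij', U[a], U[b].conj(), U[a].conj(), U[b])
    return np.einsum('ij,ij->', J, np.exp(-1j * Phi)).real

amp = U @ np.diag(np.exp(-1j * phi)) @ U.conj().T   # flavor-basis evolution matrix
P_amp = np.abs(amp) ** 2                            # P_amp[b, a] = |<nu_b|...|nu_a>|^2

P = np.array([[prob(a, b) for b in range(3)] for a in range(3)])
print(np.allclose(P, P_amp.T), np.allclose(P.sum(axis=1), 1.0))  # → True True
```

The row sums equal one because the flavor-basis evolution matrix is unitary; this is exactly the conservation of probability that the neutral-current test discussed above would probe.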
We define a general damping effect to be an effect that alters the neutrino oscillation probabilities to the form $$\begin{aligned} P_{\alpha\beta} &=& \sum\limits_{i,j=1}^{3} U_{\alpha j} \, U_{\beta j}^* \, U_{\alpha i}^* \, U_{\beta i} \, \exp(- {\rm i} \Phi_{ij} ) D_{ij} \nonumber \\ &=& \sum_{i,j = 1}^3 \operatorname{Re}(J_{ij}^{\alpha\beta})D_{ij} - 4 \sum_{1\leq i<j \leq 3} \operatorname{Re}(J_{ij}^{\alpha\beta})D_{ij}\sin^2 (\Delta_{ij}) - 2 \sum_{1\leq i<j \leq 3} \operatorname{Im}(J_{ij}^{\alpha\beta})D_{ij}\sin (2\Delta_{ij}) \nonumber \\ &=&\sum_{i=1}^3 J_{ii}^{\alpha\beta} D_{ii} + 2 \sum_{1\leq i < j \leq 3} |J_{ij}^{\alpha\beta}| D_{ij}\cos(2\Delta_{ij}+\arg J_{ij}^{\alpha\beta}), \label{equ:damping}\end{aligned}$$ where the damping factors $$\label{equ:dfactor} D_{ij} = \exp\left(-\alpha_{ij}\frac{|\Delta m_{ij}^2|^\xi L^\beta} {E^\gamma}\right)$$ have been introduced and we have assumed that $D_{ij} = D_{ji}$. Obviously, as $D_{ij} \rightarrow 1$, we regain the undamped oscillation probabilities given in [[Eq.]{} (\[equ:vacprob\])]{}. In [[Eq.]{} (\[equ:dfactor\])]{}, $\alpha_{ij} \ge 0$ is a damping coefficient matrix, and $\beta$, $\gamma$, and $\xi$ are numbers that describe the “signature”, [[*i.e.*]{}]{}, the $L$ ($\beta$) and $E$ ($\gamma$) dependencies as well as the dependence on the mass squared differences. In addition, the parameter $\xi$ distinguishes two interesting cases:

$\boldsymbol{\xi>0}$:

:   In this case, only the oscillating terms will be damped, since $\Delta m_{ii}^2 = 0$ by definition.

$\boldsymbol{\xi=0}$:

:   The whole oscillation probability can be damped (depending on $\alpha_{ij}$), since the terms which are independent of the oscillation phases are affected as well.

Therefore, we expect two completely different results for these two cases.
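A minimal numerical implementation may help fix conventions. The sketch below (our own illustrative code, with hypothetical parameter values) implements [Eqs.]{} (\[equ:damping\]) and (\[equ:dfactor\]) and illustrates the $\xi>0$ case, in which only the oscillating terms are damped, so that $\sum_\beta P_{\alpha\beta} = 1$ is preserved.

```python
import numpy as np

def damping_matrix(dm2_ij, L, E, alpha, beta, gamma, xi):
    """Damping factors D_ij = exp(-alpha |dm2_ij|^xi L^beta / E^gamma), Eq. (dfactor)."""
    return np.exp(-alpha * np.abs(dm2_ij) ** xi * L ** beta / E ** gamma)

def damped_prob(U, dm21, dm31, L, E, alpha, beta, gamma, xi):
    """Damped probabilities P[a, b] from Eq. (damping); dm2 in eV^2, L in km, E in GeV."""
    m2 = np.array([0.0, dm21, dm31])
    dm2_ij = m2[:, None] - m2[None, :]       # Delta m^2_{ij} = m_i^2 - m_j^2
    Phi = 2.0 * 1.267 * dm2_ij * L / E       # Phi_ij = Delta m^2_{ij} L / (2E)
    D = damping_matrix(dm2_ij, L, E, alpha, beta, gamma, xi)
    P = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            # J[i, j] = U_{aj} U*_{bj} U*_{ai} U_{bi}
            J = np.outer(U[a].conj() * U[b], U[a] * U[b].conj())
            P[a, b] = np.sum(J * np.exp(-1j * Phi) * D).real
    return P
```

For $\alpha = 0$ this reduces to the undamped probabilities; the mixing angles and damping strength in the check below are arbitrary illustrative values.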
In general, [[Eq.]{} (\[equ:dfactor\])]{} introduces twelve new parameters, which can be used to model many non-standard contributions that enter at the oscillation probability (not Hamiltonian) level. We will give some examples of such contributions below. Although we expect these contributions to be small, it is rather impractical to deal with that many new parameters, which means that some simplifications need to be made. First of all, note that the parameter $\beta$ is not measurable if only one baseline is considered and can therefore be absorbed in $\alpha_{ij}$. For two baselines, it can, in principle, be resolved if all the other parameters are known. Second, for a specific model, there may be relations among different $\alpha_{ij}$’s that actually imply much fewer independent parameters. For a very simple model, the number of parameters can even reduce to one. Since we are mainly interested in the spectral signatures, [[*i.e.*]{}]{}, $\gamma$, we will often use $\alpha_{ij} \equiv \alpha$ to estimate the magnitude of different effects. Third, it will turn out that the parameter $\xi$ is strongly dependent on the model, since, as discussed above, it describes two completely different classes of models. Hence, we will finally end up with one free parameter $\alpha$ and several fixed model-dependent parameters $\beta$, $\gamma$, and $\xi$.

A model for damped neutrino oscillations in matter
--------------------------------------------------

In some cases, we will use neutrino propagation in matter, since, for instance, neutrino factories operate at very long baselines for which matter effects become important.
We use an approach similar to [[Eq.]{} (\[equ:damping\])]{}, which should describe the damping signatures as minor perturbations to neutrino oscillations in (constant) matter as long as they are small enough: $$P_{\alpha\beta} = \sum_{i,j = 1}^3 \operatorname{Re}(\tilde J_{ij}^{\alpha\beta})\tilde D_{ij} - 4 \sum_{1\leq i<j \leq 3} \operatorname{Re}(\tilde J_{ij}^{\alpha\beta})\tilde D_{ij}\sin^2 (\tilde \Delta_{ij}) - 2 \sum_{1\leq i<j \leq 3} \operatorname{Im}(\tilde J_{ij}^{\alpha\beta})\tilde D_{ij}\sin (2\tilde\Delta_{ij}), \label{equ:dampingmatter}$$ where the tildes denote the effective parameters for neutrinos propagating in matter (for instance, $\tilde J_{ij}^{\alpha\beta} = \tilde{U}_{\alpha j} \, \tilde{U}_{\beta j}^* \, \tilde{U}_{\alpha i}^* \, \tilde{U}_{\beta i}$, where $\tilde U$ is the effective leptonic mixing matrix in matter, [[*i.e.*]{}]{}, the matrix re-diagonalizing the Hamiltonian with the matter potential included). In general, the damping effects may not enter directly as multiplicative factors in the interference terms among different matter eigenstates.[^2] However, in this study, we assume small damping effects that should act as perturbations which, to leading order, give rise to neutrino oscillation probabilities in matter of the same form as the ones in vacuum. Thus, we use the propagation in constant matter and apply the damping signatures to the mass eigenstates in matter. This means that we discuss signatures which depend on the mass eigenstates in matter. They may come from wave packet decoherence, neutrino decay, neutrino oscillations into sterile neutrinos, neutrino absorption, quantum decoherence, or other mechanisms. Strictly speaking, this model does not describe many of these mechanisms exactly, since a complete re-diagonalization of the Hamiltonian might be necessary (such as for Majoron decay in matter; see, [[*e.g.*]{}]{}, [Refs.]{} [@Berezhiani:1987gf; @Giunti:1992sy]). 
However, we treat only small effects in matter acting as a perturbation to the neutrino oscillation mechanism and do not consider transitions from active into active neutrinos (such as decay into other active neutrino states), which would require a more complicated treatment. Therefore, this model should be sufficient as a first approximation, since we will later on use either short baselines or mainly discuss effects in the $P_{\mu \mu}$ channel, which are not affected by matter effects to first order in the ratio of the mass squared differences $\Delta m_{21}^2/\Delta m_{31}^2$ and the mixing parameter $s_{13} \equiv \sin(\theta_{13})$ [@Akhmedov:2004ny].

Examples of different damping signatures {#sec:examples}
----------------------------------------

  Damping type              Signature $D_{ij}$                                                        Unit for $\alpha$                             $\beta$   $\gamma$   $\xi$
  ------------------------- ------------------------------------------------------------------------- --------------------------------------------- --------- ---------- -------
  Wave packet decoherence   $\exp \left( - \sigma_E^2 \frac{(\Delta m_{ij}^2)^2 L^2}{8E^4} \right)$    $\mathrm{MeV}^2$ or $\mathrm{GeV}^2$          2         4          2
  Decay                     $\exp \left( - \alpha \frac{L}{E} \right)$                                 $\mathrm{GeV \cdot km^{-1}}$                  1         1          0
  Oscillations to $\nu_s$   $\exp \left( - \epsilon \frac{L^2}{(2E)^2} \right)$                        $\mathrm{eV}^4$                               2         2          0
  Absorption                $\exp \left( - \alpha L E \right)$                                         $\mathrm{GeV}^{-1} \cdot \mathrm{km}^{-1}$    1         $-1$       0
  Quantum decoherence I     $\exp \left( - \alpha L E^2 \right)$                                       $\mathrm{GeV}^{-2} \cdot \mathrm{km}^{-1}$    1         $-2$       0
  Quantum decoherence II    $\exp \left( - \kappa \frac{(\Delta m_{ij}^2)^2}{E^2} \right)$             $\mathrm{eV}^{-2}$                            1 or 2    2          2

  : [\[tab:models\] Different examples for damping signatures considered in this study. The parameter $\gamma$ represents the spectral (energy) dependence of the signature. The parameter $\alpha$ has in some places been re-defined for convenience (see main text) unless it corresponds exactly to our definition of $\alpha$. The quantum decoherence models I and II are two examples of signatures motivated by quantum decoherence (see [Table]{} \[tab:qdecoherence\]). The quantum decoherence model II absorbs $\beta$ in the definition of $\kappa \equiv \alpha L^\beta$ in order to describe two of the models from [Table]{} \[tab:qdecoherence\]. Note that another commonly used quantum decoherence signature is the same as the decay signature.]{}

The general damping signature in [[Eq.]{} (\[equ:dfactor\])]{} seems to be very abstract. Therefore, let us now motivate such damping signatures by different mechanisms, which are summarized in [Table]{} \[tab:models\].

### Intrinsic wave packet decoherence {#intrinsic-wave-packet-decoherence .unnumbered}

Intrinsic wave packet decoherence is an effect that appears even in standard neutrino oscillation treatments [@Giunti:1998wq; @Giunti:2003ax; @Giunti:1992sx; @Grimus:1998uh; @Cardall:1999ze]. It naturally emerges from any quantum mechanical model that does not assume neutrino mass eigenstates propagating as plane waves, or from any quantum field theoretical treatment.
In principle, intrinsic decoherence may not be distinguishable from a macroscopic energy averaging (see, [[*e.g.*]{}]{}, discussions in [Refs.]{} [@Kiers:1996zj; @Giunti:2003mv; @Lipkin:2003st]). Therefore, it is natural to expect that the test of this signature could be limited by the knowledge of the energy resolution of the detector. We adopt the treatment in [Ref.]{} [@Giunti:1998wq], which uses averaging over Gaussian wave packets. In this approach, the loss of coherence can only be described at the probability level. It leads to factors $\exp \left[-(L/L_{ij}^{\mathrm{coh}})^2 \right]$ of the form of [[Eq.]{} (\[equ:dfactor\])]{}, where $L_{ij}^{\mathrm{coh}} = 4 \sqrt{2} \sigma_x E^2/|\Delta m_{ij}^2|$ and $\sigma_x$ is the spatial wave packet width. In this case, the damping descriptions in vacuum and matter using [Eqs.]{} (\[equ:damping\]), (\[equ:dfactor\]), and (\[equ:dampingmatter\]) are accurate. For the damping signature, we obtain $$D_{ij} = \exp \left[ - \left( \frac{L}{L_{ij}^{\mathrm{coh}}} \right)^2 \right] = \exp \left[ - \left( \frac{\sqrt{2}\sigma_E}{E} \frac{\Delta m_{ij}^2 L}{4 E} \right)^2 \right] = \exp \left( - \sigma_E^2 \frac{(\Delta m_{ij}^2)^2 L^2}{8 E^4} \right) \label{equ:coherence}$$ in vacuum and the analogous signature $\tilde{D}$ in matter. Here we have introduced the wave packet spread in energy $\sigma_E \equiv 1/(2 \sigma_x)$, since we will later derive an upper bound for this quantity and directly compare it to the energy resolution of a detector. The typical units of $\sigma_E$ will be $\mathrm{MeV}$ or $\mathrm{GeV}$. By comparing [Eqs.]{} (\[equ:dfactor\]) and (\[equ:coherence\]), we can identify $\alpha_{ij} = \sigma_E^2/8$, $\beta=2$, $\gamma=4$, and $\xi=2$. Note that, in this case, the $\alpha_{ij}$’s do not depend on the indices $i$ and $j$.
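As an illustration (our own numerical sketch with hypothetical values of $\sigma_E$ and $\Delta m^2$), the decoherence factor of [[Eq.]{} (\[equ:coherence\])]{} can be expressed through the dimensionless oscillation phase, which makes the suppression scale $\sigma_E/E$ explicit.

```python
import numpy as np

def wavepacket_damping(dm2, L, E, sigma_E):
    """D_ij = exp(-sigma_E^2 (dm2)^2 L^2 / (8 E^4)), Eq. (coherence).

    dm2 in eV^2, L in km, E and sigma_E in GeV; the oscillation phase
    Phi = dm2 L / (2E) is made dimensionless by the usual factor 2*1.267.
    """
    Phi = 2.0 * 1.267 * dm2 * L / E
    return np.exp(-((sigma_E / (np.sqrt(2.0) * E)) * Phi) ** 2)
```

At the first oscillation maximum ($\Phi = \pi$) the suppression is governed entirely by $\sigma_E/E$, consistent with the discussion below.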
In order to better understand [[Eq.]{} (\[equ:coherence\])]{}, we note that $\Delta m_{ij}^2 L/(4 E)$ is of order unity for the first oscillation maximum: $$D_{ij} = \exp \left[ - \left( \frac{\sqrt{2}\sigma_E}{E} \frac{\Delta m_{ij}^2 L}{4 E} \right)^2 \right] = \exp \left[ - \left( \frac{\sigma_E}{\sqrt{2} E} \Phi_{ij} \right)^2 \right] \simeq \underbrace{ \exp \left[ - \left( \frac{1}{\sqrt{2}\sigma_xE} \mathcal{O}(1) \right)^2 \right]}_{\mathrm{value \, at \, oscillation \, maximum}} \, . \label{equ:coherence2}$$ From [[Eq.]{} (\[equ:coherence2\])]{}, we find three major implications: First, no effect will be observed if $\sigma_E \ll E$, because the oscillation phase is usually of order unity (or less). Second, since the decoherence damping factor always comes together with an oscillation phase factor with the same $\Delta m_{ij}^2$ \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:damping\])]{}\], it will equally damp the solar and atmospheric oscillating terms in one probability formula. This means for atmospheric oscillation experiments that if the solar contribution cannot be neglected, then its damping factor cannot be neglected either. Third, one expects the largest suppression for low energies independently of the type of oscillation experiment (solar or atmospheric), since in either case the experiment will be operated close to the oscillation maximum. Finally, it is important to keep in mind that this decoherence signature is not an intrinsic property of the neutrinos, but an effect related to the production and detection processes. Therefore, the parameter $\sigma_E$ could be different for different classes of experiments.

### Invisible neutrino decay {#invisible-neutrino-decay .unnumbered}

Another example of a damping signature is neutrino decay (see, [[*e.g.*]{}]{}, [Refs.]{} [@Bahcall:1972my; @Barger:1982vd; @Valle:1983ua; @Barger:1998xk; @Pakvasa:1999ta; @Barger:1999bg]).
In particular, invisible decay, [[*i.e.*]{}]{}, decay into particles invisible to the detector, leads to a loss of three-flavor unitarity. In this case, the neutrino evolution is given by an effective Hamiltonian $$H_{\rm eff} = H - {\rm i}\Gamma,$$ where $\Gamma \equiv \operatorname{diag}(a_1,a_2,a_3)/2$ in the neutrino mass eigenstate basis, $a_i \equiv \Gamma_i/\gamma_i$, $\Gamma_i$ is the inverse life-time of a neutrino of mass eigenstate $i$ in its own rest frame, and $\gamma_i \equiv E/m_i$ is the time dilation factor. We note that $H$ and $\Gamma$ are both diagonal in the neutrino mass eigenstate basis. The neutrino oscillation probabilities may now be calculated as usual with the exception that, in addition to the phase factor $\exp[-{\rm i}m_i^2L/(2E)]$, a factor of $\exp[-\Gamma_i m_i L/(2 E)]$ is obtained when evolving the neutrino mass eigenstate $\nu_i$. The resulting neutrino oscillation probabilities are of the form of [[Eq.]{} (\[equ:damping\])]{} with $$\label{equ:decay} D_{ij} = \exp\left(-\frac{\alpha_i + \alpha_j}{2E} L\right),$$ where $\alpha_i = \Gamma_i m_i$, in accordance with [Refs.]{} [@Lindner:2001fx; @Lindner:2001th]. Thus, for neutrino decay, the characteristic signature is $\alpha_{ij} = (\alpha_i + \alpha_j)/2$, $\beta = \gamma = 1$, and $\xi = 0$. An example of the above decay is Majoron decay into lighter sterile neutrinos. In this case, it is plausible to assume a quasi-degenerate neutrino mass scheme for the active neutrinos with approximately equal decay rates for all mass eigenstates, since the decay products all have to be considerably lighter than the active neutrinos to obtain fast decay rates due to phase space. The parameters $\alpha_i$ will then be approximately equal ($\alpha_i = \alpha$ for all $i$) and will typically be given in units of $\mathrm{GeV}/\mathrm{km}$. Note that the decay rate is an intrinsic neutrino property, not an experiment-dependent quantity such as the wave packet decoherence.
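The factorized structure of the decay signature can be made explicit in a few lines (an illustrative sketch; the decay parameters used in the check are hypothetical, not values from any fit):

```python
import numpy as np

def decay_damping(alpha_i, L, E):
    """D_ij = exp(-(alpha_i + alpha_j) L / (2E)) = A_i A_j, Eq. (decay).

    alpha_i = Gamma_i m_i in GeV/km (hypothetical values), L in km, E in GeV.
    """
    A = np.exp(-np.asarray(alpha_i) * L / (2.0 * E))   # per-eigenstate suppression A_i
    return np.outer(A, A)                              # D_ij = A_i A_j
```

For equal rates ($\alpha_i = \alpha$ for all $i$), every entry reduces to the single overall factor $\exp(-\alpha L/E)$, consistent with the quasi-degenerate Majoron-decay scenario described above.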
By comparing [[Eq.]{} (\[equ:decay\])]{} with [[Eq.]{} (\[equ:dfactor\])]{}, we identify that $\alpha$ is the same quantity[^3], $\beta=\gamma=1$, and $\xi=0$. In matter, we use the analogous signature, [[*i.e.*]{}]{}, we let the mass eigenstates in matter decay. In general, this is only a first approximation, since, for example, for Majoron decay in matter, a re-diagonalization of the complete Hamiltonian may be necessary; see, [[*e.g.*]{}]{}, [Refs.]{} [@Berezhiani:1987gf; @Giunti:1992sy]. However, as we have assumed equal decay rates for all eigenstates, it should describe the problem exactly, since the mass eigenstates in matter will also decay with equal rates. In other decay models, the $\alpha_{ij}$’s need not be identical. For example, for a hierarchical mass scheme with a normal hierarchy, the mass eigenstate with mass $m_3$ decays much faster than the other two. In this case, the observed effects in atmospheric oscillations would qualitatively be similar, but about a factor of two smaller (since mainly $\nu_2$ and $\nu_3$ participate in the oscillation and only one of them decays). However, in matter such a model is much more difficult to treat, since it is not easy to identify the mass eigenstate in matter after the diagonalization of the Hamiltonian. This problem does not occur with equal decay rates.

### Oscillations into sterile neutrinos {#sec:oscsteriles .unnumbered}

A natural description for the LSND result [@Aguilar:2001ty] is a light sterile neutrino ([[*i.e.*]{}]{}, not a weakly interacting neutrino) that is mixing with the active neutrinos. This description is now disfavored for the LSND experiment [@Strumia:2002fw; @Maltoni:2004ei], but small admixtures of light sterile neutrinos cannot be entirely excluded.
In particular, for slow enough oscillations into sterile neutrinos, the oscillation signature $\sin^2 \Delta_{4i}$ with $\Delta_{ij} \equiv \Delta m_{ij}^2 L/(4E)$ translates into damping signatures: $$1 - \epsilon \, \sin^2 \left(\frac{\Delta m_{4i}^2 L}{4E} \right) \simeq 1 - \epsilon \left( \frac{\Delta m_{4i}^2 L}{4 E} \right)^2 \simeq \exp \left[ - \epsilon \left( \frac{\Delta m_{4i}^2 L}{4 E} \right)^2 \right] \, ,$$ where $\epsilon$ represents the magnitude of the mixing. Thus, the damping coefficient $\alpha$ will (in this case) be determined by the sizes of the mixing and the mass squared differences $\Delta m_{4i}^2$. We use as a model in vacuum (and the same form in matter) $$D_{ij} = \exp \left( - \alpha_{ij} \frac{L^2}{(2E)^2} \right) = \exp \left( - \epsilon \frac{L^2}{(2E)^2} \right) \, , \label{equ:oscillations}$$ where $\epsilon$ contains the information on mixing and $\Delta m^2$ and will be given in units of $\mathrm{eV}^4$ (the mixing factor is dimensionless). Thus, by comparing [[Eq.]{} (\[equ:oscillations\])]{} with [[Eq.]{} (\[equ:dfactor\])]{}, we identify $\alpha_{ij} = \epsilon/4$, $\beta=\gamma=2$, and $\xi=0$. Note that we only discuss effects independent of $i$ and $j$, which simplifies the problem, but restricts the number of applications tremendously. In addition, although the coefficient $\epsilon$ is not experiment dependent (since it is an intrinsic neutrino property here), it may (partly because of the independence of $i$ and $j$) depend on the oscillation channel and mass scheme. As an example, let us consider $P_{\mu\mu}$ and a mass scheme with $\Delta m_{21}^2 \ll \Delta m_{43}^2 < \Delta m_{31}^2$, [[*i.e.*]{}]{}, ${\Delta m_{31}^2}$ is the largest mass squared difference. In this case, one can show that to first approximation $\epsilon \simeq U_{\mu 4}^2 \, U_{\mu 3}^2 ( \Delta m_{43}^2)^2$ (for [*CP*]{} conservation).
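The chain of approximations above (small mixing, slow sterile oscillation) can be checked numerically; the sketch below, with illustrative values of $\epsilon$ and of the sterile phase, confirms that the exponential damping form reproduces $1 - \epsilon\sin^2(\cdot)$ well below the first sterile oscillation maximum.

```python
import numpy as np

def sterile_exact(eps, x):
    """Exact sterile term 1 - eps*sin^2(x), with x = Delta m_{4i}^2 L / (4E)."""
    return 1.0 - eps * np.sin(x) ** 2

def sterile_damped(eps, x):
    """Exponential damping approximation exp(-eps x^2)."""
    return np.exp(-eps * x ** 2)
```

The approximation degrades once the sterile phase approaches order unity, which is why this signature is restricted to slow oscillations (and short baselines in matter).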
Thus, $\epsilon$ is suppressed by the flavor content of $\nu_4$ in $\nu_\mu$ and the extra mass squared difference, since all the other mass squared differences with the sterile state are absorbed into the atmospheric oscillation terms. In general, it should be noted that sterile neutrinos are not affected in the same way as active neutrinos when propagating through matter ([[*i.e.*]{}]{}, there is a phase difference due to the neutral-current interactions between matter and the active neutrino flavors). However, the exponential damping signature for oscillations into sterile neutrinos presented here is only valid for short baselines, where matter effects have not yet developed. ### Neutrino absorption {#neutrino-absorption .unnumbered} When neutrinos propagate through matter, there is a small chance of absorption. Neutrino absorption can be described in a fashion similar to neutrino decay. In this case, we assume that an effective Hamiltonian is given by $$H_{\rm eff} = H - {\rm i}\Gamma,$$ where $H$ is the usual neutrino Hamiltonian in matter, $\Gamma$ is given by $$\Gamma = \rho \operatorname{diag}(\sigma_e, \sigma_\mu, \sigma_\tau)/2$$ in the flavor eigenstate basis, $\rho$ is the matter density, and $\sigma_\alpha$ is the absorption cross-section for a neutrino of flavor $\alpha$. If we assume the cross-sections to be relatively small, then the eigenstates of $H_{\rm eff}$ will not differ significantly from the orthogonal eigenstates of $H$. Thus, the first order corrections to the eigenvalues of the effective Hamiltonian will be $$\delta E_i^{(1)} = -{\rm i} \Gamma_{ii} = -{\rm i} \frac \rho 2 \sum_\alpha |U_{\alpha i}|^2 \sigma_\alpha \equiv -{\rm i}\frac \rho 2 \sigma_i,$$ where $\sigma_i$ is an effective cross-section for a neutrino of mass eigenstate $i$. 
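The first-order effective cross-sections can be sketched numerically (our own illustrative code with hypothetical flavor cross-sections); the check also verifies that equal flavor cross-sections give equal effective cross-sections for the mass eigenstates, since $\sum_\alpha |U_{\alpha i}|^2 = 1$ for unitary $U$.

```python
import numpy as np

def effective_cross_sections(U, sigma_flavor):
    """sigma_i = sum_alpha |U_{alpha i}|^2 sigma_alpha (first-order perturbation)."""
    return (np.abs(U) ** 2).T @ np.asarray(sigma_flavor)
```

Note that the sum of the effective cross-sections equals the sum of the flavor cross-sections, so the weighting by $|U_{\alpha i}|^2$ only redistributes the absorption among the eigenstates.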
The neutrino oscillation probability is now given by an expression of the form of [[Eq.]{} (\[equ:damping\])]{} with $$D_{ij} = \exp\left( -\frac{\sigma_i + \sigma_j}{2} \rho L \right)= \exp\left( -\frac{\sigma_i(E) + \sigma_j(E)}{2} \rho L \right) \, ,$$ where we have assumed a constant matter density $\rho$. The signature of this scenario is given by $\beta = 1$, while $\gamma$ equals minus the power of the energy dependence of the cross-sections. It should be observed that, since the cross-sections increase with energy, $\gamma$ will be a negative number. If all neutrino flavor cross-sections were equal (or approximately equal), then the effective matter eigenstate cross-sections would also be equal.[^4] For the neutrino energies relevant to a neutrino factory, the neutrino-nucleon cross-sections are approximately linear in energy [@Gandhi:1998ri]. Thus, in this energy range, the damping signature is given by $\alpha = \rho \sigma(E_0)/E_0$, $\beta = 1$, $\gamma = -1$, and $\xi = 0$, where $\sigma(E_0)$ is the cross-section at energy $E_0$. At higher energies, the cross-sections increase at a slower rate, and if damping effects are studied at these energies, then the effective damping parameter $\gamma$ lies in the interval $-1 < \gamma < 0$. It should be noted that the standard neutrino absorption effects (by weak interactions) are very small for energies typical for neutrino oscillation experiments. However, there could be non-standard absorption effects, and the cross-sections of these effects should behave in a manner similar to the standard absorption.

### Quantum decoherence {#quantum-decoherence .unnumbered}

It has been argued that quantum decoherence could be an alternative description of neutrino flavor transitions. Fits to data by different collaborations ([[*e.g.*]{}]{}, Super-Kamiokande [@Ashie:2004mr] and KamLAND [@Araki:2004mb]) have been performed, and these clearly disfavor a decoherence explanation for neutrino flavor transitions.
However, quantum decoherence may still be a marginal effect in addition to neutrino oscillations and could give rise to damping factors of the type given in [[Eq.]{} (\[equ:dfactor\])]{}. Quantum decoherence arises when a neutrino system is coupled to an environment (or a reservoir or a bath), which could consist of, for example, a space-time “foam” [@Lisi:2000zt] leading to new physics beyond the standard model. Thus, quantum decoherence may be a feature of quantum gravity. In order to find the formulas describing quantum decoherence, it is necessary to use the Liouville equation with decoherence effects of the Lindblad form [@Lindblad:1975ef]. The effects of the loss of quantum coherence in neutrino oscillations have been studied throughout the literature [@Lisi:2000zt; @Benatti:2000ph; @Adler:2000vf; @Gago:2000qc; @Gago:2000nv; @Ohlsson:2000mj; @Benatti:2001fa; @Gago:2002na; @Barenboim:2004wu; @Barenboim:2004ev; @Morgan:2004vv]. Although the signatures derived by different authors seem to vary, the decoherence effects are of the same form as [[Eq.]{} (\[equ:dfactor\])]{}. However, there might be additional effects on the oscillation phases. In [Table]{} \[tab:qdecoherence\], we give a brief summary of some of the signatures that are present in the literature; these examples could be used to motivate the numerical testing of such signatures.
Reference Signature $D_{ij}$ Unit for $\alpha$ $\beta$ $\gamma$ $\xi$ --------------------------------------------------------------------------- -------------------------------------------------------------------- -------------------------------------------- --------- ---------- ------- Lisi [*et al.*]{} [@Lisi:2000zt] and Morgan [*et al.*]{} [@Morgan:2004vv] $\exp \left( - \alpha L \right)$ $\mathrm{km}^{-1}$ 1 0 0 Lisi [*et al.*]{} [@Lisi:2000zt] and Morgan [*et al.*]{} [@Morgan:2004vv] $\exp \left( - \alpha \frac{L}{E} \right)$ $\mathrm{GeV} \cdot \mathrm{km}^{-1}$ 1 1 0 Lisi [*et al.*]{} [@Lisi:2000zt] and Morgan [*et al.*]{} [@Morgan:2004vv] $\exp \left( - \alpha L E^2 \right)$ $\mathrm{GeV}^{-2} \cdot \mathrm{km}^{-1}$ 1 $-2$ 0 Adler [@Adler:2000vf] $\exp \left( - \alpha \frac{(\Delta m_{ij}^2)^2 L}{E^2} \right)$ $\mathrm{GeV}^{-1} $ 1 2 2 Ohlsson [@Ohlsson:2000mj] $\exp \left( - \alpha \frac{(\Delta m_{ij}^2)^2 L^2}{E^2} \right)$ dimensionless 2 2 2 : [\[tab:qdecoherence\] Different signatures that might arise from quantum decoherence and the references in which they are motivated.]{} ### Other signatures {#other-signatures .unnumbered} In principle, what we have presented above is just a collection of interesting signatures that could be responsible for damping of neutrino oscillations. However, there are also other possibilities, which we have decided not to investigate further in this study. These signatures include, for example, heavy isosinglet neutrinos [@Schechter:1980gr; @Schechter:1980gk] and neutrino oscillations in different extra dimension scenarios [@Dvali:1999cn; @Mohapatra:1999zd; @Barbieri:2000mg; @Mohapatra:2000wn; @Morgan:2004vv; @Hallgren:2004mw]. ### Combined signatures {#combined-signatures .unnumbered} In most cases, if there is a damping effect, then it would be natural (and easy) to assume that one type of effect is giving a clearly dominating contribution. 
However, if an experiment is carried out with some specific setup, then contributions from different scenarios might be of the same order. In such a case, the form of [[Eq.]{} (\[equ:dfactor\])]{} is spoiled. For example, in the case of neutrino decay combined with neutrino absorption, the matrices $\Gamma$ are simply added, which results in the damping signatures $$D_{ij} = \exp\left[ -\left(\frac{\alpha_{ij}^{\rm decay}}{E} + \alpha_{ij}^{\rm abs}E\right)L \right].$$ In general, just multiplying the damping factors (which is the result of the above treatment) might not give the correct damping, and different combined cases might behave in other ways. However, since there are different energy dependencies in the different damping signatures, there will only be a limited energy range where a combined treatment is necessary. In this study, we do not consider combined signatures.

Damped neutrino oscillation probabilities {#sec:dampedprob}
=========================================

In this section, we investigate the effects of damping on specific neutrino oscillation probabilities interesting for future reactor and long-baseline experiments, where we restrict the analytical discussion to the vacuum case.

The damped two-flavor neutrino scenario {#sec:dampedtwoflavor}
---------------------------------------

In a simple two-flavor scenario, the damped neutrino oscillation probabilities take particularly simple forms (just as in the undamped case).
From the two-flavor equivalent of [[Eq.]{} (\[equ:damping\])]{}, we obtain $$\begin{aligned} \label{equ:2flavs} P_{\alpha\alpha} &=& D_{11} c^4 + D_{22} s^4 + \frac{1}{2} D_{21} \sin^2(2\theta) \cos(2\Delta), \\ P_{\beta\beta} &=& D_{11} s^4 + D_{22} c^4 + \frac{1}{2} D_{21} \sin^2(2\theta) \cos(2\Delta)\end{aligned}$$ for the neutrino survival probabilities and $$\label{equ:2flavo} P_{\alpha\beta} = P_{\beta\alpha} = \frac{1}{4} \sin^2(2\theta)[D_{11}+D_{22} - 2D_{21}\cos(2\Delta)]$$ for the neutrino transition probability, where $\nu_\alpha$ is the linear combination $\nu_\alpha = c \nu_1 + s \nu_2$, $\nu_\beta$ is the linear combination that is orthogonal to $\nu_\alpha$, $\Delta \equiv \Delta_{21}$, $s \equiv \sin(\theta)$, $c \equiv \cos(\theta)$, and $\theta$ is the mixing angle between the two neutrino flavors. Let us first discuss the case $\xi > 0$ or all $\alpha_{ii} = 0$, which means that all $D_{ii}$ are equal to unity. We refer to this case as “decoherence-like” (probability conserving) damping. The two-flavor formulas then become $$\label{equ:2flavd} P_{\alpha\beta} = \delta_{\alpha\beta} + \frac{1}{2}(1-2\delta_{\alpha\beta})\sin^2(2\theta) [1-D\cos(2\Delta)],$$ where $D \equiv D_{21}$. Below, we will show that expressions reminiscent of these two-flavor formulas are quite common in the three-flavor counterparts. In the limit $D \rightarrow 0$ (maximal damping), the oscillations are averaged out, [[*i.e.*]{}]{}, $$P_{\alpha\beta} \rightarrow \delta_{\alpha\beta} [1 - \sin^2 (2 \theta)] + \frac{1}{2} \sin^2 (2 \theta),$$ where the factor $1/2$ is typical for an averaged $\sin^2 (x)$ term. It is also of interest to note, from the form of [[Eq.]{} (\[equ:2flavd\])]{}, that the neutrino transition probabilities can either be smaller or larger than the undamped probabilities depending on the sign of $\cos(2\Delta)$. For instance, the neutrino survival probability $$P_{\alpha\alpha} = 1 - \frac{1}{2}\sin^2 (2 \theta) [1 - D \cos(2\Delta)].
\label{equ:faketheta13}$$ is smaller than the corresponding undamped probability if $\cos(2\Delta)$ is positive and vice versa. Close to the oscillation maximum $\Delta \sim \pi/2$, the factor $\cos(2\Delta)$ will be negative, [[*i.e.*]{}]{}, the damped neutrino survival probability will be larger than the undamped probability, since the oscillations will be partially averaged out. This behavior changes as a function of the neutrino energy at points where $\cos(2\Delta)$ changes sign, [[*i.e.*]{}]{}, at $2\Delta = (2n+1) \pi/2$, $n = 0,1,\ldots$. As a rule of thumb, the damping will lead to larger probabilities close to the oscillation maximum $E_{\mathrm{max}} = \Delta m^2 L/(2 \pi)$ and to smaller probabilities for $E<2 E_{\mathrm{max}}/3$ and $E>2 E_{\mathrm{max}}$. This result will be valid for any survival probability discussed in this study. From the form of [[Eq.]{} (\[equ:2flavd\])]{}, it is apparent that if only a small range of $\Delta$’s is studied, then a damping factor may mimic an oscillation signal. The worst such case would be if the damping signature had $\gamma = 2$. This would mean that if one makes a series expansion of $\cos(2\Delta)$ and the exponential of the damping factor, then the energy dependence will be the same to lowest order in the expansion parameters, [[*i.e.*]{}]{}, we will have $$D\cos(2\Delta) = \left[1-\alpha|\Delta m^2|^\xi\frac{L^\beta}{E^2} + \ldots \right] \left[1-2\left(\frac{\Delta m^2L}{4E}\right)^2 + \ldots \right].$$ This effect is also present in a general case with any number of neutrino flavors. Another interesting case is when $\alpha_{ij} = \alpha_i + \alpha_j$ and $\xi = 0$, which is expected for the neutrino decay and neutrino absorption scenarios. Under this assumption, the damping factor $D_{ij}$ can be written as a product $$\label{equ:pbviol} D_{ij} = A_i A_j,$$ where $A_i \equiv \exp(-\alpha_i L^\beta/E^\gamma)$ only depends on the $i$th mass eigenstate.
Then, the neutrino oscillation probabilities are given by $$\begin{aligned} P_{\alpha\alpha} &=& A^2\left[(c^2+\kappa s^2)^2 - \kappa \sin^2(2\theta) \sin^2(\Delta)\right], \\ P_{\beta\beta} &=& A^2\left[(\kappa c^2+s^2)^2 - \kappa \sin^2(2\theta) \sin^2(\Delta)\right], \\ P_{\alpha\beta} &=& \frac{1}{4} A^2 \sin^2(2\theta)[1+\kappa^2 - 2\kappa \cos(2\Delta)],\end{aligned}$$ where $A \equiv A_1$ and $\kappa \equiv A_2/A_1$. It is important to note that, for example, the total probability $P_{\alpha\alpha} + P_{\alpha\beta}$ is not conserved in this case; in fact, we obtain $$\label{equ:dprobtot} P_{\alpha\alpha} + P_{\alpha\beta} = A^2\left[c^4 + \kappa^2 s^4 + \frac{1}{4}\sin^2(2\theta)(1+\kappa^2)\right] \leq 1,$$ where the equality holds if and only if $A = \kappa = 1$ (by the form of the $A_i$’s, we have $A \leq 1$ and $\kappa A \leq 1$, and since all terms in [[Eq.]{} (\[equ:dprobtot\])]{} are positive, the expression attains its maximum value when $A = \kappa A = 1$, in which case it simplifies to one). Thus, we will introduce the term “decay-like” for effects giving rise to damping terms of the form given in [[Eq.]{} (\[equ:pbviol\])]{}. In the case of a decay-like signature, there are two special cases which are of particular interest. First, if both mass eigenstates are affected in the same way, [[*i.e.*]{}]{}, $\kappa = 1$, then the resulting neutrino transition probabilities will reduce to the undamped standard neutrino oscillation probabilities suppressed by a factor of $A^2$. This means that all damped probabilities will be smaller than their undamped counterparts.
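These decay-like two-flavor expressions are easy to check numerically. The sketch below (illustrative parameter values, vacuum formulas exactly as above) verifies that the total probability is $c^2 A_1^2 + s^2 A_2^2 \leq 1$ independently of $\Delta$, and that $\kappa = 1$ reduces everything to an overall $A^2$ suppression of the undamped probabilities, as just noted.

```python
import math

def decay_like_probs(theta, Delta, A1, A2):
    """Two-flavor probabilities for a decay-like signature D_ij = A_i A_j,
    with A = A1 and kappa = A2/A1, following the formulas above."""
    s, c = math.sin(theta), math.cos(theta)
    A, kappa = A1, A2 / A1
    s2t = math.sin(2 * theta) ** 2
    P_aa = A**2 * ((c**2 + kappa * s**2) ** 2 - kappa * s2t * math.sin(Delta) ** 2)
    P_ab = 0.25 * A**2 * s2t * (1 + kappa**2 - 2 * kappa * math.cos(2 * Delta))
    return P_aa, P_ab

theta, Delta = 0.6, 1.3   # illustrative mixing angle and phase
A1, A2 = 0.9, 0.7         # damping of the two mass eigenstates
P_aa, P_ab = decay_like_probs(theta, Delta, A1, A2)

# The total probability reduces to c^2 A1^2 + s^2 A2^2 (no Delta dependence).
total = P_aa + P_ab

# kappa = 1: both probabilities are the undamped ones suppressed by A^2.
P_aa_eq, P_ab_eq = decay_like_probs(theta, Delta, 0.8, 0.8)
P_aa_std, P_ab_std = decay_like_probs(theta, Delta, 1.0, 1.0)
```

The same functions can be reused for the decay-like beam and reactor formulas later in this section, since those are two-flavor expressions of the identical form.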
Second, if only one of the mass eigenstates is affected, [[*i.e.*]{}]{}, $A = 1$, then the difference in the $\nu_\alpha$ survival probability compared to the undamped case will be given by $$\Delta P_{\alpha\alpha} \equiv P_{\alpha\alpha}^{\rm damped} - P_{\alpha\alpha}^{\rm undamped} = (\kappa-1) s^2 [(1+\kappa)s^2 + 2 c^2 \cos(2\Delta)].$$ Thus, this survival probability will actually increase if $$\label{equ:decincr} - 2 \cos(2\Delta) > (1+\kappa)\tan^2(\theta).$$ Note that for the first part of the neutrino propagation (for $L < \pi E/\Delta m^2$), the term $\cos(2\Delta)$ is positive, and thus, the inequality of [[Eq.]{} (\[equ:decincr\])]{} cannot be satisfied in this region, since the right-hand side is always positive. From the comparison with the discussion after [[Eq.]{} (\[equ:faketheta13\])]{}, this condition is equivalent to $E>2 E_{\mathrm{max}}$. For example, for a neutrino factory, which can be operated far away from the oscillation maximum, this implies that the relevant part of the spectrum will be suppressed by this form of damping. For the neutrino oscillation probability difference $\Delta P_{\alpha\beta}$, we obtain $$\Delta P_{\alpha\beta} = \frac 14 \sin^2(2\theta)(\kappa-1) [1+\kappa - 2\cos(2\Delta)],$$ that is, the damped $P_{\alpha\beta}$ is larger than the undamped $P_{\alpha\beta}$ if $$\label{equ:decincr2} 2\cos(2\Delta) > 1 + \kappa.$$ Note that if $\tan(\theta) = 1$, then [Eqs.]{} (\[equ:decincr\]) and (\[equ:decincr2\]) will have the same form except for the sign of the left-hand side. In [Fig.]{} \[fig:illustration\], the qualitative effects of neutrino wave packet decoherence and neutrino decay on the neutrino survival probability are shown. ![[\[fig:illustration\] The qualitative effect of different damping signatures on the two-flavor neutrino survival probability as a function of the oscillation phase $\Delta$. The mixing used in this plot is maximal ($\theta = \pi/4$) and the damping parameters have been highly exaggerated.
The scenario “Oscillation + decay I” corresponds to decay of both mass eigenstates with equal rates, whereas “Oscillation + decay II” corresponds to the second mass eigenstate decaying while the first mass eigenstate is stable.]{}](illustration.eps){width="10cm"} From this figure, we clearly see how the wave packet decoherence simply corresponds to a damping of the oscillating term and the decay of all mass eigenstates corresponds to an overall damping of the undamped neutrino survival probability. For the case of only one decaying mass eigenstate, the probability converges towards the square of the content of the stable mass eigenstate in the initial neutrino flavor eigenstate.

Three-flavor electron-muon neutrino transitions
-----------------------------------------------

For a fixed neutrino oscillation channel, the damped neutrino oscillation probability [[Eq.]{} (\[equ:damping\])]{} can be written more explicitly in terms of the mixing parameters and the mass squared differences. Below, we will use the standard notation for the leptonic mixing angles, [[*i.e.*]{}]{}, $s_{ij} = \sin(\theta_{ij})$ and $c_{ij} = \cos(\theta_{ij})$.
Then, for example, the $\nu_e$ survival probability $P_{ee}$ is given by $$\begin{aligned} P_{ee} &=& c_{13}^4\left[ D_{11}c_{12}^4 + D_{22} s_{12}^4 + \frac{1}{2} D_{21}\sin^2(2\theta_{12}) \cos(2\Delta_{21})\right] \nonumber \\ &&+ \frac{1}{2} {\sin^2(2 \theta_{13})}[D_{31}c_{12}^2 \cos(2\Delta_{31}) + D_{32} s_{12}^2 \cos(2\Delta_{32})] + D_{33} s_{13}^4, \label{equ:Pee}\end{aligned}$$ which is dependent on all neutrino oscillation parameters except for $\theta_{23}$ and $\delta_{CP}$, while the probability $P_{e\mu}$ of oscillations into $\nu_\mu$ is given by $$\begin{aligned} P_{e\mu} &=& \frac{1}{4} \sin^2(2\theta_{12}) c_{23}^2[(D_{11}+D_{22})-2D_{21}\cos(2\Delta_{21})] \nonumber \\ && + \frac{1}{2}\sin(2\theta_{12})\sin(2\theta_{23}) \{c_\delta[D_{11}c_{12}^2-D_{22}s_{12}^2 - D_{21} \cos(2\theta_{12})\cos(2\Delta_{21})] \nonumber \\ && -D_{21}s_\delta \sin(2\Delta_{21}) +D_{32}\cos(2\Delta_{32} - \delta_{CP}) - D_{31} \cos(2\Delta_{31} - \delta_{CP})\} \, s_{13} \nonumber \\ && +s_{23}^2[D_{11} c_{12}^4 + D_{22} s_{12}^4 + D_{33} - 2D_{31} s_{12}^2 \cos(2\Delta_{31}) - 2 D_{32} c_{12}^2 \cos(2\Delta_{32})] \, s_{13}^2 \nonumber \\ && + \frac{1}{4} \sin^2(2\theta_{12}) [2 D_{21} \cos(2\Delta_{21})- c_{23}^2(D_{11}+D_{22})] \, s_{13}^2 + \mathcal O(s_{13}^3), \label{equ:Pemu}\end{aligned}$$ where $s_\delta \equiv \sin(\delta_{CP})$ and $c_\delta \equiv \cos(\delta_{CP})$. Furthermore, the $\nu_\mu$ survival probability can be computed to be of the form $$\begin{aligned} P_{\mu\mu} &=& \frac{1}{2} \sin^2(2\theta_{23})[D_{32}c_{12}^2 \cos(2\Delta_{32}) +D_{31}s_{12}^2 \cos(2\Delta_{31})] \nonumber \\ &&+c_{23}^4 \left[D_{11}s_{12}^4 + D_{22}c_{12}^4 + \frac{1}{2} D_{21} \sin^2(2\theta_{12}) \cos(2\Delta_{21})\right] + D_{33}s_{23}^4 \nonumber \\ &&+ c_\delta \sin(2\theta_{12})\sin(2\theta_{23})\left\{ c_{23}^2\left[ D_{11}s_{12}^2-D_{22}c_{12}^2+ {D_{21}}\cos(2\theta_{12})\cos(2\Delta_{21}) \right]\right. \nonumber \\ && \left. 
+ s_{23}^2 [D_{31}\cos(2\Delta_{31}) - D_{32}\cos(2\Delta_{32})] \right\} \, s_{13} +\mathcal O(s_{13}^2). \label{equ:Pmumu}\end{aligned}$$ Note that the probabilities $P_{e\mu}$ and $P_{\mu\mu}$ are series expansions in $s_{13}$, whereas the probability $P_{ee}$ is valid to all orders in $s_{13}$. The reason to use these expressions rather than the exact expressions is that, unless some further assumptions are made, the formulas for $P_{e\mu}$ and $P_{\mu\mu}$ are quite cumbersome. The probability $P_{\mu e}$ can be obtained by making the transformation $\delta_{CP} \rightarrow -\delta_{CP}$ in the probability $P_{e\mu}$, [[*i.e.*]{}]{}, $P_{\mu e} = P_{e\mu}(\delta_{CP} \rightarrow -\delta_{CP})$. Furthermore, in vacuum, the anti-neutrino oscillation probabilities can be obtained from the neutrino oscillation probabilities through the same transformation as above. Note that this is not true for neutrinos propagating in matter.

Probabilities for decoherence-like effects in experiments
---------------------------------------------------------

For a decoherence-like damping effect, $D_{ii} = 1$ for all $i$ and the relations $$\label{equ:probconservation} \sum_{\alpha = e,\mu,\tau} P_{\alpha\beta} = 1 \qquad {\rm and} \qquad \sum_{\beta = e,\mu,\tau} P_{\alpha\beta} = 1$$ are still valid despite the presence of damping factors ([[*i.e.*]{}]{}, no neutrinos are lost due to effects such as invisible decay, absorption, [[*etc.*]{}]{}). Note that, in the case of a decoherence-like damping effect, all neutrino oscillation probabilities can be constructed from $P_{ee}$, $P_{e\mu}$, and $P_{\mu\mu}$ due to the conservation of total probability given in [[Eq.]{} (\[equ:probconservation\])]{}. It is interesting to observe what effect a decoherence-like damping could have on the neutrino oscillation probabilities for different experiments.
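These structural properties can be checked against a direct numerical evaluation. The sketch below (illustrative parameter values) builds the damped vacuum probability by attaching a real symmetric damping factor $D_{ij}$ to each interference term of the squared amplitude, which is our reading of the general damped formula; it reproduces the explicit $P_{ee}$ expression, the probability conservation for $D_{ii} = 1$, and the relation $P_{\mu e} = P_{e\mu}(\delta_{CP} \rightarrow -\delta_{CP})$.

```python
import cmath
import math

def pmns(t12, t13, t23, dcp):
    """PMNS matrix in the standard parametrization."""
    s12, c12 = math.sin(t12), math.cos(t12)
    s13, c13 = math.sin(t13), math.cos(t13)
    s23, c23 = math.sin(t23), math.cos(t23)
    ep, em = cmath.exp(1j * dcp), cmath.exp(-1j * dcp)
    return [
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
         c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ]

def damped_prob(U, Delta, D, a, b):
    """Damped vacuum probability P(nu_a -> nu_b): every interference term
    i, j of the coherent sum is multiplied by the real symmetric factor
    D[i][j]; Delta[i] is the phase of mass eigenstate i, so that
    Delta_ij = Delta[i] - Delta[j]."""
    P = 0.0
    for i in range(3):
        for j in range(3):
            term = (U[a][i].conjugate() * U[b][i] * U[a][j] * U[b][j].conjugate()
                    * cmath.exp(-2j * (Delta[i] - Delta[j])))
            P += D[i][j] * term.real
    return P

t12, t13, t23, dcp = 0.58, 0.15, 0.78, 1.2   # illustrative mixing parameters
Delta = [0.0, 0.4, 2.1]                      # Delta_21 = 0.4, Delta_31 = 2.1
D = [[1.0, 0.8, 0.6],                        # decoherence-like: D_ii = 1
     [0.8, 1.0, 0.5],
     [0.6, 0.5, 1.0]]
U = pmns(t12, t13, t23, dcp)
```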
Therefore, we will now study different kinds of neutrino oscillation experiments and make different approximations depending on the type of experiment to investigate what the main damping effects are.

### Short-baseline reactor experiments {#sec:sblreactor .unnumbered}

Short-baseline experiments, such as CHOOZ [@Apollonio:1999ae; @Apollonio:2002gd] and Double-CHOOZ [@Ardellier:2004ui], are operated at the atmospheric oscillation maximum $\Delta_{31} \simeq \Delta_{32} = \mathcal{O}(1)$ in order to be sensitive to ${\sin^2(2 \theta_{13})}$. The most interesting quantity is the $\bar{\nu}_e$ survival probability $P_{\bar{e}\bar{e}}$. For these experiments, it turns out (see [Sec.]{} \[sec:appl1\]) that it is important to keep all damping factors. As a result, the $\bar \nu_e$ survival probability is given by $$\begin{aligned} P_{\bar{e}\bar{e}} &=& c_{13}^4 \left\{1-\frac{1}{2}\sin^2(2\theta_{12})[1-D_{21}\cos(2\Delta_{21})]\right\} \nonumber \\ && \label{equ:sblPee} + \frac{1}{2}\sin^2(2\theta_{13}) [D_{31}c_{12}^2 \cos(2\Delta_{31}) + D_{32} s_{12}^2 \cos(2\Delta_{32})] +s_{13}^4.\end{aligned}$$ The most apparent feature of this equation is the term within the curly brackets, which has the form of the survival probability for a two-flavor neutrino damping scenario with $\theta = \theta_{12}$ and $\Delta = \Delta_{21}$. Therefore, even in the limit $\theta_{13} \rightarrow 0$ \[close to the ${\sin^2(2 \theta_{13})}$ sensitivity limit\], the damping factor $D_{21}$ might be constrained by the contribution of the solar oscillation at low energies. Furthermore, if $\Delta_{21} \rightarrow 0$ (or $\theta_{13}$ is large), so that $D_{21}$ is close to unity \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:coherence2\])]{}\] and $D_{31} \simeq D_{32}$ (as could be expected if $\Delta_{21}/\Delta_{31} \rightarrow 0$), then this expression will exactly mimic the two-flavor neutrino damping scenario with $\theta = \theta_{13}$ and $\Delta = \Delta_{31} = \Delta_{32}$.
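This limiting behavior can be made concrete numerically. The sketch below (illustrative values) evaluates [[Eq.]{} (\[equ:sblPee\])]{} and confirms that for $\Delta_{21} = 0$, $D_{21} = 1$, and $D_{31} = D_{32} = D$ it collapses exactly onto the damped two-flavor survival probability with $\theta = \theta_{13}$, while for $\theta_{13} = 0$ it collapses onto the two-flavor form with $\theta = \theta_{12}$.

```python
import math

def sbl_pee(th12, th13, D21, D31, D32, d21, d31, d32):
    """nu_e-bar survival probability of Eq. (equ:sblPee) (D_ii = 1)."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    return (c13**4 * (1 - 0.5 * math.sin(2 * th12) ** 2
                      * (1 - D21 * math.cos(2 * d21)))
            + 0.5 * math.sin(2 * th13) ** 2
              * (D31 * c12**2 * math.cos(2 * d31)
                 + D32 * s12**2 * math.cos(2 * d32))
            + s13**4)

def two_flavor_pee(theta, Delta, D):
    """Damped two-flavor survival probability, cf. Eq. (equ:faketheta13)."""
    return 1 - 0.5 * math.sin(2 * theta) ** 2 * (1 - D * math.cos(2 * Delta))

th12, th13, D, Delta = 0.58, 0.15, 0.7, 1.1   # illustrative values

# Atmospheric two-flavor limit: Delta_21 = 0, D21 = 1, D31 = D32 = D.
P_atm_limit = sbl_pee(th12, th13, 1.0, D, D, 0.0, Delta, Delta)
P_atm_2f = two_flavor_pee(th13, Delta, D)
```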
Thus, depending on which small number (the ratio of the mass squared differences or $s_{13}$) is the larger, two different two-flavor neutrino scenarios are obtained, as expected from the non-damped case. If $\theta_{13}$ is relatively large (compared to the ratio of the mass squared differences), then the latter two-flavor case will apply. It is then interesting to note that the damping factor $D_{31}$, the neutrino source energy spectrum, and the cross-sections all have some energy dependence, which means that they can “emphasize” certain regions in the energy spectrum which are most sensitive to damping effects. If we assume that the total impact is strongest close to the oscillation maximum, then the damping effect will be misinterpreted as a smaller value of ${\sin^2(2 \theta_{13})}$ \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:faketheta13\])]{}; in both cases the probability is closer to unity\]. Therefore, as we will demonstrate, any such damping can fake a value of ${\sin^2(2 \theta_{13})}$ which is smaller than the one that is provided by Nature. Note that, for the case of wave packet decoherence, $D_{21}$, $D_{32}$, and $D_{31}$ are not independent \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:coherence2\])]{}\], which means that any of the terms in [[Eq.]{} (\[equ:sblPee\])]{} could lead to information on the parameter $\sigma_E$.

### Long-baseline reactor experiments {#long-baseline-reactor-experiments .unnumbered}

For long-baseline reactor experiments operated at the solar oscillation maximum $\Delta_{21} = \mathcal{O}(1)$, such as the KamLAND experiment [@Eguchi:2002dm; @Araki:2004mb], the damping factors $D_{31}$ and $D_{32}$ of a decoherence-like scenario with $\xi > 0$ are small, since the large mass squared difference makes the argument of the exponential functions in [[Eq.]{} (\[equ:dfactor\])]{} large and negative.
In addition, these two damping factors are attached to neutrino oscillations associated with the large phases $\Delta_{31}$ and $\Delta_{32}$ \[see [Eqs.]{} (\[equ:Pee\])-(\[equ:Pmumu\])\], which effectively average out. As a result of these two effects, the oscillating terms involving the third mass eigenstate can be safely set to zero. After some simplifications, the $\bar{\nu}_e$ survival probability $P_{\bar{e}\bar{e}}$ is found to be $$P_{\bar{e}\bar{e}} = c_{13}^4 \left\{1-\frac{1}{2}\sin^2(2\theta_{12})[1-D_{21}\cos(2\Delta_{21})]\right\} +s_{13}^4. \label{equ:lblPee}$$ This expression is clearly of the familiar form $P_{\bar{e}\bar{e}} = c_{13}^4 P_{\bar{e}\bar{e}}^{\rm 2f} + s_{13}^4$, where $P_{\bar{e}\bar{e}}^{\rm 2f}$ is the damped two-flavor $\bar\nu_e$ survival probability with $\theta = \theta_{12}$ and $\Delta = \Delta_{21}$, which is also obtained in the non-damped case when averaging over the fast oscillations \[[[*cf.*]{}]{} [[Eq.]{} (\[equ:sblPee\])]{}\]. For the case of wave packet decoherence, we know from [[Eq.]{} (\[equ:coherence2\])]{} that the parameter $\sigma_E$ could be constrained by either of these two equations. Since this parameter is experiment dependent, one could argue that one should obtain some limits from the KamLAND experiment, because the reactor experiments are very similar in source and detector (see, [[*e.g.*]{}]{}, [Ref.]{} [@Schwetz:2003se]). However, it should be noted that KamLAND has a rather weak precision on the corresponding $\theta_{12}$ measurement because of normalization uncertainties. Since a decoherence contribution would appear at low energies, the data set in [Ref.]{} [@Araki:2004mb] does not seem to be very restrictive for the parameter $\sigma_E$. 
### Beam experiments {#beam-experiments .unnumbered}

For beam experiments, such as superbeams, beta-beams or neutrino factories, one may assume $\Delta_{21} \simeq 0$ as a first approximation if one wants to be sensitive to ${\sin^2(2 \theta_{13})}$, since, at the energies and baseline lengths involved, the low-frequency neutrino oscillations do not have enough time to evolve. In the case of $\xi > 0$, this also implies that $D_{21} = 1$ and $D \equiv D_{32} = D_{31}$ to a good approximation. From these assumptions, it follows that $$\begin{aligned} P_{e\mu} &=& 2 s_{23}^2 [1 - D \cos(2\Delta)] \, s_{13}^2 + \mathcal O(s_{13}^3), \\ P_{\mu\mu} &=& 1 - \frac{1}{2}\sin^2(2\theta_{23}) [1 - D \cos(2\Delta)] + \mathcal O(s_{13}^2), \label{equ:pmumudamped}\end{aligned}$$ where $\Delta \equiv \Delta_{32} = \Delta_{31}$. Note that the probability $P_{e\mu}$ is correct up to $\mathcal O(s_{13}^3)$ \[as compared with [[Eq.]{} (\[equ:Pemu\])]{}, which is only valid up to $\mathcal O(s_{13}^2)$\]; this is one of the cases where the assumptions made simplify the $s_{13}^2$ term in this probability. Both of the above equations show obvious similarities with the cases of damped two-flavor neutrino oscillations. For $P_{e\mu}$ we have an approximate two-flavor neutrino scenario with $s^2c^2 = s_{23}^2 s_{13}^2$, and $P_{\mu\mu}$ is a pure two-flavor neutrino formula with $\theta = \theta_{23}$ up to corrections of order $s_{13}^2$. Since the disappearance channel $P_{\mu\mu}$ at a beam experiment is supposed to have extremely good statistics, $D$ will be strongly constrained by this channel. Note that the damping in $P_{\mu\mu}$ qualitatively behaves as the one in [[Eq.]{} (\[equ:faketheta13\])]{}, [[*i.e.*]{}]{}, the damped probability might be larger or smaller than the undamped probability depending on the position relative to the oscillation maximum $E_{\mathrm{max}}$.
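This larger/smaller behavior around $E_{\mathrm{max}}$ is easy to illustrate with the damped $P_{\mu\mu}$ of [[Eq.]{} (\[equ:pmumudamped\])]{}. A sketch with illustrative values, using $\Delta = \Delta m^2 L/(4E)$ in natural units so that $E_{\mathrm{max}} = \Delta m^2 L/(2\pi)$:

```python
import math

def pmumu_damped(th23, Delta, D):
    """Damped disappearance probability of Eq. (equ:pmumudamped),
    dropping the O(s13^2) corrections."""
    return 1 - 0.5 * math.sin(2 * th23) ** 2 * (1 - D * math.cos(2 * Delta))

def phase(dm2, L, E):
    """Oscillation phase Delta = dm2 * L / (4 E), natural units."""
    return dm2 * L / (4.0 * E)

dm2, L, th23, D = 1.0, 1.0, 0.7, 0.6   # illustrative values
E_max = dm2 * L / (2.0 * math.pi)      # Delta = pi/2 at this energy

def diff(E):
    """Damped minus undamped probability at energy E."""
    d = phase(dm2, L, E)
    return pmumu_damped(th23, d, D) - pmumu_damped(th23, d, 1.0)

# Positive at the oscillation maximum, zero at E = 2 E_max (cos(2 Delta) = 0),
# negative above 2 E_max and below 2 E_max / 3 -- the rule of thumb stated
# after Eq. (equ:faketheta13).
d_max, d_2max, d_3max, d_half = (diff(E_max), diff(2 * E_max),
                                 diff(3 * E_max), diff(0.5 * E_max))
```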
Probabilities for decay-like effects in experiments
---------------------------------------------------

If $\xi = 0$ and $\alpha_{ii} \neq 0$, then $D_{ii} \neq 1$ and [[Eq.]{} (\[equ:probconservation\])]{} will not hold. We define any effect of this kind to be “probability violating”. As mentioned in the two-flavor neutrino discussion, a very interesting special case of the probability violating effects is the case of a decay-like effect. The neutrino oscillation probabilities for decay-like effects corresponding to the ones given for decoherence-like effects are listed below.

### Short-baseline reactor experiments {#short-baseline-reactor-experiments .unnumbered}

For the short-baseline reactor experiments, we obtain the $\bar\nu_e$ survival probability as $$\begin{aligned} P_{\bar e\bar e} &=& c_{13}^4 \left\{ (A_1 c_{12}^2 + A_2 s_{12}^2)^2 - A_1 A_2 \sin^2(2\theta_{12})\sin^2(\Delta_{21}) \right\} \nonumber \\ && +A_3 s_{13}^2\{A_3 s_{13}^2 + 2 c_{13}^2[A_1 c_{12}^2 \cos(2\Delta_{31})+A_2s_{12}^2\cos(2\Delta_{32})]\}.\end{aligned}$$ Again, as in the case of decoherence-like damping, the expression within the curly brackets is of a two-flavor form with $\theta = \theta_{12}$ and $\Delta = \Delta_{21}$. In the limit where ${\sin^2(2 \theta_{13})}$ is large and the solar oscillations are ignored, and under the additional assumption $A_1 = A_2 = A$, we obtain the two-flavor neutrino scenario $$P_{\bar e\bar e} = A^2\left\{ (c_{13}^2+\kappa s_{13}^2)^2 - \kappa {\sin^2(2 \theta_{13})}\sin^2(\Delta) \right\},$$ where $\Delta = \Delta_{31} = \Delta_{32}$ and $\kappa = A_3/A$.
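As a consistency check, the sketch below (illustrative values) confirms numerically that the full decay-like expression reduces to this two-flavor form when $A_1 = A_2$ and the solar phase is switched off.

```python
import math

def sbl_pee_decay(th12, th13, A1, A2, A3, d21, d31, d32):
    """Decay-like nu_e-bar survival probability at a short-baseline
    reactor (the full expression above)."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    return (c13**4 * ((A1 * c12**2 + A2 * s12**2) ** 2
                      - A1 * A2 * math.sin(2 * th12) ** 2 * math.sin(d21) ** 2)
            + A3 * s13**2 * (A3 * s13**2
                             + 2 * c13**2 * (A1 * c12**2 * math.cos(2 * d31)
                                             + A2 * s12**2 * math.cos(2 * d32))))

def two_flavor_decay(theta, Delta, A, kappa):
    """Two-flavor decay-like survival probability, as in the two-flavor
    section."""
    return A**2 * ((math.cos(theta) ** 2 + kappa * math.sin(theta) ** 2) ** 2
                   - kappa * math.sin(2 * theta) ** 2 * math.sin(Delta) ** 2)

th12, th13, A, A3, Delta = 0.58, 0.15, 0.9, 0.6, 1.3   # illustrative values
P_full = sbl_pee_decay(th12, th13, A, A, A3, 0.0, Delta, Delta)
P_2f = two_flavor_decay(th13, Delta, A, A3 / A)
```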
### Long-baseline reactor experiments {#long-baseline-reactor-experiments-1 .unnumbered}

Assuming that the fast neutrino oscillations average out, the $\bar\nu_e$ survival probability is given by $$P_{\bar e\bar e} = c_{13}^4 P_{\bar e\bar e}^{\rm 2f} + A_3^2 s_{13}^4,$$ where $P_{\bar e\bar e}^{\rm 2f}$ is the two-flavor decay-like $\bar\nu_e$ survival probability with $\theta = \theta_{12}$ and $\Delta = \Delta_{21}$ \[[[*cf.*]{}]{}, [Eq.]{} (\[equ:lblPee\])\]. In this expression, the $s_{13}^4$ term is also damped, in contrast to the decoherence-like scenario.

### Beam experiments {#beam-experiments-1 .unnumbered}

When the assumptions $\Delta_{21} \simeq 0$ and $A = A_1 = A_2$ (which could be expected in a decay scenario where $m_1 = m_2$) are made, the neutrino oscillation probabilities that are relevant for beam experiments become $$\begin{aligned} P_{e\mu} &=& A^2 s_{23}^2 [1 + \kappa^2 - 2\kappa \cos(2\Delta)] \, s_{13}^2 + \mathcal O(s_{13}^3), \\ P_{\mu\mu} &=& A^2\left[(c_{23}^2 + \kappa s_{23}^2)^2 - \kappa\sin^2(2\theta_{23}) \sin^2(\Delta)\right] + \mathcal O(s_{13}^2),\end{aligned}$$ where $\kappa \equiv A_3/A$ and $\Delta \equiv \Delta_{32} = \Delta_{31}$. These probabilities mimic decay-like two-flavor probabilities to leading order in $s_{13}$, just as the corresponding decoherence-like probabilities mimic decoherence-like two-flavor probabilities.

Application I: Faking a small $\boldsymbol{{\sin^2(2 \theta_{13})}}$ at reactor experiments by decoherence-like effects {#sec:appl1}
=======================================================================================================================

In this section, we demonstrate the possible effects of damping with a simple example using a full numerical simulation. Let us only consider the case of intrinsic wave packet decoherence, which is particularly interesting because it is a “standard” effect in any realistic neutrino oscillation treatment.
However, similar effects could occur from related signatures, such as quantum decoherence. As experiments, one could, in principle, consider all classes of experiments in order to investigate decoherence signals. New reactor experiments with near and far detectors [@Minakata:2002jv; @Huber:2003pm] are candidates for “clean” measurements of ${\sin^2(2 \theta_{13})}$, [[*i.e.*]{}]{}, they are specifically designed to search for a ${\sin^2(2 \theta_{13})}$ signal. As we have discussed in [Sec.]{} \[sec:sblreactor\], an interesting decoherence-like effect at such an experiment would be a derived value of ${\sin^2(2 \theta_{13})}$ which is smaller than the value provided by Nature. In this case, the CHOOZ bound might actually be too strong and the interpretation of new reactor experiments might be wrong. If we assume that there is an intrinsic loss of coherence, then the reactor $\bar \nu_e$ survival probability $P_{\bar{e} \bar{e}}$ will be given by [[Eq.]{} (\[equ:sblPee\])]{}. In order to illustrate the decoherence effect, we show in [[Fig.]{} \[fig:reactorprobs\]]{} $P_{\bar e \bar e}$ and the corresponding event rates for the experiment [[Reactor-I]{}]{} from [Ref.]{} [@Huber:2003pm] (full analysis range shown). ![[\[fig:reactorprobs\] The neutrino oscillation probability $P_{\bar{e} \bar{e}}$ (left) and event rates (right) for the experiment [[Reactor-I]{}]{} from [Ref.]{} [@Huber:2003pm] in the analysis range. For the simulated parameter values, we use ${\Delta m_{31}^2}= 2.5 \cdot 10^{-3} \, \mathrm{eV}^2$, ${\Delta m_{21}^2}= 8.2 \cdot 10^{-5} \, \mathrm{eV}^2$, $\sin^2 2 \theta_{12}=0.83$, $\sin^2 2 \theta_{23}=1$, $\delta_{CP}=0$ [@Fogli:2003th; @Bahcall:2004ut; @Bandyopadhyay:2004da; @Maltoni:2004ei] and the values for ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ as given in the left plot.]{}](probsrates){width="\textwidth"} The different curves correspond to the non-oscillatory case as well as different combinations of ${\sin^2(2 \theta_{13})}$ and $\sigma_E$. 
As one can observe, the two cases of large ${\sin^2(2 \theta_{13})}$ with decoherence \[${\sin^2(2 \theta_{13})}=0.05$ and $\sigma_E=2 \, \mathrm{MeV}$\] and small ${\sin^2(2 \theta_{13})}$ without decoherence \[${\sin^2(2 \theta_{13})}=0.03$ and $\sigma_E=0$\] correspond very well to each other, especially in the event rate plot \[as compared to the other two cases of no oscillations and large ${\sin^2(2 \theta_{13})}$ only\]. This means that the decoherence effect can mimic a smaller value of ${\sin^2(2 \theta_{13})}$ than what is provided by Nature. Note that in the probability plot, there is a significant contribution from loss of coherence in the solar terms for low energies. As we will see later, this contribution can limit the decoherence effects even for no ${\sin^2(2 \theta_{13})}$ signal. In addition, the damped neutrino oscillation probability is larger than the undamped one in the range discussed after [[Eq.]{} (\[equ:faketheta13\])]{}, where the oscillation maximum is here at about $E_{\mathrm{max}} \simeq 3.4 \, \mathrm{MeV}$. ![[\[fig:reactorcorr\] Simultaneous sensitivity to ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ for the experiments [[Reactor-I]{}]{} (left) and [[Reactor-II]{}]{} (right) from [Ref.]{} [@Huber:2003pm] (curves shown for 1 d.o.f.). For the simulated parameter values, we use ${\sin^2(2 \theta_{13})}=0$, $\sigma_E=0$, and the other values as in [[Fig.]{} \[fig:reactorprobs\]]{}. For the thick solid curves, the unshown fit parameter values are marginalized over, where post-KamLAND external precisions of 10% for $\theta_{12}$ [@Gonzalez-Garcia:2001zy; @Barger:2000hy] are imposed, along with an external error of 10% as assumed for superbeams. For the thin dashed curves, the unshown fit parameter values are fixed (no correlations). For the numerical analysis, an extended version of the GLoBES software [@Huber:2004ka] is used.
The arrows indicate the shift of the ${\sin^2(2 \theta_{13})}$ sensitivity limit if one assumes $\sigma_E$ as a free parameter.]{}](reactorcorr){width="\textwidth"} In order to illustrate the effect for a complete analysis, we show in [[Fig.]{} \[fig:reactorcorr\]]{} the simultaneous sensitivity to ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ for [[Reactor-I]{}]{} ($\mathcal{L} = 400 \, \mathrm{t \, GW \, yr}$) and [[Reactor-II]{}]{} ($\mathcal{L} = 8 \, 000 \, \mathrm{t \, GW \, yr}$) from [Ref.]{} [@Huber:2003pm] (1 d.o.f.) using an extended version of the GLoBES software [@Huber:2004ka]. In this figure, $\sigma_E$ is assumed to be a free (fit) parameter that has to be measured by the experiment. Therefore, without additional knowledge, the ${\sin^2(2 \theta_{13})}$ sensitivity limit is obtained as a projection of the curves onto the ${\sin^2(2 \theta_{13})}$-axis. Since the ${\sin^2(2 \theta_{13})}$ sensitivity limit for no decoherence effects is the one for $\sigma_E=0$, the arrows indicate the shift of this limit by the unknown $\sigma_E$. This means, for example, that the sensitivity limit becomes about 50 % to 100 % worse than that for the actual $\sigma_E \equiv 0$, since the decoherence mimics a smaller value of ${\sin^2(2 \theta_{13})}$ than what is provided by Nature. Similar results to the left plot are obtained for the proposed Double-CHOOZ experiment [@Ardellier:2004ui]. Note that the correlation between ${\sin^2(2 \theta_{13})}$ and $\sigma_E$ affects the ${\sin^2(2 \theta_{13})}$ sensitivity (projection onto the horizontal axis), but not the $\sigma_E$ sensitivity (projection onto the vertical axis). The latter is correlated with the other neutrino oscillation parameters (especially the solar parameters), as one can read off from the difference between the solid and dashed curves. 
For the $\sigma_E$ sensitivity, one obtains $\sigma_E \lesssim 10 \, \mathrm{MeV}$ ([[Reactor-I]{}]{}) and $\sigma_E \lesssim 5 \, \mathrm{MeV}$ ([[Reactor-II]{}]{}) at the $3 \sigma$ confidence level. As one can observe from the left plot of [[Fig.]{} \[fig:reactorprobs\]]{}, there is some contribution of the solar oscillation averaging to the decoherence effect at low energies. In fact, this is the reason why one can constrain $\sigma_E$ even for ${\sin^2(2 \theta_{13})}\equiv 0$, since in the decoherence effect, the atmospheric oscillations are suppressed by the oscillation amplitude ${\sin^2(2 \theta_{13})}$. Obviously, this solar decoherence effect determines the upper bound for $\sigma_E$, which means that the $\sigma_E$ sensitivity is limited by the knowledge of the solar oscillation parameters \[[[*cf.*]{}]{}, [[Eq.]{} (\[equ:sblPee\])]{}\]. As we have discussed in [Sec.]{} \[sec:phenomenology\], $\sigma_E$ might be an experiment dependent parameter related to the production and detection processes. Instead of deriving bounds for this parameter from reactor experiments, one can estimate from [[Fig.]{} \[fig:reactorcorr\]]{} that one has to constrain $\sigma_E$ better than about $\sigma_E \lesssim 0.5 \, \mathrm{MeV}$ in order not to have a significant deterioration of the ${\sin^2(2 \theta_{13})}$ sensitivity limit. In addition, in order to exclude an experiment dependent effect, it is highly advisable to measure the same quantity with different techniques, such as measuring ${\sin^2(2 \theta_{13})}$ with both reactor experiments and superbeams.

Application II: Testing and disentangling damping signatures at neutrino factories {#sec:appl2}
==================================================================================

If we want to constrain the model parameters in [Table]{} \[tab:models\] and to test the different models against each other, then we will need to choose a high-precision instrument to test these tiny effects.
Therefore, we investigate the potential of a neutrino factory. In particular, the muon neutrino disappearance channel $\nu_\mu \rightarrow \nu_\mu$ at a neutrino factory has very good statistics and the impact of neutrino oscillation parameter correlations other than with ${\Delta m_{31}^2}$ and $\theta_{23}$ is very small. Thus, we will mainly focus on this disappearance channel, but include the appearance information in the full analysis and demonstrate how the value of ${\sin^2(2 \theta_{13})}$ would influence the effects. Since our exponential damping model is not directly comparable to other approaches in the literature, we put a major emphasis on the identification problem of a non-standard contribution: If we actually observe something unexpected, how well can we determine what sort of effect it is? In the simplest case, this means that we test a signature against the standard (no damping) scenario giving us limits for the parameters. Since it is almost impossible to include the correlations among all parameters, we choose to use $\alpha_{ij}=\alpha$ independent of $i$ and $j$ in this section in order to drastically reduce the number of parameters. This means that we now have to deal with eight correlated parameters (six neutrino oscillation parameters, the matter density, and the parameter $\alpha$). We have motivated this choice at the end of [Sec.]{} \[sec:gendescription\] and, for individual cases, in [Sec.]{} \[sec:examples\]. ![[\[fig:allprobs\] Contributions of the first three different damping signatures from [Table]{} \[tab:models\] to the disappearance probability $P_{\mu \mu}$ as a function of the neutrino energy. Here $L=3 \, 000 \, \mathrm{km}$ and the neutrino oscillation parameters as in [[Fig.]{} \[fig:reactorprobs\]]{} with ${\sin^2(2 \theta_{13})}=0$ are used.
The parameters for the non-standard effects are given in the plots, where zero corresponds to the thick curves (oscillations only) and larger values correspond to curves further off the zero curve. The energy range corresponds to the analysis range of the $50 \, \mathrm{GeV}$ neutrino factory [[NuFact-II]{}]{} from [Ref.]{} [@Huber:2002mx].]{}](allprobs){width="\textwidth"} Before we come to the results of a complete simulation, let us illustrate the spectral behavior (energy dependence) of the neutrino oscillation probability $P_{\mu \mu}$ in [[Fig.]{} \[fig:allprobs\]]{} for some characteristic examples. Earlier in [Sec.]{} \[sec:dampedprob\], we have already discussed that there are two general interesting cases: Either only the oscillatory terms are damped or all terms are damped. In [[Fig.]{} \[fig:allprobs\]]{}, we can clearly identify this difference between the decoherence-like and the other two damping models (decay and oscillations). In all the shown cases (for which $\gamma>0$), the relative importance of the damping increases as the energy decreases. However, since the neutrino oscillation probability itself also drops at lower energies, the absolute size of the effect is determined by the ratio of the signature to the probability at low energies. In addition, cross-section and flux will disfavor low energies, which means that the low-energy effects become even harder to identify. This makes the wave packet decoherence scenario most difficult to test, since the $E^{-4}$ dependence in the exponent strongly favors low energies. However, it might be most easily distinguished from the decay and oscillation damping scenarios because of its unique signature.
As we have discussed after [[Eq.]{} (\[equ:faketheta13\])]{} \[which also holds for the similar [[Eq.]{} (\[equ:pmumudamped\])]{}\], it is a characteristic feature of decoherence-like signatures that they cross the undamped curve at $2E_{\mathrm{max}}/3$ and $2E_{\mathrm{max}}$, which here evaluate to $4 \, \mathrm{GeV}$ (outside of the analysis range) and $12 \, \mathrm{GeV}$. In [[Fig.]{} \[fig:allprobs\]]{} (left panel), this effect is hardly observable because of the $E^{-4}$ energy dependence, but the quantum-decoherence-motivated case “Quantum decoherence II” from [Table]{} \[tab:models\] clearly shows this behavior because of its $E^{-2}$ energy dependence. As far as the other two signatures are concerned, the decay damping has a linear energy dependence in the exponent, as opposed to the quadratic one for the oscillation damping scenario. Therefore, the decay damping scenario shows the strongest high-energy effect.

|               | Decoherence | Decay | Oscillations | Absorption | Q. decoh. I | Q. decoh. II |
|---------------|-------------|-------|--------------|------------|-------------|--------------|
| Fit signature | $\frac{\sigma_E}{\mathrm{GeV}} \gtrsim$ | $\frac{\alpha}{10^{-5} \, \mathrm{\frac{GeV}{km}}} \gtrsim$ | $\frac{\epsilon}{10^{-7} \, \mathrm{eV}^4} \gtrsim$ | $\frac{\alpha}{\frac{10^{-8}}{\mathrm{GeV \, km}}} \gtrsim$ | $\frac{\alpha}{\frac{10^{-10}}{\mathrm{GeV^2 \, km}}} \gtrsim$ | $\frac{\kappa}{\frac{10^{24}}{\mathrm{eV}^2}} \gtrsim$ |
| No damping    | 1.7 (2.8) | 4.3 (7.2) | 5.1 (8.3) | 1.9 (3.1) | 4.1 (6.8) | 2.0 (3.6) |
| Decoherence   | -         | 4.3 (7.2) | 5.1 (8.3) | 1.9 (3.1) | 4.1 (6.8) | 2.0 (3.6) |
| Decay         | 1.7 (2.8) | -         | 6.3 (10)  | 3.4 (5.7) | 6.0 (10)  | 2.6 (5.1) |
| Oscillations  | 1.7 (2.8) | 5.8 (9.8) | -         | 1.9 (3.2) | 4.1 (6.9) | 13 (17)   |
| Absorption    | 1.7 (2.8) | 7.8 (13)  | 5.2 (8.5) | -         | 24 (40)   | 2.1 (3.8) |
| Q. decoh. I   | 1.7 (2.8) | 6.3 (11)  | 5.1 (8.3) | 11 (19)   | -         | 2.1 (3.7) |
| Q. decoh. II  | 1.7 (2.8) | 4.3 (7.2) | 5.1 (8.3) | 1.9 (3.1) | 4.1 (6.8) | -         |
| All models    | 1.7 (2.8) | 7.8 (13)  | 6.3 (10)  | 11 (19)   | 24 (40)   | 13 (17)   |

: [\[tab:results\] Parameter sensitivity limits for which the simulated models (in columns) from [Table]{} \[tab:models\] could be distinguished from the fit models (in rows) at the $3 \sigma$ ($5 \sigma$) confidence level (for the experiment simulation [[NuFact-II]{}]{} from [Ref.]{} [@Huber:2002mx]). For example, decoherence could be established against all models (including standard oscillations) for a simulated $\sigma_E \gtrsim 1.7 \, \mathrm{GeV}$. For the simulated neutrino oscillation parameters, we use the same values as in [[Fig.]{} \[fig:reactorprobs\]]{} and ${\sin^2(2 \theta_{13})}=0$, as given in the column captions. The fit parameter values (including the model parameter $\alpha$) are marginalized over.
The row “no damping” corresponds to the standard neutrino oscillation scenario, [[*i.e.*]{}]{}, it gives the upper bounds for the parameters assuming that there is only one non-standard effect. The row “All models” corresponds to the most conservative case, [[*i.e.*]{}]{}, it is an estimate of how well one can establish the model against all of the other shown models.]{}

In order to test the different models against each other, we use a modified version of the GLoBES software [@Huber:2004ka] and the neutrino factory setup [[NuFact-II]{}]{} from [Ref.]{} [@Huber:2002mx]. This neutrino factory uses a $50 \, \mathrm{kt}$ magnetized iron detector, four years of running time in each polarity, and $4 \, \mathrm{MW}$ target power (corresponding to $5.3 \cdot 10^{20}$ useful muon decays per year). For a fixed set of simulated parameter values, including the simulated damping parameter $\alpha$, we marginalize over the fit neutrino oscillation parameters, including the fit damping parameter. Due to the complexity of the parameter space, we assume that the $\mathrm{sgn}({\Delta m_{31}^2})$-degeneracy has been resolved by this time. We define the sensitivity limit on $\alpha$ as the threshold above which the simulated damping model could be distinguished from the fit damping model. Thus, if the damping mechanism is really there, then the damping parameter $\alpha$ has to be above this threshold in order to establish the model against the fit model with the given experiment. In particular, we include the fit damping model “no damping”, which corresponds to the standard neutrino oscillation case. For the simulation, we impose external precisions of 10 % each on $\theta_{12}$ and ${\Delta m_{21}^2}$ [@Gonzalez-Garcia:2001zy; @Barger:2000hy]. In addition, we assume a constant matter density profile with a 5 % uncertainty, which takes into account matter density uncertainties as well as matter density profile effects [@Geller:2001ix; @Ohlsson:2003ip; @Pana].
However, we assume that the neutrino factory itself measures ${\Delta m_{31}^2}$ and $\theta_{23}$ with its disappearance channel, [[*i.e.*]{}]{}, we do not impose an external precision on these parameters. The resulting sensitivity limits of this analysis are shown in [Table]{} \[tab:results\], where the columns correspond to the simulated models and the rows correspond to the fit models. These results are computed for ${\sin^2(2 \theta_{13})}=0$. It turns out that for a simulated value of ${\sin^2(2 \theta_{13})}$ close to the CHOOZ bound ${\sin^2(2 \theta_{13})}\simeq 0.1$, the limits on $\alpha$ would improve by up to about 30 % \[depending on the model and the value of ${\sin^2(2 \theta_{13})}$\] because of the additional contribution from the appearance signal.[^5] Let us first discuss the resulting sensitivities against the standard neutrino oscillation scenario for some simple cases. For decoherence, the obtained numbers indeed correspond very well to the energy resolution of the detector, which is about 15 % of the neutrino energy, [[*i.e.*]{}]{}, $1.5 \, \mathrm{GeV}$ for a neutrino energy of $E=10 \, \mathrm{GeV}$, where the major effect takes place ([[*cf.*]{}]{} [[Fig.]{} \[fig:allprobs\]]{}, left plot). Since the neutrino oscillation probability changes sufficiently fast in this region, the measurement is limited by the energy resolution of the detector. In the wave packet approach, the bound against the “no damping” model, $\sigma_E \lesssim 1.67 \, \mathrm{GeV}$, translates into $\sigma_x \gtrsim 6 \cdot 10^{-17} \, \mathrm{m}$. This rather small number (sub-nucleon size) means that the bound is not very useful for wave packet decoherence, since it is virtually impossible to create such sharply peaked wave packets. However, there might be other energy averaging effects that can be constrained. For decay, we obtain a limit against the standard scenario which is comparable to the current neutrino lifetime limit for $m_3$.
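The wave-packet conversion quoted above can be checked numerically. This sketch assumes the minimal-uncertainty relation $\sigma_x \approx \hbar c/(2\sigma_E)$; the exact prefactor depends on the wave-packet model.

```python
# Quick numerical check of the sigma_E -> sigma_x conversion, assuming the
# minimal-uncertainty relation sigma_x ~ hbar*c / (2 * sigma_E).
HBARC = 1.97327e-16   # hbar*c in GeV * m

def sigma_x_m(sigma_E_GeV):
    """Spatial wave-packet width (m) for a given energy width (GeV)."""
    return HBARC / (2.0 * sigma_E_GeV)

sigma_x = sigma_x_m(1.67)   # about 5.9e-17 m, i.e. sub-nucleon size
```

For $\sigma_E = 1.67\,\mathrm{GeV}$ this indeed gives roughly $6 \cdot 10^{-17}\,\mathrm{m}$, the sub-nucleon scale quoted in the text.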
Note that we have included all correlations with the neutrino oscillation parameters in the decay limit. However, the limit would be a factor of two weaker if we considered only decay of $m_3$ instead of all mass eigenstates. Since there are quite strong bounds on the $m_1$ and $m_2$ lifetimes from supernova and solar neutrino observations, this factor of two difference should be a very good approximation for the actual limit. For the oscillation signature, the obtained limits are of the order of magnitude $5 \cdot 10^{-7} \, \mathrm{eV}^4$, which corresponds to $(\Delta m_{43}^2)^2$ times the active-sterile mixing in our estimate for a possible mass scheme ([[*cf.*]{}]{} [Sec.]{} \[sec:oscsteriles\]). Considering the $\Delta m_{43}^2$ dependence, this is in fact not a very strong bound. However, note that we have taken into account the full parameter correlation, [[*i.e.*]{}]{}, this effect could not come from ${\sin^2(2 \theta_{13})}$ or any other standard parameter.

![\[fig:corrplot\] The impact of different correlations on the statistics (and systematics) sensitivity limit of the model-dependent parameter $\alpha$ ($3 \sigma$), where the horizontal axis represents multiples of the statistics (and systematics) sensitivity limit. The group captions refer to the simulated models and the bar labels to the fit models; only fit models that affect the sensitivity limit by more than 5 % are shown. The dark bars represent the correlations with the neutrino oscillation parameters (fit parameter $\alpha=0$ fixed), and the light bars indicate the additional change if the model-specific fit parameter $\alpha$ is marginalized over. The lowest light bar extends to $37$.](corrplot){width="10cm"}

In order to discuss the general identification problem among different damping signatures, some information can be obtained from [Table]{} \[tab:results\].
In addition, in [[Fig.]{} \[fig:corrplot\]]{} we show the impact of the correlations with the standard neutrino oscillation parameters (dark bars), as well as the additional correlation with the fit model parameter $\alpha$ (light bars), on the $\alpha$ sensitivity limit for the simulated models from [Table]{} \[tab:models\]. The horizontal axis shows the ratio of the $\alpha$ sensitivity limit including correlations to the one from statistics and systematics only (which corresponds to $1$); we only include fit models with relevant model parameter contributions. Two models are highly correlated if a possible signature in one model can be compensated by a change of parameter(s) in the other. Since we include the standard neutrino oscillation scenario in all models, a small change in the fit neutrino oscillation parameters might also compensate a damping signature within the measurement precision of the experiment. Therefore, for all signatures we include the standard neutrino oscillation parameter correlation as dark bars, [[*i.e.*]{}]{}, the dark bars represent the fit against the standard neutrino oscillation scenario (for fixed fit parameter $\alpha$), and the light bars measure the additional difficulty of distinguishing a non-standard signature from those of other possible non-standard models. The interpretation of these bars is as follows: the right edges of the dark bars give the limit for $\alpha$ (as a multiple of the statistics limit) beyond which the non-standard signature could be distinguished from the standard neutrino oscillation case at the $3 \sigma$ confidence level. However, if $\alpha$ lies within one of the light-bar ranges, then it could not be uniquely identified, since it could equally well be the non-standard signature corresponding to that bar.
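The model-discrimination test behind these bars can be sketched as a toy marginalized $\chi^2$ fit. This is not the GLoBES analysis: the event rates, binning, and parameter values below are illustrative assumptions, and only the fit model's damping parameter is marginalized over (the real analysis also marginalizes over the oscillation parameters and matter density).

```python
import math

# Toy sketch of the discrimination test: "data" are event rates from a
# simulated damping model exp(-alpha * L / E^gamma); the competing fit
# model's chi^2 is minimized over its own damping parameter on a grid.
E_BINS = [4.0 + 2.0 * i for i in range(24)]   # GeV, toy analysis bins
L, DMSQ = 3000.0, 2.5e-3                      # km, eV^2 (assumed values)

def rate(E, alpha, gamma):
    """Toy event rate with damping of the oscillatory term."""
    p = 1.0 - math.sin(1.267 * DMSQ * L / E) ** 2 * math.exp(-alpha * L / E ** gamma)
    return 1000.0 * p * E   # flux and cross section grow with E (schematic)

def min_chi2(alpha_true, gamma_true, gamma_fit, grid):
    """Best chi^2 of the fit model, marginalized over its damping parameter."""
    data = [rate(E, alpha_true, gamma_true) for E in E_BINS]
    return min(
        sum((d - rate(E, a, gamma_fit)) ** 2 / d for d, E in zip(data, E_BINS))
        for a in grid
    )
```

In this toy, a fit model with the correct spectral index can reach $\chi^2 \approx 0$, while a wrong spectral index leaves a residual $\Delta\chi^2$; a $3\sigma$ discrimination corresponds roughly to $\Delta\chi^2 > 9$ for one parameter.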
From [[Fig.]{} \[fig:corrplot\]]{}, we make a number of interesting observations:

- Signatures with negative $\gamma$’s (Absorption and Quantum decoherence I) are almost unaffected by correlations with the neutrino oscillation parameters, [[*i.e.*]{}]{}, they cannot be explained by different neutrino oscillation parameter values. In these cases, the spectrum is more suppressed for large values of $E$ than for small values, which means that the signature behaves unlike an oscillation signature corresponding to $\gamma = 2$. However, it is difficult to identify which of these models is realized.

- Signatures with $\gamma=2$ (Oscillations into $\nu_s$ and Quantum decoherence II) are highly affected by correlations with the standard neutrino oscillation parameters, since the signatures have an energy dependence similar to the oscillation signature. Similar signatures, such as decay, can enhance this correlation.

- Unique signatures (Wave packet decoherence and Neutrino decay) can easily be distinguished from all the other models. Although there could be some correlations with similar signatures for neutrino decay, the absolute impact on the $\alpha$ sensitivity limit is comparatively small (up to a factor of three).

Summary and conclusions {#sec:summary}
=======================

We have introduced exponential damping factors in the neutrino oscillation probabilities, which lead to distinctive signatures, [[*i.e.*]{}]{}, energy-dependent damping effects in the energy spectrum. These damping factors are one approach to testing non-oscillation effects at the level of the neutrino oscillation probabilities. They can be motivated by many different models, such as intrinsic wave packet decoherence, neutrino decay, oscillations into sterile neutrinos, neutrino absorption, quantum decoherence, [[*etc.*]{}]{}.
They describe the second-order contributions of small possible “non-standard” corrections to the three-flavor neutrino oscillation framework (in vacuum as well as in matter) on a rather abstract level. As opposed to tests of probability conservation, the damping factors can, in addition, describe a damping of the oscillating terms (which preserves the total probability), and their energy dependence carries some information on the type of effect. We have demonstrated how damping factors can modify the neutrino oscillation probabilities relevant for future high-precision short- and long-baseline experiments, since these experiments might be most sensitive to very small spectral effects. As one application, we have shown that decoherence-like damping signatures can severely modify the interpretation of experiments, where we have chosen wave packet decoherence damping at new short-baseline reactor experiments as an example. In this case, two competing small effects, namely the effect of a non-zero value of ${\sin^2(2 \theta_{13})}$ and a damping contribution, might be mixed up. In particular, the damping could fake a value of ${\sin^2(2 \theta_{13})}$ which is much smaller than the value provided by Nature. Such a ${\sin^2(2 \theta_{13})}$ suppression effect can either be intrinsic (such as quantum decoherence), experiment-dependent (such as some averaging effect not taken into account), or both (such as wave packet decoherence related to the production and detection processes). Intrinsic effects will be observable by all types of experiments, which means that very stringent limits are available from existing data, and future experiments will test the consistency of the picture. On the other hand, experiment-dependent effects can only be checked by complementary techniques measuring the same quantity. One such complementary pair has, in the past, been the solar and long-baseline reactor experiments.
In the future, it will therefore be very important to measure ${\sin^2(2 \theta_{13})}$ with reactor experiments and superbeams as complementary techniques, since one of them alone could fail in the presence of such experiment-dependent effects. Eventually, the LSND experiment could be a strong hint for such an experiment-dependent effect if its result is rejected by the MiniBooNE experiment. One of the most interesting features of damping signatures is their characteristic spectral (energy) dependence, which can act as a “fingerprint” for many sources of non-oscillation effects. For example, specific signatures could point to interesting new physics beyond the standard model. We have therefore discussed, for the example of neutrino factories, how large the effects from different damping signatures have to be in order to be identified, and how well these damping signatures could be distinguished from each other. In some cases, such damping signatures can be compensated by a shift of the neutrino oscillation parameters, which means that given such a damping effect, an erroneous determination of these parameters is quite likely. However, if the damping effects are strong enough, then an establishment of non-oscillation effects will be possible. Once such a damping effect is established, it will be very interesting to know from which non-standard mechanism it actually arises. Regarding this identification problem, we have found that signatures with a damping similar to $\exp ( - \alpha L^\beta/ E^\gamma)$, $\gamma = 1,2,\hdots$, are strongly correlated (peaking at $\gamma=2$) with the standard neutrino oscillation parameters, [[*i.e.*]{}]{}, it is difficult to distinguish them from small adjustments of the neutrino oscillation parameters. However, damping signatures similar to $\exp ( - \alpha L^\beta E^2 )$ can be very easily disentangled from the neutrino oscillation parameters, but it is difficult to distinguish them from each other.
It is also extremely difficult to establish a damping of the oscillations against a damping of the probabilities with the same spectral index $\gamma$ because of the correlations with the neutrino oscillation parameters. Finally, we conclude that spectral tests of damping signatures in neutrino oscillation probabilities are an important test of the consistency of the three-flavor neutrino oscillation picture. If any deviation from this picture is found, then the most important question will be what sort of effect we are dealing with. Exactly this information could be provided by the spectral dependence of the damping signature, which means that this approach could be an important test of physics beyond the standard model.

Acknowledgments {#acknowledgments .unnumbered}
---------------

We would like to thank John Bahcall, Manfred Lindner, and Thomas Schwetz for useful discussions. T.O. and W.W. would like to thank the IAS and the KTH, respectively, for the warm hospitality and the financial support during their respective research visits. This work was supported by the Royal Swedish Academy of Sciences (KVA), the Swedish Research Council (Vetenskapsr[å]{}det), Contract Nos. 621-2001-1611 and 621-2002-3577, the G[ö]{}ran Gustafsson Foundation, the Magnus Bergvall Foundation, the W. M. Keck Foundation, and NSF grant PHY-0070928.

[10]{} Super-Kamiokande, Y. Fukuda et al., *Evidence for oscillation of atmospheric neutrinos*, Phys. Rev. Lett. **81** (1998), 1562, `hep-ex/9807003`. SNO, Q. R. Ahmad et al., *Direct evidence for neutrino flavor transformation from neutral-current interactions in the [Sudbury Neutrino Observatory]{}*, Phys. Rev. Lett. **89** (2002), 011301, `nucl-ex/0204008`. SNO, S. N. Ahmed et al., *Measurement of the total active $^8$[B]{} solar neutrino flux at the [Sudbury Neutrino Observatory]{} with enhanced neutral current sensitivity*, Phys. Rev. Lett. **92** (2004), 181301, `nucl-ex/0309004`. K2K, M. H.
Ahn et al., *Indications of neutrino oscillation in a 250-km long-baseline experiment*, Phys. Rev. Lett. **90** (2003), 041801, `hep-ex/0212007`. KamLAND, K. Eguchi et al., *First results from [KamLAND]{}: Evidence for reactor anti- neutrino disappearance*, Phys. Rev. Lett. **90** (2003), 021802, `hep-ex/0212021`. KamLAND, T. Araki et al., *Measurement of neutrino oscillation with [KamLAND]{}: Evidence of spectral distortion*, `hep-ex/0406035`. Super-Kamiokande, Y. Ashie et al., *Evidence for an oscillatory signature in atmospheric neutrino oscillation*, Phys. Rev. Lett. **93** (2004), 101801, `hep-ex/0404034`. C. Giunti and C. W. Kim, *Coherence of neutrino oscillations in the wave packet approach*, Phys. Rev. **D58** (1998), 017301, `hep-ph/9711363`. C. Giunti, *Coherence and wave packets in neutrino oscillations*, Found. Phys. Lett. **17** (2004), 103, `hep-ph/0302026`. C. Giunti, C. W. Kim, and U. W. Lee, *Coherence of neutrino oscillations in vacuum and matter in the wave packet treatment*, Phys. Lett. **B274** (1992), 87. W. Grimus, P. Stockinger, and S. Mohanty, *The field-theoretical approach to coherence in neutrino oscillations*, Phys. Rev. **D59** (1999), 013011, `hep-ph/9807442`. C. Y. Cardall, *Coherence of neutrino flavor mixing in quantum field theory*, Phys. Rev. **D61** (2000), 073006, `hep-ph/9909332`. J. N. Bahcall, N. Cabibbo, and A. Yahil, *Are neutrinos stable particles?*, Phys. Rev. Lett. **28** (1972), 316. V. Barger, W. Y. Keung, and S. Pakvasa, *Majoron emission by neutrinos*, Phys. Rev. **D25** (1982), 907. J. W. F. Valle, *Fast neutrino decay in horizontal [Majoron]{} models*, Phys. Lett. **B131** (1983), 87. V. Barger, J. G. Learned, S. Pakvasa, and T. J. Weiler, *Neutrino decay as an explanation of atmospheric neutrino observations*, Phys. Rev. Lett. **82** (1999), 2640, `astro-ph/9810121`. S. Pakvasa, *Do neutrinos decay?*, AIP Conf. Proc. **542** (2000), 99, `hep-ph/0004077`. V. 
Barger et al., *Neutrino decay and atmospheric neutrinos*, Phys. Lett. **B462** (1999), 109, `hep-ph/9907421`. M. Lindner, T. Ohlsson, and W. Winter, *A combined treatment of neutrino decay and neutrino oscillations*, Nucl. Phys. **B607** (2001), 326, `hep-ph/0103170`. M. Lindner, T. Ohlsson, and W. Winter, *Decays of supernova neutrinos*, Nucl. Phys. **B622** (2002), 429, `astro-ph/0105309`. A. Strumia, *Interpreting the [LSND]{} anomaly: Sterile neutrinos or [CPT]{}-violation or …?*, Phys. Lett. **B539** (2002), 91, `hep-ph/0201134`. M. Maltoni, T. Schwetz, M. A. Tortola, and J. W. F. Valle, *Status of global fits to neutrino oscillations*, New J. Phys. **6** (2004), 122, `hep-ph/0405172`. A. De Rújula, S. L. Glashow, R. R. Wilson, and G. Charpak, *Neutrino exploration of the [Earth]{}*, Phys. Rept. **99** (1983), 341. E. Lisi, A. Marrone, and D. Montanino, *Probing possible decoherence effects in atmospheric neutrino oscillations*, Phys. Rev. Lett. **85** (2000), 1166, `hep-ph/0002053`, and references therein. F. Benatti and R. Floreanini, *Open system approach to neutrino oscillations*, JHEP **02** (2000), 032, `hep-ph/0002221`. S. L. Adler, *Comment on a proposed [Super-Kamiokande]{} test for quantum gravity induced decoherence effects*, Phys. Rev. **D62** (2000), 117901, `hep-ph/0005220`. T. Ohlsson, *Equivalence between neutrino oscillations and neutrino decoherence*, Phys. Lett. **B502** (2001), 159, `hep-ph/0012272`. F. Benatti and R. Floreanini, *Massless neutrino oscillations*, Phys. Rev. **D64** (2001), 085015, `hep-ph/0105303`. A. M. Gago, E. M. Santos, W. J. C. Teves, and R. Zukanovich Funchal, *A study on quantum decoherence phenomena with three generations of neutrinos*, `hep-ph/0208166`. G. Barenboim and N. E. Mavromatos, *[CPT]{} violating decoherence and [LSND]{}: A possible window to [Planck]{} scale physics*, JHEP **01** (2005), 034, `hep-ph/0404014`. G. Barenboim and N. E. 
Mavromatos, *Decoherent neutrino mixing, dark energy and matter-antimatter asymmetry*, Phys. Rev. **D70** (2004), 093015, `hep-ph/0406035`. D. Morgan, E. Winstanley, J. Brunner, and L. F. Thompson, *Probing quantum decoherence in atmospheric neutrino oscillations with a neutrino telescope*, `astro-ph/0412618`. J. W. F. Valle, *Standard and non-standard neutrino oscillations*, J. Phys. **G29** (2003), 1819, and references therein. LSND, A. Aguilar et al., *Evidence for neutrino oscillations from the observation of $\bar\nu_e$ appearance in a $\bar\nu_\mu$ beam*, Phys. Rev. **D64** (2001), 112007, `hep-ex/0104049`. V. Barger, S. Geer, and K. Whisnant, *Neutral currents and tests of three-neutrino unitarity in long-baseline experiments*, New J. Phys. **6** (2004), 135, `hep-ph/0407140`. A. Donini, D. Meloni, and P. Migliozzi, *The silver channel at the neutrino factory*, Nucl. Phys. **B646** (2002), 321, `hep-ph/0206034`. Y. Farzan and A. Y. Smirnov, *Leptonic unitarity triangle and [CP]{}-violation*, Phys. Rev. **D65** (2002), 113001, `hep-ph/0201105`. H. Zhang and Z.-z. Xing, *Leptonic unitarity triangles in matter*, `hep-ph/0411183`. P. Huber and J. W. F. Valle, *Non-standard interactions: Atmospheric versus neutrino factory experiments*, Phys. Lett. **B523** (2001), 151, `hep-ph/0108193`. P. Huber, T. Schwetz, and J. W. F. Valle, *How sensitive is a neutrino factory to the angle $\theta_{13}$?*, Phys. Rev. Lett. **88** (2002), 101804, `hep-ph/0111224`. P. Huber, T. Schwetz, and J. W. F. Valle, *Confusing non-standard neutrino interactions with oscillations at a neutrino factory*, Phys. Rev. **D66** (2002), 013006, `hep-ph/0202048`. J. N. Bahcall, *The central temperature of the [Sun]{} can be measured via the ${}^7$[Be]{} solar neutrino line*, Phys. Rev. Lett. **71** (1993), 2369, `hep-ph/9309292`. J. N. Bahcall, *The ${}^7$[Be]{} solar neutrino line: A reflection of the central temperature distribution of the [Sun]{}*, Phys. Rev. 
**D49** (1994), 3923, `astro-ph/9401024`. E. Fiorini, *Cryogenic thermal detectors in subnuclear physics and astrophysics*, Physica B: Condensed Matter **169** (1991), 388. A. Alessandrello et al., *A bromine cryogenic detector for solar and non solar neutrino spectroscopy*, Astropart. Phys. **3** (1995), 239. Z. G. Berezhiani and M. I. Vysotsky, *Neutrino decay in matter*, Phys. Lett. **B199** (1987), 281. C. Giunti, C. W. Kim, U. W. Lee, and W. P. Lam, *Majoron decay of neutrinos in matter*, Phys. Rev. **D45** (1992), 1557. E. K. Akhmedov, R. Johansson, M. Lindner, T. Ohlsson, and T. Schwetz, *Series expansions for three-flavor neutrino oscillation probabilities in matter*, JHEP **04** (2004), 078, `hep-ph/0402175`. K. Kiers, S. Nussinov, and N. Weiss, *Coherence effects in neutrino oscillations*, Phys. Rev. **D53** (1996), 537, `hep-ph/9506271`. C. Giunti, *Coherence in neutrino interactions*, `hep-ph/0302045`. H. J. Lipkin, *Quantum mechanics of neutrino detectors determine coherence and phases in oscillation experiments*, `hep-ph/0312292`. S. Dutta, R. Gandhi, and B. Mukhopadhyaya, *nu/tau appearance searches using neutrino beams from muon storage rings*, Eur. Phys. J. **C18** (2000), 405–416, `hep-ph/9905475`. E. A. Paschos and J. Y. Yu, *Neutrino interactions in oscillation experiments*, Phys. Rev. **D65** (2002), 033002, `hep-ph/0107261`. S. Kretzer and M. H. Reno, *Tau neutrino deep inelastic charged current interactions*, Phys. Rev. **D66** (2002), 113007, `hep-ph/0208187`. R. Gandhi, C. Quigg, M. H. Reno, and I. Sarcevic, *Neutrino interactions at ultrahigh energies*, Phys. Rev. **D58** (1998), 093009, `hep-ph/9807264`. G. Lindblad, *On the generators of quantum dynamical semigroups*, Commun. Math. Phys. **48** (1976), 119. A. M. Gago, E. M. Santos, W. J. C. Teves, and R. Zukanovich Funchal, *Quantum dissipative effects and neutrinos: Current constraints and future perspectives*, Phys. Rev. **D63** (2001), 073001, `hep-ph/0009222`. A. M. Gago, E. M. 
Santos, W. J. C. Teves, and R. Zukanovich Funchal, *On the quest for the dynamics of nu/mu –&gt; nu/tau conversion*, Phys. Rev. **D63** (2001), 113013, `hep-ph/0010092`. J. Schechter and J. W. F. Valle, *Neutrino masses in [$SU(2) \times U(1)$]{} theories*, Phys. Rev. **D22** (1980), 2227. J. Schechter and J. W. F. Valle, *Neutrino-oscillation thought experiment*, Phys. Rev. **D23** (1981), 1666. G. R. Dvali and A. Y. Smirnov, *Probing large extra dimensions with neutrinos*, Nucl. Phys. **B563** (1999), 63, `hep-ph/9904211`. R. N. Mohapatra, S. Nandi, and A. P[é]{}rez-Lorenzana, *Neutrino masses and oscillations in models with large extra dimensions*, Phys. Lett. **B466** (1999), 115, `hep-ph/9907520`. R. Barbieri, P. Creminelli, and A. Strumia, *Neutrino oscillations and large extra dimensions*, Nucl. Phys. **B585** (2000), 28, `hep-ph/0002199`. R. N. Mohapatra and A. P[é]{}rez-Lorenzana, *Three flavour neutrino oscillations in models with large extra dimensions*, Nucl. Phys. **B593** (2001), 451, `hep-ph/0006278`. T. H[ä]{}llgren, T. Ohlsson, and G. Seidl, *Neutrino oscillations in deconstructed dimensions*, JHEP (to be published), `hep-ph/0411312`. CHOOZ, M. Apollonio et al., *Limits on neutrino oscillations from the [CHOOZ]{} experiment*, Phys. Lett. **B466** (1999), 415, `hep-ex/9907037`. CHOOZ, M. Apollonio et al., *Search for neutrino oscillations on a long base-line at the [CHOOZ]{} nuclear power station*, Eur. Phys. J. **C27** (2003), 331, `hep-ex/0301017`. F. Ardellier et al., *Letter of intent for [Double Chooz]{}: A search for the mixing angle $\theta_{13}$*, `hep-ex/0405032`. T. Schwetz, *Variations on [KamLAND]{}: Likelihood analysis and frequentist confidence regions*, Phys. Lett. **B577** (2003), 120, `hep-ph/0308003`. H. Minakata, H. Sugiyama, O. Yasuda, K. Inoue, and F. Suekane, *Reactor measurement of $\theta_{13}$ and its complementarity to long-baseline experiments*, Phys. Rev. **D68** (2003), 033017, `hep-ph/0211111`. P. Huber, M. Lindner, T. 
Schwetz, and W. Winter, *Reactor neutrino experiments compared to superbeams*, Nucl. Phys. **B665** (2003), 487, `hep-ph/0303232`. G. L. Fogli, E. Lisi, A. Marrone, and D. Montanino, *Status of atmospheric $\nu_\mu \rightarrow \nu_\tau$ oscillations and decoherence after the first [K2K]{} spectral data*, Phys. Rev. **D67** (2003), 093006, `hep-ph/0303064`. J. N. Bahcall, M. C. Gonzalez-Garcia, and C. Pe$\tilde{\mathrm{n}}$a-Garay, *Solar neutrinos before and after [Neutrino 2004]{}*, JHEP **08** (2004), 016, `hep-ph/0406294`. A. Bandyopadhyay, S. Choubey, S. Goswami, S. T. Petcov, and D. P. Roy, *Update of the solar neutrino oscillation analysis with the 766-[Ty]{} [KamLAND]{} spectrum*, (2004), `hep-ph/0406328`. M. C. Gonzalez-Garcia and C. Pe$\tilde{\mathrm{n}}$a-Garay, *On the effect of $\theta_{13}$ on the determination of solar oscillation parameters at [KamLAND]{}*, Phys. Lett. **B527** (2002), 199, `hep-ph/0111432`. V. D. Barger, D. Marfatia, and B. P. Wood, *Resolving the solar neutrino problem with [KamLAND]{}*, Phys. Lett. **B498** (2001), 53, `hep-ph/0011251`. P. Huber, M. Lindner, and W. Winter, *Simulation of long-baseline neutrino oscillation experiments with [GLoBES]{}*, Comp. Phys. Comm. (to be published), `hep-ph/0407333`. P. Huber, M. Lindner, and W. Winter, *Superbeams versus neutrino factories*, Nucl. Phys. **B645** (2002), 3, `hep-ph/0204352`. R. J. Geller and T. Hara, *Geophysical aspects of very long baseline neutrino experiments*, Phys. Rev. Lett. **49** (2001), 98, `hep-ph/0111342`. T. Ohlsson and W. Winter, *The role of matter density uncertainties in the analysis of future neutrino factory experiments*, Phys. Rev. **D68** (2003), 073007, `hep-ph/0307178`. S. V. Panasyuk, *[REM (Reference Earth Model)]{} web page*, , 2000. [^1]: Although it will be possible to describe some of our effects on Hamiltonian level, the Hamiltonian will not be Hermitian anymore. 
[^2]: For instance, some effect at the Hamiltonian level, such as neutrino absorption, would require a full re-diagonalization of the effective Hamiltonian with the absorption terms included; see the section “Neutrino absorption” below.

[^3]: In general, we do not change the symbol for $\alpha$ if it is exactly the same as the one in [[Eq.]{} (\[equ:dfactor\])]{}. However, if there are additional factors absorbed in $\alpha$, then we re-define the name (such as for wave packet decoherence).

[^4]: Because of the higher $\tau$ production threshold, the $\nu_e$ and $\nu_\mu$ cross-sections are in fact considerably larger than the $\nu_\tau$ cross-section [@Dutta:1999jg; @Paschos:2001np; @Kretzer:2002fr]. However, at these low energies the standard absorption effects are in any case small.

[^5]: We do not show these results, since the exact interpretation of the appearance signal is model-dependent. In addition, matter effects are strong in this case, and they depend on their treatment in the context of the damping model.
Introduction
============

A few years ago, White [@white1] introduced a new and powerful numerical method into the study of electron correlation: the density matrix renormalization group (DMRG). The method has provided extremely accurate results in the case of the one-dimensional Heisenberg and Hubbard models [@white1; @white2; @white3; @qin; @noack] and Hubbard-like models with bond alternation [@pang-liang], and it has recently been applied to some two-dimensional models [@xiang; @white2d]. DMRG is a new variational method that promises to be very useful in quantum chemistry. It deals with the main difficulty of this kind of calculation, i.e. the exponential increase of the dimension of the Hilbert space with the size of the system, in a new, direct and efficient way. While the usual packages of ab initio quantum chemistry cut the dimension of the Hilbert space by neglecting the coefficients of the configuration interaction expansion below a certain threshold, the DMRG obtains an analogous result with a different strategy. A system of localized electrons is partitioned into two blocks $A$, $B$ (sometimes $B$ is called the “environment” or “universe”), and all the many-electron states corresponding to situations in which the population of the two blocks is unphysical (e.g. all electrons in $A$, no electrons in $B$) are automatically truncated by the formalism. A density matrix is introduced, whose eigenvectors corresponding to the largest eigenvalues are the most significant, most probable states of $A$ in the presence of $B$. These states are retained, while states corresponding to very small eigenvalues are neglected. The two blocks are taken initially small and increase their size in the course of the calculation. As a result of the systematic truncation mentioned above, the computation time does not grow faster than the fourth power of the size of the system if the number $m$ of retained eigenvectors of the density matrix is kept constant.
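The truncation step described above can be sketched in a few lines. This is an illustration of the density-matrix selection, not the actual DMRG algorithm: for a normalized bipartite state $|\psi\rangle = \sum_{ij} c_{ij} |i\rangle_A |j\rangle_B$, the eigenvalues of the block-$A$ density matrix $\rho_A = c c^{T}$ are the squared singular values of $c$, and keeping only the $m$ largest ones discards a small probability weight.

```python
import numpy as np

# Illustrative sketch of the DMRG truncation: keep the m eigenvectors of
# rho_A = c c^T with the largest eigenvalues (= squared singular values
# of the coefficient matrix c) and report the discarded weight.
def truncate_block(c, m):
    u, s, vt = np.linalg.svd(c, full_matrices=False)  # s sorted descending
    weights = s ** 2                  # eigenvalues of rho_A, sum to 1
    discarded = 1.0 - weights[:m].sum()
    return u[:, :m], discarded        # kept block-A states, truncation weight

rng = np.random.default_rng(0)
c = rng.standard_normal((16, 16))
c /= np.linalg.norm(c)                # normalize the state
basis, err = truncate_block(c, 8)     # keep half of the block-A states
```

The discarded weight `err` bounds the truncation error of the step; keeping all states gives zero discarded weight, consistent with the exponential decrease of the error with $m$ noted below for the actual method.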
The truncation error [@liang-pang] is an exponentially decreasing function of $m$. The method is especially suited to systems with translational or reflection invariance, since in an intermediate stage of the calculation wave functions suitable to describe the block $B$ can be obtained simply by translation (or reflection) from those of block $A$.

A good candidate for testing the method in quantum chemistry is provided by the Pariser-Parr-Pople (PPP) model of conjugated polyenes. Many considerations are in favour of this choice:

- A cyclic polyene $(CH)_N$ with the carbon atoms at the vertices of a regular polygon is “translationally” invariant (here translation means a rotation of the circle circumscribed to the polygon); hence the simplification mentioned above can be applied.

- Exact full configuration interaction (FCI) calculations are available [@stef1; @stef2], and we can compare the DMRG ground state energy values with these results. The comparison can be made up to $N=18$. A further comparison can be made with the coupled cluster (CC) method [@paldus1]. However, the DMRG method is much more powerful; we have computed without much effort ground state energy values up to $N=34$ carbon atoms. The full CI Hilbert space corresponding to $N=34$ has dimension ${34\choose 17}^2\approx 5.44\times 10^{18}$.

- Trans-polyacetylene presents interesting experimental and theoretical problems: the bond alternation (and in particular the values of the two bond lengths) can in principle be deduced from ab initio computations, but this problem meets considerable difficulties. Recently an interesting approach to the problem of the dimerization of polyacetylene using the DMRG method has been put forward by M.B. Lepetit and G.M. Pastor [@lepetit]; these authors treat the hopping term accurately, allowing a dependence on the distance between the $C$ atoms, and describe the electron interaction by a Hubbard term.
Therefore it would be of interest to extend their work by substituting a PPP interaction for the Hubbard interaction. In the present paper we show that this extension is possible (but we do not derive the hopping term from ab initio calculations).

- The unrestricted Hartree-Fock solutions of the PPP model Hamiltonian present spin density waves and charge density waves [@paldus2; @paldus3; @paldus4; @fukutome]. It is of interest to know whether or not these waves persist after a more precise variational approximation to the ground state (like the DMRG) is performed.

The paper is organized as follows: in Sec.2 the PPP Hamiltonian is written down and the DMRG method is reviewed. In particular we point out some mathematical aspects of the DMRG method that usually are not sufficiently emphasized. In Sec.3 the properties of the unrestricted (spin density wave) Hartree-Fock solution are briefly discussed. In Sec.4 the numerical results and the conclusions are presented.

The PPP Hamiltonian and the DMRG method.
========================================

The Pariser-Parr-Pople Hamiltonian of the $\pi$ electronic model of a cyclic polyene $C_N H_N$ can be written as [@paldus1; @paldus2; @parr]: $$H = \beta \sum_{ < \mu \nu >} \hat{E}_{\mu \nu} + {1\over 2} \sum_{\mu,\nu=0}^{N-1} \gamma_{\mu \nu} \left(\hat {n}_\mu - 1 \right) \left( \hat {n}_\nu - 1 \right)$$ where $\hat{E}_{\mu \nu}$ are the generators of the unitary group summed over spin, $\hat {n}_\mu = \hat {E}_{\mu \mu}$ is the occupation number of the site $\mu$, $\beta$ and $\gamma_{\mu \nu}$ are parameters of the model, and $<\mu \nu >$ denotes summation restricted to nearest neighbors. We limit ourselves to the series $N=2n=4 \nu +2$, $\nu=1,2,...$, where $N$ denotes the total number of electrons, which is equal to the total number of sites. According to ref.
\[\] we take $\beta = -2.5$ eV, and for the Coulomb repulsion we use the Mataga-Nishimoto prescription [@mataga]: $$\gamma_{\mu \nu} = {1 \over \gamma_0^{-1} + d_{\mu \nu}} \qquad\hbox{(a. u.)}$$ where $\gamma_0 = 10.84$ eV, $d_{\mu \nu}$ denotes the distance between the vertex $\mu$ and the vertex $\nu$ of a regular polygon of $N$ sites and is given by $$d_{\mu \nu} = b\,{ \sin(\mu-\nu){ \pi\over N}\over\sin{\pi \over N}}$$ and $b$, the nearest-neighbor separation, is equal to $1.4$ Å.

Let us now see how the DMRG method can be applied to the PPP model. We will also briefly review the principal formal and physical ideas [@white1; @white2; @delgado] behind this new and powerful numerical method. Let $A$ and $B$ denote two adjacent subsets of $N_A$ and $N_B$ sites, respectively. The method consists of two parts: step 1, called the “infinite system method”, and step 2, called the “finite system method”. In step 1, $N_A + N_B < N$ with $N_A = N_B$, and $N_A$, $N_B$ are progressively increased until the condition $N_A + N_B = N$ is reached, while in step 2 we always have $N_A + N_B = N$, with variable $N_A$ and $N_B$. For instance, in step 1 we can have $N=18$, $A=\{1,2,..6\}$, $B=\{7,8,..12\}$; in step 2 we can have $A=\{1,2,3,4\}$, $B=\{5,6,....18\}$. The main task of the method is to find a reduced set of “localized” many particle states for subsets (blocks) $A$ and $B$ suitable to describe the union $A \bigcup B$. Let us denote by $A^+$, $B^+$ polynomials in the creation operators corresponding to sites in $A$, $B$, respectively. Let $|0\rangle$ denote the vacuum, and let $|a\rangle = A^+|0\rangle,\ |b\rangle = B^+|0\rangle$. Clearly $|a\rangle$, $|b\rangle$ represent states of electrons localized in different subsets. We can form the state $ A^+ B^+|0\rangle$; this state is similar but not identical to the tensor product $ |a\rangle \otimes \ |b\rangle $ since the operators $A^+$, $B^+$ do not necessarily commute.
We use the notation $ |a\rangle |b\rangle $ to denote the compound state $A^+ B^+ |0\rangle$. Clearly, varying the polynomials $A^+$, $B^+$ in all possible independent ways, the states $|a\rangle|b\rangle$ generate the whole Hilbert space.

---- ------------ ------------ ------------ ------------ --------------- -------------
 N    $E_{RHF}$    $E_{UHF}$    $E_{FCI}$    $E_{DMRG}$   $m_A^{(1,2)}$   $m_A^{(3)}$
 6    -11.358325   -11.358325   -12.722033   -12.722032   256             512
 10   -17.441467   -17.910422   -20.060503   -20.060503   256             512
 14   -23.731302   -24.924267   -27.671391   -27.671333   256             512
 18   -30.101389   -32.007998   -35.385430   -35.384861   256             512
 22   -36.513220   -39.105943   -            -43.145027   256             512
 26   -42.950070   -46.207715   -            -50.928028   256             512
 30   -49.403281   -53.310920   -            -58.715323   200             400
 34   -55.867856   -60.414852   -            -66.509902   200             400
---- ------------ ------------ ------------ ------------ --------------- -------------

: Energy results: the energies (in eV) calculated via restricted and unrestricted HF, FCI and DMRG are compared for different values of $N$. $m_A^{(n)}$ indicates the number of states kept in block $A$ during the $n$-th DMRG iteration. \[table1\]

Suppose that we have found an exact or approximate ground state $|\psi\rangle$ of $ N_A + N_B $ electrons in the subset $A \bigcup B$ of the chain; let us expand $ | \psi\rangle $ as: $$|\psi\rangle = \sum_{ I J} \psi_{ I J} A^+_{I} B^+_{J} |0\rangle$$ where $\{ A^+_{I} |0\rangle \} $ denotes a complete orthonormal set of states of electrons localized in $A$, and $ \{ B^+_{J} |0\rangle \}$ is an analogous complete set corresponding to $B$. For instance, initially we can have $ A^+_{I} B^+_{J} |0\rangle = a^+_{i_1} a^+_{i_2} ... a^+_{i_{N_A}} b^+_{j_1} b^+_{j_2} ... b^+_{j_{N_B}} |0\rangle $ where the $a^+$, $b^+$ create electrons in $A$, $B$ respectively; in this case the numbers $\psi_{I J} $ are the usual configuration interaction (CI) coefficients.
In principle the sums $ \sum_{ I} $, $ \sum_{ J} $ run over $ 4^{N_A} $ and $ 4^{N_B} $ states respectively, since the occupation numbers $ (n_\uparrow, n_\downarrow) $ of a site can take four possible values: $ (0,0), (1,0), (0,1), (1,1) $. However, the number of spin up electrons and the number of spin down electrons are good quantum numbers and can be fixed; we can choose states $A_I^+|0\rangle$, $B_J^+|0\rangle$ with fixed numbers of spin up and spin down electrons, and the coefficients $ \psi_{ I J} $ vanish unless this conservation law is fulfilled. Furthermore, during the iteration procedure, the number of states will be truncated; therefore in the expansion (2.4) we keep in general only $m_A$ states for the block $A$ and $m_B$ states for the block $B$. In the following we shall assume that the coefficients $\psi_{ I J}$ are real. The main mathematical tool of the DMRG theory is provided by the following density matrix: $$\rho_{ I I'} = \sum^{m_B}_J \psi_{I J} \psi_{ I' J} = \left( \psi \psi^T \right)_{ I I'}$$ The dimension of the matrix $ \rho$ is $ m_A \times m_A$; however, because of the number conservation laws described above, the matrix is actually in block form: the numbers of up and down electrons of the states $I$ and $I'$ must be the same. Furthermore, $\rho$ is a non-negative matrix. Let us first make some simplifying assumptions that will be relaxed in the following. Assume that the blocks $A$ and $B$ are described by the same number of states ($m_A = m_B$), so that the matrix $\psi$ is square. Denoting by $S$ the square root of $\rho$ ($\rho = \psi \psi^T = S^2$, $S = \rho^{1\over 2}$), we have the polar decomposition $$\psi = S U_1$$ where $U_1$ is an orthogonal matrix. We diagonalize $S$ by writing $ S = U D U^T $, where $U$ is an orthogonal matrix and $D$ is diagonal. Therefore we can write $$\psi = U D U^T U_1 = U D V^T$$ where $V$ is an orthogonal matrix, and $ \rho = U D^2 U^T $.
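The decomposition (2.7) is precisely the singular value decomposition of the coefficient matrix, and the $D_\alpha^2$ are the eigenvalues of $\rho$. A minimal numerical check of these identities (a sketch in Python/NumPy, with a random normalized $\psi$ standing in for an actual ground state):

```python
import numpy as np

rng = np.random.default_rng(0)
m_A = m_B = 6
psi = rng.normal(size=(m_A, m_B))
psi /= np.linalg.norm(psi)        # normalization: Tr(psi psi^T) = 1

U, D, Vt = np.linalg.svd(psi)     # psi = U D V^T, as in Eq. (2.7)
rho = psi @ psi.T                  # the density matrix of Eq. (2.5)

# keeping only the m largest D_alpha discards the weight 1 - sum_m D_alpha^2
m = 3
truncation_error = 1.0 - np.sum(D[:m] ** 2)
```

The eigenvalues of `rho` coincide with `D**2`, so diagonalizing the density matrix and taking the singular value decomposition of $\psi$ select the same retained states.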
Actually formula (2.7) holds for [*any*]{} rectangular $m_A \times m_B$ matrix $\psi$ (see e.g. \[\]). $U$ and $D$ are $m_A \times m_A$ square matrices and $V^T$ is $m_A \times m_B$. These matrices satisfy the conditions: $$\begin{aligned} U^T U = I\;; \quad V V^T = I \;; \cr\cr \psi \psi^T = U D^2 U^T \;; \quad \psi^T \psi = V D^2 V^T\end{aligned}$$ and $D$ is diagonal and non-negative. Let us denote by $D_\alpha$ the eigenvalues of $D$. Substituting (2.7) into (2.4) we obtain: $$| \psi\rangle = \sum^{m_A}_\alpha D_\alpha |u_\alpha\rangle |v_\alpha\rangle$$ where $$|u_\alpha\rangle = \sum_I^{m_A} U_{I \alpha} A^+_{I} |0\rangle \;, \quad |v_\alpha\rangle = \sum_J^{m_B} V_{J \alpha} B^+_{J} |0\rangle \;$$ What is the meaning of $ |u_\alpha\rangle , |v_\alpha\rangle$? They represent states of the subsystems $A$, $B$ such that the probability for the whole system $A \bigcup B$ to be found in the state $|u_\alpha\rangle |v_\alpha\rangle$ is $ D_\alpha^2 $. The main idea of the DMRG method consists in neglecting, in Eq.(2.9), all eigenvalues $ D_\alpha $ below a certain threshold, which amounts to keeping only a small number $m$ of terms in the sum (2.9) and using the corresponding states $|u_\alpha\rangle$ as a basis for the description of block $A$. Since $ \hbox{Tr} \psi \psi^T = \sum^{m_A}_\alpha D_\alpha^2 = 1$, this approximation is good if the probabilities $ D_\alpha^2 $ decrease to zero sufficiently rapidly, so that $ \sum_{\alpha=1}^m D_\alpha^2 \simeq 1$. To the best of our knowledge, all numerical experiments performed so far (see, e.g. ref. \[\]) confirm this rapid decrease of the probabilities $D_\alpha^2 $. Let us give a heuristic argument for this decreasing behaviour. Suppose that $ A_I^+ $ creates $ N_A^e$ electrons in the $N_A$ sites of the block $A$, and $ B_J^+ $ creates $N_B^e$ electrons in the $N_B$ sites of the block $B$.
In the absence of interactions, standard statistical mechanics arguments show that the probabilities $D_\alpha^2 $ are strongly peaked about the populations $N_A^e=N_A$, $N_B^e = N_B$ (which correspond to a density of one electron per site); this is analogous to the classical result in statistical mechanics stating that the probability of distributing a large number of molecules between two communicating volumes is strongly peaked about the distribution with equal density in the two volumes.

---- ------------ ------------ ------------ ------------ --------------- -------------
 N    FCI (RHF)    DMRG (RHF)   FCI (UHF)    DMRG (UHF)   $m_A^{(1,2)}$   $m_A^{(3)}$
 6    -0.227285    -0.227285    -0.227285    -0.227285    256             512
 10   -0.261904    -0.261904    -0.215008    -0.215008    256             512
 14   -0.281435    -0.281431    -0.196223    -0.196219    256             512
 18   -0.293558    -0.293526    -0.187635    -0.187603    256             512
 22   -            -0.301446    -            -0.183595    256             512
 26   -            -0.306844    -            -0.181551    256             512
 30   -            -0.310401    -            -0.180147    200             400
 34   -            -0.313001    -            -0.179266    200             400
---- ------------ ------------ ------------ ------------ --------------- -------------

: Correlation energies: the correlation energies per electron (in eV) of the FCI and DMRG solutions with respect to the restricted (RHF) and unrestricted (UHF) HF approximations are compared for different values of $N$. \[table2\]

Because of the central limit theorem, the peak is Gaussian in the classical case; we conjecture that even in the interacting quantum case considered here this Gaussian behaviour still holds, at least for translationally invariant systems like the PPP model. If the conjecture is true, it explains the strongly decreasing behaviour of the probabilities $D_\alpha^2$ mentioned above. Let us now proceed with the description of the DMRG method. Once we have a good basis of $m_A$ states $|u_\alpha\rangle$ that describe the block $A$, and $m_B$ states $|v_\alpha\rangle$ that describe the block $B$, the next task is the enlargement of the blocks.
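The counting behind this statistical argument can be made concrete in a toy model: with $2L$ sites at half filling by spinless electrons, the number of configurations that place $n$ of the $L$ electrons in block $A$ gives a hypergeometric distribution, sharply peaked at equal density $n = L/2$ (a sketch in Python; spin and interactions are ignored):

```python
from math import comb

L = 12        # sites per block: 2L sites in all, L spinless electrons
# p[n] = fraction of configurations with n electrons in block A
p = [comb(L, n) * comb(L, L - n) / comb(2 * L, L) for n in range(L + 1)]
```

Already for $L=12$ the weight at equal density exceeds that of the extreme populations by many orders of magnitude.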
In the first part of the algorithm (infinite system method), since $N_A = N_B$ and the system is translationally invariant, the states $ |v_\alpha\rangle$ can simply be obtained by translating the states $ |u_\alpha\rangle $. Hence we can concentrate our attention on the block $A$. The simplest way of enlarging the block $A$ consists in adding a site $s$ to $A$, obtaining a new block $ A' = A \bigcup s$. White [@white1] denotes this new block by $A \;\bf{\bullet}$. Denoting by $ |s_1\rangle = |0\rangle$, $ |s_2\rangle = a_{s \uparrow}^\dagger |0\rangle$, $|s_3\rangle = a_{s \downarrow}^\dagger |0\rangle$, $|s_4\rangle = a_{s \uparrow}^\dagger a_{s \downarrow}^\dagger |0\rangle$ the states describing the site $s$, we have $ 4 m_A $ vectors $$\begin{aligned} A'^+_I\,|0\rangle &= |u_{\alpha_{\Inn}}\rangle |s_{\gamma_{\Inn}}\rangle,\qquad I=(\alpha_{\In},\gamma_{\In})\cr\cr \alpha_{\In}&= 1,...m_A,\qquad\gamma_{\In} = 1,...4\end{aligned}$$ describing $A' = A \;\bf{\bullet}\;=\; A \bigcup s$. At the same time, we add an analogous site $t$ to the block $B$, and we consider the vectors $| v_{\beta}\rangle |t_{\delta}\rangle$ ($ \beta = 1,2,...m_B, \;\delta=1,2,3,4 $) in order to describe the block $ B' = B \;\bf{\bullet}\;=\; B \bigcup t$. With such a basis we can now compute the expansion (2.4) of the wavefunction of the new superblock $A' \bigcup B'$. Let us use the term “local” to denote operators $ a_{\mu}^\dagger , a_{\mu}, n_{\mu}$ referring to one site $\mu$ only, and the term “internal to block $A$” to denote operators whose site indices all belong to the block $A$. The idea is now to compute a new “effective” Hamiltonian matrix $H'$ in the truncated basis consisting of the $ 16 m_{A} m_{B} $ vectors $ | u_{\alpha} \rangle | s_{\gamma} \rangle | v_{\beta} \rangle | t_{\delta} \rangle $.
---- ---------------------------- ---------- ---------- ----------- --------------- -------------
 N    $|e^D \rangle|RHF\rangle$    ACPQ       ACPTQ      DMRG        $m_A^{(1,2)}$   $m_A^{(3)}$
 6    -0.224196                    -0.2238    -0.2253    -0.227285   256             512
 10   -0.248723                    -0.2515    -0.2577    -0.261904   256             512
 14   -0.256777                    -0.2649    -0.2762    -0.281431   256             512
 18   -                            -0.2720    -0.2887    -0.293526   256             512
 22   -                            -0.2763    -0.2994    -0.301446   256             512
---- ---------------------------- ---------- ---------- ----------- --------------- -------------

: Comparison of approximate solutions: the correlation energy $(E-E_{RHF})/N$ (in eV) of the DMRG solution is compared with the partial cluster analysis $(|e^D\rangle|RHF\rangle)$, the Approximate Coupled Pair theory with Quadruples (ACPQ) and the Approximate Coupled Pair theory with Triples and Quadruples (ACPTQ). \[table3\]

Clearly it is easy to compute the terms of the Hamiltonian containing local operators referring to only one of the four blocks $ A,\;s,\;B,\;t $; these terms are known from previous steps of the iteration. A little more care is needed in order to compute terms like $ a_{\mu}^\dagger a_{\nu}$ or $ n_{\mu} n_{\nu}$ with $\mu,\;\nu$ belonging to different blocks (e.g. $ \mu \in A, \;\nu \in s $, etc.). For this purpose it is necessary to keep in the computer memory all the matrix elements of the local operators $ \langle u_{\alpha_{1}} | a_{\mu}^\dagger | u_{\alpha_{2}} \rangle $, $ \langle v_{\beta_{1}} |n_{\nu} | v_{\beta_{2}} \rangle $. The entire procedure can now be repeated: we look for the ground state vector $\psi'$ of the truncated Hamiltonian $H'$ by using Lanczos’s or Davidson’s algorithm.
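The assembly of the truncated superblock Hamiltonian and the extraction of $\psi'$ can be sketched as follows (Python/NumPy). Random symmetric matrices stand in for the stored block operators, fermionic sign factors and the explicit single-site factors are ignored, and a dense eigensolver replaces Lanczos/Davidson:

```python
import numpy as np

rng = np.random.default_rng(1)
m_A = m_B = 8                       # states kept for the enlarged blocks

def sym(n):
    X = rng.normal(size=(n, n))
    return (X + X.T) / 2

H_A, H_B = sym(m_A), sym(m_B)       # block-internal terms, known from earlier steps
c_A = rng.normal(size=(m_A, m_A))   # stored matrix elements of a border operator
c_B = rng.normal(size=(m_B, m_B))

beta = -2.5
H_super = (np.kron(H_A, np.eye(m_B)) + np.kron(np.eye(m_A), H_B)
           + beta * (np.kron(c_A, c_B.T) + np.kron(c_A.T, c_B)))

w, V = np.linalg.eigh(H_super)           # dense solve instead of Lanczos/Davidson
psi_prime = V[:, 0].reshape(m_A, m_B)    # ground state as the matrix psi'_{IJ}
```

Reshaping the ground state vector into the matrix `psi_prime` hands it directly to the density-matrix construction of the next step.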
A new density matrix $ \psi' {\psi'}^T $ and new state vectors $ |u_{\alpha}' \rangle $ that represent states of $A'$ are computed according to the analogue of the first of formulas (2.10), which now reads: $$\begin{aligned} |u'_\alpha\rangle &=& \sum_{I=1}^{4m_A} U'_{I\alpha} A'^+_I|0\rangle \cr\cr &=& \sum_{I=1}^{4m_A} U'_{I\alpha} |u_{{\alpha}_{\Inn}}\rangle |s_{\gamma_{\Inn}}\rangle \quad\alpha = 1,...m_{A'}\end{aligned}$$ Again we do not keep all the vectors: $m_{A'} $ is generally less than $ 4m_A$, and often one puts $ m_{A'} = m_A $, although this choice is not necessary. The corresponding $ |v_{\beta}' \rangle $ that describe $ B' $ are obtained from the $ |u_{\alpha}' \rangle $ by translation. In this new truncated basis we compute the matrix elements of all the local and internal operators relative to block $A'$ and we keep them in the computer memory, in order to use them in the next steps of the method. If, for example, we have an operator $O$ internal to block $A$, it is also internal to the new block $A'$, and we have the following rule to update its matrix elements: $$\begin{aligned} \langle u'_{\alpha_1} | O | u'_{\alpha_2} \rangle = \sum_{I,I' = 1}^{4m_A} U'_{I \alpha_1 } \langle 0 | A'_I O A'^+_{I'}|0\rangle U'_{I' \alpha_2 }\cr =\sum_{I,I'= 1}^{4m_A} U'_{I \alpha_1 } \langle u_{\alpha_{\Inn}} | O | u_{\alpha_{\Innp}}\rangle \langle s_{\gamma_{\Inn}} | s_{\gamma_{\Innp}}\rangle U'_{I' \alpha_2 }\cr = \sum_{I,I'= 1}^{4m_A} U'_{I \alpha_1} \langle u_{\alpha_{\Inn}} | O | u_{\alpha_{\Innp}}\rangle \delta_{\gamma_{\Inn} \gamma_{\Innp}} U'_{I' \alpha_2},\cr\cr \hbox{for}\quad\alpha_1,\alpha_2 = 1,...m_{A'} \end{aligned}$$ Two more sites are added to the blocks $A'$, $B'$, giving rise to new blocks $ A'' = A' \ \bf{\bullet} $, $ B'' = B' \ \bf{\bullet} $, etc. By this systematic procedure of adding two more sites, truncating the basis and updating the Hamiltonian matrix at each iteration, systems of large size can be handled.
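In matrix form the update rule above is a rotation of $O \otimes \mathbf{1}_4$ by the truncated transformation $U'$, i.e. $O' = U'^{\,T}\,(O \otimes \mathbf{1}_4)\,U'$. A sketch (Python/NumPy), with the compound index $I=(\alpha,\gamma)$ ordered with $\alpha$ as the slow index:

```python
import numpy as np

rng = np.random.default_rng(2)
m_A, m_Ap = 6, 12                  # kept states for A and for A' (m_A' <= 4 m_A)

# truncated rotation U': m_A' orthonormal columns of length 4 m_A
U_prime = np.linalg.qr(rng.normal(size=(4 * m_A, m_Ap)))[0]

O = rng.normal(size=(m_A, m_A))    # operator internal to block A
# O acts on the |u_alpha> factor and as the identity on the site factor |s_gamma>
O_new = U_prime.T @ np.kron(O, np.eye(4)) @ U_prime
```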
A comment is in order about the choice of the two sites that are added and their position with respect to blocks $A$ and $B$. We can form the superblock $A\;{\bf\bullet}\;B\;{\bf\bullet}$ or the superblock $A\;{\bf\bullet}\;{\bf\bullet}\;B$. White suggests that the enlarged configuration $A\;{\bf\bullet}\;B\;{\bf\bullet}$ is to be preferred to $A\;{\bf\bullet}\;{\bf\bullet}\;B$ in the case of periodic boundary conditions, while the opposite holds in the case of open boundary conditions. In fact, the blocks $A$ and $B$ are separated by the site $t$ in the configuration $A\;{\bf\bullet}\;B\;{\bf\bullet}$, while they become adjacent by periodicity in $ A \ { \bf{\bullet \ \bullet} } \ B $. The kinetic part of the Hamiltonian (2.1) “connects” two blocks only through their border sites, with operators $a_\mu^\dagger$, $a_\nu$, whose matrix elements are known. These matrices are “big” for blocks $A$ and $B$, and “little” for the 1-site blocks $s$ and $t$, so the matrix elements of the Hamiltonian $H'$ are simpler when a “big” block is surrounded by 1-site blocks. The “infinite system algorithm” is stopped when the number of sites of $A \bigcup B $ reaches the total number $N$ of sites. In order to improve the accuracy of the method, White himself proposed a second algorithm, which we briefly describe. This second algorithm takes place after the infinite system algorithm has reached its end. In the finite system algorithm, to an increase of $A$ by one site there corresponds a [*decrease*]{} of the “universe” $B$ by one site. Denoting by $A_x$, $B_y$ blocks $A$ and $B$ with $x,y$ sites respectively, we start with the system $A_{ {N\over 2} - 1}\;{\bf\bullet}\;B_{ {N\over 2} - 1}\;{\bf\bullet}$ and we want to construct the systems $A_{ {N\over 2}}\;{\bf\bullet}\;B_{ {N\over 2} - 2}\;{\bf\bullet}$, $A_{ {N\over 2}+1}\;{\bf\bullet}\;B_{ {N\over 2} - 3}\;{\bf\bullet}$, etc.
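The sequence of configurations just described can be generated explicitly. The following sketch (Python) lists the block sizes $(|A|,|B|)$ visited by the superblock $A_x\,{\bf\bullet}\,B_{N-2-x}\,{\bf\bullet}$ during one full sweep, growing $A$ until $B$ is a single site and then shrinking it back until the blocks have equal length:

```python
def sweep_schedule(N):
    """Block sizes (|A|, |B|) visited in one finite-system sweep (sketch)."""
    grow = [(x, N - 2 - x) for x in range(N // 2 - 1, N - 2)]        # up to A_{N-3} . B_1 .
    shrink = [(x, N - 2 - x) for x in range(N - 4, N // 2 - 2, -1)]  # back to equal blocks
    return grow + shrink

schedule = sweep_schedule(18)
```

Note that `sweep_schedule` is an illustrative helper, not part of the original algorithm's notation; every entry keeps $|A|+|B|+2=N$.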
Therefore, in order to use the translational invariance, we need to keep in the computer memory all the relevant matrix elements of $A_{ {N\over 2} - 2 }$, $ A_{ {N\over 2} - 3 } $, etc., so as to be able to use the symmetry and produce the matrix elements of $B_{ {N\over 2} - 2 } $, $ B_{ {N\over 2} - 3 } $, etc. It should be noticed that when $ m_{B'} < m_{A'} $ (this certainly happens when $B$ becomes small) the rows of $\psi'$ cannot be linearly independent. As a consequence, $\psi'{\psi'}^T$ has many eigenvalues equal to zero. From Eq.(2.8) we see that the $ m_{A'} \times m_{A'} $ matrix $ \psi' {\psi'}^T $ and the smaller $ m_{B'} \times m_{B'} $ matrix $ {\psi'}^T \psi' $ have the same non-vanishing eigenvalues. In practice it is sufficient to diagonalize only the smaller of the two density matrices. The procedure stops when we reach the system $ A_{N-3} { \bf{\bullet \ }} B_1 {\bf{\bullet} } $, i.e. when the block $ B $ has been reduced to a single site. We can now increase $B$ and decrease $A$; the subsystems $A$, $B$ behave as if they were separated by a moving zipper. At every step we increase the accuracy of the states $|u_\alpha\rangle$ that describe the blocks $A_x$, and after a few oscillations of the zipper all the blocks $A'_x, \ 2 \le x \le N-2 $, accurately represent parts of a complete system of $N$ sites, the remaining “universe” being the corresponding $B'_{N-x}$ block. During this procedure, all the relevant matrix elements of the local operators must be stored and updated. A more detailed explanation of this point can be found in the original paper by White [@white2]. Usually one stops when $A$ and $B$ have the same length.

The unrestricted Hartree-Fock (UHF) solution.
=============================================

Let us still denote by $ a_{\mu \sigma}^+ $ the creation operator of an electron of spin $\sigma$ on the site $\mu$.
The creation operator of an electron in a symmetric Bloch orbital is given by (we use letters $k, k_1, k_2, \ldots$ to denote the symmetric orbitals): $$a_{k \sigma}^+ = {1\over \sqrt{N}} \sum_{\mu=0}^{N-1} e^{i \omega k \mu} a_{\mu \sigma}^+ \qquad k=0,1,... N-1$$ where $ \omega = {2 \pi \over N }$. In terms of these operators, the Hamiltonian can be written as $$\begin{aligned} H = 2 \beta \sum_{k \sigma} \cos(\omega k) a_{k \sigma}^+ a_{k \sigma} - E_0\cr\cr + {1\over 2} \sum_{k_1 k_3 k \sigma \tau} K(k) a_{k_1 \sigma}^+ a_{k_3+k, \tau}^+ a_{k_3, \tau} a_{k_1+k,\sigma} \end{aligned}$$ where all $k$ indices run from $0$ to $N-1$, the constant term $ E_0 = \sum_{\mu < \nu} \gamma_{\mu \nu} $ has been added to the Hamiltonian and represents the internuclear repulsion energy, and $ K(k) $ is given by: $$K(k) = {1\over {N}} \sum_{\mu=0}^{N-1} \gamma_{0 \mu} e^{i \omega k \mu} \qquad k=0,1,... N-1$$ Due to the discrete rotational symmetry of the polygon, all indices can be taken modulo $N$. It is convenient to represent the $k$ indices on a circle (see fig. 1 of ref. \[\]). The restricted Hartree-Fock orbitals are determined by the condition: $$N-\nu \le k \le N+\nu$$ which characterizes the Fermi sea $F$ (the indices being taken modulo $N$). The restricted Hartree-Fock (RHF) single-particle energies are given by [@paldus3]: $$\epsilon_{k} = 2 \beta \cos(\omega k) + N K(0) - \sum_{k_{1} \in F} K(k - k_{1})$$ and the total RHF energy is: $$E_{RHF} = -E_0 + \sum_{k\in F} [ 2 \beta \cos(\omega k) + \epsilon_k ]$$ It has long been known that it is possible to lower the RHF ground state energy by considering molecular orbitals that are linear combinations of the orbitals $\phi_k$ and $\phi_{k+n} $ corresponding to the two endpoints of a diameter of the circle of Fig.1 in ref. \[\]. Furthermore, taking into account also the spin indices of the two orbitals, there are many different possibilities that give rise to local minima of the UHF energy.
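As an aside, the coefficients $K(k)$ are real, since $\gamma_{0\mu}=\gamma_{0,N-\mu}$; they are a plain discrete Fourier transform and can be checked numerically. A sketch (Python/NumPy) using the Mataga-Nishimoto $\gamma_{0\mu}$ with the constants quoted in Sec.2 (any unit-conversion factor implicit in that formula is ignored here):

```python
import numpy as np

N = 10
gamma0, b = 10.84, 1.4
mu = np.arange(N)
d = b * np.abs(np.sin(mu * np.pi / N)) / np.sin(np.pi / N)   # d_{0 mu}
gamma_row = 1.0 / (1.0 / gamma0 + d)                         # gamma_{0 mu}

# K(k) = (1/N) sum_mu gamma_{0 mu} exp(i omega k mu); ifft supplies
# both the 1/N factor and the +i sign convention
K = np.fft.ifft(gamma_row)
```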
All these possibilities were carefully studied many years ago by Fukutome [@fukutome], Paldus and Cizek [@paldus3], and others, and give rise to charge density waves (CDW) and spin density waves (SDW). However, in the case of the Mataga-Nishimoto prescription for the two-center Coulomb repulsion integral and with the values of the parameters given in Sec.2, we have checked that the lowest UHF energy is obtained by the following BCS-Bogoliubov canonical transformation (which corresponds to the $(A^{t} + B^{t})^+ $ case of ref. \[\]): $$\begin{aligned} \gamma_{k \uparrow} &= \ u_{k} a_{k \uparrow} + v_{k} a_{k+n \uparrow } \cr\cr \gamma_{k \downarrow}&= - u_{k} a_{k \downarrow} + v_{k} a_{k+n \downarrow } \end{aligned}$$ where $ u_{k}^2 + v_{k}^2 = 1 $, $ u_{k+n} = u_{k} $, $ v_{k+n} = - v_k $. The operators $\gamma_{k \sigma}^+ $ create UHF orbitals, since the linear combinations depend on $\sigma$. The first-order density matrix (in the pseudomomenta representation) is given by: $$\langle a_{k_1 \sigma}^+ a_{k_2 \sigma} \rangle = \delta_{k_1,k_2} f^{(1)}(k_1) + \delta_{k_1,k_{2}+n} \ f^{(2)}(k_1) (-1)^{\sigma}$$ where $ \ f^{(1)}(k) = u_k^2 $, $ \ f^{(2)}(k) = u_k v_k $ for $k \in F$, and $ \ f^{(1)}(k) = v_k^2 $, $ \ f^{(2)}(k) = - u_k v_k $ for $k \not\in F$. In the original atomic-orbital basis we have the interesting formula: $$\langle a_{\mu \sigma}^+ a_{\mu \sigma} \rangle = {1 \over 2} + (-1)^{\mu + \sigma} \delta$$ where $ \delta = {1\over N} \sum_{k=0}^{N-1} |u_k v_k| $. This formula shows the existence of an SDW of antiferromagnetic type; the occupation numbers $\langle n_\uparrow \rangle $ and $ \langle n_\downarrow \rangle $ are different from each other on the same site (when one of the two is larger than $ {1\over 2}$ the other is smaller than $ {1\over 2}$), giving rise to a decrease of the on-site Coulomb repulsion. Furthermore, no CDW appears, since $ \langle n_\uparrow \rangle + \langle n_\downarrow \rangle = 1 $.
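The alternating occupations of the last formula are easy to visualize with toy coefficients $u_k$, $v_k$ (a sketch in Python/NumPy; the actual coefficients would come from the gap equation discussed next):

```python
import numpy as np

N = 10
rng = np.random.default_rng(3)
theta = rng.uniform(0.0, np.pi / 2, size=N)
u, v = np.cos(theta), np.sin(theta)      # toy coefficients with u_k^2 + v_k^2 = 1

delta = np.mean(np.abs(u * v))           # delta = (1/N) sum_k |u_k v_k|
mu = np.arange(N)
n_up = 0.5 + (-1.0) ** mu * delta        # <n_{mu,up}>,   taking sigma even for spin up
n_dn = 0.5 - (-1.0) ** mu * delta        # <n_{mu,down}>
```

Each site carries a net spin of alternating sign while the total charge per site stays exactly one: an SDW with no CDW.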
The expectation value of the Hamiltonian (2.5) can easily be computed by using Wick’s theorem; minimization of $\langle H\rangle$ with respect to the coefficients $u_k, v_k$ gives rise to the following well-known set of equations [@paldus3; @fukutome] of the BCS type: $$u_k^2 = {1\over 2} \left( 1 + { |\xi_k| \over \sqrt{ \xi_k^2 + \Delta_k^2 } } \right),\quad v_k^2 = {1\over 2} \left( 1 - { |\xi_k| \over \sqrt{ \xi_k^2 + \Delta_k^2 } }\right)$$ where $ \xi_k = {1\over 2} ( \hat{\epsilon} (k) - \hat{\epsilon} (k+n) )$, and the UHF orbital energies are given by $$\begin{aligned} \hat{\epsilon} (k) = 2 \beta \cos(\omega k) + N K(0) - \sum_{q\in F} K(k-q) u_q^2\cr\cr - \sum_{q\not\in F} K(k-q) v_q^2, \qquad\hbox{for} \ k=0,1,...N-1 \end{aligned}$$ and $\Delta(k)$ must fulfil the famous “gap equation”: $$\begin{aligned} \Delta(k) =& {1\over 2} \sum_{q\in F} [ K(k-q) + K(k-q+n) ] \cr\cr &\times { \Delta(q) \over \sqrt{ \xi_q^2 + \Delta(q)^2 } },\qquad \hbox{for} \ k=0,1,...N-1 \end{aligned}$$ If the only solution of the gap equation is the trivial one, $\Delta(k)=0$, we simply obtain the RHF ground state. If a non-trivial solution exists, the non-linear system of equations (3.10), (3.11), (3.12) can easily be solved numerically by an iterative method. Starting with $ \Delta(q)=\hbox{constant} $ and $ \hat{\epsilon}(k) = \epsilon(k) $, we solve the gap equation (3.12) by iteration. Usually $30$–$40$ iterations suffice. The solution $\Delta(k)$ is substituted into Eqs. (3.10), (3.11); in this way we obtain a set of approximate orbital energies $\hat{\epsilon}_1(k)$. The entire procedure is repeated substituting into the right hand side of Eq. (3.12) the solution $\Delta(k)$ and $\xi_k = {1\over 2} (\hat{\epsilon}_{1}(k) - \hat{\epsilon}_{1} (k+n)) $, etc., until the entire set of equations is fulfilled with sufficient accuracy.
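The self-consistency cycle is a plain fixed-point iteration. The following toy version (Python/NumPy) uses an invented band splitting $\xi_k$ and a single short-ranged kernel in place of $[K(k-q)+K(k-q+n)]$, so only the structure of the loop, not the numbers, matches the PPP problem:

```python
import numpy as np

N = 18
k = np.arange(N)
xi = np.cos(2 * np.pi * k / N)               # toy xi_k, not the PPP value
Kfun = 1.5 * np.exp(-np.minimum(k, N - k))   # toy short-ranged kernel
Kmat = Kfun[(k[:, None] - k[None, :]) % N]

Delta = np.ones(N)                           # constant starting guess, as in the text
for _ in range(200):
    Delta = 0.5 * Kmat @ (Delta / np.sqrt(xi ** 2 + Delta ** 2))

residual = np.max(np.abs(Delta - 0.5 * Kmat @ (Delta / np.sqrt(xi ** 2 + Delta ** 2))))
```

For this (deliberately strong) kernel the iteration settles on a non-trivial gap; a sufficiently weak kernel would instead flow to the trivial solution $\Delta(k)=0$, recovering the RHF state.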
The UHF ground state energy of the model is given by $$\begin{aligned} E_{UHF} = - \sum_{k_{1},k_{2}=0}^{N-1}\, K(k_2 -k_1)\cr\cr \times \Big[ f^{(1)}(k_1) f^{(1)}(k_2) + f^{(2)}(k_1) f^{(2)}(k_2)\Big]\cr\cr -E_0 + {1\over 2}\, K(0)\, N^2 + 4 \beta \sum_{k \in F}\, \cos(\omega k) \,f^{(1)}(k)\cr\cr \end{aligned}$$ The antiferromagnetic long-range order of the UHF solution also appears in the height of the peak of the magnetic structure factor: $$\begin{aligned} &S(k) = \sum_{j=0}^{N-1} e^{-i { 2 \pi \over N } j k } \langle S_{z} (j) S_{z} (0) \rangle \cr\cr &= {1\over 4} + N \delta_{k,{N \over 2}} \delta^2 - {1\over {2N}} \sum_{q=0}^{N-1} f^{(1)} (q) f^{(1)} (q-k)\cr\cr& - {1\over {2N}} \sum_{q=0}^{N-1} f^{(2)} (q-k-n) f^{(2)} (q) \end{aligned}$$ which is reached for $k={N\over 2} $. We have: $$S( {N \over 2}) = {1\over 4} + { 1 \over N } \big[ \sum_q | u_q v_q | \big]^2 -{ 1 \over N } \sum_q (u_q v_q )^2$$ and this quantity scales like $N$ for large $N$.

Numerical results and conclusions
==================================

In Table \[table1\] we show the energy results calculated with the DMRG method up to $N = 34$, and we compare them with the RHF, UHF and FCI energies (the FCI energies are available only up to $N=18$). We see that the relative error of the DMRG solution with respect to FCI is only $2.1 \times 10^{-6}$ for $N=14$ and $1.6 \times 10^{-5}$ for $N=18$, which is quite satisfactory. Table \[table2\] shows the correlation energy per electron of the FCI and DMRG solutions with respect to the RHF and UHF approximations. The DMRG method compares favourably with the coupled cluster method; in Table \[table3\] the correlation energies $(E - E_{RHF}) / N$ are compared with the coupled cluster results of ref. \[\]. The DMRG energy is slightly lower than the Approximate Coupled Pair with Triples and Quadruples (ACPTQ) value.
All calculations were performed by iterating the DMRG algorithm three times (the first iteration uses the infinite system method, the second and third iterations use the finite system method). We stop when $A$ and $B$ have the same length. In the first iteration the size of the system grows, but the potential between two sites is kept equal to its final value, i.e. to the value attained when the number of sites of the polygon is $N$. Generally we keep 256 states in block $A$ during both the first and the second iteration; in order to achieve better convergence, during the third iteration we keep 512 states. In the heaviest calculations ($N=30,34$) we keep only $200-400$ states in block $A$, due to memory/disk-space limitations. It should be noted that the disk-space requirement grows with the number of sites even if the number of retained states is held constant. This is due to the long range nature of the interaction, which forces us to keep on disk a linearly growing number of matrices that represent the local operators. We have checked that the disk space grows as $N m^2$. In Fig. \[fig:s\_di\_ij\] the spin-spin correlation function $S(i-j) = \langle S_z(i)S_z(j)\rangle$ is plotted: a short-range antiferromagnetic order is clearly present. We have computed the Fourier transform $S(k)$, which of course reaches its maximum value for $k={N\over 2}$, like the UHF-SDW solution (see (3.14)). However, the growth is linear in $N$ for the UHF-SDW solution, while it scales approximately as $ 0.1398 + 0.1457 \log N $ for the DMRG solution. Therefore we cannot speak of a long-range SDW. The CDW are also ruled out by the present calculation. This can be seen from the graph of the density-density correlation function $$R(i,j) = \langle n(i) n(j) \rangle - \langle n(i) \rangle \langle n(j) \rangle$$ (see Fig. \[fig:r\_di\_ij\]). Concluding, the DMRG method provides a very powerful tool for the calculation of energies and properties of simple many-electron Hamiltonians.
It gives results very close to full CI results and is able to handle Hilbert spaces of very large dimension. It would be of great interest to apply the method to a realistic many-electron Hamiltonian, possibly after a preliminary localization of the occupied and virtual orbitals. However, this program meets with some difficulty because of the large number of matrices that must be kept when the four orbitals of the interaction term belong to different blocks.

The authors are greatly indebted to G.L. Bendazzoli for teaching them the peculiarities of the PPP model, and to A. Parola for extremely useful discussions and suggestions.

S.R. White, Phys. Rev. Lett. [**69**]{}, 2863 (1992).
S.R. White, Phys. Rev. B [**48**]{}, 10345 (1993).
S.R. White and D.A. Huse, Phys. Rev. B [**48**]{}, 3844 (1993).
S. Qin, S. Liang, Z. Su and L. Yu, Phys. Rev. B [**52**]{}, R5475 (1995).
S. Daul and R.M. Noack, Z. Phys. B [**103**]{}, 293 (1997).
H. Pang and S. Liang, Phys. Rev. B [**51**]{}, 10287 (1995).
T. Xiang, Phys. Rev. B [**53**]{}, R10445 (1996).
S.R. White and D.J. Scalapino, Phys. Rev. Lett. [**80**]{}, 1272 (1998).
S. Liang and H. Pang, Phys. Rev. B [**49**]{}, 9214 (1994).
G.L. Bendazzoli and S. Evangelisti, Chem. Phys. Lett. [**185**]{}, 125 (1991).
G.L. Bendazzoli, S. Evangelisti and L. Gagliardi, Int. J. Quantum Chem. [**51**]{}, 13 (1994).
J. Paldus and P. Piecuch, Int. J. Quantum Chem. [**42**]{}, 135 (1992).
M.B. Lepetit and G.M. Pastor, Phys. Rev. B [**56**]{}, 4447 (1997).
J. Cizek and J. Paldus, J. Chem. Phys. [**47**]{}, 3976 (1967).
J. Paldus and J. Cizek, Phys. Rev. A [**2**]{}, 2268 (1970).
J. Paldus and M. Boyle, Int. J. Quantum Chem. [**22**]{}, 1281 (1982).
H. Fukutome, Progr. Theor. Phys. [**40**]{}, 998, 1227 (1968).
R.G. Parr, [*The Quantum Theory of Molecular Electronic Structure*]{} (Benjamin, New York, 1963).
N. Mataga and K. Nishimoto, Z. Physik Chem. [**13**]{}, 140 (1957).
J. Gonzalez, M.A. Martin-Delgado, G. Sierra and A.H. Vozmediano, [*Quantum Electron Liquids and High-$T_c$ Superconductivity*]{}, Springer-Verlag, Berlin Heidelberg (1995).
G.L. Bendazzoli, S. Evangelisti, G. Fano, F. Ortolani and L. Ziosi, unpublished.
W.H. Press, B.P. Flannery, S.A. Teukolsky and W.T. Vetterling, [*Numerical Recipes: The Art of Scientific Computing*]{}, Cambridge University Press, New York (1986).
--- abstract: 'We report the detection of a type C QPO along with its upper harmonic in the commensurate ratio of 1:2 in two observations of the low-mass black hole transient H 1743–322 jointly observed by *XMM-Newton* and *NuSTAR* during the 2016 outburst. We find that the QPO and the upper harmonic exhibit shifts in their centroid frequencies in the second observation with respect to the first one. The hardness intensity diagram implies that, in contrast to the 2008 and 2014 failed outbursts, the 2016 outburst was a successful one. We also detect the presence of a broad iron K$\alpha$ line at $\sim$6.5 keV and a reflection hump in the energy range of 15–30 keV in both the observations. Along with the shape of the power density spectra, the nature of the characteristic frequencies and the fractional rms amplitude of the timing features imply that the source stayed in the low/hard state during these observations. Moreover, the photon index and other spectral parameters also indicate the low/hard state behavior of the source. Unlike the soft lag detected in this source during the 2008 and 2014 failed outbursts, we observe a hard time-lag of $0.40\pm0.15$ and $0.32\pm0.07$ s in the 0.07–0.4 Hz frequency range in the two observations during the 2016 outburst. The correlation between the photon index and the centroid frequency of the QPO is consistent with previous results. Furthermore, the high value of the Comptonized fraction and the weak thermal component indicate that the QPO is being modulated by the Comptonization process.' author: - Swadesh Chand - 'V. K. Agrawal' - 'G. C. Dewangan' - Prakash Tripathi - Parijat Thakur title: '2016 outburst of H 1743–322: *XMM-Newton* and *NuSTAR* view' --- Introduction {#sec:intro} ============ Low-mass black hole X-ray binaries consist of a low-mass companion star ($\lesssim 1$ $\textup{M}_\odot$) gravitationally bound to a stellar mass black hole [@Steiner; @et; @al.; @2012].
The companion star feeds material to the black hole via Roche lobe overflow, resulting in the formation of an accretion disc. The viscous forces between the different layers of the accretion disc near the black hole raise the temperature up to $10^7$ K, and the source primarily emits X-rays [@Steiner; @et; @al.; @2012; @Motta; @et; @al.; @2017]. Most of the black hole X-ray binaries (hereafter BHXRBs) are known to be transients that stay in quiescence for a long time and show outbursts only sporadically. These outbursts can last from several days to months, during which the luminosity of the source increases by several orders of magnitude [@Tanaka; @and; @Shibazaki; @1996; @Shidatsu; @et; @al.; @2014; @Plant; @et; @al.; @2015]. The BHXRBs generally pass through four states during an outburst, namely, the low/hard state (LHS), the hard-intermediate state (HIMS), the soft-intermediate state (SIMS) and the high/soft state (HSS) [@Belloni; @et; @al.; @2005]. The classification of the states relies upon the detailed spectral and timing behavior of the source during an outburst. The spectrum of the LHS is dominated by a hard power-law with photon index $<$2 and a high-energy cutoff $\approx$ 100 keV [@Motta; @et; @al.; @2009; @Shidatsu; @et; @al.; @2014]. The power-law component is thought to originate from the Compton up-scattering of the soft X-ray photons from the disc by the hot electrons in the corona. The power density spectra (hereafter PDS) in the LHS show strong variability with fractional rms $\sim$ 30% [@McClintock; @and; @Remillard; @2006; @Belloni; @et; @al.; @2011; @Zhou; @et; @al.; @2013; @Shidatsu; @et; @al.; @2014; @Ingram; @et; @al.; @2017]. The transition of the BHXRBs to the HSS occurs via the two intermediate states (i.e., HIMS and SIMS). However, the transition from the hard to the soft state is not a smooth one, and in many outbursts several excursions to harder and softer states have been observed.
As the source moves from the hard state to the soft state, the power-law component starts to steepen (up to $\Gamma \sim 2.5$) and the X-ray continuum becomes increasingly dominated by emission from an optically thick and geometrically thin accretion disc with a few percent of fractional rms variability [@Shakura; @and; @Sunyaev; @1973; @Belloni; @et; @al.; @2011]. In the soft state, the accretion disc either reaches closer to the innermost stable circular orbit (hereafter ISCO) or extends down to the ISCO [@Gierlinski; @and; @Done; @2004; @Steiner; @et; @al.; @2010]. It is worth mentioning here that there is an ongoing debate on the extent of the accretion disc in the LHS. However, several workers have found that a hot advection dominated accretion flow (ADAF), as proposed by @Esin [@et; @al.; @1997], replaces the geometrically thin and optically thick accretion disc, introducing a truncated inner disc [@McClintock; @et; @al.; @1995; @Narayan; @and; @Yi; @1995; @Narayan; @et; @al.; @1996; @Esin; @et; @al.; @2001; @McClintock; @et; @al.; @2001; @McClintock; @et; @al.; @2003; @Plant; @et; @al.; @2015]. Another salient observational feature of BHXRBs is X-ray reflection. This appears when the hard Comptonized X-rays from the corona are reflected off the disc, giving rise to a reflection hump in the $\sim$10–40 keV band and a fluorescent iron K$_{\alpha}$ line at $\sim$ 6.4–6.9 keV. The iron line may be distorted due to special and general relativistic effects [@Fabian; @et; @al.; @1989; @Reynolds; @and; @Nowak; @2003; @Miller; @2007; @Ingram; @et; @al.; @2017]. Modeling of the broad iron line with relativistic reflection gives an alternative way to measure the inner disc radius. The modeling of the reflection continuum can also play a crucial role in understanding the inner accretion dynamics of the disc, as well as in probing key parameters like the black hole spin and the disc inclination.
It is worth mentioning here that the relativistic reflection model RELXILL is widely used to model angle-dependent X-ray reflection and allows one to estimate parameters such as the inner disc radius, black hole spin, disc inclination, iron abundance and reflection fraction [@Dauser; @et; @al.; @2010; @Garcia; @and; @Kallman; @2010; @Garcia; @et; @al.; @2011; @Garcia; @et; @al.; @2013; @Dauser; @et; @al.; @2013; @Dauser; @et; @al.; @2014; @Garcia; @et; @al.; @2014]. Quasi-periodic oscillations (QPOs), which appear as broad peaks, are often observed in the X-ray emission from black hole transients (hereafter BHTs). QPOs are categorized into the following three types: (i) high-frequency QPOs (HFQPOs) ($\sim$30–450 Hz), (ii) low-frequency QPOs (LFQPOs) ($\sim$0.05–30 Hz) and (iii) very low-frequency QPOs ($\sim$mHz) [@Morgan; @et; @al.; @1997; @Belloni; @et; @al.; @2000; @Trudolyubov; @et; @al.; @2001; @Casella; @et; @al.; @2005; @Motta; @et; @al.; @2011; @Altamirano; @et; @al.; @2011; @Altamirano; @et; @al.; @2012; @Belloni; @et; @al.; @2012; @Alam; @et; @al.; @2014; @AgrawalNandi2015]. Depending upon a few parameters, such as the quality factor ($Q$) and the shape of the PDS continuum, low-frequency QPOs are classified into types A, B and C. Type A QPOs appear as broad peaks around $\sim$6–8 Hz with a few percent rms, whereas type B QPOs show stronger rms (up to $\sim4\%$) compared to the type A QPOs. On the other hand, a type C QPO appears as a narrow and variable peak with strong fractional rms $\geq10\%$ [@Casella; @et; @al.; @2004; @Motta; @et; @al.; @2011].
Though the exact mechanism for the origin of the QPOs is still not clear, it has been found in earlier studies that the centroid frequencies of QPOs are correlated with the spectral index, as well as with the disc flux [@Sobczak; @et; @al.; @2000; @Titarchuk; @and; @Fiorito; @2004; @Shaposhnikov; @and; @Titarchuk; @2007], indicating a coupling between the QPOs and the structure of the inner disc [@Shidatsu; @et; @al.; @2014]. The strong variability of BHXRBs may also be related to the time-lags found between the lightcurves in different energy bands. @Priedhorsky1979 and @Nolan1981 first found the presence of time-lags in Cygnus X-1, as well as in several other BHXRBs. Many BHXRBs show hard X-ray lags, where the hard photons are found to be delayed with respect to the soft ones [@Page1981; @Miyamoto1988; @Nowak1999a; @Nowak1999b; @Grinberg2014]. These hard X-ray lags are generally thought to originate from the propagation of mass accretion rate fluctuations in the accretion disc and are of the magnitude of 1 percent of the variability time scale [@DeMarco2013; @ArevaloandUttley2006]. On the other hand, the soft X-ray lags, where the soft X-ray photons lag the hard ones, are caused by the reflection of the coronal X-ray irradiation by the accretion disc. This time delay is known as a reverberation lag and can provide important information about the geometry of the innermost region of the BHXRBs [@DeMarco2013; @MarcoPonti2016; @DeMarco2017; @KaraErin2019]. The low-mass BHT H 1743–322 was discovered in 1977 using *Ariel-V* [@Kaluzienski; @and; @Holt; @1977] and is located at a distance of 8.5$\pm$0.8 kpc [@Steiner; @et; @al.; @2012]. This source is well known for its transient nature and has shown frequent outbursts with an average interval of $\sim$200 days [@Shidatsu; @et; @al.; @2012; @Shidatsu; @et; @al.; @2014].
After a prolonged gap since its discovery, the brightest outburst of H 1743–322, in 2003, was detected by *INTEGRAL* [@Revnivstev; @2003] and *RXTE* [@Markwardt; @and; @Swank; @2003]. The *RXTE* observation of this outburst resulted in the detection of a pair of HFQPOs at 240 and 160 Hz [@Homan; @et; @al.; @2005; @Remillard; @et; @al.; @2006]. A similar timing signature has also been found in a few other dynamical BHXRBs [@McClintock; @and; @Remillard; @2006; @McClintock; @et; @al.; @2009]. Using the *VLA* observation of the 2003 outburst and by applying a symmetric kinematic model for the jet trajectories, @Steiner [@et; @al.; @2012] estimated the source distance and inclination angle to be 8.5$\pm$0.8 kpc and 75$^\circ\pm3^\circ$, respectively. They also found the spin parameter to be 0.2$\pm$0.3 using the *RXTE* observation of the 2003 outburst. In addition to this, @srivivrao2009 found QPOs with a truncated accretion disc using the *RXTE* observations of the 2003 outburst, when the source was in the steep powerlaw state. Before the October 2008 outburst, a few additional outbursts were detected, which could not be studied extensively due to a lack of sufficient observations. However, the October 2008 outburst was termed a ‘failed outburst’, as the source never reached the HSS due to a sudden decrease in the mass accretion rate [@Capitanio; @et; @al.; @2009]. Another outburst, in July 2009, was detected by the *Swift/BAT* telescope [@Krimm; @et; @al.; @2009; @Motta; @et; @al.; @2010], which was followed by three more outbursts detected by *RXTE* in December 2009, August 2010 and April 2011 [@Zhou; @et; @al.; @2013]. Using the 2010 and 2011 outbursts, @Molla [@et; @al.; @2017] estimated the mass of the black hole to be $11.21^{+1.65}_{-1.96}$ M$_\odot$ by combining two methods: the Two Component Advective Flow (TCAF) model and the correlation between the photon index and the QPO frequency [@DewanganTitarchukandGriffiths2006].
Apart from the above, a few successive outbursts were reported in December 2011, January 2012 [@Negoro; @et; @al.; @2012], September 2012 [@Shidatsu; @et; @al.; @2012; @Shidatsu; @et; @al.; @2014], and August 2013 [@Nakahira; @et; @al.; @2013]. Following these outbursts, another outburst took place in 2014, which was observed quasi-simultaneously by both *XMM-Newton* and *NuSTAR*. Using the *Swift/XRT* monitoring of the 2014 outburst, @Stiele [@and; @Yu; @2016] reported the 2014 outburst as a failed one, as the source never reached the HSS during the entire outburst. In addition to this, using the *XMM-Newton* observation of this outburst, the authors reported a low-frequency QPO and an upper harmonic at $\sim$0.25 and $\sim$0.51 Hz, respectively. Moreover, @Ingram [@et; @al.; @2017] used both the *XMM-Newton* and *NuSTAR* observations of the 2014 outburst and found a truncated accretion disc geometry when the source was in the LHS. The results of @Stiele [@and; @Yu; @2016] and @Ingram [@et; @al.; @2017] agree that the source stayed in the LHS during the 2014 outburst. Since two observations at different epochs jointly performed with *XMM-Newton* and *NuSTAR* during the 2016 outburst of H 1743–322 are still unexplored, it is worth examining in detail the behavior of the source in the light of the various characteristics discussed above. Moreover, the 2016 outburst appears to be different from the 2008 and 2014 failed outbursts, as the 2016 outburst exhibits a full spectral state transition. In this paper, we carry out a systematic spectral and temporal study of H 1743–322 using these observations. The timing study using the *NuSTAR* data allows us to probe the nature of the timing features in the PDS beyond the energy range of *XMM-Newton*. We report the detection of a low-frequency QPO along with its upper harmonic in both epochs. We also find a shift in the centroid frequencies of the QPO and the upper harmonic between the two epochs.
We have also compared the characteristics of the PDS in the high energy band using *NuSTAR* with those obtained from the *XMM-Newton* observations. Besides, we study the energy dependence of the temporal parameters and compare them with previous studies. We have also found a hard lag and a log-linear increase of the time-lag with energy in the energy-dependent time-lag spectra derived from the *XMM-Newton* observations. In addition, we present a detailed broad-band spectral analysis of the joint *XMM-Newton* and *NuSTAR* spectral data in the 2.5–78 keV band, as well as study the accretion disc and the relativistic reflection. We have also studied the relation between the spectral and temporal parameters, and discussed the possible origin of the variability in the system. The remainder of the paper is organized as follows. We present the observations and data reduction in section 2, whereas section 3 contains the analysis and results of our spectral and temporal study. Finally, section 4 is devoted to discussion and concluding remarks.

  ----------- ------------------------ ------------- ------------- ----------- -----------
  Obs. No.    Instrument               Obs. ID       Obs. date     Exp. (ks)   Obs. mode
  Epoch - 1   *XMM-Newton*/EPIC-pn     0783540301    2016 Mar 13   142.6       Timing
              *NuSTAR*/FPMA,FPMB       80202012002   2016 Mar 13   65.8        Imaging
  Epoch - 2   *XMM-Newton*/EPIC-pn     0783540301    2016 Mar 15   142.6       Timing
              *NuSTAR*/FPMA,FPMB       80202012004   2016 Mar 15   65.6        Imaging
  ----------- ------------------------ ------------- ------------- ----------- -----------

OBSERVATIONS AND DATA REDUCTION
===============================

Swift Monitoring
----------------

The 2016 outburst of H 1743–322 was detected and followed by *Swift/XRT* [@Burrows2000; @Hill2000]. We analyzed all the observations taken in the *Swift/XRT* windowed timing mode between March 01, 2016 and April 08, 2016 using the online data analysis tools provided by the Leicester Swift Data Centre[^1] [@Evans2009].
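From such XRT light-curve products one obtains the hardness intensity diagram (HID) discussed below: its axes are just band-resolved count rates and their ratio. A minimal sketch, with made-up count rates standing in for the real XRT measurements:

```python
import numpy as np

# Hypothetical Swift/XRT count rates (counts/s) in the soft (0.8-3 keV) and
# hard (3-10 keV) bands for a few snapshots; illustrative numbers only.
soft = np.array([12.0, 30.0, 55.0, 40.0])
hard = np.array([9.0, 15.0, 11.0, 4.0])

total = soft + hard   # 0.8-10 keV intensity, the HID y-axis
hr = hard / soft      # hardness ratio, the HID x-axis

print(total, hr)
```

Plotting `total` against `hr` for every snapshot traces the outburst's path through the HID.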
We derived the count rate in the 0.8–10 keV, 0.8–3 keV and 3–10 keV bands, and for the calculation of the hardness ratio (hereafter HR), we divided the count rate in the 3–10 keV band by the count rate in the 0.8–3 keV band. Figure 1 shows the hardness intensity diagram (hereafter HID), which shows that the source undergoes a full state transition during the 2016 outburst.

*XMM-Newton*
------------

*XMM-Newton* observed H 1743–322 twice, on March 13, 2016 (hereafter Epoch - 1) and March 15, 2016 (hereafter Epoch - 2), for an exposure time of 142.6 ks each. Details of the *XMM-Newton* observations are given in Table 1. During these two observations, only the European Photon Imaging Camera (EPIC-pn) was employed, in the timing mode with a thick filter, to observe the source. The Scientific Analysis System (SAS v.16.0.0) with the most recent calibration files was used to filter and produce the EPIC-pn event files. We did not find any soft proton background flaring in the extracted lightcurve of Epoch - 1 in the energy range of 10–12 keV. However, background flaring towards the end of the observation was detected in the 10–12 keV lightcurve of Epoch - 2. To remove the flaring, we created a good time interval (GTI) file with count rate $\leqslant 1.7\,\mathrm{s^{-1}}$. We then applied this GTI file and filtered the event list for the background flaring. We used a rectangular region of 17 pixels width $(29 \leqslant RAWX \leqslant 46)$, centred on the source, to extract the source spectra. A narrow rectangular region of 5 pixels width $(05 \leqslant RAWX \leqslant 10)$ towards the edge of the detector was used to extract the background spectra. Using the `epatplot` task within SAS, we found that both the Epoch - 1 and Epoch - 2 observations were affected by pile-up.
In order to reduce the pile-up effect, we used only the single pixel events (<span style="font-variant:small-caps;">pattern==0</span>) and also excised the three central columns from the source position in both the observations. We then generated the Redistribution Matrix File (RMF) and Ancillary Response File (ARF) for each epoch using the tasks `rmfgen` and `arfgen` available within SAS, respectively. Moreover, we also extracted the source and background lightcurves and corrected the source lightcurves for the background contribution using the SAS task `epiclccorr`.

*NuSTAR*
--------

*NuSTAR* [@Harrison; @et; @al.; @2013] also observed H 1743–322 simultaneously with *XMM-Newton* in the two epochs, and the details of the observations are listed in Table 1. We reduced the data using the `nupipeline` task available within *NuSTARDAS* with the latest available calibration files. For both the FPMA and FPMB modules, we used a circular region of 30 arcsec, centred on the source, to generate the source spectra. We used another circular region of the same size on the same detector, away from the source, to extract the background. We generated the corresponding RMF and ARF files using the task `nuproducts` available within *NuSTARDAS*.
  ------------------------- ------------------------ ---------------------- --------------------------- ---------------------------
                            0.7–3 keV                                       3–10 keV
                            Epoch - 1                Epoch - 2              Epoch - 1                   Epoch - 2
  $\nu_{qpo}$(Hz)           0.980$\pm$0.005          1.020$\pm$0.009        0.980$\pm$0.003             1.040$\pm$0.003
  $FWHM_{qpo}$(Hz)          0.22$\pm$0.02            0.33$\pm0.03$          0.22$\pm$0.01               0.30$\pm0.01$
  $Q_{qpo}$                 4.50$^{+0.38}_{-0.44}$   3.1$^{+0.2}_{-0.3}$    4.4$\pm$0.2                 3.5$\pm0.1$
  $rms_{qpo}$\[%\]          13.0$\pm0.6$             13.8$^{+0.5}_{-0.7}$   18.8$\pm$0.3                19.24$\pm$0.2
  $\nu_{har}$(Hz)           2.0$^{+0.07}_{-0.05}$    2.26$\pm0.08$          2.01$\pm0.04$               2.20$\pm0.05$
  $FWHM_{har}$(Hz)          0.3$^{+0.3}_{-0.2}$      0.9$^{+0.4}_{-0.3}$    0.7$\pm0.2$                 0.76 (f)
  $Q_{har}$                 6.5$^{+3.0}_{-6.9}$      2.5$^{+1.5}_{-1.2}$    2.8$^{+0.6}_{-0.8}$         2.9$\pm0.1$
  $rms_{har}$\[%\]          4.5$^{+0.02}_{-0.01}$    8.7$^{+1.9}_{-2.0}$    8.7$^{+1.3}_{-1.4}$         7.2$^{+0.6}_{-0.7}$
  $\nu_{bln}$(Hz)           0.17$^{+0.03}_{-0.04}$   0.09$\pm0.05$          0.18$\pm0.02$               0.19$\pm0.01$
  $FWHM_{bln}$(Hz)          0.59$\pm0.04$            0.8 (f)                0.59$^{+0.08}_{-0.07}$      0.59$^{+0.06}_{-0.05}$
  $Q_{bln}$                 0.29$\pm$0.03            0.11$\pm0.02$          0.300$^{+0.010}_{-0.004}$   0.330$^{+0.007}_{-0.001}$
  $rms_{bln}$\[%\]          10.3$^{+1.8}_{-2.2}$     12.4$^{+0.9}_{-1.8}$   13.0$^{+0.7}_{-1.0}$        12.5$^{+0.4}_{-0.5}$
  $\nu_{bln_{zero}}$(Hz)    0 (f)                    0 (f)                  0 (f)                       0 (f)
  $FWHM_{bln_{zero}}$       4.9$^{+3.2}_{-1.4}$      $10^p$                 8.5$^{p}_{-2.5}$            9.2$^{p}_{-1.9}$
  $rms_{bln_{zero}}$\[%\]   17.0$^{+1.5}_{-1.7}$     10.5$^{+3.2}_{-3.9}$   18.3$^{+1.1}_{-1.2}$        18.4$\pm0.9$
  ------------------------- ------------------------ ---------------------- --------------------------- ---------------------------

  f – indicates the fixed parameters, p – indicates the parameters pegged at lower/upper bounds

  ------------------------- ----------------------- --------------------------- ------------------------ ------------------------
                            3–10 keV                                            10–30 keV
                            Epoch - 1               Epoch - 2                   Epoch - 1                Epoch - 2
  $\nu_{qpo}$(Hz)           0.980$\pm$0.003         1.060$\pm$0.005             0.980$\pm$0.005          1.07$\pm$0.01
  $FWHM_{qpo}$(Hz)          0.20$\pm$0.01           0.35$\pm0.02$               0.18$\pm$0.02            0.34$\pm0.03$
  $Q_{qpo}$                 4.9$^{+0.25}_{-0.27}$   3.1$\pm0.12$                5.4$^{+0.4}_{-0.5}$      3.1$^{+0.2}_{-0.3}$
  $rms_{qpo}$\[%\]          16.1$\pm0.4$            17.1$\pm0.3$                15.4$\pm$0.6             16.5$^{+0.2}_{-0.6}$
  $\nu_{har}$(Hz)           2.01$\pm0.1$            2.37$\pm0.01$               ...                      2.25$^{+0.07}_{-0.80}$
  $FWHM_{har}$(Hz)          1.02 (f)                0.99 (f)                    ...                      0.23 (f)
  $Q_{har}$                 1.96$\pm0.1$            2.4$\pm0.1$                 ...                      9.7$\pm0.3$
  $rms_{har}$\[%\]          7.13$\pm1.1$            7.2$^{+0.7}_{-0.8}$         ...                      4.4$^{+1.0}_{-1.3}$
  $\nu_{bln}$(Hz)           0.19$\pm0.02$           0.19$\pm0.02$               0.18$\pm0.03$            0.13 (f)
  $FWHM_{bln}$(Hz)          0.5$\pm0.01$            0.5$\pm0.1$                 0.35$^{+0.13}_{-0.20}$   0.54$^{+0.16}_{-0.12}$
  $Q_{bln}$                 0.38$^{+0.3}_{-0.2}$    0.360$^{+0.100}_{-0.001}$   0.52$^{+0.08}_{-0.06}$   0.23$^{+0.05}_{-0.07}$
  $rms_{bln}$\[%\]          10.3$^{+0.9}_{-2.2}$    12.4$^{+0.9}_{-1.0}$        8.5$^{+1.2}_{-1.5}$      9.2$^{+0.9}_{-1.0}$
  $\nu_{bln_{zero}}$(Hz)    0 (f)                   0 (f)                       0 (f)                    0 (f)
  $FWHM_{bln_{zero}}$       6.6$^{p}_{-1.72}$       $10^p$                      4.5$^{+2.6}_{-1.3}$      10$^p$
  $rms_{bln_{zero}}$\[%\]   15.5$^{+1.1}_{-1.2}$    14.8$^{+1.3}_{-1.5}$        15.0$^{+1.4}_{-1.6}$     14.0$^{+2.5}_{-3.1}$
  ------------------------- ----------------------- --------------------------- ------------------------ ------------------------

  f – indicates the fixed parameters, p – indicates the parameters pegged at lower/upper bounds

ANALYSIS AND RESULTS
====================

Power Density Spectra
---------------------

For the timing analysis, we used the Interactive Spectral Interpretation System (ISIS, V.1.6.2–40) [@HouckandDenicola2000]. We quote the errors at the 90% confidence level. The PDSs from the background-subtracted lightcurves of the *XMM-Newton*/EPIC-pn observations were computed using the "POWSPEC" task within FTOOLS in the two energy bands of 0.7–3 keV and 3–10 keV. After subtracting the contribution due to Poisson noise [@Zhangetal1995], we normalized the PDSs according to @Leahyetal1983 and then converted the variability power to squared fractional rms [@Belloni; @and; @Hasinger; @1990].
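The normalization chain just described (Leahy normalization, Poisson-noise subtraction, conversion to squared fractional rms) can be sketched as follows; this is a schematic single-segment version for illustration, not the actual POWSPEC implementation:

```python
import numpy as np

def pds_fractional_rms(counts, dt):
    """Periodogram of a binned light curve in Leahy normalization
    (Leahy et al. 1983), with the Poisson noise level (2 in Leahy units)
    subtracted and the result converted to squared fractional rms per Hz
    (Belloni & Hasinger 1990). `counts` = photons per bin of width `dt` s."""
    n = len(counts)
    n_ph = counts.sum()                                    # total photons
    power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / n_ph  # Leahy normalization
    rate = n_ph / (n * dt)                                 # mean count rate
    freqs = np.fft.rfftfreq(n, dt)
    return freqs[1:], (power[1:] - 2.0) / rate             # drop the DC term

# For a pure Poisson light curve the noise-subtracted power scatters around 0:
rng = np.random.default_rng(1)
f, p = pds_fractional_rms(rng.poisson(100.0, size=4096).astype(float), dt=0.1)
print(p.mean())
```

Integrating the rms-normalized power over a feature's frequency range gives its squared fractional rms amplitude.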
Figure 2 shows the PDSs in the 0.7–3 keV and 3–10 keV bands for Epoch - 1 and Epoch - 2, where the presence of the low-frequency QPO along with the upper harmonic at $\sim$1 Hz and $\sim$2 Hz is prominent. The PDSs in both the energy bands exhibit identical shapes and require a zero-centered Lorentzian, as well as three peaked Lorentzian components for the QPO, the upper harmonic and the band-limited noise (BLN) component. All the best-fit parameters are listed in Table 2. The significance of detection of the QPO (upper harmonic) in the 0.7–3 keV band is 17.9$\sigma$ (3.1$\sigma$) and 19$\sigma$ (4.1$\sigma$), whereas that found in the 3–10 keV band is 43.4$\sigma$ (5.2$\sigma$) and 40.7$\sigma$ (6.6$\sigma$), for Epoch - 1 and Epoch - 2, respectively. This suggests that the significance of both the QPO and the upper harmonic remains more or less similar between Epoch - 1 and Epoch - 2 for each energy band, whereas it increases with energy band in each epoch. From Table 2 it is clear that in both the observations, the QPO and the upper harmonic are detected at a $\sim$1:2 ratio in each energy band. We also note that the centroid frequencies of the QPO and the upper harmonic do not change with energy in either epoch. However, we found that the centroid frequencies of the QPO (upper harmonic) in Epoch - 2 are shifted towards higher frequencies by $0.04\pm0.01$ Hz ($0.26\pm0.1$ Hz) in the 0.7–3 keV band and $0.06\pm0.004$ Hz ($0.19\pm0.06$ Hz) in the 3–10 keV band with respect to those found in Epoch - 1. The quality factor ($Q$=$\nu_{centroid}$/FWHM; FWHM - full width at half maximum) of the QPO in each energy band is reduced in Epoch - 2 compared to that in Epoch - 1; however, the $Q$–factor of the upper harmonic remains almost similar for both the energy bands and epochs.
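The quoted centroid shifts and their uncertainties follow from simple error propagation in quadrature; for example, using the Table 2 QPO centroids for the 3–10 keV band:

```python
import math

def centroid_shift(nu1, err1, nu2, err2):
    """Shift of a centroid frequency between two epochs, with the
    uncertainty propagated in quadrature."""
    return nu2 - nu1, math.hypot(err1, err2)

# QPO centroids in the 3-10 keV band (Table 2, Epoch - 1 and Epoch - 2):
d, e = centroid_shift(0.980, 0.003, 1.040, 0.003)
print(d, e)   # ~0.06 Hz with ~0.004 Hz uncertainty, as quoted in the text
```

The same call with the 0.7–3 keV values (0.980±0.005, 1.020±0.009) reproduces the quoted 0.04±0.01 Hz.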
Although the fractional rms variability of the QPO does not change between Epoch - 1 and Epoch - 2 within each energy band, it is found to be higher in the 3–10 keV band with respect to that obtained in the 0.7–3 keV band for each epoch (see Table 2). Apart from this, the fractional rms variability of the upper harmonic appears to be larger in Epoch - 2 than in Epoch - 1 in the 0.7–3 keV band, whereas it remains the same for both the epochs in the 3–10 keV band. In addition to the above, we have also carried out the timing analysis of H 1743–322 using the *NuSTAR* observations. For this analysis, we have considered only the 3–30 keV band, as this band encompasses 97% of the source photons detected by *NuSTAR* in the 3–78 keV energy range [@Stiele; @and; @Kong; @2017]. We derived the cross-power density spectra (CPDS) in the 3–10 and 10–30 keV energy bands using MaLTPyNT [@Bachetti2015]. The signals from the two completely independent focal plane modules are used to generate the CPDS, which acts as a good alternative to the white-noise-subtracted PDS [@Bachetti2015]. For the generation of the CPDS, we used time bins of 0.1 s with stretches of 512 s. Figure 3 shows the CPDSs derived from the *NuSTAR* observations for Epoch - 1 and Epoch - 2 in the 3–10 and 10–30 keV bands. The best-fit parameters are given in Table 3. The shape of the CPDS and the required best-fit model for both the epochs in the 3–10 keV band are found to be the same as obtained in the same energy band of the *XMM-Newton* observations. The ratio at which the QPO and the upper harmonic are detected in each energy band is found to be consistent with the *XMM-Newton* observations. The shifts in the centroid frequencies of the QPO and the upper harmonic in Epoch - 2 with respect to those in Epoch - 1 are $0.08\pm0.006$ Hz and $0.36\pm0.1$ Hz towards the higher frequency side.
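The idea behind the CPDS, namely that the Poisson noise of the two independent focal plane modules is uncorrelated and therefore averages out of the real part of the cross spectrum, can be illustrated with a toy sketch (synthetic light curves and an unnormalized cross power; this is not the MaLTPyNT implementation):

```python
import numpy as np

def cpds(rate_a, rate_b, dt, seg_len):
    """Segment-averaged real part of the cross spectrum of two simultaneous
    light curves (e.g. FPMA and FPMB). Shared variability survives the
    averaging, while independent detector noise tends to zero."""
    n_seg = len(rate_a) // seg_len
    acc = np.zeros(seg_len // 2 + 1)
    for i in range(n_seg):
        s = slice(i * seg_len, (i + 1) * seg_len)
        fa = np.fft.rfft(rate_a[s])
        fb = np.fft.rfft(rate_b[s])
        acc += (fa * np.conj(fb)).real
    freqs = np.fft.rfftfreq(seg_len, dt)
    return freqs[1:], acc[1:] / n_seg   # drop the DC term

# Two independent noisy copies of the same 1 Hz signal:
rng = np.random.default_rng(0)
t = np.arange(51200) * 0.1
sig = np.sin(2 * np.pi * 1.0 * t)
a = sig + rng.normal(0, 1, t.size)
b = sig + rng.normal(0, 1, t.size)
f, c = cpds(a, b, dt=0.1, seg_len=5120)
print(f[np.argmax(c)])   # the peak sits at the shared 1 Hz signal
```

A full CPDS pipeline would additionally apply a Leahy- or rms-type normalization before fitting Lorentzians.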
As can be seen from Tables 2 and 3, the quality factor ($Q$) and the fractional rms variability of the QPO, as well as of the upper harmonic, in both the epochs behave similarly to those obtained from the *XMM-Newton* observations in the same energy band. Contrary to the *XMM-Newton* observations in this energy band, the significance levels increase in Epoch - 2 with respect to Epoch - 1: from 36.3$\sigma$ to 47$\sigma$ for the QPO and from 5.4$\sigma$ to 8$\sigma$ for the upper harmonic. The *NuSTAR* CPDS in the 10–30 keV band shows a similar shape to that in the 3–10 keV band. The QPO and the upper harmonic in this band are found in the $\sim1:2$ ratio in Epoch - 2, similar to the CPDS in the 3–10 keV band. However, no signature of the upper harmonic was found in the 10–30 keV band of Epoch - 1, which may be due to the lower S/N of the CPDS compared to that in Epoch - 2. The shift in the centroid frequency of the QPO in Epoch - 2 with respect to Epoch - 1 resembles that obtained in the 3–10 keV band of the *NuSTAR* observations and is $0.09\pm0.01$ Hz towards the higher frequency side. The fractional rms amplitude of the QPO is similar in both the epochs in the 10–30 keV band and is consistent with those found in the 3–10 keV band. On the other hand, the upper harmonic in Epoch - 2 exhibits a reduced fractional rms amplitude in the 10–30 keV band compared to the 3–10 keV band (see Table 3). The $Q$–factor of the QPO also exhibits similar behavior to that in the 3–10 keV energy band. In order to study the evolution of the characteristic frequency and fractional rms amplitude with energy, we divided the full energy band of the *XMM-Newton* observations into 8 equal narrow bands of $\sim$1 keV width, and derived the Poisson-noise-subtracted and rms-normalized PDS in each energy band. For this analysis, we excluded the *NuSTAR* observations due to the low signal-to-noise ratio in each of these energy bands.
The PDSs derived for each narrow energy band of the *XMM-Newton* data were modeled with three Lorentzian components for the QPO, the upper harmonic and the zero-centered BLN. We calculated the characteristic frequency ($\nu_{char}=\sqrt{\nu^2+\Delta^2}$, where $\Delta$ is the half width at half maximum) and the fractional rms amplitude of the QPO, the upper harmonic and the zero-centered BLN in each energy band for Epoch - 1 and Epoch - 2. Figure 4 shows the evolution of the characteristic frequency of the QPO, as well as of its upper harmonic and the zero-centered BLN, as a function of energy. It is clear that the characteristic frequencies of the QPO and its upper harmonic are flat, without any significant dependence on energy, for both the epochs. The characteristic frequency of the zero-centered BLN also remains almost flat, except for a slight decrease above $\sim$6 keV. Figure 5 exhibits the rms spectra of the QPO, as well as of its upper harmonic and the zero-centered BLN component, for both the epochs, which demonstrate either a flat or a slightly decreasing trend with energy.

Frequency-dependent Lag and Lag-energy Spectra
----------------------------------------------

We used only the *XMM-Newton* data and the GHATS package[^2] for the lag analysis. To study the time lag as a function of temporal frequency, we extracted EPIC-pn lightcurves in the 1–1.5 keV and 1.5–4 keV bands. Each of these lightcurves was divided into 141 segments, each with a length of 983 s. We computed the Fourier transform of each segment and calculated the average cross-spectrum. Using the averaged cross-spectrum, we calculated the frequency-dependent time lag for both the epochs [@Uttley2014]. As shown in the top panels of Figure 6, we found a hard lag of $0.40\pm0.15$ and $0.32\pm0.07$ s between the above mentioned energy bands in the 0.07–0.4 Hz frequency range for Epoch - 1 and Epoch - 2, respectively.
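A minimal sketch of a cross-spectral lag estimate of this kind is shown below, using a single segment and synthetic sinusoidal light curves (the actual analysis averages the cross spectrum over 141 segments, following @Uttley2014):

```python
import numpy as np

def time_lag(soft, hard, dt):
    """Time lag versus frequency from the cross-spectrum phase. With the
    sign convention used here, a positive lag means the hard photons
    arrive after the soft ones. Single segment, for illustration only."""
    fs = np.fft.rfft(soft - soft.mean())
    fh = np.fft.rfft(hard - hard.mean())
    cross = fs * np.conj(fh)
    freqs = np.fft.rfftfreq(len(soft), dt)[1:]          # drop the DC term
    return freqs, np.angle(cross[1:]) / (2 * np.pi * freqs)

# Hard-band curve delayed by 0.4 s with respect to the soft band:
dt, lag_true = 0.1, 0.4
t = np.arange(8000) * dt
soft = np.sin(2 * np.pi * 0.25 * t)
hard = np.sin(2 * np.pi * 0.25 * (t - lag_true))
f, lag = time_lag(soft, hard, dt)
k = np.argmin(np.abs(f - 0.25))
print(lag[k])   # recovers ~0.4 s at the 0.25 Hz signal frequency
```

In practice the phase (and hence the lag) is only meaningful at frequencies where the coherence between the two bands is high.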
It is noteworthy that the error bars dominate below 0.1 Hz, and no time lag has been observed above 0.4 Hz in the time-lag-frequency spectra for both the epochs. The coherence in the 0.2–0.4 Hz frequency range is also found to be close to unity for both the epochs. To study the variation of the time lag as a function of energy, we generated lightcurves in the 0.3–0.7, 0.7–1, 1–1.5, 4–5, 5–6, 6–7, 7–8 and 8–10 keV bands for both the epochs. We considered the 1.5–4 keV band as the reference band. The energy-dependent time lag was estimated between each narrow energy band and the reference band. Figure 6 (bottom panels) depicts the averaged time lags, estimated in the frequency range of 0.2–0.4 Hz, as a function of energy. The time lag increases with energy in a log-linear trend.

Energy Spectra
--------------

The time averaged *XMM-Newton*/EPIC-pn spectral data in the 0.7–10 keV band and the *NuSTAR* FPMA and FPMB spectral data in the 3–78 keV band were fitted simultaneously using ISIS (Version 1.6.2-40). The errors on the best-fitting parameters are calculated at the 90% confidence level unless otherwise specified. A systematic uncertainty of 1$\%$ was added to each *XMM-Newton*/EPIC-pn and *NuSTAR* FPMA/FPMB dataset to account for the calibration uncertainty between the different instruments [@Ingram; @et; @al.; @2017; @Madsen2017]. In order to use $\chi^2$ minimization to obtain the best fit, we grouped the EPIC-pn data to a minimum signal-to-noise ratio of 5 and a minimum of 10 channels per bin. Similarly, we also grouped the FPMA and FPMB data to the same signal-to-noise ratio used for the EPIC-pn data, but with a minimum of 5 channels per bin. Initially, we fitted the three spectral datasets jointly with a POWERLAW model modified by the Galactic absorption. We used the absorption model TBabs with the abundances given by @Wilms [@et; @al.; @2000] and the cross sections of @Verner [@et; @al.; @1996].
We also multiplied the absorbed POWERLAW model by a constant factor to account for any difference in the relative normalizations of the three instruments. We fixed this factor to 1 for the FPMA data and let it vary for the EPIC-pn and FPMB data. We noticed a discrepancy between the *XMM-Newton* and *NuSTAR* spectral datasets in the 3–10 keV band for both epochs. A similar discrepancy was found in the 2014 outburst of H 1743–322 and was eliminated by including an additional $E^{\Delta{\Gamma}}$ component [@Ingram2017]. We adopted the same procedure in our analysis: we fixed the value of ${\Delta{\Gamma}}$ at zero for both the *NuSTAR* FPMA and FPMB datasets and let it vary for the EPIC-pn data. This model provided an unacceptable fit, with $\chi^2$/dof equal to $8109.6/884$ and $7699.6/883$ for Epoch - 1 and Epoch - 2, respectively. Including a multicolour disc blackbody model (DISKBB) [@Mitsuda1984] to account for the thermal emission from the accretion disc improved the fit, with $\chi^2$/dof = $6456.8/882$ for Epoch - 1 and $\chi^2$/dof = $6079.6/881$ for Epoch - 2. However, as shown in Figure 7, the model CONSTANT\*$E^{\Delta{\Gamma}}$\*TBABS\*(DISKBB+POWERLAW) left strong residuals at $\sim$6–8 keV, due to the presence of an iron line, and a reflection hump at $\sim$15–30 keV. Two additional emission-line–like features were also noticed in the *XMM-Newton* EPIC-pn spectra near 1.8 and 2.2 keV. These lines most likely arise from calibration uncertainties near the Si and Au edges, respectively [@Hiemstra2011; @Diaz2014]. In addition, a broad excess around 1 keV is clearly visible in Figure 7; a similar excess in EPIC-pn timing mode has been studied extensively by several authors [@Boirin2005; @Martocchia2006; @Sala2008; @Hiemstra2011; @Alam2015]. The origin of this excess is not yet clear, but it is thought to be related to the instrumental calibration [@Alam2015].
In order to clarify this further, we fitted the *Swift*/XRT spectral data taken on the same epochs as considered in this work with TBABS\*POWERLAW, and did not find any excess below 2.5 keV. This confirms the conclusion of the previous authors mentioned above that the excesses below 2.5 keV in the *XMM-Newton*/EPIC-pn spectral data arise from calibration issues in the timing mode. We therefore excluded the EPIC-pn data below 2.5 keV from our spectral fitting.

Table 4: Best-fit parameters of Model 2 for Epoch - 1 and Epoch - 2.

| Component | Parameter | Epoch - 1 | Epoch - 2 |
|-----------|-----------|-----------|-----------|
|           | $\Delta{\Gamma}$ | 0.14$\pm$0.01 | 0.13$\pm$0.01 |
| TBabs     | N$_H$ ($\times$10$^{22}$ cm$^{-2}$) | 2.3$^{+0.4}_{-0.3}$ | 2.40$^{+0.05}_{-0.04}$ |
| DISKBB    | kT$_{in}$ (keV) | 1.1$^{+0.3}_{-0.2}$ | 1.2$\pm$0.2 |
|           | n$_{diskbb}$ | 4.70$^{+0.02}_{-2.80}$ | 3.9$^{+6.2}_{-2.0}$ |
| RELXILL   | i | 75$^\circ$ (f) | 75$^\circ$ (f) |
|           | a | 0.2 (f) | 0.2 (f) |
|           | q | 3 (f) | 3 (f) |
|           | r$_{in}$ (ISCO) | 16.8$^{+5.9}_{-13.6}$ | 10.0$^{+3.1}_{-8.4}$ |
|           | r$_{out}$ (r$_g$) | 400 (f) | 400 (f) |
|           | $\Gamma$ | 1.51$^{+0.04}_{-0.05}$ | 1.51$^{+0.04}_{-0.05}$ |
|           | A$_{Fe}$ | 3.0$^{+2.0}_{-1.0}$ | 3.0$^{+1.9}_{-0.9}$ |
|           | log$\xi$ | 3.20$\pm$0.09 | 3.20$^{+0.04}_{-0.03}$ |
|           | E$_{cut}$ (keV) | 92.8$^{+14.0}_{-13.8}$ | 91.9$^{+13.7}_{-13.3}$ |
|           | $\mathcal{R}$ | 0.3$\pm$0.1 | 0.4$\pm$0.1 |
|           | n$_{rel}$ ($\times$10$^{-3}$) | 7.2$\pm$0.2 | 6.8$\pm$0.2 |
|           | $\chi^2$/dof | 855.8/840 | 810.9/839 |
|           | $F_{abs}^p$ | 4.0 | 3.8 |

f – indicates a fixed parameter; $p$ – X-ray flux in the 2.5–78 keV band in units of $10^{-9}$ erg cm$^{-2}$ s$^{-1}$.

We tried to fit the iron line excess seen at $\sim$6–8 keV by adding a GAUSSIAN model component to the above mentioned model. We also replaced the POWERLAW model by the thermally Comptonized continuum model NTHCOMP. The model CONSTANT\*$E^{\Delta{\Gamma}}$\*TBABS\*(DISKBB + GAUSSIAN + NTHCOMP) (hereafter Model 1) provided an acceptable fit, with $\chi^2$/dof = $855.2/842$ and $791.2/841$ for Epoch - 1 and Epoch - 2, respectively.
The centroid energy of the iron line is found to be 6.6$\pm0.1$ keV with line width $\sigma = 0.9\pm0.1$ keV for Epoch - 1, and 6.5$\pm0.1$ keV with $\sigma = 1.0\pm0.1$ keV for Epoch - 2. The equivalent widths (EWs) of the iron line are $147^{+19}_{-25}$ eV and $167.7^{+25.1}_{-33.0}$ eV for Epoch - 1 and Epoch - 2, respectively. From these values, it is clear that the line energy, line width and EW of the iron line agree within errors between the two epochs. To fit the observed iron line and reflection hump together, we used the relativistic reflection model RELXILL [@Garcia2014; @Dauser2014], which describes the broad iron line and the reflected emission from an accretion disc illuminated by a power-law X-ray continuum with a high-energy cutoff. We replaced both the GAUSSIAN and NTHCOMP components in Model 1 with RELXILL. The resulting model, CONSTANT\*$E^{\Delta{\Gamma}}$\*TBABS\*(DISKBB+RELXILL) (hereafter Model 2), gave the best fit, with $\chi^2$/dof = $855.8/840$ (Epoch - 1) and $810.9/839$ (Epoch - 2). The best-fit model (Model 2) to the *XMM-Newton*/EPIC-pn and *NuSTAR* FPMA/FPMB data for both epochs is shown in Figure 8, and the corresponding best-fit parameters are listed in Table 4. In this fit, we fixed the inclination angle at $75^\circ$ and the spin parameter at 0.2, as estimated by @Steiner2012 and adopted in previous works on H 1743–322 [@IngramMotta2014; @Ingram2017]. We also fixed the emissivity index ($q=3$) for the whole disc by tying the break radius to the outermost disc radius at 400 $r_g$ [@StieleYu2016; @Ingram2017]. For the absorption component, we kept the hydrogen column density ($N_H$) free. The photon index ($\Gamma$), the ionization parameter ($\xi$) and the iron abundance ($A_{Fe}$) were also allowed to vary freely.
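The equivalent width quoted above measures the line strength relative to the local continuum, $EW = \int (F_{line}/F_{cont})\,dE$. A minimal numerical sketch (the flat continuum and Gaussian parameters below are chosen only to mirror the Epoch - 1 numbers for illustration, not to reproduce the actual fit):

```python
import numpy as np

def equivalent_width(energies, total_flux, continuum_flux):
    """EW = integral of (total - continuum)/continuum over energy;
    the result is in the units of `energies` (keV here)."""
    excess = total_flux - continuum_flux
    return np.trapz(excess / continuum_flux, energies)

E = np.linspace(3.0, 10.0, 2001)      # energy grid in keV
cont = np.ones_like(E)                # flat continuum (arbitrary units)
sigma, E0, area = 0.9, 6.6, 0.147     # Gaussian line of area 0.147 keV
line = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((E - E0) / sigma) ** 2)
print(1000 * equivalent_width(E, cont + line, cont))  # EW in eV
```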
From the best spectral fits, the foreground absorption ($N_H$) is found to be $2.3^{+0.4}_{-0.3}$ and $2.40^{+0.05}_{-0.04}$ $\times$10$^{22}$ cm$^{-2}$ for Epoch - 1 and Epoch - 2, respectively. These values are consistent with those obtained in previous studies [see @StieleYu2016; @Parmar2003; @Miller2006; @CorbelTomsickKaaret2006; @Shidatsu2014]. The disc is likely truncated, with an inner radius ($r_{in}$) of 16.8$^{+5.9}_{-13.6}$ $r_{isco}$ and 10.0$^{+3.1}_{-8.4}$ $r_{isco}$ for Epoch - 1 and Epoch - 2, respectively. It should be noted that the truncation of the inner disc is not statistically significant, as the uncertainties towards the lower limit of the inner radius are large. The inner radii are, however, consistent within errors between the two epochs. The photon index ($\Gamma$) is $\sim$1.5 for both epochs. The high values of the ionization parameter ($\log\xi$ = $3.20\pm0.09$ and $3.20^{+0.04}_{-0.03}$ for Epoch - 1 and Epoch - 2, respectively) suggest that the disc is highly ionized. The reflection fraction ($\mathcal{R}$) is well below unity ($\sim$0.3 and $\sim$0.4 for Epoch - 1 and Epoch - 2, respectively). Finally, the disc temperature (kT$_{in}$) is high, with values of $\sim$1.1 and $\sim$1.2 keV for Epoch - 1 and Epoch - 2, respectively.

Spectral/Temporal Correlation
-----------------------------

To study the connections between the temporal and spectral parameters, we divided each of the *XMM-Newton*/EPIC-pn and *NuSTAR*/FPMA, FPMB datasets into seven equal 20 ks time intervals, giving fourteen datasets in total. PDSs were derived from the corresponding background-subtracted lightcurves of the *XMM-Newton* EPIC-pn data, and the centroid frequency and fractional rms amplitude of the QPO were obtained for each 20 ks interval.
Since the significance of the upper harmonic is very low compared with that of the QPO (see Section 3.1), and the signal-to-noise ratio in the short 20 ks intervals is poor, we considered only the QPO in this part of the work. We also excluded the *NuSTAR* data from the time-resolved temporal analysis because of the small number of photons in each 20 ks interval. For the time-resolved temporal and spectral analysis, all errors are quoted at the 68% confidence level. From the time-resolved spectral and temporal studies, we derived the photon index and the QPO frequency for each time interval. Figure 9 shows the variation of the photon index ($\Gamma$) with the QPO frequency; the two parameters are linearly correlated. For each time-selected spectral dataset, we also computed the disc fraction and the Comptonized fraction in the 0.7–78 keV band using Model 1, by dividing the disc flux and the Comptonized flux, respectively, by the total unabsorbed flux. Figure 10 shows the evolution of the spectral and temporal parameters with time, where panels (a)–(d) represent the centroid frequency of the QPO, the fractional rms amplitude of the QPO, the disc fraction and the Comptonized fraction, respectively. It is clear from panels (a) and (b) that the centroid frequency and fractional rms amplitude of the QPO are weakly anti-correlated, with a correlation coefficient R $\sim$ 0.38. However, the p-value of this correlation is $\sim0.09$, indicating that the anti-correlation is not statistically significant. This might be due to the difficulty in constraining the model parameters of the timing features, given the poor signal-to-noise ratio in the short 20 ks intervals.
Moreover, the QPO frequency is weakly correlated with the disc fraction (see panels (a) and (c)) and weakly anti-correlated with the Comptonized fraction (see panels (a) and (d)), with correlation coefficients R $\sim$ 0.44 and $\sim$ 0.56, respectively. These two relations are statistically significant, with p-values of $\sim0.05$ and $\sim0.02$, respectively.

DISCUSSION AND CONCLUDING REMARKS
=================================

We performed temporal and spectral analyses of H 1743–322 using the joint *XMM-Newton* and *NuSTAR* observations taken in two different epochs during the 2016 outburst. The HID derived from the *Swift*/XRT observations (see Figure 1) indicates that the source was in the hard state during the *XMM-Newton* and *NuSTAR* observations, and that it underwent a full spectral state transition during the 2016 outburst. This is in contrast to the 2008 and 2014 outbursts, during which the source remained in the hard state throughout [@Capitanio2009; @StieleYu2016]. We detected a QPO along with its upper harmonic at high significance levels in the 0.7–3 keV, 3–10 keV and 10–30 keV bands of the *XMM-Newton* and *NuSTAR* data in both epochs. The absence of the upper harmonic in the 10–30 keV band of the *NuSTAR* data in Epoch - 1 is due to the poor signal-to-noise ratio of the data. The shape of the PDSs and the fractional rms amplitude of the QPO in all the above mentioned energy bands (see Tables 2 and 3) suggest that the QPO is of type C [@Casella2004; @Motta2011]. The *NuSTAR* observations provided the opportunity to investigate the variability of H 1743–322 in the hard band (10–30 keV). The similar shape of the PDSs in the different energy bands clearly indicates their energy-independent nature.
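The correlation coefficients quoted above follow the standard Pearson definition. A self-contained sketch of the coefficient itself (the sample arrays are arbitrary placeholders; in practice a standard statistics library would be used, which also supplies the p-value):

```python
import math

def pearson_r(x, y):
    """Pearson linear correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Toy samples: a perfectly correlated pair (r = +1)
# and a perfectly anti-correlated pair (r = -1).
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))
```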
We noticed that the centroid frequencies of the QPO and its upper harmonic in Epoch - 2 are shifted towards higher frequencies with respect to Epoch - 1 in every energy band. These shifts may indicate a geometrical change in the system between the two epochs. In the 2010 and 2011 outbursts, @Altamirano2012 reported the presence of a QPO and an upper harmonic at a frequency ratio of 1:2, and a shift of less than $\sim$2.2 mHz in the QPO frequency of H 1743–322 between two successive *RXTE* observations. In the failed outburst of 2014, @StieleYu2016 also detected a QPO and an upper harmonic at the same 1:2 ratio using a single *XMM-Newton* observation. On the other hand, the unchanged shape of the PDS with energy (see Figures 2 and 3) and the absence of a strong energy dependence of the characteristic frequency of the QPO, its upper harmonic and the zero-centered BLN component (see Figure 4) are consistent with the LHS of the system observed during the rising phase of the outburst [@StieleYu2015; @StieleYu2016]. As shown in Figure 5, the LHS is also supported by the flat or slightly decreasing trend of the rms amplitude of the zero-centered BLN with increasing energy in both epochs [@StieleYu2015; @StieleYu2016]. Similar rms spectra below 10 keV were also found in a few other BHXRBs, such as XTE J1650–500 and XTE J1550–564, in the LHS [@GierlinskiZdziarski2005]. As mentioned in Section 3.3, the centroid energy, width and EW of the iron line found during the 2016 outburst do not change between the two epochs, and are consistent with those found by @StieleYu2016 from the *XMM-Newton* observation of the 2014 outburst. The X-ray power-law shape is similar in both epochs ($\Gamma \sim 1.5$) and is typical of the LHS.
Furthermore, the inner disc radius ($r_{in}$) estimated during the 2016 outburst is likely truncated away from the ISCO in both epochs, although this is not statistically significant because of the large uncertainties towards the lower limit (see Table 4). These results are consistent with those found by @Ingram2017 during the 2014 failed outburst of H 1743–322. However, the accretion disc temperature remains high in both epochs. We note that a high inner disc temperature together with a low value of $\Gamma$, representing the hard state of H 1743–322, has also been reported previously [see @McClintock2009; @Chen2010; @Motta2011; @Cheng2019]. The high disc temperature may be due to irradiation of the disc by hard X-rays from the corona: the illuminating X-rays are absorbed by the disc and thermalized, which may raise the disc temperature [@Gierdonepage2009]. The high values of the ionization parameter (log$\xi\approx 3.2$) obtained in both epochs likewise indicate strong irradiation of the accretion disc by hard X-ray photons from the coronal region. As listed in Table 4, the energy spectra show a high-energy cutoff at $\sim$92 keV in both epochs, which is also characteristic of the LHS [@Motta2009; @Alam2014]. Moreover, the reflection fraction ($\mathcal{R}$), defined as the ratio of the coronal intensity illuminating the disc to the intensity reaching the observer, is found to be well below unity in both epochs. Low values of the reflection fraction may also indicate a truncated inner disc [@Garcia2015; @Furst2015], although the truncation estimated here is not statistically significant.
This is also supported by the findings of @Ingram2017, who reported a reflection fraction below unity together with a truncated disc for the same source in the 2014 outburst. The iron abundance we obtain is high with respect to that of @Ingram2017. As shown in the bottom panels of Figure 6, the lag-energy spectra indicate the presence of a hard lag in H 1743–322 during the 2016 outburst, in which the hard X-ray variations lag behind the soft X-ray variations. The hard lag is found to be $0.40\pm0.15$ s and $0.32\pm0.07$ s for Epoch - 1 and Epoch - 2, respectively. Figure 2 shows that the QPO does not contribute significant power in the 0.2–0.4 Hz frequency range used for estimating the average lags. This is also confirmed by the top panels of Figure 6, wherein the time lag above $\sim0.4$ Hz is zero. The presence of a hard X-ray time lag is very common in X-ray binaries. Such hard lags can be explained by the propagating fluctuations model [@Lyubarskii1997]. According to this model, fluctuations in the mass accretion rate arise at different radii of the accretion disc and propagate inwards towards the central object after originating at larger radii. As a result, the soft photons originating in the outer part of the disc are affected by a given fluctuation before the hard photons from the innermost region, and thus the hard photons lag the soft ones [see @IngramDone2011]. The presence of a hard lag can also be attributed to delays introduced by the Comptonization process [@Uttley2014; @MarcoPonti2016]. However, the lag due to Comptonization does not completely follow a log-linear trend with energy [@Uttley2011].
Many previous researchers, such as @Pei2017, @srivivrao2009, @Sriram2007 and @Choudhury2005, invoked a truncated disc geometry to explain the hard lag, whereas @KaraErin2019 found evidence of a hard lag when the disc is consistent with reaching the ISCO. In Active Galactic Nuclei (AGN), the hard lag is commonly observed even when the accretion disc extends down to the ISCO [see e.g., @Kara2016; @EpitropakisPapadakis2017]. Hence, the presence of hard lags in both BHXRBs and AGN is unlikely to be due to disc truncation. It is also interesting that a soft X-ray lag was reported during the 2008 and 2014 outbursts of H 1743–322 [@Marco2015; @MarcoPonti2016]. However, those two outbursts were failed ones, whereas the 2016 outburst exhibits a full spectral state transition from the hard to the soft state (see Figure 1), implying a successful outburst. As the luminosity might be responsible for the change in the lag properties between the 2016 outburst and the 2008 and 2014 ones, we derived the Eddington-scaled luminosity ($L_{3-10 keV}/L_{Edd}$) in the 3–10 keV band for the 2016 outburst, taking the mass of and distance to the source to be the same as those used by @MarcoPonti2016. The values of $L_{3-10 keV}/L_{Edd}$ are found to be $0.006\pm0.002$ and $0.005\pm0.002$ for Epoch - 1 and Epoch - 2, respectively. These values are close to the value of $L_{3-10 keV}/L_{Edd}$ $\sim0.004$ reported by @MarcoPonti2016 in the same energy band for the 2008 and 2014 outbursts. The similar Eddington-scaled luminosities indicate that something other than the luminosity is likely responsible for the change in the lag properties.
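The Eddington-scaled luminosities above follow from the usual isotropic conversion $L = 4\pi d^2 F$ with $L_{Edd} \approx 1.26\times10^{38}\,(M/M_\odot)$ erg s$^{-1}$. A sketch of this conversion (the mass and distance in the example call are illustrative placeholders, not necessarily the values adopted from @MarcoPonti2016):

```python
import math

L_EDD_PER_MSUN = 1.26e38   # Eddington luminosity per solar mass (hydrogen), erg/s
CM_PER_KPC = 3.086e21      # centimetres per kiloparsec

def eddington_ratio(flux, distance_kpc, mass_msun):
    """L/L_Edd for an observed band flux (erg cm^-2 s^-1),
    assuming isotropic emission at the given distance."""
    d = distance_kpc * CM_PER_KPC
    luminosity = 4 * math.pi * d**2 * flux
    return luminosity / (L_EDD_PER_MSUN * mass_msun)

# e.g. a 3-10 keV flux of 1e-9 erg cm^-2 s^-1 at 8.5 kpc
# for an assumed 10 solar-mass black hole
print(eddington_ratio(1e-9, 8.5, 10.0))
```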
The correlation found between the QPO frequency and the photon index ($\Gamma$) (see Figure 9) is very common in BHXRBs and has been studied extensively [@Vignarca2003; @TitarchukFiorito2004; @TitarchukShaposhnikov2005; @McClintock2009; @StieleMotta2011; @StieleBelloni2013]. Furthermore, @StieleBelloni2013 stated that the $\Gamma$–QPO frequency correlation can be explained through the 'sombrero' geometry, in which the black hole is surrounded by a quasi-spherical corona and the accretion disc penetrates slightly into the corona. Such a $\Gamma$–QPO frequency correlation indicates that the QPO properties are strongly related to the geometry of the coronal region. Although the weak anti-correlation between the centroid frequency and the fractional rms amplitude of the QPO in Figure 10 is not statistically significant, it might still hint at the type C nature of the QPO [@McClintock2009], in addition to the shape of the PDS (see Figure 2) and the fractional rms of the QPO (see Table 2). A larger dataset with a higher signal-to-noise ratio might make this anti-correlation more significant. The disc fraction is always less than $3\%$ and shows a weak correlation with the QPO frequency, as well as a weak anti-correlation with the QPO fractional rms amplitude. In this regard, it is worth mentioning that, based on a study of the BHXRBs XTE J1550–564 and GRO J1655–40, @Sobczak2000 suggested that the strong correlation between the disc flux and the QPO frequency could imply that the accretion disc regulates the QPO frequency. They also pointed out that the QPO phenomenon is not regulated by the accretion disc alone but is also closely related to the power-law component, which acts as a trigger only when it exceeds a threshold of $\sim20\%$ of the total flux.
For H 1743–322, we find that the Comptonized fraction is weakly anti-correlated with the QPO frequency, weakly correlated with the QPO fractional rms amplitude, and always greater than $97\%$ (see Figure 10), indicating that most of the total flux comes from the Comptonized component. A similar weak anti-correlation between the QPO frequency and the power-law flux was found for type C QPOs in the BHXRB GX 339–4 by @Motta2011. Moreover, the high Comptonized fraction and the weakness of the thermal disc component clearly support the LHS behaviour of H 1743–322 during the 2016 outburst period, in accordance with the temporal and spectral parameters discussed above.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank the anonymous referee for useful comments that improved the quality of the paper. This research has made use of archival data of the *XMM-Newton*, *NuSTAR* and *Swift* observatories through the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by the NASA Goddard Space Flight Center. The NUSTARDAS software, jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (Caltech, USA), the Science Analysis System (SAS), provided by the ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA), and the *Swift* online data analysis tools provided by the Leicester Swift Data Centre (<http://www.swift.ac.uk/user_objects/>) were used for processing the data of the corresponding observatories. The authors express their sincere thanks to David P. Huenemoerder for help with the ISIS package. This research has made use of the General High-energy Aperiodic Timing Software (GHATS) package developed by T. M. Belloni at INAF–Osservatorio Astronomico di Brera.
SC and PT acknowledge financial support from a grant under the ISRO (AstroSat) Announcement of Opportunity (AO) programme (DS-2B-13013(2)/8/2019-Sec.2). PT expresses his sincere thanks to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for support through the IUCAA associateship programme. SC is also very grateful to IUCAA, Pune, India for providing support and local hospitality during his frequent visits while giving the final shape to this paper. Agrawal, Vivek., Nandi, Anuj., 2015, MNRAS, 446, 3926 Alam et al., 2014, , 445, 4259 Alam et al., 2015, , 451, 3078 Altamirano D. et al., 2011, , 742, L17 Altamirano D., Strohmayer T., 2012, , 754, L23 P. Ar[é]{}valo & P. Uttley, 2006, , 367, 801 Bachetti, M. 2015, MaLTPyNT: Quick look timing analysis for NuSTAR data, Astrophysics Source Code Library, ascl:1502.021 Belloni T., Hasinger G., 1990, , 230, 103 Belloni T., Klein-Wolt M., M ́endez M., van der Klis M., van Paradijs J., 2000, , 355, 271 Belloni T., Homan J., Casella P., et al., 2005, , 440, 207 Belloni T. M., 2010, in Belloni T., ed., Lecture Notes in Physics Vol. 794, States and Transitions in Black Hole Binaries. Springer Verlag, Berlin, p. 53 Belloni T. M., Motta S. E., Munoz-Darias T., 2011, Bulletin of the Astronomical Society of India, 39, 409 Belloni T. M., Sanna A., Méndez M., 2012, , 426, 1701 Boirin L., Mendez M., Da[í]{}z Trigo M., Parmar A. N., Kaastra J. S., 2005, A&A, 436, 195 Burrows D. N. et al., 2000, in Flanagan K. A., Siegmund O. H., eds, X-Ray and Gamma-Ray Instrumentation for Astronomy XI, Vol. 4140 SPIE Conf Ser., Swift X-Ray Telescope. SPIE, Bellingham, p. 64 Casella P., Belloni T., Homan J., Stella L., 2004, , 426, 587 Casella, P., Belloni, T., Stella, L. 2005, , 629, 403 Capitanio F., Belloni T., Del Santo M., Ubertini P., 2009, , 398, 1194 Chen, Y. P., et al., 2010, , 522, A99 Cheng et al., 2019, , 482, 550 Choudhury, M. et al., 2005, , 631, 1072 Corbel S., Tomsick J. 
A., Kaaret P., 2006, ApJ, 636, 971 Dauser, T., Wilms, J., Reynolds, C. S., and Brenneman, L. W. 2010, , 409, 1534 Dauser, T., García, J., Wilms, J., et al. 2013, , 430, 1694 Dauser, T., García, J., Parker, M. L., Fabian, A. C., and Wilms, J. 2014, , 444, L100 De Marco, B., Ponti, G., Cappi, M., et al. 2013, MNRAS, 431, 2441 De Marco, B.& Ponti, G., 2016, , 826, 70 De Marco, B., Ponti, G., Munoz-Darias, T., & Nandra, K. 2015, ApJ, 814, 50 De Marco, B. et al., 2017, MNRAS, 471, 1475 Dewangan, G. C., Titarchuk, L., & Griffiths, R. E. 2006, ApJL, 637, L21 D[í]{}az Trigo et al., 2014, , 571, A76 Epitropakis A., Papadakis I. E., 2017, MNRAS, 468, 3568 Esin, A. A., McClintock, J. E., and Narayan, R. 1997, , 489, 865 Esin, A. A., McClintock, J. E., Drake, J. J., et al. 2001, , 555, 483 Evans, P. A., Beardmore, A. P., Page, K. L., et al. 2009, , 397, 1177 Fabian A. C., Rees M. J., Stella L., White N. E., 1989, , 238, 729 F[ü]{}rst, F., Nowak, M. A., Tomsick, J. A., et al. 2015, , 808, 122 Garc[í]{}a, J., Kallman, T. R. 2010, , 718, 695 Garc[í]{}a, J., Kallman, T. R., and Mushotzky, R. F. 2011, , 731, 131 Garc[í]{}a, J., Dauser, T., Reynolds, C. S., et al. 2013, , 768, 146 Garc[í]{}a J. et al., 2014, , 782, 76 Garc[í]{}a, J. A. et al., 2015, , 813, 84 Gierli[ń]{}ski, M., Done, C. 2004, , 347, 885 Gierli[ń]{}ski M., Zdziarski A. A., 2005, , 363, 1349 Gierli[ń]{}ski M., Done, Chris and Page, Kim, 2009, , 392, 1106 Grinberg, V., Pottschmidt, K., B[ö]{}ck, M., et al. 2014, A&A, 565, 1 Harrison, F.A., Craig, W.W., Christensen, F.E., et al. 2013, , 770, 103 Hiemstra et al., 2011, , 411, 137 Hill J. E., Zugger M. E., Shoemaker J., Witherite M. E., Koch T. S., Chou L. L., Case T., Burrows D. N., 2000, in Flanagan K. A., Siegmund O. H., eds, X-Ray and Gamma-Ray Instrumentation for Astronomy XI, Vol. 4140 SPIE Conf. Ser., Laboratory X-ray CCD Camera Electronics: a Test Bed for the Swift X-Ray Telescope. SPIE, Bellingham, p. 87 Homan J., Miller J. 
M., Wijnands R., van der Klis M., Belloni T., Steeghs D., Lewin W. H. G., 2005, , 623, 383 Houck, J. C., & Denicola, L. A. 2000, in ASP Conf. Ser. 216, Astronomical Data Analysis Software and Systems IX, ed. N. Manset, C. Veillet, & D. Crabtree (San Francisco, CA: ASP), 591 Ingram A. & Done, Chris, 2011, , 415, 2323 Ingram A., Motta S., 2014, , 444, 2065 Ingram A., van der Klis. M., Middleton M. and Altamirano, D., 2017, , 464, 2979 Kaluzienski L. J., Holt S. S., 1977, IAU Circ., 3099, 1 Kara et al., 2016, , 462, 511 Kara E. et al., 2019, Nature, 565, 198 Krimm H. A. et al., 2009, Astron. Telegram, 2058 Leahy, D. A., Elsner, R. F., & Weisskopf, M. C. 1983, ApJ, 272, 256 Lyubarskii, Yu. E., 1997, , 292, 679 Madsen, K. K. et al., 2017, , 153, 2 Markwardt C. B., Swank J. H., 2003, Astron. Telegram, 133 Martocchia A., Matt G., Belloni T., Feroci M., Karas V., Ponti G., 2006, A&A, 448, 677 McClintock, J. E., Horne, K., and Remillard, R. A. 1995, , 442, 358 McClintock, J. E., Haswell, C. A., Garcia, M. R., et al. 2001, , 555, 477 McClintock, J. E., Narayan, R., Garcia, M. R., et al. 2003, , 593, 435 McClintock J. E., Remillard R. A., 2006, in Compact Stellar X-ray Sources, vol. 39, ed. W. Lewin and M. van der Klis (Cambridge: Cambridge univ. Press), 157 McClintock, J. E., Remillard, R. A., Rupen, M. P., et al. 2009, , 698, 1398 Miyamoto, S., Kitamoto, S., Mitsuda, K., & Dotani, T. 1988, Natur, 336, 450 Miller J. M. et al., 2006, ApJ, 646, 394 Miller J. M., 2007, , 45, 441 Mitsuda, K., Inoue, H., Koyama, K., et al. 1984, , 36, 741 Molla, A. A., et al., 2017, , 834, 88 Morgan E. H., Remillard R. A., Greiner J., 1997, , 482, 993 Motta S., Belloni T., Homan J., 2009, , 400, 1603 Motta S., Munoz-Darias T., Belloni T., 2010, , 408, 1796 Motta S., Munoz-Darias T., Casella P., Belloni T., Homan J., 2011, , 418, 2292 Motta S. E., Rouco-Escorial A., Kuulkers E., Munoz-Darias T., Sanna A., 2017, , 468, 2311 Nakahira S. et al., 2013, Astron. Teleg., 5241, 1 Narayan, R., Yi, I. 
1995, , 452, 710 Narayan, R., McClintock, J. E., and Yi, I. 1996, , 457, 821 Negoro H. et al., 2012, Astron. Teleg., 3842, 1 Nolan P. L. et al., 1981, ApJ, 246, 494 Nowak, M. A., Vaughan, B. A., Wilms, J., Dove, J. B., & Begelman, M. C. 1999a, ApJ, 510, 874 Nowak, M. A., Wilms, J., & Dove, J. B. 1999b, ApJ, 517, 355 Page, C. G., Bennetts, A. J., & Ricketts, M. J. 1981, SSRv, 30, 369 Parmar A. N., Kuulkers E., Oosterbroek T., Barr P., Much R., Orr A., Williams O. R., Winkler C., 2003, A&A, 411, L421 Pei, Songpeng. et al., 2017, Ap&SS, 362, 118 Plant, D. S. et al., 2015, , 573, 120 Priedhorsky W., Garmire G. P., Rothschild R., Boldt E., Serlemitsos P., Holt S., 1979, ApJ, 233, 350 Remillard, R. A., McClintock, J. E., Orosz, J. A., Levine, A. M. 2006, ApJ, 637, 1002 Revnivtsev M., 2003, , 410, 865 Reynolds C. S., Nowak M. A., 2003, , 377, 389 Sala G., Greiner J., Ajello M., Primak N., 2008, A&A, 489, 1239 Shakura, N. I., Sunyaev, R. A. 1973, , 24, 337 Shaposhnikov, N., Titarchuk, L. 2007, , 663, 445 Shidatsu, M., Negoro, H., Nakahira, S., et al. 2012, ATel, 4419, 1 Shidatsu M. et al. 2014, , 789, 100 Sobczak, G. J. et al., 2000, , 531, 537 Sriram, K., Agrawal, V.K., Pendharkar, J.K., Rao, A.R., 2007, , 661, 1055 Sriram, K., Agrawal, V. K., Rao, A. R., 2009, RAA, 9, 901 Steiner, J. F., McClintock, J. E., Remillard, R. A., et al. 2010, , 718, L117 Steiner F. J., McClintock J. E., Reid M. J., 2012, , 745, 7 Stiele, H., Motta, S., Muñoz-Darias, T. & Belloni, T. M., 2011, , 418, 1746 Stiele, H., Belloni, T. M., Kalemci, E. & Motta, S., 2013, , 429, 2655 Stiele, H., Yu W., 2015, , 452, 3666 Stiele, H., Yu, W., 2016, , 460, 1946 Stiele, H., Kong, A. K. H., 2017, , 844, 8 Tanaka y., Shibazaki N., 1996, , 34, 607 Titarchuk, L., Fiorito, R. 2004, , 612, 988 Titarchuk, L., & Shaposhnikov, N. 2005, , 626, 298 Trudolyubov S. P., Borozdin K. N., Priedhorsky W. 
C., 2001, , 322, 309 Uttley P., Wilkinson T., Cassatella P., Wilms J., Pottschmidt K., Hanke M., Böck M., 2011, , 414, 60 Uttley, P., Cackett, E. M., Fabian, A. C., Kara, E., & Wilkins, D. R. 2014, A&ARv, 22, 72 Verner, D. A., Ferland, G. J., Korista, K. T., and Yakovlev, D. G. 1996, , 465, 487 Vignarca, F., Migliari, S., Belloni, T., Psaltis, D., & van der Klis, M. 2003, , 397, 729 Wilms, J., Allen, A., and McCray, R. 2000, , 542, 914 Zhang, W., Jahoda, K., Swank, J. H., Morgan, E. H., & Giles, A. B. 1995, ApJ, 449, 930 Zhou, J. N., Liu, Q. Z., et al., 2013, , 431, 2285 [^1]: <http://www.swift.ac.uk/user_objects/> [^2]: <http://astrosat.iucaa.in/~astrosat/GHATS_Package/Home.html>
---
abstract: 'In this paper, a new variant of the ElGamal signature scheme is presented and its security is analyzed. We also give, for its theoretical interest, a general form of the signature equation.'
---

**Omar Khadir**

Department of Mathematics, Faculty of Science and Technology, University of Hassan II-Mohammedia, Morocco. khadir@hotmail.com

[**Mathematics Subject Classification:**]{} 94A60\
[**Keywords:**]{} Public key cryptography, ElGamal signature scheme, discrete logarithm problem.

Introduction
============

Since the invention of public key cryptography in the late 1970s \[2, 13, 12\], several new subjects related to data security, such as identification, authentication, zero-knowledge proofs and secret sharing, have been explored. But among all these issues, perhaps the most important is how to build secure digital signature systems. For more than three decades, the topic has been intensively investigated \[10, 15, 14, 4, 1, 11, 9\], probably owing to its fundamental and practical role in electronic funds transfer.\
There is only one principle on which digital signature algorithms rest. To sign a message $m$, Alice, with the help of her private key, must answer a question asked by Bob, the verifier. The question is naturally a function of $m$. Nobody other than Alice is able to give the right answer, and hence forge her signature, not even the asker himself.\
In most digital signature schemes, the question considered is a difficult mathematical equation depending on $m$ as a parameter. Only Alice, because she possesses the private key, is able to solve it. In this protocol, we are not necessarily concerned with the security of the transmitted data.
Indeed, Bob and Alice can publish, respectively, the equation and the solution on two protected and separate personal servers.\ In 1985, ElGamal \[3\], inspired by the ingenious Diffie-Hellman ideas on new directions in cryptography \[2\], was one of the first to propose a practical signature scheme. Used properly, this signature system has never been broken. He built it on a simple equation with two unknown variables. The hardness of this equation relies on the discrete logarithm problem \[7, p.103\]. In general, a signature scheme can be derived from a public key cryptosystem. Curiously, in his paper \[3\], ElGamal did not exploit this possibility, and it is still unclear how he found his signature equation. This fact has encouraged many researchers to look for equations having properties similar to those of ElGamal. See, for instance, \[14, 4, 5\].\ Some practical signature protocols, such as Schnorr's method \[14\] and the digital signature algorithm DSA \[8\], are directly derived from the ElGamal scheme.\ The ElGamal signature scheme permanently faces increasingly sophisticated attacks. If the system were ever completely broken, alternative protocols, previously designed, prepared and tested, would be useful. In this work we present a new variant of the ElGamal signature method and analyze its security. Furthermore, we give, for its theoretical interest, a general form of our signature equation.\ The paper is organized as follows. In section 2, we review the basic ElGamal signature algorithm and recall the main known attacks. Our new variant and a theoretical generalization are presented in section 3. We conclude in section 4.\ In the sequel, we adopt the notation of ElGamal's paper \[3\]. $\mathbb{Z}$ and $\mathbb{N}$ are, respectively, the sets of integers and non-negative integers. For every positive integer $n$, we denote by $\mathbb{Z}_n$ the finite ring of modular integers and by $\mathbb{Z}_n^*$ the multiplicative group of its invertible elements. 
Let $a,b,c$ be three integers. The greatest common divisor of $a$ and $b$ is denoted by $gcd(a,b)$. We write $a\equiv b$ $[c]$  if $c$ divides the difference $a-b$, and $a=b\ mod\ c$ if $a$ is the remainder in the division of $b$ by $c$.\ We start by describing the original ElGamal signature scheme. ElGamal Original Signature Scheme ================================== We recall in this section the basic ElGamal protocol in three steps, followed by the best-known attacks. [**2.1. ElGamal Algorithm** ]{} [**1.**]{} Alice begins by choosing three numbers: - $p$, a large prime integer. - $\alpha$, a primitive root \[7, p.69\] of the finite multiplicative group $\mathbb Z_p^*$. - $x$, a random element in $\{1,2,\ldots,p-1\}$. She computes $y=\alpha^x\ mod\ p$. Then $(p,\alpha,y)$ is Alice's public key and $x$ her private key. [**2.**]{} Assume that Alice wants to sign the message $m<p$. She must solve the congruence $$\alpha^m \equiv y^r\,r^s\ [p]$$ where $r$ and $s$ are two unknown variables.\ Alice arbitrarily fixes $r$ to be $r=\alpha^k\ mod\ p$, where $k$ is chosen randomly and invertible modulo $p-1$. She has exactly $\varphi(p-1)$ possibilities for $k$, where $\varphi$ is Euler's phi function \[7, p.65\]. Equation (1) is then equivalent to: $$m\equiv x \, r+k\, s \ [p-1]$$ As Alice possesses the secret key $x$, and as the integer $k$ is invertible modulo $p-1$, she computes the second unknown variable $s$ by $\displaystyle s\equiv \frac{m-x\, r}{k}\ [p-1]$. [**3.**]{} Bob can verify the signature by checking that congruence (1) is valid. The key generation problem must also be taken into account: prime integers are essentially generated by probabilistic algorithms. In a recent work \[6\], we obtained experimental results on the subject.\ Now, we recall the main known attacks. [**2.2. Main attacks** ]{} The first attack was mentioned by ElGamal himself \[3\]. 
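Before turning to the attacks, the three steps above can be sketched in a few lines of Python. This is a toy illustration only (small prime, no hashing or padding, non-constant-time arithmetic), not a usable implementation:

```python
from math import gcd
import random

def keygen(p, alpha):
    """Step 1: private key x, public component y = alpha^x mod p."""
    x = random.randrange(1, p - 1)
    return x, pow(alpha, x, p)

def sign(m, x, p, alpha):
    """Step 2: pick k invertible modulo p-1, output the couple (r, s)."""
    while True:
        k = random.randrange(1, p - 1)
        if gcd(k, p - 1) == 1:
            break
    r = pow(alpha, k, p)
    # solve m = x*r + k*s (mod p-1) for s, as in congruence (2)
    s = (m - x * r) * pow(k, -1, p - 1) % (p - 1)
    return r, s

def verify(m, r, s, p, alpha, y):
    """Step 3: check congruence (1): alpha^m = y^r * r^s (mod p)."""
    return pow(alpha, m, p) == pow(y, r, p) * pow(r, s, p) % p

p, alpha = 509, 2          # toy prime; 2 is a primitive root mod 509
x, y = keygen(p, alpha)
r, s = sign(42, x, p, alpha)
assert verify(42, r, s, p, alpha, y)
assert not verify(43, r, s, p, alpha, y)   # a tampered message fails
```

The modular inverse `pow(k, -1, p - 1)` requires Python 3.8 or later; it is exactly the extended-Euclidean computation that the variant of section 3 is designed to avoid.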
It is not recommended to sign two different messages with the same secret exponent. As the complete justification of this attack does not appear in ElGamal's paper, we reproduce here the proof from \[16, p.291\], which seems to us less restrictive than that in \[7, p.455\]. If Alice signs more than one message with the same secret exponent, then her system can be totally broken. Let $(m_1,r,s_1)$ and $(m_2,r,s_2)$ be the signatures of the two messages $m_1$ and $m_2$ with the same secret exponent $k$. By relation (2), we retrieve Alice's secret key $x$ if we find the value of the parameter $k$, provided that $r$ is invertible modulo $p-1$.\ We have $m_1\equiv x\, r + k\, s_1\ [p-1]$ and $m_2\equiv x\, r + k\, s_2\ [p-1]$, so: $$m_1-m_2\equiv k\,(s_1-s_2)\ [p-1]$$ If we put $gcd(s_1-s_2,p-1)=d$, there exist two integers $S$ and $P$ such that $s_1-s_2=d\, S$, $p-1=d\, P$ and $gcd(S,P)=1$. Thus relation (3) becomes:\ $ m_1-m_2= k\,(s_1-s_2)+K\, (p-1)=k\,d\,S+K\,d\,P,\ K\in \mathbb Z$. Dividing by $d$ and setting $M=(m_1-m_2)/d=k\,S+K\, P$, we obtain $M\equiv k\, S\ [P]$. As $S$ is invertible modulo $P$, we have $$k=M\, S^{-1}+K\, P$$ Since $k<p-1$ and $p-1=d\, P$, we deduce that $K<d$. By equality (4), we can test every value of $K$ and check whether $r\equiv \alpha^k\ [p]$. We find $k$ if $d$ is not too large.\ In 1996, Bleichenbacher \[1\] discovered an important fact: when some parameters are smooth \[16, p.197\], it is possible to forge an ElGamal signature without solving the discrete logarithm problem. We present here a slightly modified version of his result. Let $(p,\alpha,y)$ be Alice's public key. Suppose that $\beta<p$ is a positive integer for which one can efficiently compute $t\in \mathbb N$ such that $\alpha\equiv \beta^t\ [p]$.\ If $\displaystyle \frac{p-1}{gcd(p-1,\beta)}$ is smooth, then an adversary of Alice will be able to forge her signature on any given message $M$. Let $D=gcd(p-1,\beta)$ and $\beta=\lambda\, D, \ \lambda\in\mathbb N^*$. 
We denote by $H$ the subgroup of $\mathbb Z_p^*$ generated by $\alpha^D\ mod\ p$. Since $y^D\equiv (\alpha^x)^D\equiv (\alpha^D)^x\ [p]$, we have $y^D\in H$. By a well known result, as the order $(p-1)/D$ of $H$ is smooth, the discrete logarithm problem in $H$ is computationally feasible: one can efficiently find $z_0\in \mathbb N$ such that $y^D\equiv (\alpha^D)^{z_0}\ [p]$.\ Let $M$ be a message to be signed and $m=h(M)\ mod\ p$, where $h$ is a public hash function. Alice's adversary sets $r=\beta$. The ElGamal signature equation (1) becomes: $$\beta^{t\, m}\equiv y^\beta\, \beta^s\equiv y^{\lambda\, D}\, \beta^s\equiv (\alpha^D)^{z_0\, \lambda}\, \beta^s\equiv \beta^{\lambda\, t\, z_0\, D}\, \beta^s \ [p]$$ Hence $s\equiv t\, (m-\beta \, z_0)\ [p-1]$, and the couple $(r,s)$ is a valid signature of the message $M$, which completes the proof.\ Observe that it is not so surprising to choose $r=\beta$ or $r=\beta^i\ mod \ p,\ i\in\mathbb N$, since $\beta^t\equiv \alpha \ [p]$ implies that $\beta$ is another generator of $\mathbb Z_p^*$.\ The next section presents our main contribution. New Variant and Theoretical Generalization ========================================== In this section, we suggest a new variant of the ElGamal signature scheme based on an equation with three unknown variables. The method does not need the computation of the inverse of the secret exponent and so avoids the use of the extended Euclidean algorithm. The technical report \[4\], although it collected several signature equations, did not study the case we propose here. [**3.1. Our protocol**]{} We first suppose that $h$ is a public secure hash function. We can take $h$ to be the secure hash algorithm SHA1 \[7, Chap.9\] and \[16, Chap.5\].\ [**1.**]{} Alice begins by choosing her public key $(p,\alpha, y)$, where $p$ is a large prime integer, $\alpha$ is a primitive element of the finite multiplicative group $\mathbb Z_p^*$ and $y=\alpha^x\ mod\ p$. 
Element $x$, which is a random integer in $\{1,2,3,\ldots,p-1\}$, is Alice's private key. [**2.**]{} Assume that Alice wants to sign the message $M<p$. She must solve the congruence $$\alpha^t \equiv y^r\,r^s\, s^m \ [p]$$ where $r,s$ and $t$ are three unknown variables and $m=h(M)\ mod\ p$.\ Alice arbitrarily fixes $r$ to be $r=\alpha^k\ mod \ p$ and $s$ to be $s=\alpha^l\ mod \ p$, where $k,l$ are chosen randomly in $\{1,2,\ldots,p-1\}$.\ Equation (5) is then equivalent to: $$t\equiv r\,x+k\,s+l\, m\ [p-1].$$ As Alice holds the secret key $x$ and knows the values of $r,s,k,l,m$, she is able to compute the third unknown variable $t$. [**3.**]{} Bob can verify the signature by checking that congruence (5) holds.\ Our scheme has the advantage of not requiring the extended Euclidean algorithm to compute $k^{-1}$ modulo $p-1$. Perhaps this can be an answer to the problems raised in \[9, subsection 1.3\]. To illustrate the technique, we give the following small example. Let $(p,\alpha,y)$ be Alice's public key, where $p=509$, $\alpha=2$ and $y=482$. We emphasize that we do not know whether using such a small value of $\alpha$ weakens the system. The private key is $x=281$. Suppose that Alice wants to produce a signature for the message $M$ for which $m\equiv h(M)\equiv 432\ [508]$, with the two random exponents $k=208$ and $l=386$. She computes $r\equiv \alpha^k\equiv 2^{208}\equiv 332\ [p]$, $s\equiv \alpha^l\equiv 2^{386}\equiv 39\ [p]$ and $t\equiv r\,x+k\,s+l\,m\equiv 440\ [p-1]$. Bob, or anyone else, can verify the relation $\alpha^t \equiv y^r\,r^s\, s^m \ [p]$. Indeed, we find that $\alpha^t\equiv 436\ [p]$ and $y^r\,r^s\, s^m \equiv 436 \ [p]$. Notice here that $k$ and $l$ may be even integers, unlike in the ElGamal protocol, where the exponent $k$ is always odd since it must be relatively prime to $p-1$, an even number. [**3.2. Security analysis**]{} Suppose that Oscar is an adversary of Alice. Let us discuss some possible and realistic attacks. 
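As a sanity check before discussing attacks, the numerical example of section 3.1 can be reproduced mechanically; every value below is taken from that example:

```python
p, alpha = 509, 2
x = 281                                   # Alice's private key
y = pow(alpha, x, p)                      # public component, 482
m, k, l = 432, 208, 386                   # digest and random exponents

r = pow(alpha, k, p)                      # 332
s = pow(alpha, l, p)                      # 39
t = (r * x + k * s + l * m) % (p - 1)     # 440, from congruence (6)

# Bob's check of congruence (5): alpha^t = y^r * r^s * s^m (mod p)
lhs = pow(alpha, t, p)
rhs = pow(y, r, p) * pow(r, s, p) * pow(s, m, p) % p
assert (y, r, s, t) == (482, 332, 39, 440)
assert lhs == rhs == 436
```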
[**Attack 1 :** ]{} Knowing all signature parameters for a particular message $M$, Oscar tries to find Alice's secret key $x$.\ Equation (5) is equivalent to $\alpha^t\equiv \alpha^{x\, r}\,r^s\, s^m\ [p]$, so $(\alpha^r)^x\equiv \alpha^t\, r^{-s}\, s^{-m} \ [p]$. Therefore, Oscar is confronted with the hard discrete logarithm problem.\ If Oscar prefers to work with relation (6), he needs to know $k$ and $l$. Computing them leads back to the discrete logarithm problem. [**Attack 2 :** ]{} Oscar tries to forge Alice's signature on a message $M$ by arbitrarily fixing two of the unknown variables and solving for the third. [**(1)**]{} Suppose, for example, that Oscar has fixed $r,s$ and tries to solve equation (5) for the variable $t$. Here again, he is confronted with the discrete logarithm problem. [**(2)**]{} Assume that Oscar has fixed $r$ and $t$. We have from relation (5): $r^s \, s^m\equiv \alpha^t\, y^{-r} \ [p]$, and there is no known way to solve this equation. [**(3)**]{} Assume now that Oscar has fixed $s$ and $t$. We have from relation (5): $y^r \, r^s\equiv \alpha^t\, s^{-m} \ [p]$; this equation is similar to the previous case, so it appears intractable. [**Attack 3 :** ]{} Suppose that Oscar has collected $n$ valid signatures for messages $M_i$, $i\in\{1,2,3,\ldots,n\}$, $n\in \mathbb N$. He obtains a system of $n$ modular equations: $$(S)\left\{\begin{array}{c} t_1\equiv x\, r_1+k_1\,s_1+l_1\,m_1\ [p-1] \\ t_2\equiv x\, r_2+k_2\,s_2+l_2\,m_2\ [p-1] \\ \vdots \ \ \ \vdots \ \ \ \vdots\\ t_n\equiv x\, r_n+k_n\,s_n+l_n\,m_n\ [p-1] \\ \end{array}\right.$$ where, $\forall i\in\{1,2,3,\ldots,n\}$, $r_i\equiv \alpha^{k_i}\ [p]$, $s_i\equiv \alpha^{l_i}\ [p]$ and $m_i\equiv h(M_i)\ [p]$.\ Since system (S) contains $2n+1$ unknown variables $x,k_i,l_i,\ i\in\{1,2,3,\ldots,n\}$, Oscar can find several valid solutions. 
However, as $x$ is Alice's secret key, it has a unique correct value, and Oscar can never be sure which candidate value of $x$ is the right one. Consequently, this attack is to be rejected.\ The next result is similar to one that exists for the ElGamal scheme. If no hash function is used, then Oscar can existentially forge Alice's signature. Assume that Alice produces the parameters $(r,s,t)$ as a signature for the message $M$, so $\alpha^t \equiv y^r\,r^s\, s^m \ [p]$. Let $k,k',l,l'\in\mathbb N$ be four arbitrary integers with $gcd(l',p-1)=1$. If Oscar chooses $r\equiv \alpha^k\, y^{k'}\ [p]$ and $s\equiv \alpha^l\, y^{l'}\ [p]$, he obtains: $$\alpha^t \equiv y^r\,(\alpha^{k\, s}\, y^{k'\, s})\, (\alpha^{l\, m}\, y^{l'\, m}) \ [p].$$ Collecting the powers of $\alpha$ and of $y$, relation (7) holds if $\left\{\begin{array}{c} t-k\,s -l\, m \equiv 0\ [p-1]\ \ \ (7.1)\\ r+k'\,s +l'\, m\equiv 0\ [p-1]\ \ \ (7.2)\\ \end{array}\right.$\ Oscar computes $m$ from equality (7.2): $\displaystyle m\equiv -\frac{r+k'\,s}{l'}\ [p-1]$; and from (7.1) he has $\displaystyle t\equiv k\,s- \frac{l\, (r+k'\,s)}{l'}\ [p-1]$. Thus $(r,s,t)$ is a valid signature for the message $m$.\ Alice can sign two messages with the same couple of secret exponents. Indeed, let $(r,s,t_1)$ and $(r,s,t_2)$ be the signatures of the two different messages $M_1$ and $M_2$ associated with the secret exponents $(k,l)$. We have $\left\{\begin{array}{c} t_1\equiv x\,r+k\, s+l\,m_1\ [p-1] \\ t_2\equiv x\,r+k\, s+l\, m_2\ [p-1] \\ \end{array}\right.$\ where $m_1\equiv h(M_1)\ [p-1]$ and $m_2\equiv h(M_2)\ [p-1]$.\ We can follow the method used in the proof of Proposition 1 and find the value of $l$, but it seems that it is not an easy task to retrieve the secret parameters $k$ and $x$. [**3.3. Complexity of our method :**]{} As in \[5\], let $T_{exp}$, $T_{mult}$ and $T_h$ be, respectively, the time needed to perform a modular exponentiation, a modular multiplication and the hash computation of a message $M$. 
We ignore the time required for modular additions, subtractions and comparisons, and make the conversion $T_{exp}=240\, T_{mult}$.\ The signer Alice needs to perform two modular exponentiations, three modular multiplications and one hash computation. So the global required time is $T_1=2\,T_{exp}+ 3\, T_{mult}+T_h=483\,T_{mult}+T_h$.\ The verifier Bob needs to perform four modular exponentiations, two modular multiplications and one hash computation. So the global required time is $T_2=4\, T_{exp}+ 2\, T_{mult}+T_h=962\,T_{mult}+T_h$.\ The cost of communication, without $M$, is $6\, |p|$ bits, since to sign, Alice transmits $(p,\alpha,y)$ and $(r,s,t)$; here $|p|$ denotes the bit-length of the integer $p$.\ Observe that the complexity of our method is not too high relative to that of the ElGamal scheme or the one in \[5\]. [**3.4. Theoretical generalization**]{} Let $h$ be a public secure hash function. [**1.**]{} Alice begins by choosing her public key $(p,\alpha, y)$, where $p$ is a large prime integer, $\alpha$ is a primitive element of the finite multiplicative group $\mathbb Z_p^*$ and $y=\alpha^x\ mod\ p$, with $x$ a random integer in $\{1,2,3,\ldots,p-1\}$. Element $x$ is Alice's private key. [**2.**]{} Assume that Alice wants to sign the message $m<p$. She must solve the congruence $$\alpha^t \equiv y^{r_1}\,r_1^{r_2}\, r_2^{r_3}\ldots r_{n-1}^{r_n}\,r_n^m\ [p]$$ where $r_1,r_2,\ldots, r_n, t$ are $n+1$ unknown variables.\ Alice arbitrarily fixes $r_1=\alpha^{k_1}\ mod\ p$, $r_2=\alpha^{k_2}\ mod\ p$, ..., and $r_n=\alpha^{k_n}\ mod\ p$, where $k_1,k_2,\ldots ,k_n$ are chosen randomly.\ Equation (8) is then equivalent to: $$t\equiv x\,r_1+k_1\,r_2+ \ldots+k_{n-1}\,r_n+k_n\, m\ [p-1].$$ As Alice holds the secret key $x$ and knows the values $r_i,k_j,m$, $i\in \{1,2,\ldots, n\}$, she is able to compute the $(n+1)$-th unknown variable $t$. [**3.**]{} Bob can check that verification condition (8) is valid. 
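The generalized protocol can be sketched for arbitrary $n$ in the same toy setting as before; the exponent chain in congruence (8) collapses to the linear congruence (9), which is what the signer actually computes:

```python
import random

def gen_sign(m, x, p, alpha, n):
    """Solve congruence (9) after fixing r_i = alpha^{k_i} mod p."""
    ks = [random.randrange(1, p - 1) for _ in range(n)]
    rs = [pow(alpha, k, p) for k in ks]
    coeffs = rs[1:] + [m]          # r_2, ..., r_n, m
    t = (x * rs[0] + sum(k * c for k, c in zip(ks, coeffs))) % (p - 1)
    return rs, t

def gen_verify(m, rs, t, p, alpha, y):
    """Check congruence (8): alpha^t = y^{r_1} r_1^{r_2} ... r_n^m (mod p)."""
    rhs = pow(y, rs[0], p)
    for base, e in zip(rs, rs[1:] + [m]):
        rhs = rhs * pow(base, e, p) % p
    return pow(alpha, t, p) == rhs

p, alpha = 509, 2                  # toy parameters, as in the earlier example
x = random.randrange(1, p - 1)
y = pow(alpha, x, p)
rs, t = gen_sign(123, x, p, alpha, n=4)
assert gen_verify(123, rs, t, p, alpha, y)
```

The case $n=2$ recovers the three-variable protocol of section 3.1.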
Let $\overrightarrow{u}=(x,k_1,k_2,\ldots, k_n)$ be Alice's secret key vector and $\overrightarrow{v}=(r_1,r_2,\ldots,r_n,m)$ the signature parameter vector. If $\overrightarrow{u}.\overrightarrow{v}$ denotes the scalar product, then the last signature parameter $t$ can be obtained from the modular equation $t\equiv \overrightarrow{u}.\overrightarrow{v}\ [p-1]$, which is an immediate consequence of relation (9). Conclusion ========== In this work, we described a new variant of the ElGamal signature scheme and analyzed its security. Our method relies on an ElGamal-like equation with three unknown variables, and it avoids the use of the extended Euclidean algorithm. We also gave a generalization for its theoretical interest.\ In the future, one may try to improve our new variant. One idea is to replace the modular group $\mathbb Z_p^*$ by a subgroup whose order is a prime divisor of $p-1$, or by other remarkable structures such as elliptic curve groups. [99]{} D. Bleichenbacher, [*Generating ElGamal signatures without knowing the secret key*]{}, In Advances in Cryptology, Eurocrypt’96, LNCS 1070, Springer-Verlag, (1996), 10 - 18. W. Diffie and M. E. Hellman, [*New directions in cryptography*]{}, IEEE Transactions on Information Theory, vol. IT-22, (1976), 644 - 654. T. ElGamal, [*A public key cryptosystem and a signature scheme based on discrete logarithms*]{}, IEEE Trans. Info. Theory, IT-31, (1985), 469 - 472. P. Horster, M. Michels, H. Petersen, [*Generalized ElGamal signature schemes for one message block*]{}, Technical Report, TR-94-3, 1994. E. S. Ismail, N. M. F. Tahat and R. R. Ahmad, [*A new digital signature scheme based on factoring and discrete logarithms*]{}, J. of Mathematics and Statistics [**4**]{}, (2008), 222 - 225. O. Khadir, L. Szalay, [*Experimental results on probable primality*]{}, Acta Univ. Sapientiae, Math. [**1**]{}, no. 2, (2009), 161 - 168.\ Available at http://www.emis.de/journals/AUSM/C1-2/math2-6.pdf A. J. Menezes, P. 
C. van Oorschot and S. A. Vanstone, [*Handbook of applied cryptography*]{}, CRC Press, Boca Raton, Florida, 1997.\ Available at http://www.cacr.math.uwaterloo.ca/hac/ National Institute of Standards and Technology (NIST), FIPS Publication 186, DSA, Department of Commerce, 1994.\ http://www.itl.nist.gov/fipspubs/fip186.htm P. Q. Nguyen and I. E. Shparlinski, [*The insecurity of the digital signature algorithm with partially known nonces*]{}, J. of Cryptology, Vol. [**15**]{}, (2002), 151 - 176. H. Ong, C. P. Schnorr and A. Shamir, [*Efficient signature schemes based on polynomial equations*]{}, In Advances in Cryptology, Crypto’84, LNCS 196, Springer-Verlag, (1985), 37 - 46. D. Pointcheval and J. Stern, [*Security proofs for signature schemes*]{}, In Advances in Cryptology, Eurocrypt’96, LNCS 1070, Springer-Verlag, (1996), 387 - 398. M. O. Rabin, [*Digitalized signatures and public key functions as intractable as factorization*]{}, MIT/LCS/TR-212, 1979. R. Rivest, A. Shamir and L. Adleman, [*A method for obtaining digital signatures and public key cryptosystems*]{}, Communications of the ACM, Vol. 21, (1978), 120 - 126. C. P. Schnorr, [*Efficient signature generation by smart cards*]{}, In Advances in Cryptology, Crypto’89, LNCS 435, Springer-Verlag, (1990), 239 - 252. A. Fiat and A. Shamir, [*How to prove yourself: practical solutions to identification and signature problems*]{}, In Advances in Cryptology, Crypto’86, LNCS 263, Springer-Verlag, (1987), 186 - 194. D. R. Stinson, [*Cryptography, theory and practice*]{}, Third Edition, Chapman & Hall$/$CRC, 2006.
--- abstract: 'The I4U consortium was established to facilitate joint entries to the NIST speaker recognition evaluations (SRE). The latest such joint submission was to SRE 2018, in which the I4U submission was among the best-performing systems. SRE’18 also marks the 10-year anniversary of the I4U consortium’s participation in the NIST SRE series. The primary objective of the current paper is to summarize the results and lessons learned based on the twelve sub-systems and their fusion submitted to SRE’18. It is also our intention to present a shared view on the advancements, progress, and major paradigm shifts that we have witnessed as an SRE participant in the past decade, from SRE’08 to SRE’18. In this regard, we have seen, among others, a paradigm shift from supervector representations to deep speaker embeddings, and a switch of research focus from channel compensation to domain adaptation.' address: | $^1$NEC Corporation :: Tokyo Institute of Technology, Japan\ $^2$University of Eastern Finland, Finland :: INRIA, France\ $^3$JD AI Research and Platform, USA – $^4$Institute for Infocomm Research, Singapore\ $^5$LIUM, France – $^6$National University of Singapore, Singapore – $^7$LIA, France\ $^8$Nanyang Technological University, Singapore – $^9$Northwest Polytechnic University, China\ $^{10}$CRSS, University of Texas at Dallas, USA – $^{11}$EURECOM, France bibliography: - 'mybib.bib' title: | I4U Submission to NIST SRE 2018:\ Leveraging from a Decade of Shared Experiences --- **Index Terms**: speaker recognition, benchmark evaluation Introduction ============ The series of speaker recognition evaluations (SRE) conducted by NIST has been a major driving force in advancing speaker recognition technology [@MARTIN2000; @Martin2007]. The basic task is speaker verification: given a segment of speech, decide whether a specified target speaker is speaking in that segment. The SRE in 2018 marks the most recent and ambitious attempt to tackle more realistic tasks [@sre18]. 
The SRE’18 evaluation set comprises two partitions – *Call-My-Net 2* (CMN2) and *Video-Annotation-for-Speech-Technology* (VAST) – named after the corpora [@Jones2017; @Tracey2018] from which the data were derived. For the CMN2 partition, domain mismatch appears to be the major challenge – the *train set* consists of English utterances while the *test set* consists of Tunisian Arabic utterances. For the VAST partition, the major challenge is the *multi-speaker test* scenario, in which an additional diarization module has to be used to locate the target speaker (if any) in a given test segment. This paper presents the technical details of the datasets, sub-system development, and fusion strategy of the I4U SRE’18 submission. In the past decade, I4U participated in five SREs, namely SRE’08, 10, 12, 16, and 18 [@hli2009; @hli2010; @Saeidi2013; @Lee2017; @i4u2018]. Beyond the joint submissions themselves, the I4U consortium was formed with a common vision to promote research collaboration and facilitate an active exchange of information and experience towards the open evaluation of speaker recognition technology. Along the way we have seen *old* technical challenges being solved, *e.g.*, channel compensation [@nap; @Kenny2007; @kenney2010plda], after which researchers have moved on to tackle new challenges, *e.g.*, domain adaptation [@Romero2014; @alam2018coral; @kalee2018; @Sun2016]. SRE’18 marks the ten-year anniversary of the I4U consortium’s participation in the NIST SRE series. As we set out with the aim of tackling new frontiers in robust speaker recognition, we reckon that it is beneficial to look into past I4U submissions, to share the lessons learned and the insights gained from a decade of I4U experience. The paper is organized as follows. Section 2 gives a brief description of the SRE’18 dataset, its challenges, and the I4U solutions to deal with them. We then present the I4U SRE’18 results in Section 3. Section 4 looks into past I4U submissions. Section 5 concludes the paper. 
Data and Challenges {#sec:train_and_dev} =================== The two main challenges of SRE’18 are (i) **domain mismatch** in the CMN2 partition, and (ii) **multi-speaker test** segments in the VAST partition. In this section, we provide a brief description of the CMN2 and VAST data conditions that give rise to these challenges and elaborate on the strategies and techniques implemented in the I4U sub-systems to deal with them. CMN2 and VAST Partitions ------------------------ Table \[table:dataset\] shows the list of corpora made available for the fixed-training condition of SRE’18. The train, development and evaluation sets consist of two partitions [@sre18], namely, *Call-My-Net 2* (CMN2) [@Jones2017] and *Video-Annotation-for-Speech-Technology* (VAST) [@Tracey2018]. - The **CMN2** partition comprises conversational speech in Tunisian Arabic recorded over *voice over internet protocol* (VOIP), in addition to the *public switched telephone network* (PSTN). This is different from the Fisher, Switchboard and Mixer corpora used in previous SREs. Comparing `CMN2-Train` to the `CMN2-Dev` and `CMN2-Eval` sets (see Table \[table:dataset\]), the two major differences are language (English versus Tunisian Arabic) and transmission channel (PSTN only versus a mix of PSTN and VOIP). These differences lead to the so-called **domain mismatch** problem, in which the test set does not follow the same distribution as the train set. - The **VAST** partition comprises wideband English speech segments extracted from amateur video recordings downloaded from YouTube. A signature feature of the VAST partition is multi-speaker conversation with considerable background noise. The VoxCeleb [@Nagrani2017] and SITW [@McLaren2016] corpora, used as `VAST-Train` and `VAST-Dev` as shown in Table \[table:dataset\], bear the same properties and therefore belong to the same domain. 
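A common embedding-level remedy for such domain mismatch is correlation alignment (CORAL): whiten the out-of-domain embeddings and re-color them with the covariance of the unlabelled in-domain data. The following NumPy sketch illustrates the idea on synthetic data; it is the plain embedding-level variant, not the model-level CORAL+ used by some of the sub-systems described later, and all array shapes here are illustrative:

```python
import numpy as np

def coral_align(X_out, X_in, eps=1e-6):
    """Map out-of-domain rows of X_out so that their second-order
    statistics match those of the unlabelled in-domain X_in."""
    d = X_out.shape[1]

    def mat_pow(C, power):
        # symmetric matrix power via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.maximum(w, eps) ** power) @ V.T

    C_out = np.cov(X_out, rowvar=False) + eps * np.eye(d)
    C_in = np.cov(X_in, rowvar=False) + eps * np.eye(d)
    A = mat_pow(C_out, -0.5) @ mat_pow(C_in, 0.5)   # whiten, then re-color
    return (X_out - X_out.mean(axis=0)) @ A + X_in.mean(axis=0)

rng = np.random.default_rng(0)
X_out = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 5))  # "out-of-domain"
X_in = rng.normal(size=(1000, 5))                             # "in-domain"
X_adp = coral_align(X_out, X_in)
# the adapted covariance now matches the in-domain covariance
assert np.allclose(np.cov(X_adp, rowvar=False),
                   np.cov(X_in, rowvar=False), atol=1e-3)
```

Note that this adaptation needs no speaker labels on the in-domain side, which is exactly the situation created by the unlabelled CMN2 set.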
While it might seem unusual to include two distinct data partitions in a single core task, the setup enables a systematic comparison to past results as well as a measure of system performance on new tasks. In this regard, the CMN2 partition is the continuation of past SREs with new challenges (domain mismatch and a lack of labelled in-domain data), while the VAST partition represents a new initiative towards speaker recognition in the wild. See Figure \[fig:sres\]. We shall touch upon this point further in Section \[sec:pastandfuture\]. \[table:dataset\]

[l l l]{}
**Partition** & **Corpus** & **Language**\
`CMN2-Train` & SRE’04-05-06-08-10-12 & \
& Swb-2 Phase I, II, III & \
& Swb-Cell Part 1, 2 & \
& Fisher 1, 2 & \
`CMN2-Dev` & SRE’18-Dev & Tunisian Arabic\
& SRE’18-CMN2-Unlabeled & (PSTN + VOIP)\
`CMN2-Eval` & SRE’18-Eval & \
`VAST-Train` & VoxCeleb1, VoxCeleb2 & \
`VAST-Dev` & SRE’18-Dev, SITW-Eval & \
`VAST-Eval` & SRE’18-Eval & \

Domain adaptation {#sec:domain_adaptation} ----------------- A state-of-the-art speaker recognition system consists of a speaker embedding front-end (*e.g.*, i-vector [@Dehak10frontend], x-vector [@snyder2018vector]) followed by a scoring back-end, typically implemented with *probabilistic linear discriminant analysis* (PLDA) [@ioffe2006plda; @Princepaper]. One advantage of the two-stage pipeline is that the same feature extraction and speaker embedding front-end can be reused, while domain adaptation is accomplished via a transformation of the x-vectors (or i-vectors) [@Sun2016; @alam2018coral], or of the parameters of the PLDA model [@kalee2018], to cater for the conditions of the anticipated application. The two-stage pipeline design was used for all twelve sub-systems in the I4U SRE’18 submission. In the case of CMN2, the speaker embedding front-end and PLDA back-end are trained on the **out-of-domain** `CMN2-Train` dataset. Let $\boldsymbol{\phi}$ denote a speaker embedding (*i.e.*, an x-vector or i-vector). 
A PLDA model is given by $$p\left(\boldsymbol{\phi}\right) = \mathcal{N}\left(\left. \boldsymbol{\phi} \right| \boldsymbol{\mu}, \mathbf{\Phi}_{\rm b}+\mathbf{\Phi}_{\rm w} \right), \notag$$ where $\boldsymbol{\mu}$ is the global mean, and $\mathbf{\Phi}_{\rm b}$ and $\mathbf{\Phi}_{\rm w}$ are the between- and within-speaker covariance matrices, both of full rank. Given `SRE’18-CMN2-Unlabeled`, an unlabelled set of **in-domain** data (see Table \[table:dataset\]), the central idea of domain adaptation is to estimate the in-domain between- and within-speaker covariance matrices from the in-domain, yet unlabelled, dataset with some help from the out-of-domain covariance matrices. In the I4U SRE’18 submission, two unsupervised domain adaptation techniques were found to be useful, namely, (i) model-level *correlation alignment* with CORAL+ [@kalee2018], and (ii) Kaldi’s PLDA adaptation[^1]. We refer the interested reader to [@alam2018coral; @kalee2018; @Sun2016] and references therein for more details. Multi-speaker test segment {#sec:multi-speaker} -------------------------- The multi-speaker test scenario is not new. It first appeared in NIST SRE’99 [@MARTIN2000], where summed two-channel telephone speech containing two speakers was used as the test segment. In the SRE’18 VAST partition, a test segment may contain several speakers. One straightforward solution is to score the entire test segment regardless of the competing speakers. Alternatively, one could use a diarization system to obtain several speaker clusters, score the enrollment segment against all the speaker clusters, and select the maximum score. Speaker diarization was explored in Sys. 6 and 7, as shown in Table \[table:subsysperf\]. Following [@shell2014diarization], speaker diarization was accomplished using an x-vector PLDA system. Given a VAST test segment, it is first split uniformly into cuts of about 1 second, which are then represented as x-vectors. 
A matrix of PLDA scores is computed from all cross-pairs of these x-vectors. The score matrix is used as the affinity matrix in *agglomerative hierarchical clustering* (AHC), from which speaker clusters are derived. The number of clusters is determined by an AHC stopping threshold tuned on the `SITW` set. It is worth mentioning that speaker change point detection, which has been shown to be critical in reducing the diarization error rate, seems to be less important in reducing the error rate of the speaker verification task. I4U SRE’18 Submission and Results ================================= The sub-system performance is shown in Table \[table:subsysperf\]. Among the twelve sub-systems, eight employed x-vector embeddings in some form. Notably, Sys. 5 and 6 use an attentive pooling layer in the x-vector extractor, while Sys. 10 uses a t-vector embedding trained with a triplet loss [@zhang2018text]. The remaining three sub-systems use i-vectors. Comparing the results, x-vectors give a much better performance than i-vectors on both CMN2 and VAST. Kaldi PLDA domain adaptation was the most commonly used strategy. CORAL+ was also successfully employed, resulting in the lowest EER and $C_{\mathrm{prim}}$. Clustering the unlabeled set to obtain pseudo-speaker labels was tried in Sys. 3, though no significant difference between the clustering and Kaldi adaptation strategies is observed. In terms of performance on the VAST partition, we observe only a slight benefit from using speaker diarization (Sys. 6 and 7), suggesting good potential for further improvement. The scores of the sub-systems were pre-calibrated before fusion. To this end, we applied an affine transformation, a simple scaling factor and bias, to the scores. The calibrated scores from the sub-systems were then combined with a linear fusion. The cross-entropy cost was used for both calibration and fusion, with slightly different settings of the effective prior. 
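The pre-calibration-then-fusion step can be sketched as follows. This is a minimal illustration with synthetic scores and hypothetical fusion weights; the actual submission fitted the calibration and fusion parameters with the BOSARIS Toolkit and a prior-weighted cross-entropy objective, whereas the sketch below uses plain (unweighted) logistic-regression calibration:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_affine(scores, labels, steps=500, lr=0.1):
    """Fit s -> a*s + b by gradient descent on the cross-entropy,
    i.e. plain logistic-regression calibration."""
    a, b, n = 1.0, 0.0, len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            err = sigmoid(a * s + b) - y
            ga, gb = ga + err * s, gb + err
        a, b = a - lr * ga / n, b - lr * gb / n
    return a, b

random.seed(0)
labels = [1] * 200 + [0] * 200
# synthetic target/non-target scores of two sub-systems on the same trials
sys1 = [random.gauss(2.0, 1.0) for _ in range(200)] + \
       [random.gauss(-2.0, 1.0) for _ in range(200)]
sys2 = [random.gauss(1.5, 1.2) for _ in range(200)] + \
       [random.gauss(-1.5, 1.2) for _ in range(200)]

# 1) pre-calibrate each sub-system with its own affine map
cal = []
for scores in (sys1, sys2):
    a, b = train_affine(scores, labels)
    cal.append([a * s + b for s in scores])

# 2) linear fusion of the calibrated scores (weights illustrative)
w = [0.6, 0.4]
fused = [w[0] * s1 + w[1] * s2 for s1, s2 in zip(*cal)]
assert sum(fused[:200]) / 200 > sum(fused[200:]) / 200
```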
In this regard, the effective prior was set to 0.5 for score calibration, while an effective prior $P_{\mathrm{eff}}$ of 0.005 and 0.05 was used for the fusion on the CMN2 and VAST partitions, respectively. Note that the effective priors were set based on those specified in the evaluation plan [@sre18]. The BOSARIS Toolkit [@brummer] was used to perform calibration and fusion. In the primary submission, only sub-systems with positive weights were retained. This resulted in 7 sub-systems in the primary submission for the CMN2 partition (Sys. 3, 4, 6, 7, 9, 10, 11), and 11 sub-systems in the primary submission for the VAST partition (Sys. 1 to 11). The final submitted fusion system performance is shown in Table \[table:performance\]. In general, the performances on the development and evaluation sets agree for the CMN2 partition. On the VAST partition, we notice a large performance gap between the development and evaluation sets, where the EER increases from 3.70% to 10.18%. This result reflects the lack of a suitable development set for the VAST data, which motivated the use of `SITW` as part of `VAST-Dev`, as shown in Table \[table:dataset\].

[l c c c c c c ]{}
& & & &\
Sys. & Diar. & DA & EER & $C_{\mathrm{prim}}$ & EER & $C_{\mathrm{prim}}$\
1 `i` & N & Kaldi & 12.6 & 0.761 & 16.8 & 0.676\
2 `x` & N & Kaldi & 11.6 & 0.759 & 15.9 & 0.713\
3 `x` & N & Clust. 
& 8.1 & 0.549 & 14.3 & 0.557\ 4 `x` & N & Kaldi& 7.5 & 0.452 & [**12.1**]{} & [**0.543**]{}\ 5 `x`+ & N & Kaldi& 7.9 & 0.558 & 15.5 & 0.637\ 6 `x`+ & Y& CORAL+& [**5.9**]{} & [**0.421**]{} & 12.7 & [**0.543**]{}\ 7 `x` & Y& Kaldi& 7.3 & 0.491 & 14.3 & 0.571\ 8 `x` & N& Kaldi& 8.1 & 0.551 & 14.6 & 0.601\ 9 `x` & N& Kaldi& 7.5 & 0.482 & 14.3 & 0.533\ 10`t` & N& Kaldi & 10.5 & 0.678 & 17.1 & 0.720\ 11`i` & N& Kaldi & 12.4 & 0.755 & 18.7 & 0.700\ 12`i` & N& -& 16.4 & 0.814 & 21.3 & 0.788\ \[table:subsysperf\] Past Lessons and Future Outlook {#sec:pastandfuture} =============================== The I4U consortium participated in five SREs in the past decade from SRE’08 to SRE’18. In this section, we look into past I4U results (fusion and single best) to derive insights and to have a glimpse into the current and possible future trends. To start with, we give a brief synopsis and highlight the major challenges in the past SREs. - SRE’08, 10, and 12 have in common their evaluation sets drawn from the Mixer corpus, or more precisely, different phases of the Mixer corpus [@Cieri2006; @Brandschain2008; @Brandschain2010]. One unique feature of the Mixer corpus is that it consists not only conversational telephone speech (CTS) but also conversational and interview style speech recorded over microphone channel. Among others, one major challenge put forward was cross-channel enrollment and test. This is referred to as the *short2-short3* core task in SRE’08, where the enrollment utterances are either telephone or microphone speech, while the test utterances could be telephone, microphone, or interview speech. SRE’10 followed similar setup except that the core task were split into nine *common conditions* (CCs) corresponding to various combinations of channel (telephone, interview, or microphone) and vocal efforts (low, normal, or high). A larger train set was also provided. 
SRE’12 had a more complicated setup, in which the enrollment utterances were derived from the previous SRE’08 and SRE’10, while the test utterances were drawn from a previously undisclosed subset of the Mixer corpora. The number of CCs was reduced to five.

- SRE’16 was derived from the *Call-My-Net* corpus [@Jones2017]. Though the evaluation set is much smaller than that of SRE’12 (a few hundred as opposed to a few thousand speakers), SRE’16 posed a new challenge in terms of domain mismatch between the train and evaluation sets. In particular, the train set consists mainly of English speech, while the evaluation set was in Tagalog (tgl) and Cantonese (yue). The CMN2 partition of SRE’18 is a continuation of SRE’16, where the same *Call-My-Net* protocol was used to collect speech in Tunisian Arabic [@Jones2017]. The VAST partition of SRE’18 explores a new direction of data collection from online video [@Tracey2018].

| | EER (%) | Min $C_{primary}$ | Act $C_{primary}$ |
|---|---|---|---|
| **CMN2** | | | |
| Development | 4.52 | 0.277 | 0.290 |
| Evaluation | 5.11 | 0.362 | 0.368 |
| **VAST** | | | |
| Development | 3.70 | 0.268 | 0.300 |
| Evaluation | 10.18 | 0.444 | 0.550 |

\[table:sre18eval\]

Table \[table:i4u\_sre\] shows the EER of I4U submissions in the past five SREs. Both the single-best sub-system and the fusion show the same trends. Note that the number of sub-systems used in the fusion varies in each SRE. For SRE’10 and SRE’12, EERs were first computed for each CC and their averages are shown in the table. Figure \[fig:sres\] shows the evolution of EERs on the evaluation set across the five past SREs. Strictly speaking, these EERs are not comparable, as they were obtained from different evaluation sets. Nevertheless, it is possible to make observations about the general trends.

**From SRE’08 to SRE’12**, we see that the EER decreases drastically from $5.90\%$ in SRE’08 to $2.23\%$ in SRE’10 and $2.30\%$ in SRE’12. The main theme in these SREs was channel compensation.
In this regard, a larger train set significantly benefited channel compensation techniques like *joint factor analysis* (JFA) [@Kenny2007] and *nuisance attribute projection* (NAP) [@nap], which led to a $62\%$ relative EER reduction in SRE’10. In SRE’12, we saw the popularity of the i-vector PLDA pipeline [@kenney2010plda] as a simpler alternative to JFA, where (i) sequence embedding (i-vector) and (ii) channel compensation and scoring (PLDA) are carried out separately in a pipeline, as opposed to a monolithic device. In SRE’12, the EER settled down at a similar level as in SRE’10. Compared to its predecessor, a merit of the i-vector PLDA pipeline is that score normalization is not required. Also shown in Figure \[fig:sres\] are GMM-SVM (Gaussian mixture model – support vector machine) [@nap] and GLDS-SVM (generalized linear discriminant sequence kernel SVM), two popular techniques that used a high-dimensional utterance-level representation with an SVM.

**From SRE’16 to SRE’18 and beyond**. We witnessed a rebound in EER with the introduction of the CMN evaluation set in SRE’16, which posed a different set of challenges compared to SRE’08-12. Language mismatch and the lack of labeled in-domain data are among these challenges. In SRE’18, the EER reduced significantly by $51\%$, from $11.48\%$ to $5.58\%$, on the SRE’18 CMN2 partition. Undoubtedly, one major contributor is the x-vector deep speaker embedding method [@snyder2018vector; @okabe2018attentive]. There is also a considerable contribution from the unsupervised PLDA adaptation techniques noted in Section \[sec:domain\_adaptation\]. Another new facet introduced in SRE’18 is the VAST partition. The unconstrained nature of the VAST data has proven to be relatively difficult compared to its CMN2 counterpart. We foresee that the EER on CMN2 will settle down at around the same level as in SRE’12 when more data is made available. For the VAST partition, the difficulty lies in the multi-speaker test segments, as noted in Section \[sec:multi-speaker\].
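As an aside, the EERs compared throughout this section are computed from pooled target and non-target trial scores. A minimal sketch (illustrative only, not the official NIST scoring code):

```python
import numpy as np

def eer(tar, non):
    """Equal error rate: the point where the false-rejection rate
    (targets scored below threshold) equals the false-acceptance rate
    (non-targets scored above threshold)."""
    scores = np.concatenate([tar, non])
    labels = np.concatenate([np.ones(len(tar)), np.zeros(len(non))])
    labels = labels[np.argsort(scores)]
    frr = np.cumsum(labels) / len(tar)             # sweep threshold upward
    far = 1.0 - np.cumsum(1.0 - labels) / len(non)
    i = np.argmin(np.abs(far - frr))
    return 0.5 * (far[i] + frr[i])

# perfectly separated scores give an EER of zero
print(eer(np.array([2.0, 3.0, 4.0]), np.array([-2.0, -1.0, 0.0])))  # → 0.0
```

This discrete estimator is adequate for illustration; evaluation toolkits interpolate the ROC for the exact crossing point.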
In view of the performance gap between the two partitions, we reckon that a new breakthrough in speaker diarization, aiming at improving speaker recognition accuracy rather than diarization error, is necessary. The forthcoming SRE’19 offers another avenue towards that direction with the use of video information [^2].

**Large-scale fusion** has always taken center stage in I4U submissions. In particular, the I4U submission to SRE’16 encompassed 32 sub-systems, each of them a high-end recognizer involving careful parameter optimization and data engineering. While deploying such a massive fusion may be challenging in a real use case, reliable fusion indeed plays a key role: it provides a vehicle to solve a common engineering goal which could not realistically be met with a single system alone. The SRE’16 fusion result shows that a fairly simple linear fusion improves the performance considerably compared to the single-best system, from 11.48% to 8.59% (see Table \[table:i4u\_sre\]). Interestingly, in the case of SRE’18 CMN2 we do not observe a similarly large performance gap, indicating the need for new innovations in the underlying technique. Two other useful points that we can derive from the I4U experience are: (i) Score pre-calibration before fusion always helps. Notably, it allows classifier selection based on their weights: classifiers with negative correlation with the others will have negative weights and can usually be discarded; (ii) Fusion of fusions (*i.e.*, fusing multiple fused systems) is problematic and should be avoided, as it tends to over-fit the development set.

**Channel versus domain mismatch.** The notion of channel is used to describe the extrinsic variability imposed on a speech utterance by the acoustic environment, recording device, and transmission channel. Channel mismatch denotes the inconsistency between the enrollment and test segments in a given trial.
For example, a target speaker might be rejected if the channel effects (e.g., enrollment and test utterances of the same speaker but recorded with different devices) are stronger than the speaker characteristics rendered in the utterances. This was the main topic in SRE’08, 10, and 12, and led to the use of channel compensation techniques like JFA [@Kenny2007], NAP [@nap], and PLDA [@kenney2010plda]. Domain mismatch, in turn, denotes the inconsistency between the train and evaluation sets. What this means in the context of SRE’18 CMN2 is that the speaker recognition system was trained on an English dataset, which is different from those in which we use the system (i.e., Tunisian Arabic). By domain adaptation, we assume that the channel variability learned from one domain shares some common behaviors with another domain. Simple covariance transformation techniques [@alam2018coral; @kalee2018] have been shown to work well compared to much more complicated counterparts [@Lin2018]. This is a topic for future research.

| | \#sub-systems | Fusion EER (%) | Single-best EER (%) |
|---|---|---|---|
| SRE’08 | 7 | 5.90 | 6.10 |
| SRE’10 | 13 | 2.23 | 3.55 |
| SRE’12 | 17 | 2.30 | 3.70 |
| SRE’16 CMN | 32 | 8.59 | 11.48 |
| SRE’18 CMN2 | 12 | 5.11 | 5.86 |
| SRE’18 VAST | 12 | 10.18 | 12.06 |

\[table:i4u\_sre\]

![Progress and performance comparison of I4U submissions from SRE’08 to SRE’18.[]{data-label="fig:sres"}](i4u_sre08_to_sre18.png){width="1.0\linewidth"}

Conclusions
===========

This paper presents an overview of the recognition systems and their fusion developed for NIST SRE’18 by the I4U consortium. In general, sub-systems that utilized the more recent x-vector deep speaker embedding were more successful. On the CMN2 partition, the CORAL+ [@kalee2018] unsupervised PLDA adaptation technique has been shown to be effective. The VAST partition is more difficult compared to the CMN2. One major challenge is the multi-speaker test segment.
Marginal improvement was achieved by pre-processing the multi-speaker test segments with a speaker diarization module. Fusion has always taken center stage in I4U submissions. Comparing the single-best and fusion results in the past SREs from SRE’08 to SRE’18, linear fusion optimized with the cross-entropy cost works well. We also found that score pre-calibration helps make classifier selection easier. From SRE’08 to SRE’10 and SRE’12, we observed a significant performance gain in the I4U submissions due to effective channel compensation techniques (joint factor analysis [@Kenny2007] and PLDA [@kenney2010plda]) coupled with a large train set. From SRE’16 to SRE’18, we observed another significant performance gain from the use of deep speaker embedding [@snyder2018vector].

[^1]: https://github.com/kaldi-asr/kaldi/tree/master/egs/sre16/v2

[^2]: https://www.nist.gov/itl/iad/mig/nist-2019-speaker-recognition-evaluation
---
abstract: |
    We consider the problem of determining the structure of the dark halo of nearby dwarf spheroidal galaxies (dSphs) from the spherical Jeans equations. Whether the dark halos are cusped or cored at the centre is an important strategic problem in modern astronomy. The observational data comprise the line-of-sight velocity dispersion of a luminous tracer population. We show that when such data are analysed to find the dark matter density with the spherical Poisson and Jeans equations, then the generic solution is a dark halo density that is cusped like an isothermal sphere (${\rho_{\rm D}}\propto r^{-2}$). Although milder cusps (like the Navarro-Frenk-White ${\rho_{\rm D}}\propto r^{-1}$) and even cores are possible, they are not generic. Such solutions exist only if the anisotropy parameter $\beta$ and the logarithmic slope of the stellar density $\gamma_\ell$ satisfy the constraint $\gamma_\ell = 2\beta$ at the centre, or if the radial velocity dispersion falls to zero at the centre. This surprisingly strong statement is really a consequence of the assumption of spherical symmetry, and the consequent coordinate singularity at the origin. So, for example, a dSph with an exponential light profile can exist in a Navarro-Frenk-White halo and have a flat velocity dispersion, but anisotropy in general drives the dark halo solution to an isothermal cusp. The identified cusp or core is therefore a consequence of the assumptions (particularly of spherical symmetry and isotropy), and not of the data.
author:
- |
    N. W. Evans,$^1$[^1] J. An,$^{2,3}$ and M. G.
Walker$^1$\
    $^1$ Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA\
    $^2$ Dark Cosmology Centre, Niels Bohr Institutet, K[ø]{}benhavns Universitet, Juliane Maries Vej 30, DK-2100 Copenhagen Ø, Denmark\
    $^3$ Niels Bohr International Academy, Niels Bohr Institutet, K[ø]{}benhavns Universitet, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark
date: to be submitted
title: Cores and Cusps in the Dwarf Spheroidals
---

\[firstpage\]

[galaxies: kinematics and dynamics — galaxies: structure]{}

Introduction
============

Hot stellar systems are held up by the stellar velocity dispersion and have little or no rotation. In fact, many such stellar systems – giant elliptical galaxies and dwarf spheroidals – have a velocity dispersion profile that is constant to a good approximation. The case of the dwarf spheroidals (dSphs) has received particular attention in recent years. @Kl01 [@Kl02] showed that the velocity dispersion profile of Draco is flattish throughout the bulk of the galaxy, although later work by @Wi04 found evidence for kinematically cold populations in the very outermost parts. @Wa07 presented stellar velocity dispersion profiles for seven Milky Way dSphs and found almost all to be constant to a good approximation right into the very centre, although the profile of Sextans seems to dip somewhat. @Ko07a [@Ko07b] studied Leo I and Leo II, and also found essentially flattish profiles. The hope has been that gathering line-of-sight velocities of bright giant stars in the Milky Way dSphs may provide evidence of the structure of the halo in these extremely dark matter dominated galaxies. A question of great interest is whether the dark halos of the dSphs are cusped, as predicted by numerical simulations in cold dark matter cosmogonies, or cored – but results so far have been inconclusive. For example, @Ko07b found that the velocity dispersion data on Leo II are consistent with halo dark matter densities that are both cored and cusped.
@Wi06 and @Gi07 presented an analysis of the velocity dispersion data for six dSphs based on the Jeans equations and argued that cored density profiles were favoured, partly because this also explains the persistence of kinematically cold substructure in the Ursa Minor dSph [@Kl03] and the maintenance of globular clusters in the Fornax dSph [@Go06]. Here, we are less concerned with modelling observational data on any given dSph than with understanding the generic qualities of the light and dark matter profiles. We consider the general problem of a tracer stellar population with a known line-of-sight velocity dispersion residing in a spherical dark matter halo of an unknown density law. Given the data, Sect. \[sec:jd\] considers what can be legitimately inferred concerning the properties of the dark halo. Motivated by the flatness of the observed dispersion profiles, Sect. \[sec:cvd\] considers constant velocity dispersion solutions of the Jeans equations. We show that the generic solution gives a halo density law with an isothermal cusp (${\rho_{\rm D}}\propto r^{-2}$). Although other solutions are possible – in particular with cores or with the milder cusps preferred by cosmologists (${\rho_{\rm D}}\propto r^{-1}$) – they are not generic. Sect. \[sec:ex\] gives some examples, which show why Jeans modelling in the isotropic case has yielded results consistent with both cores and cusps. In our examples, however, any anisotropy drives the dark halo solution to the isothermal cusp. Finally, in Sect. \[sec:gen\], we discard the assumption of constancy of the velocity dispersion. Solely within the framework of the spherically symmetric Poisson and Jeans equations, we show that solutions of these equations almost always possess isothermal cusps, unless some very special conditions are satisfied either by the radial velocity dispersion or by the anisotropy and the logarithmic gradient of the light profile.
The Jeans Degeneracy {#sec:jd}
====================

The observables are the surface brightness and the line-of-sight velocity dispersion of a stellar population. Given a mass-to-light ratio ($\Upsilon$), the surface mass density of the stellar populations (${\Sigma_\ell}$) can be deduced from the surface brightness. If the system is spherically symmetric, ${\Sigma_\ell}(R)$ is then related to the three-dimensional density associated with the luminous material ${\rho_\ell}(r)$ via an Abel transform. Here $R$ is the projected distance, whilst $r$ is the three-dimensional distance measured from the centre of the halo. The inverse transform provides us with the unique ${\rho_\ell}$ [@BT]: $$\label{eq:integdeb} {\rho_\ell}(r)= -\frac1{\upi}\int_r^\infty\frac{\mathrm d{\Sigma_\ell}}{\mathrm dR} \frac{\mathrm dR}{\sqrt{R^2-r^2}}.$$ However, even if one assumes spherical symmetry, the behaviour of the line-of-sight velocity dispersion does not produce a unique solution for the radial dependence of the radial and tangential velocity dispersions. The “luminosity weighted” (assuming a constant $\Upsilon$ for the stellar population) line-of-sight velocity dispersion ${\sigma_\mathrm{los}}^2$ is given by the integral $$\label{eq:losvd} {\Sigma_\ell}(R){\sigma_\mathrm{los}}^2(R) =2\int_R^\infty\left(1-\beta\frac{R^2}{r^2}\right) \frac{{\rho_\ell}\sigma_r^2r\,\mathrm dr}{\sqrt{r^2-R^2}}$$ where $\beta=1-(\sigma_\theta^2/\sigma_r^2)$ is the anisotropy parameter for a spherical system, and $\sigma_r$ and $\sigma_\theta$ are the radial and (one-dimensional) tangential velocity dispersions. It has long been known [see e.g., @MK90; @DM92] that the line-of-sight velocity second moment is degenerate, in that there exist many sets of solutions – $\sigma_r^2(r)$ and $\beta(r)$ – that reproduce the observables. For example, suppose that we have ${\rho_\ell}(r)$ from equation (\[eq:integdeb\]).
Then, for any given behaviour of $\sigma_r^2(r)$, the anisotropy parameter $\beta(r)$ can be found[^2] to reproduce the observables, namely $$\begin{aligned} \beta(r)&=& -\frac1{\upi}\frac{r^2}{{\rho_\ell}\sigma_r^2} \int_r^\infty\frac{\mathrm dR}{\sqrt{R^2-r^2}}\, \frac{\mathrm d}{\mathrm dR}\! \left[\frac{{\Sigma_\ell}(\bar\sigma_r^2-{\sigma_\mathrm{los}}^2)}{R^2}\right] \nonumber\\&=& 1+\frac1{{\rho_\ell}\sigma_r^2r} \int_r^\infty\!\mathrm d\tilde r\,{\rho_\ell}(\tilde r)\sigma_r^2(\tilde r) \nonumber\\&&+\frac{r^2}{{\upi}{\rho_\ell}\sigma_r^2} \int_r^\infty\frac{\mathrm dR}{\sqrt{R^2-r^2}}\, \frac{\mathrm d}{\mathrm dR}\! \left(\frac{{\Sigma_\ell}{\sigma_\mathrm{los}}^2}{R^2}\right) \label{eq:abelbeta}\end{aligned}$$ where $$\bar\sigma_r^2(R) =\frac2{{\Sigma_\ell}(R)} \int_R^\infty\frac{{\rho_\ell}\sigma_r^2r\,\mathrm dr}{\sqrt{r^2-R^2}}$$ is the luminosity-weighted mean projected radial velocity dispersion. \[We have not found eq. (\[eq:abelbeta\]) for the anisotropy parameter in the existing literature.\] Not all solutions are necessarily physical since $-\infty \le \beta \le 1$, which has to be checked *a posteriori*. However, subject to this condition, *for any given arbitrary $\sigma^2_r(r)$, an anisotropy parameter $\beta(r)$ can be found to reproduce any observable ${\sigma_\mathrm{los}}^2(R)$.* Once we have ${\rho_\ell}(r)$, guessed (any) $\sigma_r^2(r)$, and found the anisotropy parameter $\beta(r)$ to reproduce the observables, then the enclosed total mass follows from the spherically-symmetric steady-state Jeans equation $$\label{eq:ssj0} \frac{\mathrm d({\rho_\ell}\sigma_r^2)}{\mathrm dr} +2\beta\frac{{\rho_\ell}\sigma_r^2}r =-\frac{4{\upi}G{\rho_\ell}}{r^2}\! \int_0^r\!\rho_{\rm t}(\tilde r)\,\tilde r^2\,\mathrm d\tilde r.$$ Here, $\rho_{\rm t}$ is the total density such that $\rho_{\rm t}=\Upsilon{\rho_\ell}+{\rho_{\rm D}}$ where ${\rho_\ell}$ and ${\rho_{\rm D}}$ are the stellar and dark matter densities, respectively.
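As an aside, the Abel inversion of equation (\[eq:integdeb\]) is simple to evaluate numerically. The sketch below (an illustrative aid, not part of the analysis) uses the substitution $R = r\cosh t$ to remove the square-root singularity at $R = r$, and checks the result against the Plummer pair quoted later in Sect. \[sec:ex\]:

```python
# Numerical check of the Abel inversion, eq. (integdeb):
#   rho_l(r) = -(1/pi) \int_r^inf (dSigma_l/dR) dR / sqrt(R^2 - r^2).
import numpy as np
from scipy.integrate import quad

def abel_invert(dSigma_dR, r, t_max=30.0):
    # with R = r*cosh(t) the integrand decays exponentially in t,
    # so a finite upper limit suffices
    val, _ = quad(lambda t: dSigma_dR(r * np.cosh(t)), 0.0, t_max)
    return -val / np.pi

# Plummer surface density with Sigma_0 = r_0 = 1:
#   Sigma_l(R) = (1 + R^2)^(-2)  =>  dSigma_l/dR = -4R (1 + R^2)^(-3)
dSigma_dR = lambda R: -4.0 * R / (1.0 + R**2) ** 3

r = 1.0
rho_numeric = abel_invert(dSigma_dR, r)
rho_exact = 0.75 * (1.0 + r**2) ** -2.5   # (3/4)(1 + r^2)^(-5/2)
```

The numerical and analytic values agree to quadrature accuracy.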
With the self-consistency assumption, if there is no dark matter (${\rho_{\rm D}}=0$), and the mass-to-light ratio $\Upsilon$ is additionally constant, then the choice of $\sigma_r^2(r)$ is uniquely determined by the coupled Poisson and Jeans equations [see e.g., @Bi82; @To83; @DM92]. However, if $\Upsilon$ is varying or ${\rho_{\rm D}}(r)\ne 0$, there is no unique choice of $\sigma_r^2(r)$. That is, any $\sigma_r^2(r)$ is allowed subject only to the constraints that $-\infty \le \beta(r) \le 1$ and $\rho_{\rm t}(r)\ge0$. Consequently, without further observational constraints or simplifying assumptions, determining a model to reproduce the observed ${\Sigma_\ell}(R)$ and ${\sigma_\mathrm{los}}^2(R)$ is completely indeterminate in the spherical case.

Jeans Modelling with Constant Velocity Dispersions {#sec:cvd}
==================================================

Isotropy
--------

The simplest assumption to make is that of isotropy ($\beta=0$). Then $\sigma_r^2(r)$ is recovered uniquely from ${\sigma_\mathrm{los}}^2(R)$ by an inverse Abel transform. The Jeans equation reduces basically to hydrostatic equilibrium with the “pressure” being equal to $P={\rho_\ell}\sigma^2$, that is, $$\label{eq:hse} {\bmath\nabla}({\rho_\ell}\sigma^2)=-{\rho_\ell}{{\bmath\nabla}}\psi$$ where $\sigma$ is the one-dimensional velocity dispersion of the tracer population and $\psi$ is the gravitational potential. If we further assume that ${\sigma_\mathrm{los}}^2(R)$ is a constant $\sigma_0^2$, as suggested by most of the observations, then the unique solution is $\sigma_r^2(r)=\sigma_0^2$. Under this assumption, the central properties of the halo are severely restricted, as we now show. Since ${\bmath\nabla}({\rho_\ell}\sigma^2)=\sigma_0^2{\bmath\nabla}{\rho_\ell}$, the Jeans equation indicates that ${\bmath\nabla}{\rho_\ell}$ and ${\bmath\nabla}\psi$ are (anti-)parallel everywhere.
This further implies that the surfaces of constant ${\rho_\ell}$ and $\psi$ coincide, and thus ${\rho_\ell}$ can be considered as a function of $\psi$. Consequently, ${\bmath\nabla}{\rho_\ell}=(\mathrm d{\rho_\ell}/\mathrm d\psi){\bmath\nabla}\psi$ and equation (\[eq:hse\]) reduces to $$\label{eq:deq} \sigma_0^2\frac{\mathrm d{\rho_\ell}}{\mathrm d\psi}+{\rho_\ell}=0.$$ By solving this differential equation, we find that $$\label{eq:sol} {\rho_\ell}=\rho_0\,\exp\!\left\lgroup{-\frac\psi{\sigma_0^2}}\right\rgroup \,;\qquad \psi=\psi_0-\sigma_0^2\ln{\rho_\ell},$$ where $\psi_0=\sigma_0^2\ln\rho_0$ is an integration constant. Combined with the Poisson equation under the assumption that the potential is generated by the dark matter halo of a density ${\rho_{\rm D}}$ (i.e., ${\rho_{\rm D}}\gg{\rho_\ell}$ and so $\rho_{\rm t}\approx{\rho_{\rm D}}$), we obtain $$\label{eq:integdea} {\rho_{\rm D}}=\frac{{\nabla^2}\psi}{4{\upi}G}= -\frac{\sigma_0^2}{4{\upi}G}{\nabla^2}\ln{\rho_\ell}.$$ This is an interesting equation, as the dark matter density, which we wish to know, depends on the Laplacian of the luminosity density. The combined integro-differential equations (\[eq:integdeb\]) and (\[eq:integdea\]) now relate the dark matter density to the observables, $\sigma_0$ and ${\Sigma_\ell}(R)$. The implications of equations (\[eq:sol\]) and (\[eq:integdea\]) on the behaviour of tracer populations in dark matter halos are of considerable interest, even before we apply them to any specific example. First, equation (\[eq:sol\]) indicates that the potential is finite for any finite luminosity density. Consequently, we find that any cored luminosity density implies that the central potential well cannot be infinitely deep, and so the dark matter halo density profile is also cored, or diverges strictly slower than the singular isothermal sphere ($r^{-2}$) if it is cusped.
A similar argument also leads us to the conclusion that any cusped luminosity density would only be supported by a cusped halo diverging at least as fast as a singular isothermal sphere. In fact, this latter conclusion can be sharpened with further analysis. Under the assumption of spherical symmetry, equation (\[eq:integdea\]) may be written as $$\label{eq:slope} \gamma_\ell+ r\frac{\mathrm d\gamma_\ell}{\mathrm dr} =\frac{4{\upi}G}{\sigma_0^2}{\rho_{\rm D}}r^2$$ where $\gamma_\ell=-(\mathrm d\ln{\rho_\ell}/\mathrm d\ln r)$ is the logarithmic slope of the density of the tracers. By taking the limit to the centre ($r\rightarrow0$), we find that $$\lim_{r\rightarrow0}{\rho_{\rm D}}r^2= \frac{\sigma_0^2}{4{\upi}G}\gamma_{\ell,0},$$ where $\gamma_{\ell,0}$ is the limiting value of $\gamma_\ell$ towards $r\rightarrow0$, that is, the logarithmic cusp slope of the luminous tracer density. This indicates that if the luminosity density is cusped with $\gamma_{\ell,0} >0$, the halo density must be cusped as ${\rho_{\rm D}}\sim r^{-2}$ like a singular isothermal sphere. On the other hand, any cored ${\rho_\ell}$ indicates that $\gamma_{\ell,0}=\lim_{r\rightarrow0}({\rho_{\rm D}}r^2)=0$ and thus the halo density may not diverge as fast as or faster than the cusp of the singular isothermal sphere. Finally, ${\rho_\ell}$ with a central hole implies that $\gamma_{\ell,0}<0$ and is therefore unphysical, as it will lead to ${\rho_{\rm D}}<0$ [c.f., @AE06].

Anisotropy
----------

In reality, the velocity dispersion tensor of a “collisionless” stellar system is not necessarily isotropic. However, provided the gravitational potential is still spherically symmetric, the Jeans equation reduces to $$\label{eq:ssj} \frac1I\frac{\mathrm d}{\mathrm dr}\!\left(I{\rho_\ell}\sigma_r^2\right)= -{\rho_\ell}\frac{\mathrm d\psi}{\mathrm dr}$$ where $I=\exp\int(2\beta/r)\mathrm dr$ is the integrating factor (e.g., $I=r^{2\beta}$ if $\beta$ is constant).
Inspired by the discussion in the preceding section, we now consider the case of constant radial velocity dispersions with an arbitrary functional form of $\beta$. Note that it is possible, from equation (\[eq:abelbeta\]), to find $\beta(r)$ that is consistent with the observed ${\Sigma_\ell}(R)$ and ${\sigma_\mathrm{los}}^2(R)$ once we ascribe a particular behaviour to $\sigma_r^2(r)$. Hence, using equation (\[eq:abelbeta\]), we can in principle find a model that produces a constant line-of-sight velocity dispersion (or any other form indicated by the observations). If $\sigma_r^2=\sigma_0^2$ is a constant, equation (\[eq:ssj\]) becomes $$\frac{\mathrm d\psi}{\mathrm dr}= -\sigma_0^2\frac{\mathrm d}{\mathrm dr}\ln(I{\rho_\ell})$$ and consequently we find that $$\psi=\psi_0-\sigma_{0}^2\ln(I{\rho_\ell}) =\psi_0-\sigma_{0}^2\left(\ln{\rho_\ell}+\int\frac{2\beta}r\,\mathrm dr\right) \label{eq:newpot}$$ $${\rho_{\rm D}}=\frac{{\nabla^2}\psi}{4{\upi}G} =-\frac{\sigma_0^2}{4{\upi}Gr^2}\,\frac{\mathrm d}{\mathrm dr}\! \left\lgroup{r^2\frac{\mathrm d}{\mathrm dr}\ln(I{\rho_\ell})}\right\rgroup. \label{eq:newdens}$$ Here, the last equation can be re-cast similarly to equation (\[eq:slope\]), so that $$\gamma_\ell-2\beta +r\left(\frac{\mathrm d\gamma_\ell}{\mathrm dr} -2\frac{\mathrm d\beta}{\mathrm dr}\right) =\frac{4{\upi}G}{\sigma_0^2}{\rho_{\rm D}}r^2.$$ This is basically the same as equation (\[eq:slope\]), except that $\gamma_\ell$ is replaced by $\gamma_\ell-2\beta$. We note that the result does not require the assumption that $\beta$ is a constant. The implication of this result is quite similar to that of equation (\[eq:slope\]). In the limit towards the centre ($r\rightarrow0$), we find that $\gamma_{\ell,0}>2\beta_0$ implies that ${\rho_{\rm D}}\sim r^{-2}$, whilst we infer that $\lim_{r\rightarrow0}({\rho_{\rm D}}r^2)=0$ if $\gamma_{\ell,0} =2\beta_0$. Here, $\beta_0$ is the limiting value of the anisotropy parameter at the centre.
As per @AE06, $\gamma_{\ell,0}<2\beta_0$ is unphysical (although ${\rho_\ell}$ does not self-consistently generate $\psi$, the potential well depth is finite at the centre so that their result holds) since it indicates a negative halo density. In summary, therefore, *given a tracer population with a constant radial velocity dispersion, the generic solution of the Jeans equation for the dark matter is cusped like a singular isothermal sphere (${\rho_{\rm D}}\propto r^{-2}$). Milder cusps (like ${\rho_{\rm D}}\propto r^{-1}$) and cores are possible, but they are not generic. Such solutions only exist if the anisotropy parameter $\beta$ and the logarithmic slope of the stellar density $\gamma_\ell$ satisfy $\gamma_{\ell,0}=2\beta_0$.*

Examples {#sec:ex}
========

A Plummer Light Profile
-----------------------

Plummer’s law is commonly used to model the light of dSphs [e.g., @La90; @Wi02]. Assuming a constant mass-to-light ratio $\Upsilon$, the surface density is $${\Sigma_\ell}(R) = \frac{\Sigma_0}{(1+ R^2/r_0^2)^2}.$$ Here, $r_0$ is the radius of the cylinder that encloses half the light, whilst the total luminosity is $L = {\upi}r_0^2 \Sigma_0 /\Upsilon$. It is straightforward to establish via equation (\[eq:integdeb\]) that the stellar density is $${\rho_\ell}(r) = \frac{3 \Sigma_0}{4 r_0} \frac{1}{ (1+ r^2/r_0^2)^{5/2}}.$$ Now, using equation (\[eq:integdea\]), the dark matter density must be $${\rho_{\rm D}}(r) = \frac{5\sigma_0^2 }{ 4 {\upi}G r_0^2} \frac{3 + r^2/r_0^2 }{ (1+ r^2/r_0^2)^2},$$ which is a cored isothermal sphere [see @Ev93]. However, as the model is isotropic ($\beta =0$) and the stellar density is cored at the centre ($\gamma_{\ell,0}=0$), this dark halo solution corresponds to the special case $\gamma_{\ell,0}=2\beta_0$. Now suppose the assumption of isotropy is dropped.
Using equation (\[eq:newdens\]), we find that the dark halo density acquires an additional term that behaves near the centre as $$\label{eq:densextra} {\rho_{\rm D}}(r)\simeq-{\beta_0 \sigma_0^2 \over 2 {\upi}G r^2},$$ from which we deduce that $\beta_0 \le0$, so that the model is tangentially anisotropic as $r \rightarrow 0$. Notice that this has changed the behaviour of the density at the origin – the halo law now has an isothermal cusp. This is in accord with the general result that provided $\gamma_{\ell,0}>2\beta_0$, the cusp is isothermal.

Exponential Light Profiles
--------------------------

Another set of profiles often used to model dSph light distributions is based on the exponential law [@Se68; @Fa83; @Ir95]. Suppose that the three-dimensional density law ${\rho_\ell}(r)$ is exponential: $${\rho_\ell}(r) = \rho_0\, \exp\!\left\lgroup{-\frac r{{r_{\rm d}}}}\right\rgroup.$$ The corresponding surface density is cored, as $${\Sigma_\ell}(R) = \Sigma_0\ \frac R{{r_{\rm d}}}\ K_1\!\left\lgroup{\frac R{{r_{\rm d}}}}\right\rgroup \label{eq:bessel}$$ where $\Sigma_0 = 2{r_{\rm d}}\rho_0$ and $K_\nu(x)$ is the modified Bessel function of the second kind and of order $\nu$. We refer to this as the Bessel profile. The half-light radius is $\sim2.027{r_{\rm d}}$, whilst the total luminosity is $L = 4{\upi}{r_{\rm d}}^2 \Sigma_0 /\Upsilon$. The dark matter density inferred from equation (\[eq:integdea\]) is then $${\rho_{\rm D}}= \frac{\sigma_0^2}{2 {\upi}G {r_{\rm d}}r},$$ which is the $r^{-1}$ cusp beloved of cosmologists [e.g., @NFW]. In fact, a three-dimensional density law ${\rho_\ell}=\rho_0\mathrm e^{-(r/{r_{\rm d}})^\alpha}$ leads to an infinite dark matter cusp of form ${\rho_{\rm D}}\propto r^{-(2-\alpha)}$.
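These closed forms are straightforward to verify symbolically. The following sketch (a checking aid, not part of the original derivation) recovers both the cored isothermal sphere of the Plummer example and the $r^{-1}$ cusp of the exponential profile from equation (\[eq:integdea\]):

```python
# Symbolic check of rho_D = -(sigma_0^2 / 4 pi G) Laplacian(ln rho_l),
# eq. (integdea), for the two isotropic examples above.
import sympy as sp

r, r0, rd, sigma0, G, rho0 = sp.symbols('r r_0 r_d sigma_0 G rho_0', positive=True)

def halo_density(rho_l):
    """Spherical Laplacian of ln(rho_l), scaled as in eq. (integdea)."""
    lap = sp.diff(r**2 * sp.diff(sp.log(rho_l), r), r) / r**2
    return sp.simplify(-sigma0**2 / (4 * sp.pi * G) * lap)

# Plummer: rho_l ~ (1 + r^2/r_0^2)^(-5/2)  ->  cored isothermal sphere
rhoD_plummer = halo_density(rho0 * (1 + r**2 / r0**2) ** sp.Rational(-5, 2))
assert sp.simplify(
    rhoD_plummer
    - 5 * sigma0**2 * (3 + r**2 / r0**2) / (4 * sp.pi * G * r0**2 * (1 + r**2 / r0**2) ** 2)
) == 0

# Exponential: rho_l ~ exp(-r/r_d)  ->  the r^{-1} cusp
rhoD_exp = halo_density(rho0 * sp.exp(-r / rd))
assert sp.simplify(rhoD_exp - sigma0**2 / (2 * sp.pi * G * rd * r)) == 0
```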
Suppose instead the surface brightness profile is modelled with an exponential law: $${\Sigma_\ell}(R) = \Sigma_0\, \exp\!\left\lgroup{ -\frac R{{R_{\rm d}}} }\right\rgroup.$$ The luminosity density is then [see e.g., @Ke92] $${\rho_\ell}(r) = \frac{\Sigma_0}{{\upi}{R_{\rm d}}}\ K_0\!\left\lgroup {\frac r{{R_{\rm d}}} }\right\rgroup,$$ which is logarithmically divergent at the centre. The dark matter density is now $${\rho_{\rm D}}(r)=\frac{\sigma_0^2}{4{\upi}G{R_{\rm d}}r} \left[\left(\frac{[K_1]^2}{[K_0]^2}-1\right)\frac r{R_{\rm d}}+\frac{K_1}{K_0}\right]$$ where $K_n=K_n(r/{R_{\rm d}})$ and $n=0,1$. It is singular as $r \rightarrow 0$, $${\rho_{\rm D}}\simeq \frac{\sigma_0^2}{4{\upi}G} \frac{1}{r^2\ln(r^{-1})},$$ which exhibits strictly slower divergence than a singular isothermal sphere. In other words, once the luminosity density has been assumed to be of exponential form (either in projection or in three dimensions), then the inferred dark matter density is cusped, but the cusp is always weaker than the isothermal cusp ${\rho_{\rm D}}\sim r^{-2}$. It is easy to see that equation (\[eq:slope\]) still applies since $\gamma_\ell\rightarrow0$ as $r\rightarrow0$. Again, the isotropic models are somewhat unusual – the introduction of anisotropy drives the dark halo solution towards an isothermal cusp. The fact that the terms in the density and the anisotropy decouple in equation (\[eq:newdens\]) means that the assumption of any central anisotropy gives the same additional contribution to the halo density (\[eq:densextra\]) as in the previous example.

The General Case {#sec:gen}
================

Let us now gain insight into the general case by discarding the assumption that the radial velocity dispersion is constant. We derive the extension of our result to the general spherically symmetric case.
By rewriting the spherical steady-state Jeans equation (\[eq:ssj0\]), we obtain (under the assumption that $\rho_{\rm t}={\rho_{\rm D}}$) $$4{\upi}G\int_0^r\!{\rho_{\rm D}}(\tilde r)\,\tilde r^2\,\mathrm d\tilde r =r \sigma_r^2 \left( \gamma_\ell - 2 \beta - \frac{\mathrm d \ln \sigma_r^2}{\mathrm d \ln r}\right),$$ and $$4{\upi}G {\rho_{\rm D}}r^2 = \sigma_r^2 \left( 1 + \frac{\mathrm d \ln \sigma_r^2}{\mathrm d \ln r} + r \frac{\mathrm d}{\mathrm dr}\right) \left( \gamma_\ell - 2\beta - \frac{\mathrm d \ln \sigma_r^2}{\mathrm d \ln r} \right). \label{eq:generalcase}$$ Now note that if the dark matter density has an isothermal or steeper cusp, then the left-hand side of equation (\[eq:generalcase\]) tends to a non-zero value as $r \rightarrow 0$. However, if the dark matter density is cored or diverges more slowly than $r^{-2}$, then the left-hand side vanishes as $r \rightarrow 0$. Then, for the right-hand side also to vanish as $r \rightarrow 0$, one of the following three conditions must hold in the same limit: 1. $$\frac{\mathrm d \ln \sigma^2_r}{\mathrm d \ln r} \rightarrow -1$$ 2. $$\frac{\mathrm d \ln \sigma^2_r}{\mathrm d \ln r} \rightarrow\gamma_\ell - 2 \beta$$ 3. $\sigma_r^2 \rightarrow 0$. Case (i) implies that $\sigma_r^2 \sim r^{-1}$. Excluding the possibility that there is a black hole at the centre, the central potential must be finite or diverge strictly more slowly than logarithmically (as ${\rho_{\rm D}}r^2 \rightarrow 0$), and so a velocity dispersion diverging as a power law cannot be supported. Case (ii) implies that $\sigma_r^2 \sim r^{{\gamma_\ell}- 2 \beta}$. If $\gamma_\ell > 2\beta$, then $\sigma_r^2 \rightarrow 0$ and so this may be subsumed into Case (iii). If $\gamma_\ell < 2 \beta$, then $\sigma_r^2$ diverges as a power law and so is again unphysical. This leaves only $\gamma_\ell = 2 \beta$ as an independent possibility.
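The case analysis can be illustrated with a power-law ansatz. Taking $\sigma_r^2 = \sigma_0^2\, r^s$ with constant $\beta$ and $\gamma_\ell$ (an illustrative assumption of ours, not a restriction made in the text), equation (\[eq:generalcase\]) collapses to $4{\upi}G{\rho_{\rm D}}r^2 = \sigma_r^2\,(1+s)(\gamma_\ell-2\beta-s)$, so the right-hand side vanishes precisely when $s=-1$, $s=\gamma_\ell-2\beta$, or $\sigma_r^2\rightarrow0$ — the three cases above. A sympy sketch of the algebra:

```python
import sympy as sp

r, s0 = sp.symbols('r sigma0', positive=True)
s, beta, gamma = sp.symbols('s beta gamma')

sigma2 = s0 * r**s                       # power-law radial dispersion (ansatz)
dln = r * sp.diff(sigma2, r) / sigma2    # d ln sigma_r^2 / d ln r  (equals s)
inner = gamma - 2*beta - dln
# right-hand side of eq. (generalcase): sigma^2 (1 + dln + r d/dr) acting on inner
rhs = sigma2 * ((1 + dln) * inner + r * sp.diff(inner, r))
print(sp.simplify(rhs - sigma2 * (1 + s) * (gamma - 2*beta - s)))  # -> 0
```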
Consequently, we have established that if the dark matter density is cored, or diverges with a cusp milder than $r^{-2}$, then either $\gamma_\ell = 2 \beta$ or $\sigma_r^2 =0$ at the centre. Discarding the assumption that the radial velocity dispersion is constant permits one additional possibility, namely that $\sigma_r^2$ falls to zero at $r=0$. In summary, therefore, we have a surprisingly strong and general result. *Given a tracer population in the spherical Jeans equation, the generic solution for the dark matter is cusped like a singular isothermal sphere (${\rho_{\rm D}}\propto r^{-2}$). Milder cusps and cores are possible, but they are not generic. Such solutions exist either if the anisotropy parameter $\beta$ and the logarithmic slope of the stellar density $\gamma_\ell$ satisfy $\gamma_{\ell,0}=2\beta_0$ or if the central radial velocity dispersion $\sigma^2_{r,0}$ vanishes.* Such a strong result may seem astonishing. It is worth remarking that this result is essentially due to the assumption of spherical symmetry, and in particular, to the coordinate singularity at the origin. A real dSph is unlikely to be exactly spherically symmetric at the centre, so mild symmetry-breaking may allow a dSph to retain a finite velocity dispersion towards the centre. Outside of the very central region, a dSph can often be well approximated by the idealized spherically symmetric fiction. Our theorem therefore really lays bare the dangers of inferring the central density profile from the almost universally made assumption of spherical symmetry. Summary ======= We have studied the problem of deducing the dark halo density from the surface brightness and velocity dispersion profiles of a tracer population. This has immediate application to the dwarf spheroidal satellite galaxies (dSphs) of the Milky Way.
If the stellar population generates the gravity field, then the Jeans and Poisson equations give a unique solution for the density, the mass-to-light ratio and the anisotropy of the spherical model, consistent with the observed surface brightness and the line-of-sight velocity dispersion [@Bi82; @To83]. Of course, this does not apply to the case of dSphs, in which the gravity field of the stellar population is dominated by that of the dark halo. Now, the problem suffers from the well-known mass-anisotropy degeneracy. The line-of-sight velocity dispersion profiles of the Milky Way dSphs usually appear to be flat [see e.g., @Kl01; @Kl02; @Ko07a; @Wa07], which suggests the simple assumption that the velocity dispersion tensor is isotropic and has a constant value. Then, any inference as to the central behaviour of the dark matter potential is controlled by the assumption as to the light profile of the tracer population. If the light profile is cored, then a dark matter halo density that is itself cored is deduced from Jeans modelling. If the light profile is cusped like an exponential law (or its variants), then a dark halo density that has a milder cusp than isothermal (such as the Navarro-Frenk-White cusp of ${\rho_{\rm D}}\propto r^{-1}$) is deduced. This explains why previous investigators [@Ko07b; @Wi06] have concluded that the data are consistent with both cusps and cores. However, velocity anisotropy in the stellar population has a dramatic effect on the dark halo density recovered from Jeans modelling. If a tracer population has a constant radial velocity dispersion, then the generic solution for the dark halo is always cusped like a singular isothermal sphere (${\rho_{\rm D}}\propto r^{-2}$). Milder dark matter cusps (like ${\rho_{\rm D}}\propto r^{-1}$) and cores are possible, but they are not generic.
They can occur only when the condition $\gamma_\ell = 2\beta$ is fulfilled at the centre, where $\beta$ is the anisotropy parameter and $\gamma_\ell$ is the logarithmic slope of the stellar density. Note that many of the commonly used dSph models (such as Plummer or exponential profiles with isotropic velocities) correspond to the special case $\gamma_\ell = 2\beta$ and so any conclusions inferred as to the dark halo law may not be beyond reproach. Finally, even if the assumption as to the constancy of the radial velocity dispersion is discarded, almost the same theorem holds true. If $\gamma_\ell = 2\beta$ at the centre or if $\sigma_r^2$ falls to zero at the centre, then dark matter cores and milder cusps than isothermal are possible. The generic solution, however, remains the isothermal dark matter cusp, at least within the framework of the spherically symmetric Jeans and Poisson equations. Here, our examples and analysis have shown how Jeans solutions may be telling modellers more about their assumptions than about the theoretical implications of the data. Of course, there is in principle more information in the discrete velocities than in the velocity dispersion profile and the Jeans equations. The best response to the degeneracy of the problem is to seek further observational constraints (perhaps higher moments from the line profile, see e.g., @MK90) or additional insights into the behaviour of the anisotropy (perhaps from simulations or from a detailed analysis of the underlying physics). acknowledgments {#acknowledgments .unnumbered} =============== JA acknowledges that the Dark Cosmology Centre is funded by the Danish National Research Foundation (Danmarks Grundforskningsfond). We thank an anonymous referee for a number of stimulating questions. An J. H., Evans N. W., 2006, ApJ, 642, 752 Binney J., Mamon G. A., 1982, MNRAS, 200, 361 Binney J., Tremaine S., 1987, Galactic Dynamics, Princeton Univ.
Press, Princeton Dejonghe H., Merritt D., 1992, ApJ, 391, 531 Evans N. W., 1993, MNRAS, 260, 191 Faber S. M., Lin D. N. C., 1983, ApJ, 266, L17 Gilmore G., Wilkinson M. I., Wyse R. F. G., Kleyna J. T., Koch A., Evans N. W., Grebel E. K., 2007, ApJ, 663, 948 Goerdt T., Moore B., Read J. I., Stadel J., Zemp M., 2006, MNRAS, 368, 1073 Irwin M., Hatzidimitriou D., 1995, MNRAS, 277, 1354 Kent S. M., 1992, ApJ, 387, 181 Kleyna J. T., Wilkinson M. I., Evans N. W., Gilmore G., 2001, ApJ, 563, L115 Kleyna J., Wilkinson M. I., Evans N. W., Gilmore G., Frayn C., 2002, MNRAS, 330, 792 Kleyna J. T., Wilkinson M. I., Gilmore G., Evans N. W., 2003, ApJ, 588, L21 Koch A., et al., 2007a, ApJ, 657, 241 Koch A., et al., 2007b, AJ, 134, 566 Lake G., 1990, MNRAS, 244, 701 Merrifield M. R., Kent S. M., 1990, AJ, 99, 1548 Navarro J. F., Frenk C. S., White S. D. M., 1996, ApJ, 462, 563 Sérsic J. L., 1968, Atlas de Galaxias Australes, Obs Astronomico, Cordoba Tonry J. L., 1983, ApJ, 266, 58 Walker M. G., Mateo M., Olszewski E. W., Gnedin O. Y., Wang X., Sen B., Woodroofe M., 2007, ApJ, 667, L53 Wilkinson M. I., Kleyna J., Evans N. W., Gilmore G., 2002, MNRAS, 330, 778 Wilkinson M. I., Kleyna J. T., Evans N. W., Gilmore G. F., Irwin M. J., Grebel E. K., 2004, ApJ, 611, L21 Wilkinson M. I., et al., 2006, EAS Publ. Ser. 20, 105 \[lastpage\] [^1]: E-mail: nwe, walker@ast.cam.ac.uk (NWE, MGW), jinan@nbi.dk (JA) [^2]: The first part of eq. (\[eq:abelbeta\]) is easily verified after re-arranging eq. (\[eq:losvd\]) and applying an inverse Abel transform. The derivation of the second part requires switching the order of a double integral and performing explicit integrations on it.
--- abstract: 'Monte Carlo simulations show that the pulse profile of Čerenkov photons measured near the core of an extensive air shower is sensitive to the secondary muon/electron ratio of the cascade. Čerenkov pulses can easily be measured with a single large area mirror viewed by a photomultiplier tube subtending a small field of view ($\sim 1^{\circ}$). Even for such a simple experiment, exposed to EAS from a range of core locations and arrival directions, strong statistical differences are shown to exist between the pulse parameter distributions of primary protons and those of heavier primary particles. A range of primary energies can be investigated by varying the zenith angle of observations. In this paper, results from simulations of primaries in the energy range 20 TeV to 400 TeV are presented, although in principle the technique could be extended to include the knee of the spectrum. At the lower end of this energy range results can be compared to direct measurements of the composition, while measurements at the upper end can augment results from existing ground based experiments.' address: 'Institute for Cosmic Ray Research, University of Tokyo, Tokyo 188-8502, Japan' author: - 'M.D. Roberts' title: Cosmic ray composition estimation below the knee of the spectrum from the time structure of Čerenkov light in EAS --- Introduction ============ The chemical composition of cosmic rays measured at the Earth is an important key to understanding the production and propagation of cosmic rays. Up to $\sim$ 1TeV per particle the flux of cosmic rays is sufficiently high that direct measurements of high statistical significance can be made with satellite or balloon based detectors. At energies of 100TeV per particle the flux of primaries is so low that direct composition measurements are limited by large statistical uncertainties. Current knowledge of the composition at 100TeV, obtained from direct measurement, is summarized in  [@wat97]. 
Above 100TeV primary fluxes are such that cosmic rays can only be studied through extensive air showers (EAS), generated as the primaries interact with the Earth’s atmosphere. At ground level EAS can be characterized by measuring secondary electrons, muons, hadrons and [[Čerenkov ]{}]{} light. If the primary energy is sufficiently large ($>10^{17}$eV) fluorescence light from nitrogen excitation is also detectable. On average, EAS from primaries of different mass will develop in different ways, leading to composition dependent differences in the secondary observables. In practice, inherent fluctuations in the development of EAS and the complexity of interpreting ground level measurements have limited the success of composition measurement around the knee of the spectrum ($\sim 10^{15}$eV). Historically, the mass resolution of ground based experiments has been so poor that results are expressed as the ratio of light (protons and helium) to heavy (mass $>$ helium) components. Estimates of this ratio at the knee from current experiments vary considerably ($\sim$0.3 to $\sim$0.6  [@wat97]) although there is general agreement that the average composition around the knee becomes heavier with increasing energy. Several new experiments, designed to simultaneously measure many of the secondary observables of EAS, should improve considerably the current knowledge of the cosmic ray composition around the knee of the spectrum  [@dic97; @kla97; @lin98; @for97]. [[Čerenkov ]{}]{} light from extensive air showers {#eas} ================================================== The arrival time distribution of [[Čerenkov ]{}]{} photons from EAS has been studied for a large range of primary particle energies (see, for example [@hes99; @chi99] and references therein). For vertically incident primaries with energy $>$100TeV, which are detectable by ground level particle arrays, the vast majority of [[Čerenkov ]{}]{} photons come from the electromagnetic (EM) component of the cascade.
In this case the basic core-distance dependent time structure of the [[Čerenkov ]{}]{} pulse can be described by the simple model outlined in [@hil82]. Most of the [[Čerenkov ]{}]{} emission occurs from energetic particles traveling at speed $c$ near the core of the shower, which can be approximated as a single line of emission. The time structure is determined by a combination of varying distances and refractive index induced delays between the observer and different parts of the cascade. At the core, photons from the bottom of the shower will arrive first, with photons emitted higher up being delayed by the refractive index of the atmosphere. Away from the core the [[Čerenkov ]{}]{} photons emitted at the bottom of the cascade experience greater geometrical delays than those emitted higher up. At the “[[Čerenkov ]{}]{} shoulder” ([@pat83]) refractive and geometrical delays cancel and, in this simple model, photons from all parts of the cascade arrive simultaneously. Beyond the [[Čerenkov ]{}]{} shoulder the geometrical delays dominate and the width of the pulse becomes a strong function of core distance. Clearly the greater the longitudinal extent of the shower, the wider the [[Čerenkov ]{}]{} pulse at most core locations. The simple model described above predicts reasonably well the general behavior of [[Čerenkov ]{}]{} pulses from EAS. For real cascades, however, the relationship between core location and [[Čerenkov ]{}]{} pulse width is blurred by the distribution of particle energies and the finite lateral extent of the shower core. The model also ignores the contribution of [[Čerenkov ]{}]{} light from muons. The highest energy muons are created early in the hadronic core of the cascade and can easily survive to produce [[Čerenkov ]{}]{} light down to ground level. This light will arrive in advance of light produced by the EM component of the cascade.
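The delay cancellation in this simple model can be reproduced with a toy calculation: a single line of emission moving at $c$ down the shower axis through an exponential atmosphere. All parameter values below (refractivity, scale height, starting height, emission region) are illustrative assumptions, not values taken from the text:

```python
import numpy as np

C = 3.0e8        # speed of light, m/s
ETA0 = 2.9e-4    # sea-level refractivity n - 1 (assumed)
H0 = 7250.0      # scale height of an exponential atmosphere, m (assumed)
H_TOP = 25.0e3   # starting height of the emitting line, m (assumed)

def arrival_time(h, R):
    """Arrival time (s) at core distance R of light emitted at height h by a
    particle moving down the axis at speed c from H_TOP."""
    t_emit = (H_TOP - h) / C
    L = np.sqrt(h**2 + R**2)  # straight-line photon path to the observer
    # refractive path excess: slant factor times the vertical refractivity integral
    excess = (L / h) * ETA0 * H0 * (1.0 - np.exp(-h / H0))
    return t_emit + (L + excess) / C

heights = np.linspace(4.0e3, 20.0e3, 400)   # assumed emission region
radii = np.arange(0.0, 401.0, 5.0)
spread = [np.ptp(arrival_time(heights, R)) for R in radii]
R_shoulder = radii[int(np.argmin(spread))]
print(R_shoulder)  # delays cancel at a core distance of order 100 m in this toy model
```

At the core the spread is set by the refractive term alone; the arrival-time spread shrinks sharply near the toy-model shoulder before the geometrical term takes over at larger core distances.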
The total muon energy of the cascade is carried by relatively few particles leading to a poor efficiency in [[Čerenkov ]{}]{} production compared to the EM component. As the energy of the primary is reduced, however, the relative contribution of the muons to the total [[Čerenkov ]{}]{} yield is increased. This is particularly true for the region inside the shoulder of the lateral distribution, where many of the photons from the most deeply penetrating part of the EM cascade arrive ([@pat83]). As the primary energy increases, the multiplicative nature of the EM cascade efficiently converts the extra primary energy into large numbers of [[Čerenkov ]{}]{} producing electrons. The cascade develops deeper in the atmosphere, so the [[Čerenkov ]{}]{} light is more concentrated at ground level and suffers less atmospheric absorption than light produced higher in the atmosphere. While a higher energy primary also results in more energy in the muon channel, much of that energy is carried by a few very energetic muons or partly lost to the EM component if the charged pions interact rather than decay. For a vertically incident primary hadron of a few TeV, the simple model of [[Čerenkov ]{}]{} pulse production described previously becomes inadequate. The electromagnetic component of the cascade will develop rapidly, and within $\sim$150m of the core the [[Čerenkov ]{}]{} light produced will appear as a “flash” inasmuch as the duration will be short compared to the duration of the entire pulse. The majority of the time structure of the pulse comes from [[Čerenkov ]{}]{} light from penetrating muons that appears on the leading edge of the pulse. The ratio of light on the leading edge to that in the “flash” will reflect the ratio of muons/electrons in the cascade capable of generating [[Čerenkov ]{}]{} light.
The total time spread of [[Čerenkov ]{}]{} photons observed within the shoulder of the lateral distribution is determined by the atmospheric thickness between the EM [[Čerenkov ]{}]{} emission and the observer. If observations are made at sufficiently large zenith angles, the timing separation between EM and muonic [[Čerenkov ]{}]{} light will be maintained for even a very energetic primary. The dependence of the pulse profile on the mass of the primary ============================================================== The effect of primary mass on the shape of the [[Čerenkov ]{}]{} pulse profile can be predicted through general arguments about EAS development: a detailed characterization of pulse profiles, obtained from Monte Carlo simulations, will be presented in section \[montecarlo\]. Assuming maximal or near-maximal fragmentation of the primary nuclei, consider now the differences in the development of the electromagnetic components of EAS generated by protons and iron nuclei of the same total energy. The longitudinal development profiles of proton and iron induced EAS are remarkably similar ( [@lin98]). The individual sub-showers from the nucleons of the iron primary develop and decay more rapidly than the primary proton EAS, but these component nucleons interact at a variety of atmospheric depths effectively elongating the cascade. While the development of the proton and iron cascades will have similar profiles, on average the iron cascades will develop higher in the atmosphere. The transverse momentum of the pions in a cascade increases only slowly with total momentum ( [@wdo94]), so the lateral extent of the secondary particles in the iron cascade will be greater than that of the proton cascade. The combination of these two effects (greater height of maximum and wider lateral distribution) results in the [[Čerenkov ]{}]{} light from the EM component of the iron induced EAS being more diffuse at ground level than for the proton induced EAS.
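The statement that iron cascades develop higher can be quantified crudely with the superposition picture, in which an iron nucleus of energy $E$ behaves as 56 independent nucleons of energy $E/56$. With an electromagnetic elongation rate of $\sim$85 g cm$^{-2}$ per decade of energy (an assumed, textbook-level value, not one given in the text), the shift in depth of maximum is:

```python
import math

A = 56        # mass number of iron
D10 = 85.0    # elongation rate, g cm^-2 per decade of energy (assumed)

# superposition model: X_max(iron, E) = X_max(proton, E / A)
delta_xmax = D10 * math.log10(A)
print(round(delta_xmax, 1))  # iron maximum ~150 g cm^-2 shallower than proton
```

This is a crude estimate; full simulations give a somewhat smaller proton-iron separation, but the sign and order of magnitude of the effect are as described above.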
Over the energy range considered here, the [[Čerenkov ]{}]{} photon density at ground level for a primary iron nucleus is about half that of a primary proton of the same total energy. The arguments used to describe the development of the EM cascade also apply to some extent to the muonic component of the cascade: the muons in the iron cascade tend to be produced higher and with greater lateral spread. The muonic cascade from the iron primary is, however, much more efficient at producing [[Čerenkov ]{}]{} light. The energy of the muonic component of an iron induced cascade is carried by large numbers of relatively low energy muons. The much higher energy interactions at the hadronic core of the proton cascade provide fewer muons with larger average energy. The overall result is that the ratio of the total [[Čerenkov ]{}]{} light that is derived from the muons increases with increasing primary mass. Measuring [[Čerenkov ]{}]{} pulse profiles ========================================== To fully exploit the mass dependent differences between EAS, the detector must be able to collect enough photons to make a detailed pulse profile for those EAS with EM components maximizing high in the atmosphere. The bandwidth of the system must be high, and the field of view sufficiently small that pulse parameterization is not seriously affected by the night sky background. An isochronous large area mirror, such as those used in TeV gamma-ray astronomy, viewed by a single photomultiplier tube would fulfill these conditions. The use of such a system for cosmic ray composition measurement has been described in [@rob98], and examined in detail for VHE cosmic rays (E$<$10TeV) in [@chi99]. At any single zenith angle the range of primary energies that can be investigated is quite limited.
The primary energy must be sufficiently high that a large number of [[Čerenkov ]{}]{} photons are available but the steep nature of the primary energy spectrum and the shape of the [[Čerenkov ]{}]{} lateral distribution bias any sample towards lower energy events. The higher the primary energy at a fixed zenith angle the less distinct is the timing separation between the [[Čerenkov ]{}]{} light of muonic and EM origin (see section  \[eas\]). A further consideration is that for the higher energy events, the apparent image size is much larger so that on average less of the total angular distribution of the [[Čerenkov ]{}]{} light is sampled by a narrow FOV detector. Fortunately the limited energy range is easily overcome by observing at a range of zenith angles. The total atmospheric thickness changes from $\sim$1000 g cm$^{-2}$ at zenith to $\sim$36000 g cm$^{-2}$ for horizontal observations. This, in principle, would allow [[Čerenkov ]{}]{} composition measurements over a very large energy range (a few TeV to tens of PeV). Observing at large zenith angles provides increased collection area for the higher energy primaries, and also provides a greater distance over which the [[Čerenkov ]{}]{} emission can occur. This tends to stretch the pulse out, making the timing measurement easier and less affected by systematic uncertainties in the measurement system. A system similar to that described above has been operated on the BIGRAT atmospheric [[Čerenkov ]{}]{} detector. This system comprised a 4m diameter parabolic mirror viewed by a single photomultiplier tube subtending a field of view (FOV) of $\sim 1.0^{\circ}$. The system was designed to be sensitive to the differences between [[Čerenkov ]{}]{} pulses initiated by gamma-rays and cosmic rays for large zenith angle observations. While no detailed composition analysis was performed, it was noted that the shape of the cosmic ray pulse profiles was inconsistent with a pure proton composition ( [@rob98]).
Monte Carlo Simulations {#montecarlo} ======================= The Monte Carlo simulations presented here have been made using CORSIKA version 4.5  [@kna95], with GHEISHA code for low energy hadrons and VENUS for high energy hadrons. The EM cascade is fully simulated using the EGS routines and Rayleigh, Mie and ozone absorption processes are modeled for the [[Čerenkov ]{}]{} light. The detector consists of a single 5m diameter isochronous light collector located at 160m above sea level. The mirror is viewed by a single photomultiplier tube with assumed bialkali spectral sensitivity, subtending a full FOV of $1.6^{\circ}$. This FOV has not been rigorously optimized for pulse profile measurement: it is large enough that it can sample most of the angular distribution of the EAS of interest, and small enough to exclude very large-arrival-angle large-core-distance cascades. The photoelectrons detected by the photomultiplier are converted into a pulse by convolving the arrival time of each photoelectron with a simple symmetric detector response function with a rise-time (0-100%) of 2ns. The waveform that is generated is sampled 4 times per nanosecond. The night sky background is simulated by adding Poisson distributed photoelectrons to the waveform at an average rate of 2 per nanosecond.

  primary   zenith angle   minimum energy (TeV)   maximum core distance (m)
  --------- -------------- ---------------------- ---------------------------
  proton    $60^{\circ}$   15                     450
  helium    $60^{\circ}$   20                     450
  oxygen    $60^{\circ}$   20                     450
  iron      $60^{\circ}$   30                     450
  proton    $70^{\circ}$   100                    720
  iron      $70^{\circ}$   200                    720

  : Summary of the Monte Carlo simulation data set. The maximum arrival direction for all primaries is limited to $2.0^{\circ}$ from the center of the field of view.[]{data-label="tab:mcsum"}

In this paper, results of simulations at $60^{\circ}$ and $70^{\circ}$ from zenith will be presented.
At $60^{\circ}$ proton, helium, oxygen and iron primaries have been simulated, but only proton and iron at $70^{\circ}$ from zenith. To model a single telescope realistically it is important to include primaries over the full range of energies, core locations and arrival directions to which the instrument is sensitive (see table \[tab:mcsum\] for a summary). For all species an integral spectral index of -1.6 has been assumed. To reduce computing time each shower has been sampled a total of eight times. At $60^{\circ}$ and $70^{\circ}$ from zenith the slant distances are $\sim2$ and $\sim3$ vertical atmospheres respectively. It is possible to extend the energy range of observations up to the knee region by observing at even larger zenith angles, but this is beyond the limitations of the Monte Carlo simulation package used here. CORSIKA v4.5 uses a flat earth/atmosphere and beyond $\sim 70^{\circ}$ this leads to increasing inaccuracies in describing the depth profile of the atmosphere. At extreme zenith angles the atmospheric depth also changes considerably across the full angular acceptance of the detector ($\sim 4^{\circ}$), further complicating the interpretation of results. As the total atmospheric depth traversed by the [[Čerenkov ]{}]{} light increases, the effects of atmospheric absorption become more important; this issue will be addressed in more detail in section \[experimental\]. Fig. \[pulses\] shows the average pulse profiles for proton and iron primaries at $60^{\circ}$ from zenith. The pulses contain between 600 and 900 photoelectrons, but no other selection conditions have been applied. The pulse size selection acts to limit the range of energies (and subsequently core locations) that are present in the sample. The individual contributions to the [[Čerenkov ]{}]{} pulse by the muonic and EM components are also shown. 
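The waveform construction described in the previous section (histogrammed photoelectron arrival times, a symmetric 2 ns rise-time response, 4 samples per nanosecond, and 2 night-sky photoelectrons per nanosecond) can be sketched as follows. The triangular impulse response and the 100 ns trace length are our assumptions; the text specifies only a simple symmetric response:

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.25  # ns per sample: 4 samples per nanosecond

def make_waveform(pe_times_ns, duration=100.0):
    """Convert photoelectron arrival times into a sampled PMT trace."""
    n = int(duration / DT)
    counts, _ = np.histogram(pe_times_ns, bins=n, range=(0.0, duration))
    # night-sky background: Poisson, 2 photoelectrons per nanosecond on average
    counts = counts + rng.poisson(2.0 * DT, size=n)
    # symmetric impulse response with a 2 ns (0-100%) rise time; the
    # triangular shape is an assumption - the text does not specify it
    half = int(2.0 / DT)
    kernel = np.concatenate([np.linspace(0.0, 1.0, half + 1),
                             np.linspace(1.0, 0.0, half + 1)[1:]])
    kernel /= kernel.sum()
    return np.convolve(counts.astype(float), kernel, mode="same")

# e.g. a 600-photoelectron pulse centred at 50 ns with 3 ns spread
wf = make_waveform(rng.normal(50.0, 3.0, size=600))
```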
It can be seen that the muonic [[Čerenkov ]{}]{} light is typically well in advance of the light from the EM component and that the muonic/EM [[Čerenkov ]{}]{} light ratio of iron primaries is higher than that of proton primaries. The differences between iron and proton initiated [[Čerenkov ]{}]{} pulse profiles can be seen in simple pulse parameters, such as rise-time (10% to 90% of pulse maximum) and full width at half maximum (FWHM). The distributions of these parameters as a function of core location are shown in Fig. \[fwhm\] and Fig. \[risetime\]. In addition to rise-time and FWHM a third parameter, called LT-ratio (Leading to Trailing signal ratio), will also be defined. The LT-ratio parameter is the ratio of the signal on the leading edge of the pulse to the signal on the trailing edge of the pulse. The signals on the leading and trailing edges are calculated from the sum of photoelectrons arriving in a 10ns period that starts 2.5ns and finishes 12.5ns from the maximum height of the pulse. The LT-ratio parameter is useful for rejecting a small number of events ($\sim$10% of iron and $\sim$5% of protons) where a large muon peak is present on the leading edge of the pulse. This peak can cause a mis-characterization of the pulse by the simplistic determination of the rise-time and FWHM parameters. The relationship between primary composition and rise-time is strongest around the [[Čerenkov ]{}]{} shoulder. The distribution of core locations can be limited to some extent by making a simple FWHM cut (see Fig. \[fwhm\]). Fig. \[params\] shows the distribution of rise-times and other parameters for proton and iron primaries after pulses with FWHM greater than 5.0ns have been rejected. There are clear differences between the [[Čerenkov ]{}]{} pulse profiles of proton and iron initiated EAS and this is reflected in the distributions of the rise-time parameter. Also shown are the rise-time distributions of helium and oxygen primaries at $60^{\circ}$ from zenith (Fig.
\[he\_and\_ox\_rt\]), and of proton and iron primaries at $70^{\circ}$ from zenith (Fig. \[param70\]). Many of the difficulties in interpreting EAS data at ground level are due to the fluctuations in shower development. In particular, the depth of first interaction (DOFI) variation for primary protons causes large variations in the secondary particle properties at ground level. Fig. \[depth\] shows that the [[Čerenkov ]{}]{} pulse profile of a primary proton is largely independent of the DOFI. Composition estimation ====================== While clear differences exist between the pulse parameters of various primary species, the interpretation of experimental results leading to a composition estimate over a range of energies will clearly be complex. Even for a narrow range of total pulse sizes at a fixed zenith angle each primary species will have a different distribution of energies, core locations and arrival directions. As with other ground based experiments, correct interpretation of results will rely on accurate modeling of cascade development, atmospheric attenuation and the detector response. The considerable overlap between the rise-time distributions of the various primary species shows that it will be impossible to assign primary mass unambiguously on an event-by-event basis. Instead, the composition may be inferred by combining the simulated rise-time distributions of individual primary species to reproduce the experimentally observed rise-time distribution. The Monte Carlo simulations allow the ratio of each species derived from such a comparison to be converted directly to a flux. If observations are taken over a range of zenith angles, such that the average energy at each zenith angle increases by a factor of, say, 5, the energy spectrum for each primary species can be inferred over a wide range of energies.
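Any such fit presupposes extracting the pulse parameters from each recorded trace. The definitions given in the previous section (10-90% rise-time, FWHM, and the LT-ratio windows 2.5-12.5 ns either side of the peak) can be sketched as follows; the trace is assumed to sit well inside the sampling window:

```python
import numpy as np

DT = 0.25  # ns per sample

def pulse_parameters(wf):
    """Rise-time (10-90%), FWHM and LT-ratio of a sampled pulse (times in ns)."""
    peak = int(np.argmax(wf))
    top = wf[peak]
    # rise-time: first 10% and 90% crossings on the leading edge
    lead = wf[:peak + 1]
    rise = (np.argmax(lead >= 0.9 * top) - np.argmax(lead >= 0.1 * top)) * DT
    # FWHM: width of the region above half maximum
    above = np.nonzero(wf >= 0.5 * top)[0]
    fwhm = (above[-1] - above[0]) * DT
    # LT-ratio: signal in the 10 ns window 2.5-12.5 ns before the peak,
    # divided by the signal in the matching window after the peak
    w0, w1 = int(2.5 / DT), int(12.5 / DT)
    leading = wf[peak - w1:peak - w0 + 1].sum()
    trailing = wf[peak + w0:peak + w1 + 1].sum()
    return rise, fwhm, leading / trailing
```

For a symmetric pulse the LT-ratio is 1; a muon precursor on the leading edge pushes it above 1, which is what makes the parameter useful for flagging the muon-dominated events described above.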
The assumed spectral index for each species within each energy band can be adjusted by statistical resampling, and the comparison process repeated to achieve consistency between the different energy bands. An example of the accuracy to which the ratios of various primary species can be estimated is shown in Fig. \[final\]. This example, at $60^{\circ}$ from zenith, represents the simplest case, where the cosmic ray flux is assumed to consist of only protons and iron nuclei. The Monte Carlo data set for each species has been divided randomly into two halves. From the first half, a “test distribution” of rise-times has been created, which will represent an experimentally measured sample. If the test distribution is created assuming equal fluxes of proton and iron primaries, after allowing for triggering efficiency, collecting area and event selection, the ratios of events in the sample are (proton:iron=0.79:0.21). The second half of the Monte Carlo rise-time data set has then been repeatedly sampled, allowing the flux ratios of the primary species to vary over all possible values. Each of these “sample distributions” is then compared to the “test distribution” using a Kolmogorov-Smirnov (K-S) test. If the K-S test statistic indicates a probability greater than 90% that the test and sample distributions are drawn from the same parent distribution, then the primary ratios are recorded. The most probable ratio for each species is determined with high precision, but the absolute accuracy is limited - mainly by statistical fluctuations in the test distribution. Fig. \[final\] shows the distribution of most likely ratios of primary species for repeated regeneration of the test distribution. It should be noted that each test and sample distribution is not fully independent, each being drawn from a limited Monte Carlo data set. Each sampled distribution corresponds to only $\sim 10$ hours of actual observations (400 events in each of the test and sample distributions). 
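The fitting procedure just described can be sketched with stand-in distributions. The Gaussian rise-time distributions below are purely illustrative placeholders for the simulated ones, and the 90% acceptance criterion is replaced here by simply taking the fraction with the highest K-S probability:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# stand-in rise-time distributions (ns); the real ones come from the
# CORSIKA simulations - these Gaussians are purely illustrative
proton_rt = rng.normal(2.2, 0.6, size=20000)
iron_rt = rng.normal(3.2, 0.8, size=20000)

def sample_mix(f_iron, n):
    """Draw n rise-times from a proton/iron mixture with iron fraction f_iron."""
    n_fe = rng.binomial(n, f_iron)
    return np.concatenate([rng.choice(iron_rt, n_fe),
                           rng.choice(proton_rt, n - n_fe)])

test_dist = sample_mix(0.21, 400)        # plays the role of the measured sample
fractions = np.linspace(0.0, 1.0, 51)
pvals = [ks_2samp(test_dist, sample_mix(f, 4000)).pvalue for f in fractions]
best = fractions[int(np.argmax(pvals))]  # most probable iron fraction
```

As in the text, the recovered fraction scatters about the input value with an accuracy limited mainly by the statistics of the test sample.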
A reasonable observational data-set of several hundred hours' duration, together with more Monte Carlo simulations, would provide greater flux accuracy than indicated in Fig. \[final\]. The procedure described above can also be applied to a four-component cosmic ray flux (proton:helium:oxygen:iron), although the flux accuracy is reduced compared to the two-component (proton and iron) fit. In addition, with the limited size of the Monte Carlo data set, a completely unbiased search is not possible, and the range of compositions searched must be limited to avoid local statistical minima in the differences between the test and sample distributions.

Experimental considerations {#experimental}
===========================

One of the advantages of using a single mirror/single PMT combination is the ease of calibration of such an experiment. The mirror reflectivity, PMT quantum efficiency, gain and impulse response can all be accurately determined. The background noise to the Čerenkov pulses can be easily monitored and incorporated into Monte Carlo simulations. The greatest source of uncertainty will be in characterizing the atmosphere, and in particular describing the absorption of the Čerenkov light in the atmosphere. Failure to correctly describe the absorption profile of the atmosphere will distort the apparent ratio of light emitted at varying depths from the observation point. Demanding consistency of pulse parameter distributions on a night-by-night basis should reject nights where the atmosphere is disturbed (significantly different from a molecular atmosphere). In addition, atmospheric attenuation could be measured directly through stellar extinction and ground-level standard light sources placed at varying distances from the observatory. Although accurate accounting for absorption is most critical for observations at large zenith angles, the effects should also be observable for near-zenith observations.
It should be possible, therefore, to gauge the accuracy of the absorption estimate and other calibration procedures by comparing the Čerenkov pulse profile estimate of the primary cosmic ray composition with that obtained by direct measurement. This comparison should also be useful in determining the accuracy of the Monte Carlo simulations as a whole.

Conclusion
==========

The Monte Carlo simulations presented in this paper have shown that the temporal distribution of Čerenkov light emitted from EAS is sensitive to the muon/electron ratio of the cascade. Using a single large-area mirror coupled to a narrow field of view photo-detector, it is possible to use these pulse profiles to estimate the chemical composition of primary cosmic rays over a large range of energies.

The author would like to thank Philip Edwards, Jamie Holder, Bruce Dawson, John Patterson, Roger Clay and Gavin Rowell for helpful comments. The author acknowledges the receipt of a JSPS postdoctoral fellowship.
---
abstract: 'The $\pi^+\Sigma^+$, $\pi^+\Xi^0$, $K^+p$, $K^+n$, and $\overline{K}{}^0 \Xi^0$ scattering lengths are calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy Baryon Chiral Perturbation Theory with two and three flavors of light quarks is used to perform the chiral extrapolations. To the order we work in the three-flavor chiral expansion, the kaon-baryon processes that we investigate show no signs of convergence. Using the two-flavor chiral expansion for extrapolation, the pion-hyperon scattering lengths are found to be $a_{\pi^+\Sigma^+}=-0.197\pm0.017$ fm, and $a_{\pi^+\Xi^0}=-0.098\pm0.017$ fm, where the comprehensive error includes statistical and systematic uncertainties.'
author:
- 'A. Torok'
- 'S.R. Beane'
- 'W. Detmold'
- 'T.C. Luu'
- 'K. Orginos'
- 'A. Parreño'
- 'M.J. Savage'
- 'A. Walker-Loud'
title: |
  Meson-Baryon Scattering Lengths from\
  Mixed-Action Lattice QCD
---

Introduction
============

Lattice QCD calculations of meson-meson interactions have yielded predictions for physical scattering lengths at the few percent level [@Beane:2007xs; @Beane:2006gj; @Beane:2007uh]. Several reasons underlie this striking accuracy. Firstly, at the level of the lattice calculation, Euclidean-space correlation functions involving pseudoscalar mesons have signal/noise ratios[^1] that do not degrade, or only slowly degrade, with time. Therefore, highly accurate fits of both single- and multi-meson properties are possible with currently available supercomputer resources. Recent calculations of multi-meson interactions relevant for the study of pion and kaon condensation have been performed with up to twelve mesons interacting on a lattice [@Beane:2007es; @Detmold:2008fn; @Detmold:2008yn] with no appreciable degradation of signal/noise with time.
Secondly, and perhaps more importantly, QCD correlation functions involving Goldstone bosons are subject to powerful chiral symmetry constraints. Since current lattice calculations are carried out at unphysical quark masses, these constraints play an essential role in extrapolating the lattice data to the physical quark masses, as well as to the infinite volume, and continuum limits. Chiral perturbation theory ($\chi$-PT) is the optimal method for implementing QCD constraints due to chiral symmetry, and in essence, provides an expansion of low-energy S-matrix elements in quark masses and powers of momentum [@Bernard:2007zu]. In contrast to the purely mesonic sector, recent studies of baryon-baryon interactions, the paradigmatic nuclear physics process, have demonstrated the fundamental difficulty faced in making predictions for baryons and their interactions [@Beane:2006mx; @Beane:2006gf]. Unlike the mesons, correlation functions involving baryons suffer an exponential degradation of signal/noise at large times [^2] and therefore pose a fundamentally different kind of challenge in extracting signal from data [@Lepage:1989hd]. Furthermore, while baryon interactions are constrained by QCD symmetries like chiral symmetry, the constraints are not nearly as powerful as when there is at least one pion or kaon in the initial or final state. For instance, there is no expectation that the baryon-baryon scattering lengths vanish in the chiral limit as they do in the purely mesonic sector. In nucleon-nucleon scattering, the s-wave interactions are actually enhanced due to the close proximity of a non-trivial fixed point of the renormalization group, which drives the scattering lengths to infinity, thus rendering the effective field theory description of the interaction highly non-perturbative [@Kaplan:1998we]. 
Given the contrast in difficulty between the purely mesonic and purely baryonic sectors described above, it is clearly of great interest to perform a lattice QCD investigation of the simplest scattering process involving at least one baryon: meson-baryon scattering. While pion-nucleon scattering is the best-studied process, both theoretically and experimentally, its determination on the lattice is computationally prohibitive since it involves annihilation diagrams. At present only a few limiting cases that involve these diagrams are being investigated [@Babich:2009rq]. Combining the lowest-lying $SU(3)$ meson and baryon octets, one can form five meson-baryon elastic scattering processes that do not involve annihilation diagrams. Three of these involve kaons and therefore are, in principle, amenable to an $SU(3)$ heavy-baryon $\chi$-PT (HB$\chi$-PT) analysis [@Jenkins:1990jv] for extrapolation. The remaining two processes involve pions interacting with hyperons and therefore can be analyzed in conjunction with the kaon processes in $SU(3)$ HB$\chi$-PT, or independently using $SU(2)$ HB$\chi$-PT. Meson-baryon scattering has been developed to several non-trivial orders in the $SU(3)$ HB$\chi$-PT expansion in Refs. [@Liu:2006xja; @Liu:2007ct], extending earlier work on kaon-nucleon scattering in Ref. [@Kaiser:2001hr]. A very recent paper [@Mai:2009ce] has reconsidered the $SU(3)$ HB$\chi$-PT results using a different regularization scheme, and also derived results for pion-hyperon scattering in the $SU(2)$ HB$\chi$-PT expansion. These works make clear that the paucity of experimental data makes it very difficult to assess the convergence of the chiral expansion in the three-flavor case. Further, in the pion-hyperon system, the complete lack of experimental data precludes a separate analysis in the two-flavor chiral expansion.
A lattice calculation of meson-baryon scattering analyzed using $\chi$-PT is therefore useful not only in making predictions for low-energy scattering at the physical point, but also for assessing the convergence of the chiral expansion for a range of quark masses at which present-day lattice calculations are being performed. Meson-baryon scattering is also of interest for several indirect reasons. The $K^- n$ interaction is important for the description of kaon condensation in the interior of neutron stars [@KaplanNelson], and meson-baryon interactions are essential input in determining the final-state interactions of various decays that are interesting for standard-model phenomenology (See Ref. [@Lu:1994ex] for an example). Finally, in determining baryon excited states on the lattice, it is clear that the energy levels that represent meson-baryon scattering on the finite-volume lattice must be resolved before progress can be made regarding the extraction of single-particle excitations. The experimental input to existing $\chi$-PT analyses of meson-baryon scattering is extensively discussed in Refs. [@Kaiser:2001hr; @Liu:2006xja; @Liu:2007ct; @Mai:2009ce]. Threshold pion-nucleon scattering information is taken from experiments with pionic hydrogen and deuterium [@Schroder:1999uq; @Schroder:2001rc], and the kaon-nucleon scattering lengths are taken from model-dependent extractions from kaon-nucleon scattering data [@Martin:1980qe]. There is essentially no experimental information available on the pion-hyperon and kaon-hyperon scattering lengths. There have been two quenched lattice QCD studies of meson-baryon scattering parameters: the pioneering work of Ref. [@Fukugita:1994ve] calculated pion-nucleon and kaon-nucleon scattering lengths at heavy pion masses without any serious attempt to extrapolate to the physical point, and Ref. [@Meng:2003gm] calculated the $I=1$ $KN$ scattering length and found a result consistent with the current algebra prediction. 
In this work we calculate the lowest-lying energy levels for five meson-baryon processes that have no annihilation diagrams: $\pi^+\Sigma^+$, $\pi^+\Xi^0$, $K^+p$, $K^+n$, and $\overline{K}{}^0 \Xi^0$ in a mixed-action Lattice QCD calculation with domain-wall valence quarks on the asqtad-improved coarse MILC configurations with $b\sim 0.125~{\rm fm}$ at four light-quark masses ($m_\pi\sim 291$, $352$, $491$ and $591$ MeV), and at two light quark masses ($m_\pi\sim 320$ and $441$ MeV) on the fine MILC configurations with $b\sim 0.09~{\rm fm}$, with substantially less statistics on the fine ensembles. We extract the s-wave scattering lengths from the two-particle energies, and analyze the five processes using $SU(3)$ HB$\chi$-PT. We find a rather conclusive lack of convergence in the three-flavor chiral expansion. We then consider $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ using $SU(2)$ HB$\chi$-PT and find that we are able to make reliable predictions of the scattering lengths at the physical point. We find $$\begin{aligned} a_{\pi^+\Sigma^+}&=& -0.197 \pm 0.017~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.098\pm 0.017~{\rm fm} \ , \label{eq:MP}\end{aligned}$$ where the errors encompass statistical and systematic uncertainties. The leading order $\chi$-PT (current algebra) predictions for the scattering lengths are given by [@Weinberg:1966kf]: $$\begin{aligned} a_{\pi^+\Sigma^+}&=& -0.2294~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.1158~{\rm fm} \ . \label{eq:CA}\end{aligned}$$ Ultimately, either the chiral extrapolation should be performed after a continuum limit has been taken, or one should use the mixed-action extension of HB$\chi$-PT to perform the chiral extrapolations  [@Tiburzi:2005is; @Chen:2007ug]. However, our results on the fine MILC configurations are statistics-limited and not yet sufficiently accurate to make this a useful exercise. Further, the explicit extrapolation formulas for the meson-baryon scattering lengths have not yet been determined in mixed-action $\chi$-PT. 
Despite these limitations, we expect the corrections from finite lattice spacing to be small for two principal reasons. Firstly, the meson-baryon scattering lengths are protected by chiral symmetry and therefore the (approximate) chiral symmetry of the domain-wall valence fermions used in this work protects the scattering lengths from additive renormalization, which can be explicitly seen in the construction of the mixed-action baryon Lagrangian in Ref. [@Chen:2007ug]. The mixed-action corrections do not appear until next-to-next-to-leading order in the chiral expansion of the meson-baryon scattering lengths. Secondly, our previous experience with this mixed-action lattice QCD program leads us to expect that discretization effects will be well-encompassed within the overall errors we quote. In our precise calculation of meson-meson scattering, the predicted mixed-action corrections [@Chen:2005ab; @Chen:2006wf] were smaller than the uncertainties on a given ensemble [@Beane:2007xs; @Beane:2007uh].

This paper is organized as follows. In section \[sec:MBSP\] we isolate the five meson-baryon processes with no annihilation diagrams that are calculated in this work. We briefly review the standard Lüscher method for extracting the scattering amplitude from two-particle energy levels in a finite volume in section \[sec:finvol\]. Particulars regarding the mixed-action lattice calculation and fitting methods are provided in section \[sec:MAdetails\]. Additional details can be found in Ref. [@Beane:2008dv]. Mixing between two of the meson-baryon channels with the same quantum numbers is discussed in section \[sec:MChAm\]. In section \[sec:su3CE\] we consider chiral extrapolations of the lattice data using $SU(3)$ HB$\chi$-PT, and in section \[sec:su2CE\] we analyze the pion-hyperon lattice data using $SU(2)$ HB$\chi$-PT. Finally, we conclude in section \[sec:conc\].
Meson-Baryon Scattering Processes {#sec:MBSP}
=================================

It is a straightforward exercise to construct the six scattering channels involving the lowest-lying octet mesons and baryons that do not have annihilation diagrams, and to determine their isospin. [^3] The particle content, isospin, and valence quark content of these meson-baryon states are shown in Table \[tab:quarks1\].

  Particles                    Isospin           Quark Content
  ---------------------------- ----------------- -----------------
  $\pi^+\Sigma^+$              2                 $uuu\bar{d}s$
  $\pi^+\Xi^0$                 3/2               $uu\bar{d}ss$
  $K^+p$                       1                 $uuud\bar{s}$
  $K^+n$                       0 [*and*]{} 1     $uudd\bar{s}$
  $\overline{K}{}^0\Sigma^+$   3/2               $uu\bar{d}ss$
  $\overline{K}{}^0\Xi^0$      1                 $u\bar{d}sss$

  : Particle content, isospin, and valence quark structure of the meson-baryon states calculated in this work. As is clear from the valence quark content, these meson-baryon states have no annihilation diagrams.[]{data-label="tab:quarks1"}

We adopt the notation of Ref. [@Liu:2006xja], denoting the threshold T-matrix in the isospin basis as $T^{(I)}_{\phi B}$, where $I$ is the isospin of the meson-baryon combination, $\phi$ is the meson, and $B$ is the baryon. The five elastic meson-baryon scattering processes that we consider are then in correspondence with the isospin amplitudes according to $$\begin{aligned} T_{\pi^+\Sigma^+}=T^{(2)}_{\pi \Sigma}\ &;& \qquad T_{\pi^+\Xi^0}=T^{(3/2)}_{\pi \Xi} \ ; \nonumber \\ T_{K^+p}=T^{(1)}_{KN} \ ; \qquad T_{K^+n}&=&\frac{1}{2}(T^{(1)}_{KN}+T^{(0)}_{KN}) \ ; \qquad T_{\overline{K}{}^0\Xi^0}=T^{(1)}_{\overline{K}\Xi} \ . \nonumber\\ \label{eq:Tmatrices}\end{aligned}$$ These threshold T-matrices are related to the scattering lengths $a_{\phi B}$ through $$T_{\phi B}=4\pi\left(1+\frac{m_\phi}{m_B}\right) a_{\phi B} \ , \label{eq:Tanda}$$ where $m_\phi$ is the meson mass and $m_B$ is the baryon mass.
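For orientation, Eq. (\[eq:Tanda\]) is easy to evaluate numerically. The sketch below converts the $SU(2)$-extrapolated pion-hyperon scattering lengths quoted in the abstract into threshold T-matrices; the PDG masses and the helper name `threshold_T` are our own additions, not taken from the paper.

```python
import math

# PDG masses in MeV (only the ratio m_phi/m_B enters Eq. (eq:Tanda)).
M_PI, M_SIGMA, M_XI = 139.57, 1189.37, 1314.86

def threshold_T(m_phi, m_B, a):
    """Threshold T-matrix T = 4*pi*(1 + m_phi/m_B) * a; same units as a."""
    return 4.0 * math.pi * (1.0 + m_phi / m_B) * a

# Scattering lengths (fm) from the SU(2) extrapolation quoted in the text.
T_pi_sigma = threshold_T(M_PI, M_SIGMA, -0.197)  # fm
T_pi_xi    = threshold_T(M_PI, M_XI,    -0.098)  # fm
```

The kinematic prefactor is close to $4\pi$ here because the pion is light compared to the hyperons.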
Finite-Volume Calculation of Scattering Amplitudes {#sec:finvol}
==================================================

The s-wave scattering amplitude for two particles below inelastic thresholds can be determined using Lüscher’s method [@luscher_formula], which entails a measurement of one or more energy levels of the two-particle system in a finite volume. For two particles with masses $m_\phi$ and $m_B$ in an s-wave, with zero total three-momentum, and in a finite volume, the difference between the energy levels and those of two non-interacting particles can be related to the inverse scattering amplitude via the eigenvalue equation [@luscher_formula] $$\begin{aligned} p\cot\delta(p) \ =\ \frac{1}{\pi L}\ {\bf S}\left(\,\frac{p L}{2\pi}\,\right)\ \ , \label{eq:energies}\end{aligned}$$ where $\delta(p)$ is the elastic-scattering phase shift, and the regulated three-dimensional sum is $$\begin{aligned} {\bf S}\left(\,{\eta}\, \right)\ \equiv \ \sum_{\bf j}^{ |{\bf j}|<\Lambda} \frac{1}{|{\bf j}|^2-\eta^2}\ -\ {4 \pi \Lambda} \ \ \ . \label{eq:Sdefined}\end{aligned}$$ The sum in Eq. (\[eq:Sdefined\]) is over all triplets of integers ${\bf j}$ such that $|{\bf j}| < \Lambda$ and the limit $\Lambda\rightarrow\infty$ is implicit [@Beane:2003da]. This definition is equivalent to the analytic continuation of zeta-functions presented by Lüscher [@luscher_formula]. In Eq. (\[eq:energies\]), $L$ is the length of the spatial dimension in a cubically-symmetric lattice. The energy eigenvalue, $E_n$, and its deviation from the sum of the rest masses of the particles, $\Delta E_n$, are related to the center-of-mass momentum $p_n$, a solution of Eq. (\[eq:energies\]), by $$\begin{aligned} \Delta E_n \ & \equiv & E_n\ -\ m_\phi \ - \ m_B \ =\ \sqrt{\ p_n^2\ +\ m_\phi^2\ } \ +\ \sqrt{\ p_n^2\ +\ m_B^2\ } \ -\ m_\phi\ - \ m_B \ ; \nonumber\\ & = & \frac{p_n^2}{2 \mu_{\phi B}}\ +\ ...
\ \ \ , \label{eq:energieshift}\end{aligned}$$ where $\mu_{\phi B}$ is the reduced mass of the meson-baryon system. In the absence of interactions between the particles, $|p\cot\delta|=\infty$, and the energy levels occur at momenta ${\bf p} =2\pi{\bf j}/L$, corresponding to single-particle modes in a cubic cavity with periodic boundary conditions. Expanding Eq. (\[eq:energies\]) about zero momenta, $p\sim 0$, one obtains the familiar relation [^4] $$\begin{aligned} \Delta E_0 & = & -\frac{2\pi a}{\mu_{\phi B} L^3} \left[\ 1\ +\ c_1 \frac{a}{L}\ +\ c_2 \left( \frac{a}{L} \right)^2 \ \right ] \ +\ {\cal O}\left(\frac{1}{L^6}\right) \ \ , \label{eq:luscher_a}\end{aligned}$$ with $$\begin{aligned} c_1 & = & \frac{1}{\pi} \sum_{{\bf j}\ne {\bf 0}}^{ |{\bf j}|<\Lambda} \frac{1}{|{\bf j}|^2}\ -\ 4 \Lambda \ \ =\ -2.837297 \ \ \ ,\ \ \ c_2\ =\ c_1^2 \ -\ \frac{1}{\pi^2} \sum_{{\bf j}\ne {\bf 0}} \frac{1}{|{\bf j}|^4} \ =\ 6.375183 \ ,\end{aligned}$$ and $a$ is the scattering length, defined by $$\begin{aligned} a & = & \lim_{p\rightarrow 0}\frac{\tan\delta(p)}{p} \ . \label{eq:scatt}\end{aligned}$$ As the finite-volume lattice calculation cannot achieve $p=0$ (except in the absence of interactions), in quoting a lattice value for the scattering length extracted from the ground-state energy level, it is important to determine the error associated with higher-order range corrections. Lattice Calculation and Data Analysis {#sec:MAdetails} ===================================== In calculating the meson-baryon scattering lengths, the mixed-action lattice QCD scheme was used in which domain-wall quark [@Kaplan:1992bt; @Shamir:1992im; @Shamir:1993zy; @Shamir:1998ww; @Furman:1994ky] propagators are generated from a smeared source on $n_f = 2+1$ asqtad-improved [@Orginos:1999cr; @Orginos:1998ue] rooted, staggered sea quarks [@Bernard:2001av]. 
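The threshold expansion Eq. (\[eq:luscher\_a\]) from the previous section is straightforward to evaluate. The sketch below does so for illustrative, assumed lattice-unit inputs (the function name and numerical values are ours), and shows that a negative scattering length in this sign convention produces a positive finite-volume energy shift, as in the repulsive channels studied here.

```python
import math

# Geometric constants of the threshold expansion, as quoted in the text.
C1, C2 = -2.837297, 6.375183

def delta_E0(a, mu, L):
    """Leading finite-volume energy shift of Eq. (eq:luscher_a).

    a: scattering length, mu: reduced mass, L: box size (all lattice units);
    the expansion is valid only for |a| << L.
    """
    x = a / L
    return -(2.0 * math.pi * a) / (mu * L**3) * (1.0 + C1 * x + C2 * x * x)

# Illustrative (assumed) inputs, roughly lattice-unit sized:
shift = delta_E0(a=-0.1, mu=0.5, L=20.0)  # a < 0 gives shift > 0
```

Inverting the exact eigenvalue equation (\[eq:energies\]) instead would require the regulated sum ${\bf S}(\eta)$; for the small shifts in this work the text notes that the two determinations agree to negligible accuracy.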
To improve the chiral symmetry properties of the domain-wall quarks, hypercubic-smearing (HYP-smearing) [@Hasenfratz:2001hp; @DeGrand:2002vu; @DeGrand:2003in] was used in the gauge links of the valence-quark action. In the sea-quark sector, there has been significant debate regarding the validity of taking the fourth root of the staggered fermion determinant at finite lattice spacing [@Durr:2004as; @Durr:2004ta; @Creutz:2006ys; @Bernard:2006zw; @Bernard:2006vv; @Creutz:2007nv; @Bernard:2006ee; @Bernard:2006qt; @Creutz:2007yg; @Creutz:2007pr; @Durr:2006ze; @Hasenfratz:2006nw; @Shamir:2006nj; @Sharpe:2006re]. While there is no proof, there are arguments to suggest that taking the fourth root of the fermion determinant recovers the contribution from a single Dirac fermion. The results of this paper assume that the fourth-root trick recovers the correct continuum limit of QCD. The present calculations were performed predominantly with the coarse MILC lattices with a lattice spacing of $b\sim 0.125$ fm, and a spatial extent of $L\sim 2.5$ fm. On these configurations, the strange quark was held fixed near its physical value while the degenerate light quarks were varied over a range of masses corresponding to the pion masses shown in Table \[tab:MILCcnfs\]. See Ref. [@Beane:2008dv] for further details. Results were also obtained on a coarse MILC ensemble with a spatial extent of $L\sim 3.5$ fm. However, this data is statistics limited. In addition, calculations were performed on two fine MILC ensembles at $L\sim 2.5$ fm with $b\sim 0.09$ fm. On the coarse MILC lattices, Dirichlet boundary conditions were implemented to reduce the original time extent of 64 down to 32, which saved a nominal factor of two in computational time. While this procedure leads to minimal degradation of a nucleon signal, it does limit the number of time slices available for fitting meson properties. 
By contrast, on the fine MILC ensembles, anti-periodic boundary conditions were implemented and all time slices are available.

  ----------------------------------------------------------------------------------------------------------------------------------------------
  Ensemble                             $m_\pi$ (MeV)   $b m_l$   $b m_s$   $b m^{dwf}_l$   $b m^{dwf}_s$   $10^3 \times b\,m_{res}$ [^5]   \# of props
  ------------------------------------ --------------- --------- --------- --------------- --------------- ------------------------------- -------------------
  ([*i*]{}) 2064f21b676m007m050        291             0.007     0.050     0.0081          0.081           $1.604\pm 0.038$                1039 $\times$ 24
  ([*ii*]{}) 2064f21b676m010m050       352             0.010     0.050     0.0138          0.081           $1.552\pm 0.027$                769 $\times$ 24
  ([*iii*]{}) 2064f21b679m020m050      491             0.020     0.050     0.0313          0.081           $1.239\pm 0.028$                486 $\times$ 24
  ([*iv*]{}) 2064f21b681m030m050       591             0.030     0.050     0.0478          0.081           $0.982\pm 0.030$                564 $\times$ 24
  ([*v*]{}) 2864f21b676m010m050        352             0.010     0.050     0.0138          0.081           $1.552\pm 0.027$                128 $\times$ 8
  ([*vi*]{}) 2896f21b709m0062m031      320             0.0062    0.031     0.0080          0.0423          $0.380\pm 0.006$                1001 $\times$ 8
  ([*vii*]{}) 2896f21b709m0124m031     441             0.0124    0.031     0.0080          0.0423          $0.380\pm 0.006$                513 $\times$ 3
  ----------------------------------------------------------------------------------------------------------------------------------------------

  : The parameters of the MILC gauge configurations and domain-wall propagators used in this work. The subscript $l$ denotes light quark (up and down), and $s$ denotes the strange quark. The superscript $dwf$ denotes the bare-quark mass for the domain-wall fermion propagator calculation. The last column is the number of configurations times the number of sources per configuration.
Ensembles ([*i*]{})-([*iv*]{}) have $L\sim 2.5$ fm and $b\sim 0.125$ fm; Ensemble ([*v*]{}) has $L\sim 3.5$ fm and $b\sim 0.125$ fm; Ensembles ([*vi*]{}),([*vii*]{}) have $L\sim 2.5$ fm and $b\sim 0.09$ fm.[]{data-label="tab:MILCcnfs"} The correlation function that projects onto the zero momentum state for the meson-baryon system is $$C_{\phi B}(t)={\cal P}_{ij}\sum_{{\bf x,y}}\langle \phi^{\dagger}(t,{\bf x}) \overline{B_i}(t,{\bf y}) \phi(0,{\bf 0}) B_j(0,{\bf 0})\rangle \ ,$$ where ${\cal P}_{ij}$ is a positive-energy projector. For instance, in the case of $K^+ p$, the interpolating operators for the $K^+$ and the proton are $$\begin{aligned} \phi(t,{\bf x})&=&K^+(t,{\bf x})=\overline{s}(t,{\bf x})\gamma_5 u(t,{\bf x}) \ ; \nonumber \\ B_i(t,{\bf x})&=&p_i(t,{\bf x})=\epsilon_{abc}u_i^a(t,{\bf x})\left( u^{b\mathrm{T}}(t,{\bf x})C\gamma_5 d^c(t,{\bf x})\right) \ .\end{aligned}$$ The masses of the mesons and baryons are extracted using the assumed form of the large-time behavior of the single particle correlators as a function of time. As $t\rightarrow \infty$, the ground state dominates; however, fluctuations of the correlator increase with respect to the ground state. The meson and baryon two-point correlators, $C_{\phi}(t)$ and $C_{B}(t)$, behave as $$C_{\phi}(t) \ \rightarrow \ {\cal A_\mathrm{1}}\ e^{-m_{\phi} \ t}, \qquad C_{B}(t) \ \rightarrow \ {\cal A_\mathrm{2}}\ e^{-m_{B} \ t}\ , \label{eq:correlator}$$ respectively, in the limits $t\rightarrow\infty$ and $L\rightarrow\infty$. In relatively large lattice volumes the energy difference between the interacting and non-interacting meson-baryon states is a small fraction of the total energy, which is dominated by the masses of the mesons and baryons [@Beane:2007xs]. 
In order to extract this energy difference, the ratio of correlation functions, $G_{\phi B}(t)$, is formed: $$G_{\phi B}( t) \equiv \frac{C_{\phi B}( t)}{C_{\phi}(t) C_{B}(t)} \ = \ \sum_{n=0}^\infty\ {\cal D}_n\ e^{-\Delta E_n\ t} \ , \label{eq:ratio_correlator}$$ where $\Delta E \equiv \Delta E_0$ is the desired energy shift. With $\Delta E$ and the extracted masses of the meson and baryon, the scattering length can be calculated using Eqs. (\[eq:energies\]) and (\[eq:energieshift\]), or, if $a \ll L$, from Eq. (\[eq:luscher\_a\]). For the meson-baryon scattering lengths calculated in this work, the difference between the exact and perturbative determinations is negligible. A variety of fitting methods have been used, including standard chi-square minimization fits to one and two exponentials. Generalized effective energy plots are particularly useful for analyzing the lattice data and for estimating systematic errors [@Beane:2009ky]. These plots are constructed by taking the ratio of the correlators at times $t$ and $t+n_J$ (where $n_J$ is an integer): $$m_{\phi,B}^{\mathrm{eff}}=\frac{1}{n_J}\log \left(\frac{C_{\phi,B}(t)}{C_{\phi,B}(t+n_J)}\right), \qquad \Delta E_{\phi B}^{\mathrm{eff}}=\frac{1}{n_J}\log \left(\frac{G_{\phi B}(t)}{G_{\phi B}(t+n_J)}\right) \ . \label{eq:effscatteq}$$ With $n_J=1$, the standard effective mass and energy plots are recovered. Generalized effective masses form a system of linear equations for each $n_J$ over the time interval where the data are fit. For instance, if the interval is given by $\Delta t=t_2-t_1$, then there is one equation for $m^\mathrm{eff}$ at each $t$, for any $n_J$ that fits within $\Delta t$. The equations can be solved for $m^\mathrm{eff}$ by casting them into the form of the so-called normal equations [@Dahl]. Since each $n_J$ constitutes a different effective mass plot, the number of degrees of freedom is increased significantly.
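A minimal sketch of the ratio correlator of Eq. (\[eq:ratio\_correlator\]) and the generalized effective energy of Eq. (\[eq:effscatteq\]), using noiseless single-exponential toy correlators. The amplitudes and energies are assumed (loosely modeled on ensemble ([*ii*]{}) scales); for a pure exponential every jump $n_J$ reproduces $\Delta E$ exactly, whereas real data carry excited-state and noise contamination at early and late times respectively.

```python
import numpy as np

# Assumed ground-state parameters in lattice units (amplitudes arbitrary).
m_phi, m_B, dE = 0.2231, 0.8531, 0.0148
t = np.arange(0, 25)
C_phi  = 1.3 * np.exp(-m_phi * t)                 # meson two-point function
C_B    = 0.7 * np.exp(-m_B * t)                   # baryon two-point function
C_phiB = 0.9 * np.exp(-(m_phi + m_B + dE) * t)    # meson-baryon correlator

# Ratio correlator of Eq. (eq:ratio_correlator): the single-particle
# exponentials cancel, leaving exp(-dE * t) up to a constant.
G = C_phiB / (C_phi * C_B)

def eff_energy(corr, n_J):
    """Generalized effective energy, Eq. (eq:effscatteq), with jump n_J."""
    return np.log(corr[:-n_J] / corr[n_J:]) / n_J

dE_eff_1 = eff_energy(G, 1)   # standard effective energy plot
dE_eff_3 = eff_energy(G, 3)   # a larger jump; identical for pure exponentials
```

Plotting `dE_eff_1` against `t` for real data would show a plateau at $\Delta E$ only after the excited states have decayed, which is why the SS/SP linear combination described next is useful.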
This method provides a fitting routine that is faster than standard least-squares fitting. Additional details regarding the utility of generalized effective mass and energy plots can be found in Ref. [@Beane:2009gs]. The interpolating operator at the source is constructed from gauge-invariantly-smeared quark field operators, while at the sink, the interpolating operator is constructed from either local quark field operators, or from the same smeared quark field operators used at the source, leading to two sets of correlation functions. For brevity, we refer to the two sets of correlation functions that result from these source and sink operators as [*smeared-point*]{} (SP) and [*smeared-smeared*]{} (SS) correlation functions, respectively. By forming a linear combination of the SP and SS correlation functions, $C^{\mathrm{(SS)}} \ - \ \alpha C^{\mathrm{(SP)}}$, we are able to remove the first excited state, thus gaining early time slices for fitting [@Beane:2009gs]. This effect is illustrated in Fig. \[fig:m010pisigeffSPSS\], which is the effective $\Delta E_{\pi^+\Sigma^+}$ plot for coarse MILC ensemble ([*ii*]{}). We plot $C^{\mathrm{(SS)}}$, $C^{\mathrm{(SP)}}$, and $C^{\mathrm{(SS)}} \ - \ \alpha C^{\mathrm{(SP)}}$ with $\alpha$ tuned to remove the first excited state. The effective energies, effective masses, and energy splittings are plotted for coarse MILC ensemble ([*ii*]{}) in Figs. \[fig:energylevels\], \[fig:m010single\], and \[fig:m010two\]. All of the necessary quantities needed for extraction of the scattering lengths are contained in Table \[tab:latticequant\], which also contains the sum of meson and baryon masses at each quark mass. Fig. \[fig:m010Savageplot\] shows the results for all five processes, and the behavior of Eq. (\[eq:energies\]), versus the interaction energy, presented in terms of the dimensionless quantities $p\cot\delta/m_\pi$ and $\Delta E/m_\pi$. The curve shown in Fig. 
\[fig:m010Savageplot\] is $p\cot\delta/m_\pi$ for the case of $m_\phi=m_K$ and $m_B=m_p$, as $\Delta E/m_\pi$ is varied. ${\bf S}(\eta)$ in Eq. (\[eq:Sdefined\]) is a function of the meson and baryon masses, so there will be a unique curve for each combination of $m_\phi$ and $m_B$. Consequently, the $K^+p$ and $K^+n$ data points fall on this curve.

![Effective $\Delta E_{\pi^+\Sigma^+}$ plot for coarse MILC ensemble ([*ii*]{}) from correlation functions $C^{\mathrm{(SS)}}$, $C^{\mathrm{(SP)}}$ and $C^{\mathrm{(SS)}} \ - \ \alpha C^{\mathrm{(SP)}}$. By taking the linear combination with $\alpha$ tuned to remove the first excited state, earlier time slices are gained for fitting.[]{data-label="fig:m010pisigeffSPSS"}](pisigmam010multi.eps){width="75.00000%"}

  Quantity                 m007 ([*i*]{})    m010 ([*ii*]{})   m020 ([*iii*]{})   m030 ([*iv*]{})
  ------------------------ ----------------- ----------------- ------------------ -----------------
  $m_{\pi}$                0.18384(31)(03)   0.22305(25)(08)   0.31031(38)(95)    0.37513(44)(13)
  $m_{K}$                  0.36783(32)(42)   0.37816(26)(11)   0.40510(33)(37)    0.43091(66)(16)
  $m_{p}$                  0.6978(61)(08)    0.7324(31)(10)    0.8069(22)(14)     0.8741(16)(05)
  $m_{\Sigma}$             0.8390(22)(03)    0.8531(19)(08)    0.8830(18)(17)     0.9213(13)(03)
  $m_{\Xi}$                0.8872(13)(16)    0.9009(13)(10)    0.9233(18)(04)     0.9461(14)(08)
  $f_{\pi}$                0.09257(16)       0.09600(14)       0.10208(14)        0.10763(32)
  $f_{K}$                  0.10734(10)       0.10781(18)       0.10976(17)        0.11253(31)
  $\Delta E_{\pi\Sigma}$   0.0150(14)(08)    0.0148(08)(13)    0.0111(10)(08)     0.0100(10)(11)
  $\Delta E_{\pi\Xi}$      0.00646(64)(98)   0.0062(05)(12)    0.00431(68)(43)    0.00421(76)(60)
  $\Delta E_{K p}$         0.0140(22)(30)    0.0146(15)(13)    0.0092(10)(51)     0.0087(16)(16)
  $\Delta E_{K n}$         0.0057(18)(16)    0.0051(14)(09)    0.0036(09)(12)     0.0028(10)(11)
  $\Delta E_{K\Xi}$        0.0118(08)(13)    0.0125(05)(14)    0.0085(08)(31)     0.0086(16)(16)
  $a_{\pi\Sigma}$          -2.12(16)(09)     -2.36(09)(15)     -2.30(15)(13)      -2.36(18)(19)
  $a_{\pi\Xi}$             -1.08(09)(14)     -1.19(09)(20)     -1.08(15)(09)      -1.20(18)(15)
  $a_{Kp}$                 -2.80(32)(44)     -2.95(21)(19)     -2.3(0.2)(1.0)     -2.27(31)(32)
  $a_{Kn}$                 -1.41(37)(34)     -1.33(30)(21)     -1.05(22)(30)      -0.89(27)(31)
  $a_{K\Xi}$               -2.62(13)(21)     -2.77(08)(23)     -2.18(15)(63)      -2.29(30)(32)
  $m_{\pi}+m_{p}$          0.8817(61)        0.9555(31)        1.1172(23)         1.2492(18)
  $m_{\pi}+m_{\Sigma}$     1.0229(23)        1.0761(20)        1.1933(19)         1.2964(15)
  $m_{\pi}+m_{\Xi}$        1.0710(14)        1.1240(14)        1.2336(19)         1.3212(16)
  $m_{K}+m_{p}$            1.0657(61)        1.1106(31)        1.2119(23)         1.3050(19)
  $m_{K}+m_{\Sigma}$       1.2069(23)        1.2312(20)        1.2881(19)         1.3522(16)
  $m_{K}+m_{\Xi}$          1.2550(14)        1.2791(15)        1.3284(19)         1.3770(17)

  : Lattice calculation results from the four coarse MILC ensembles which enter the analysis of the meson-baryon scattering lengths. The first uncertainty is statistical and the second uncertainty is systematic due to fitting. All quantities are in lattice units.[]{data-label="tab:latticequant"}

![$p\cot\delta/m_\pi$ versus $\Delta E_{\phi B}/m_\pi$ for the five elastic scattering processes from coarse MILC ensemble ([*ii*]{}). The curve shown is $p\cot\delta/m_\pi$ for the case of $m_\phi=m_K$ and $m_B=m_p$.[]{data-label="fig:m010Savageplot"}](Savage_plot_m010.eps){width="75.00000%"}

The Mixed Channel {#sec:MChAm}
=================

As is clear from Table I, the $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$ states carry the same global quantum numbers, and therefore couple to the same energy eigenstates in the finite lattice volume. For energies above both kinematic thresholds, a determination of the three scattering parameters associated with these states (two phases and one mixing angle) requires a coupled-channel analysis. Therefore, three energy levels above both kinematic thresholds must be determined in the lattice calculation to fully characterize scattering in this kinematic regime.
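The elastic analyses above require the interaction momentum $p$ corresponding to a measured energy shift, obtained by inverting Eq. (\[eq:energies\]). The following is a minimal numerical sketch of that inversion; the bisection routine and function names are ours, not part of the analysis code, and the example values are central values from Table \[tab:latticequant\].

```python
import math

def delta_E(p2, m_phi, m_B):
    """Two-particle interaction energy shift, as in Eq. (eq:energies),
    for squared relative momentum p2 (all quantities in lattice units)."""
    return (math.sqrt(p2 + m_phi**2) + math.sqrt(p2 + m_B**2)
            - m_phi - m_B)

def p2_from_delta_E(dE, m_phi, m_B, tol=1e-12):
    """Invert delta_E for p^2 by bisection; delta_E is monotonic in p2."""
    lo, hi = 0.0, 4.0 * m_phi * m_B   # generous bracket for small shifts
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if delta_E(mid, m_phi, m_B) < dE:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: central values for the pi+Sigma+ channel on coarse ensemble (ii)
p2 = p2_from_delta_E(0.0148, m_phi=0.22305, m_B=0.8531)
```

The resulting $p^2$ is what enters the finite-volume relation for $p\cot\delta$, and the round trip $\Delta E \to p^2 \to \Delta E$ provides a simple consistency check.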
In the present lattice volumes, the two-particle energies in these channels are close to the respective kinematic thresholds, and the energy of the lower-lying $\pi^+\Xi^0$ state (which is below the $\overline{K}{}^0\Sigma^+$ threshold) is determined by the low-energy elastic scattering parameters, making it amenable to analysis using Eqs. (\[eq:energies\]), (\[eq:Sdefined\]), (\[eq:energieshift\]) and (\[eq:luscher\_a\]). A priori, one would expect both the $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$ interpolating operators to couple to a common ground state (dominantly the $\pi^+ \Xi^0$ state), with a $\overline{K}{}^0\Sigma^+$-related level as the first excited state (for the lattice volumes considered here, the non-interacting $\pi^+\Xi^0$ system with two units of relative momentum has an energy considerably above the $\overline{K}{}^0\Sigma^+$ threshold). Interestingly, within our statistical and systematic uncertainties, we find distinct energy levels from the two interpolating operators. This is consistent with strong coupling to the color-singlet constituents of the interpolating operator and only very weak couplings to states that require color rearrangement (see Fig. \[fig:energylevels\]). While this is suggestive that mixing between the states is small, a definitive interpretation requires an extraction of three energy levels above the kinematic thresholds of the $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$, and below the next kinematic threshold, in order to determine the three scattering parameters. The optimal way to extract these levels is to use the variational method [@Michael:1985ne; @Luscher:1990ck], which requires the full matrix of correlation functions to be calculated, and diagonalized. The extraction of the scattering parameters would then proceed via an extension of the variational method to the coupled-channel scenario [@Detmold:2004qn; @He:2005ey]. 
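The variational method reduces to a generalized eigenvalue problem, $C(t)\,v = \lambda(t,t_0)\,C(t_0)\,v$; for a system saturated by two states the eigenvalues are exactly $e^{-E_n(t-t_0)}$. A self-contained $2\times 2$ sketch with synthetic correlators follows (all energies and overlaps below are illustrative, not lattice data):

```python
import math

def gevp_2x2(Ct, Ct0):
    """Eigenvalues of C(t0)^{-1} C(t) for 2x2 correlator matrices."""
    (a, b), (c, d) = Ct0
    det0 = a * d - b * c
    inv0 = ((d / det0, -b / det0), (-c / det0, a / det0))
    M = [[sum(inv0[i][k] * Ct[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)
    return sorted(((tr + disc) / 2.0, (tr - disc) / 2.0), reverse=True)

# Synthetic two-state system: C_ij(t) = sum_n Z[i][n] Z[j][n] exp(-E[n] t)
E = (0.55, 0.95)              # illustrative energies (lattice units)
Z = ((1.0, 0.3), (0.4, 1.0))  # illustrative overlap factors

def C(t):
    return [[sum(Z[i][n] * Z[j][n] * math.exp(-E[n] * t) for n in range(2))
             for j in range(2)] for i in range(2)]

t0, t = 2, 5
lams = gevp_2x2(C(t), C(t0))
energies = [-math.log(lam) / (t - t0) for lam in lams]  # recovers E exactly here
```

In practice the correlator matrices carry statistical noise and excited-state contamination, so the extracted $\lambda_n(t,t_0)$ are fit over a time window rather than read off at a single $t$.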
Due to our incomplete knowledge of the three mixed-channel energy levels, we do not attempt to extract any $\overline{K}{}^0\Sigma^+$ scattering parameters in this work. SU(3) HB$\chi$PT Extrapolation {#sec:su3CE} ============================== Scattering Length Formulas {#sec:scattLextrap} -------------------------- The scattering lengths of the five meson-baryon processes listed in Eq. (\[eq:Tmatrices\]) are, to $\mathcal{O}(m_\pi^3)$ in $SU(3)$ HB$\chi$-PT [@Liu:2006xja; @Liu:2007ct], $$\begin{aligned} a_{\pi^+\Sigma^+}=\frac{1}{4 \pi}\frac{m_\Sigma}{m_\pi+m_\Sigma} \bigg[ -\frac{2m_\pi}{f_\pi^2} + \frac{2m_\pi^2}{f_\pi^2}C_1 + \mathcal{Y}_{\pi^+\Sigma^+}(\mu ) + 8 h_{123}(\mu )\frac{m_\pi^3}{f_\pi^2} \bigg] \ ; \label{eq:apisigfull}\end{aligned}$$ $$\begin{aligned} a_{\pi^+\Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_\pi+m_\Xi} \bigg[ -\frac{m_\pi}{f_\pi^2} + \frac{m_\pi^2}{f_\pi^2}C_{01} + \mathcal{Y}_{\pi^+\Xi^0}(\mu ) + 8 h_1(\mu )\frac{m_\pi^3}{f_\pi^2} \bigg] \ ; \label{eq:apixifull}\end{aligned}$$ $$\begin{aligned} a_{K^+ p}=\frac{1}{4 \pi}\frac{m_N}{m_K+m_N} \bigg[ -\frac{2m_K}{f_K^2} + \frac{2m_K^2}{f_K^2}C_1 + \mathcal{Y}_{K^+ p}(\mu ) + 8 h_{123}(\mu )\frac{m_K^3}{f_K^2} \bigg] \ ; \label{eq:akpfull}\end{aligned}$$ $$\begin{aligned} a_{K^+ n}=\frac{1}{4 \pi}\frac{m_N}{m_K+m_N} \bigg[ -\frac{m_K}{f_K^2} + \frac{m_K^2}{f_K^2}C_{01} + \mathcal{Y}_{K^+ n}(\mu ) + 8 h_1(\mu )\frac{m_K^3}{f_K^2} \bigg] \ ; \label{eq:aknfull}\end{aligned}$$ $$\begin{aligned} a_{\overline{K}{}^0 \Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_K+m_\Xi} \bigg[ -\frac{2m_K}{f_K^2} + \frac{2m_K^2}{f_K^2}C_1 + \mathcal{Y}_{\overline{K}{}^0 \Xi^0}(\mu ) + 8 h_{123}(\mu )\frac{m_K^3}{f_K^2} \bigg] \ , \label{eq:akxifull}\end{aligned}$$ where we have defined $C_{01}\equiv C_0+C_1$ and $h_{123}\equiv h_1-h_2+h_3$, and the loop functions are given by $$\begin{aligned} \mathcal{Y}_{\pi^+\Sigma^+}(\mu )&=&\frac{m_\pi^2}{2\pi^2 
f_\pi^4}\bigg\{-m_\pi\bigg(\frac32-2\ln\frac{m_\pi}{\mu}-\ln\frac{m_K}{\mu}\bigg) \nonumber \\ && -\sqrt{m_K^2-m_\pi^2}\arccos\frac{m_\pi}{m_K} + \frac{\pi}{2}\bigg[3F^2 m_\pi-\frac13 D^2 m_\eta\bigg]\bigg\} \ ; \label{eq:pisigloop}\end{aligned}$$ $$\begin{aligned} \mathcal{Y}_{\pi^+ \Xi^0}(\mu )&=&\frac{m_\pi^2}{4\pi^2 f_\pi^4} \bigg\{ -m_\pi \bigg( \frac32 -2\ln\frac{m_\pi}{\mu}-\ln\frac{m_K}{\mu}\bigg) -\sqrt{m_K^2-m_\pi^2}\bigg(\pi + \arccos\frac{m_\pi}{m_K}\bigg) \nonumber\\ && +\frac{\pi}{4}\bigg[3(D-F)^2 m_\pi-\frac13(D+3F)^2 m_\eta\bigg]\bigg\} \ ; \label{eq:pixiloop}\end{aligned}$$ $$\begin{aligned} \mathcal{Y}_{K^+p}(\mu )&=&\frac{m_K^2}{4\pi^2 f_K^4}\bigg\{m_K \bigg(-3+2\ln\frac{m_\pi}{\mu} + \ln\frac{m_K}{\mu}+3 \ln\frac{m_\eta}{\mu} \bigg) \nonumber \\ && +2\sqrt{m_K^2-m_\pi^2} \ln\frac{m_K+\sqrt {m_K^2-m_\pi^2}}{m_\pi} -3\sqrt{m_\eta^2-m_K^2}\arccos\frac{m_K}{m_\eta} \nonumber\\ && - \frac{\pi}{6} (D-3F)\bigg[ 2(D+F) \frac{m_\pi^2}{m_\eta+m_\pi} +(D+5F) m_\eta \bigg] \bigg\} \ ; \label{eq:kploop}\end{aligned}$$ $$\begin{aligned} \mathcal{Y}_{K^+n}(\mu )&=&\frac{\mathcal{Y}_{K^+p}}{2} + \frac{3m_K^2}{8\pi^2 f_K^4}\bigg\{m_K \bigg( \ln\frac{m_\pi}{\mu}-\ln\frac{m_K}{\mu} \bigg) + \sqrt{m_K^2-m_\pi^2} \ln\frac{m_K+\sqrt{m_K^2-m_\pi^2}}{m_\pi}\nonumber\\ && + \frac{\pi}{3} (D-3F) \bigg[(D+F) \frac{m_\pi^2}{m_\eta+m_\pi} +\frac16(7D+3F) m_\eta \bigg] \bigg\} \ ; \label{eq:knloop}\end{aligned}$$ $$\begin{aligned} \mathcal{Y}_{\overline{K}{}^0\Xi^0}^{(1)}(\mu )&=&\frac{m_K^2}{4\pi^2 f_K^4}\bigg\{m_K \bigg(-3+2\ln\frac{m_\pi}{\mu} + \ln\frac{m_K}{\mu}+3 \ln\frac{m_\eta}{\mu} \bigg) \nonumber \\ && +2\sqrt{m_K^2-m_\pi^2} \ln\frac{m_K+\sqrt {m_K^2-m_\pi^2}}{m_\pi} -3\sqrt{m_\eta^2-m_K^2}\arccos\frac{m_K}{m_\eta} \nonumber\\ && - \frac{\pi}{6} (D+3F)\bigg[ 2(D-F) \frac{m_\pi^2}{m_\eta+m_\pi} +(D-5F) m_\eta \bigg] \bigg\} \ . 
\label{eq:kxiloop}\end{aligned}$$ In what follows, we choose $\mu=\Lambda_\chi=4\pi f_\pi$ and evaluate $f_\pi$ at its lattice physical value [@Beane:2005rj], and we take $m_\eta$ from the Gell-Mann-Okubo formula. These choices modify the chiral expansion at $\mathcal{O}(m_\pi^4)$ and are therefore consistent to the order at which we are working. The first mixed-action modifications to these HB$\chi$-PT extrapolation formulas appear as corrections to these loop functions, $\mathcal{Y}_{\phi B}$, and to the corresponding counterterms which absorb the scale dependence. Some of the mesons propagating in the loops appear as mixed valence-sea combinations, and thus the corresponding meson masses appearing in these functions are heavier by a known amount [@Orginos:2007tw]. The precise form of the predicted corrections requires a computation of the scattering processes with mixed-action/partially quenched $\chi$-PT. Our physical parameters are consistent with Ref. [@Mai:2009ce] (note that our decay constant convention differs by $\sqrt{2}$). Namely, $f_\pi=130.7~{\rm MeV}$, $m_\pi=139.57~{\rm MeV}$, $f_K=159.8~{\rm MeV}$, $m_K=493.68~{\rm MeV}$, $m_N=938~{\rm MeV}$, $m_\Sigma=1192~{\rm MeV}$ and $m_\Xi=1314~{\rm MeV}$. The axial couplings, $D$ and $F$, for coarse MILC ensembles ([*ii*]{})-([*iv*]{}) are taken from the mixed-action calculation of Ref. [@Lin:2007ap], and we extrapolate for coarse MILC ensemble ([*i*]{}) using these values.

Extrapolation to the Physical Point
-----------------------------------

For the purposes of fitting and visualization, it is useful to construct from the scattering lengths the functions $\Gamma^{(1,2)}$, which are polynomials in $m_\phi$.
For the $\pi^+\Sigma^+$, $K^+p$, and $\overline{K}{}^0\Xi^0$ processes one defines[^6] $$\begin{aligned} \Gamma_{LO}^{(1)}\equiv-\frac{2 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1 \ ; \label{eq:GammaLHSLO1}\end{aligned}$$ $$\begin{aligned} \Gamma_{NLO}^{(1)}\equiv-\frac{2 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1-C_1 m_\phi \ ; \label{eq:GammaLHSNLO1}\end{aligned}$$ $$\begin{aligned} \Gamma_{NNLO}^{(1)}\equiv -\frac{2 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg) + \frac{f_\phi^2}{2m_\phi}\mathcal{Y}_{\phi B}(\Lambda_{\chi}) =1-C_1 m_\phi-4h_{123}(\Lambda_{\chi}) m_\phi^2 \ , \label{eq:GammaLHSNNLO1}\end{aligned}$$ and for the $\pi^+\Xi^0$, and $K^+n$ processes one defines $$\begin{aligned} \Gamma_{LO}^{(2)}\equiv-\frac{4 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1 \ ; \label{eq:GammaLHSLO2}\end{aligned}$$ $$\begin{aligned} \Gamma_{NLO}^{(2)}\equiv-\frac{4 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg)=1-C_{01} m_\phi \ ; \label{eq:GammaLHSNLO2}\end{aligned}$$ $$\begin{aligned} \Gamma_{NNLO}^{(2)}\equiv -\frac{4 \pi a f_\phi^2}{m_\phi}\bigg(1 + \frac{m_\phi}{m_B}\bigg) + \frac{f_\phi^2}{m_\phi}\mathcal{Y}_{\phi B}(\Lambda_{\chi}) =1-C_{01} m_\phi-8h_1(\Lambda_{\chi}) m_\phi^2 \ . \label{eq:GammaLHSNNLO2}\end{aligned}$$ Notice that the left-hand sides of these equations are given entirely in terms of lattice-determined quantities, all evaluated under Jackknife, whereas the right-hand side provides a convenient polynomial fitting function. Plots of $\Gamma_{NLO}$ formed from the lattice data (all ensembles listed in Table \[tab:MILCcnfs\]) versus the Goldstone masses are given in Fig. \[fig:KostasAlldata\]. We see evidence in this plot that the fine and large-volume coarse data are statistically limited as compared to the coarse data. Therefore, we include only the coarse data in our fits. The fine data is, however, indicative that lattice-spacing effects are small. 
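To illustrate how the left-hand sides are assembled from Table \[tab:latticequant\], the following sketch evaluates $\Gamma^{(1)}_{NLO}$ for the $\pi^+\Sigma^+$ channel on coarse ensemble ([*ii*]{}) from central values only; the actual analysis propagates the uncertainties under jackknife.

```python
import math

def gamma_lhs_1(a, f, m_phi, m_B):
    """LHS of Eq. (eq:GammaLHSNLO1): -2 pi a f^2 / m_phi * (1 + m_phi/m_B).

    All inputs in the same (here, lattice) units; a is the scattering length,
    so the combination is dimensionless."""
    return -2.0 * math.pi * a * f**2 / m_phi * (1.0 + m_phi / m_B)

# Central values for ensemble (ii), pi+Sigma+ channel (Table [tab:latticequant])
a_piSigma, f_pi, m_pi, m_Sigma = -2.36, 0.09600, 0.22305, 0.8531
gamma = gamma_lhs_1(a_piSigma, f_pi, m_pi, m_Sigma)  # ~0.77, below unity as NLO expects
```

The deviation of this value from unity is what the NLO polynomial $1-C_1 m_\phi$ is fit to.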
In the three-flavor chiral expansion, we have an overdetermined system at both NLO and NNLO. While there are five observables, there are two Low Energy Constants (LECs) at NLO, $C_0$ and $C_{01}$, and two LECs at NNLO, $h_1$ and $h_{123}$. Fits of the LECs from each process at NLO are given in Table \[tab:LECpi\] and the corresponding values of the scattering lengths are given in Table \[tab:scattLpisig\]. At NLO, the LECs are of natural size, and provide a consistent extraction within uncertainties. Correspondingly, the scattering lengths appear to deviate perturbatively from the LO values. The perturbative behavior of the scattering lengths at NLO is evident from the plots of $\Gamma_{NLO}$ versus the Goldstone masses given in Fig. \[fig:KostasNLONNLO\]. Clearly the deviations of the lattice data from unity are consistent with a perturbative expansion. At NNLO the situation changes dramatically. This is clear from the plots of $\Gamma_{NNLO}$ versus the Goldstone masses given in Fig. \[fig:KostasNLONNLO\]. The shift of the value of $\Gamma$ from NLO to NNLO is dependent on the renormalization scale $\mu$. With the choice $\mu=\Lambda_\chi$ one would expect this shift to be perturbative. However, this is not the case and therefore loop corrections are very large at the scale $\Lambda_\chi$. There are many strategies that one may take to fit the LECs in the overdetermined system. Here we fit the LECs to the $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ data, and then use these LECs to predict the kaon processes. Therefore, in Fig. \[fig:KostasNLONNLO\], only (a) and (b) are fits. The fit LECs are given in Table \[tab:LECpi\]. While the NNLO LECs $h_1$ and $h_{123}$ appear to be of natural size, the NLO LECs $C_0$ and $C_{01}$ are unnaturally large and therefore are countering the large loop effects. The extrapolated $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths are given in Table \[tab:scattLpisig\] and appear to be perturbative. 
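Schematically, extracting an NLO LEC is a one-parameter weighted least-squares problem for $C$ in $\Gamma_{NLO}=1-C\,m_\phi$. The sketch below uses synthetic, noise-free inputs (the masses and error bars are illustrative, not our data, and the actual fits are performed under jackknife):

```python
def fit_nlo_lec(m_vals, gamma_vals, sigmas):
    """Weighted least squares for C in Gamma = 1 - C*m (one free parameter).

    Minimizing chi^2 = sum_i (Gamma_i - (1 - C m_i))^2 / sigma_i^2 gives
    C = sum w m (1-Gamma) / sum w m^2 with w = 1/sigma^2."""
    num = sum((1.0 - g) * m / s**2 for m, g, s in zip(m_vals, gamma_vals, sigmas))
    den = sum(m * m / s**2 for m, s in zip(m_vals, sigmas))
    return num / den, den ** -0.5   # central value and parameter error

# Illustrative inputs: Gamma generated exactly with C = 0.66 GeV^-1
m = [0.35, 0.42, 0.49, 0.59]             # GeV, illustrative Goldstone masses
gam = [1.0 - 0.66 * x for x in m]
C, dC = fit_nlo_lec(m, gam, [0.03] * 4)  # recovers C = 0.66 by construction
```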
Table \[tab:scattLpisig\] also gives the extrapolated kaon-baryon scattering lengths with the LECs determined from the $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ data. The resulting NNLO predictions deviate by at least 100% from the LO values. Other fitting strategies lead to this same conclusion: the kaon-baryon scattering lengths are unstable against chiral corrections in the three-flavor chiral expansion, over the range of light-quark masses that we consider.

  Quantity                        NLO fit each process       NNLO fit $\pi^+\Sigma^+$,$\pi^+\Xi^0$
  ------------------------------ ------------------------- ---------------------------------------
  $C_1(\pi^+\Sigma^+)$            0.66(04)(11) GeV$^{-1}$    3.51(18)(25) GeV$^{-1}$
  $C_{01}(\pi^+\Xi^0)$            0.69(06)(22) GeV$^{-1}$    7.44(29)(69) GeV$^{-1}$
  $C_1(K^+ p)$                    0.44(09)(23) GeV$^{-1}$    -
  $C_{01}(K^+ n)$                 0.56(11)(27) GeV$^{-1}$    -
  $C_1(\overline{K}{}^0\Xi^0)$    0.50(06)(14) GeV$^{-1}$    -
  $h_1$                           -                          -0.59(08)(14) GeV$^{-2}$
  $h_{123}$                       -                          -0.42(10)(10) GeV$^{-2}$

  : $SU(3)$ LECs fit from each process at NLO, and from $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ at NNLO. The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature.[]{data-label="tab:LECpi"}

  Quantity          LO (fm)   NLO fit (fm)     NLO (NNLO fit) (fm)   NNLO (fm)
  ----------------- --------- ---------------- --------------------- ----------------
  $a_{\pi\Sigma}$   -0.2294   -0.208(01)(03)   -0.117(06)(08)        -0.197(06)(08)
  $a_{\pi\Xi}$      -0.1158   -0.105(01)(04)   0.004(05)(11)         -0.096(05)(12)
  $a_{Kp}$          -0.3971   -0.311(18)(44)   0.292(35)(48)         -0.154(51)(63)
  $a_{Kn}$          -0.1986   -0.143(10)(27)   0.531(28)(68)         0.128(42)(87)
  $a_{K\Xi}$        -0.4406   -0.331(12)(31)   0.324(39)(54)         -0.127(57)(70)

  : $SU(3)$ extrapolated scattering lengths using the LECs from Table \[tab:LECpi\]. The first uncertainty in parentheses is statistical, and the second is the statistical and systematic uncertainty added in quadrature. Note that the NLO (NNLO fit) column uses $C_1$ and $C_{01}$ from the NNLO fit to $\pi^+\Sigma^+$ and $\pi^+\Xi^0$.[]{data-label="tab:scattLpisig"}

SU(2) HB$\chi$PT Extrapolation {#sec:su2CE}
===============================

Given the poor convergence seen in the three-flavor chiral expansion due to the large loop corrections, it is natural to consider the two-flavor theory with the strange quark integrated out. In this way, $\pi\Sigma$ and $\pi\Xi$ may be analyzed in an expansion in $m_\pi$ with no fear of corrections that scale as powers of $m_K$. The matching of LECs between the three- and two-flavor theories is described in detail in Ref. [@Mai:2009ce]. We make use of the formulation of the $\pi\Sigma$ and $\pi\Xi$ T-matrices from Ref. [@Mai:2009ce] to perform the two-flavor chiral extrapolations for $a_{\pi^+\Sigma^+}$ and $a_{\pi^+\Xi^0}$. As pointed out in Ref. [@Mai:2009ce], there are two representations of the pion-hyperon scattering lengths that are equivalent up to omitted higher orders in the chiral expansion; one contains a chiral logarithm, and the other is purely a polynomial in $m_\pi$. Using both forms provides a useful check on the systematics of the chiral extrapolation.
Scattering Length Formulas I ---------------------------- To $\mathcal{O}(m_\pi^3)$ in the two-flavor chiral expansion, $a_{\pi^+\Sigma^+}$ and $a_{\pi^+\Xi^0}$ are given by [@Mai:2009ce] $$\begin{aligned} a_{\pi^+\Sigma^+}=\frac{1}{4 \pi}\frac{m_\Sigma}{m_\pi+m_\Sigma} \bigg[ -\frac{2m_\pi}{f_\pi^2} + \frac{2m_\pi^2}{f_\pi^2} {\mathrm{C}}_{\pi^+\Sigma^+} +\frac{m_\pi^3}{\pi^2 f_\pi^4}\log{\frac{m_\pi}{\mu}} + \frac{2m_\pi^3}{f_\pi^2}{h}_{\pi^+\Sigma^+}(\mu ) \bigg] \ ; \label{eq:apisigSU2}\end{aligned}$$ $$\begin{aligned} a_{\pi^+\Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_\pi+m_\Xi} \bigg[ -\frac{m_\pi}{f_\pi^2} + \frac{m_\pi^2}{f_\pi^2}{\mathrm{C}}_{\pi^+\Xi^0} + \frac{m_\pi^3}{2\pi^2 f_\pi^4}\log{\frac{m_\pi}{\mu}} + \frac{m_\pi^3}{f_\pi^2}{h}_{\pi^+\Xi^0}(\mu ) \bigg] \label{eq:apixiSU2} \ ,\end{aligned}$$ where the explicit forms —in terms of Lagrangian parameters— of the LECs ${\mathrm{C}}_{\pi^+\Sigma^+}$, ${h}_{\pi^+\Sigma^+}$, ${\mathrm{C}}_{\pi^+\Xi^0}$ and ${h}_{\pi^+\Xi^0}$ are given in Ref. [@Mai:2009ce]. As in the three flavor case, the mixed-action modification to the $SU(2)$ scattering length formula would begin with corrections to the $m_\pi^3 \ln (m_\pi)$ terms, with the mixed valence-sea pions having the known additive mass shift [@Orginos:2007tw]. We again choose $\mu=\Lambda_\chi=4\pi f_\pi$ and evaluate $f_\pi$ at its lattice physical value. In analogy with the three-flavor case, we define $$\begin{aligned} \Gamma_{LO}\equiv 1 \ ; \label{eq:GammaLHSLO1su2}\end{aligned}$$ $$\begin{aligned} \Gamma_{NLO}\equiv 1-C_{\pi^+ B} m_\pi \ ; \label{eq:GammaLHSNLO1su2}\end{aligned}$$ $$\begin{aligned} \Gamma_{NNLO}\equiv 1-C_{\pi^+ B} m_\pi-h_{\pi^+ B}(\Lambda_{\chi}) m_\pi^2 \ , \label{eq:GammaLHSNNLO1su2}\end{aligned}$$ where $B$ is either $\Sigma^+$ or $\Xi^0$. In Fig. \[fig:KostasSU2\] we give plots of $\Gamma_{NLO}$ and $\Gamma_{NNLO}$ versus the pion mass for the two-flavor case. 
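At LO the expressions above reduce to the current-algebra results, which can be checked directly against the LO column of Table \[tab:scattLpisig\]. A minimal sketch using the physical parameters quoted in Section \[sec:su3CE\] (the $\hbar c$ conversion factor is supplied by us):

```python
import math

HBARC = 197.327  # MeV fm, conversion factor (not from the text)

# Physical parameters quoted in the text (MeV)
f_pi, m_pi = 130.7, 139.57
m_Sigma, m_Xi = 1192.0, 1314.0

def a_lo(m_B, coeff):
    """LO (current-algebra) scattering length in fm.

    coeff = 2 for pi+Sigma+ and 1 for pi+Xi0, matching the leading
    -coeff*m_pi/f_pi^2 terms of Eqs. (eq:apisigSU2)/(eq:apixiSU2)."""
    a_inv_MeV = -(1.0 / (4.0 * math.pi)) * (m_B / (m_pi + m_B)) * coeff * m_pi / f_pi**2
    return a_inv_MeV * HBARC  # MeV^-1 -> fm

a_piSigma = a_lo(m_Sigma, 2.0)  # about -0.229 fm
a_piXi = a_lo(m_Xi, 1.0)        # about -0.116 fm
```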
Clearly the deviations of $\Gamma$ from unity are consistent with a perturbative expansion at both NLO and NNLO, showing that the loop corrections are much smaller at the scale $\Lambda_\chi$ than in the three-flavor case. All extracted LECs are of natural size and given in Table \[tab:LECpisu2I\]. The extrapolated $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths are given in Table \[tab:scattLpisigsu2I\]. The results are consistent with what was found in the three-flavor extrapolation. The NLO and NNLO LECs are highly correlated in the NNLO fit. Fig. \[fig:errellipseK\] shows the 68% and 95% confidence interval error ellipses in the $h$-${\mathrm{C}}$ plane for both ${\pi^+\Sigma^+}$ and ${\pi^+\Xi^0}$. Exploring the full 95% confidence interval error ellipse in the $h$-${\mathrm{C}}$ plane yields $$\begin{aligned} a_{\pi^+\Sigma^+}&=& -0.197 \pm 0.017~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.098\pm 0.017~{\rm fm} \ . \label{eq:MP2}\end{aligned}$$ These are the numbers that we quote as our best determinations of the pion-hyperon scattering lengths. 0.2cm 0.2cm ![The 68% (light) and 95% (dark) confidence interval error ellipses for fits for the $\pi^+\Sigma^+$ (left), and $\pi^+\Xi^0$ (right) processes using Eqs. (\[eq:apisigSU2\]) and (\[eq:apixiSU2\]).[]{data-label="fig:errellipseK"}](ell_Pi_Sigma_SU2.eps "fig:"){width="0.49\linewidth"}![The 68% (light) and 95% (dark) confidence interval error ellipses for fits for the $\pi^+\Sigma^+$ (left), and $\pi^+\Xi^0$ (right) processes using Eqs. (\[eq:apisigSU2\]) and (\[eq:apixiSU2\]).[]{data-label="fig:errellipseK"}](ell_Pi_Xi_SU2.eps "fig:"){width="0.49\linewidth"} Scattering Length Formulas II ----------------------------- Ref. 
[@Mai:2009ce] makes the interesting observation that replacing $f_\pi$ with its chiral limit value, $f$, yields $$\begin{aligned} a_{\pi^+\Sigma^+}=\frac{1}{2 \pi}\frac{m_\Sigma}{m_\pi+m_\Sigma} \bigg[ -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2} {\mathrm{C}}_{\pi^+\Sigma^+} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Sigma^+} \bigg], \qquad h'_{\pi^+\Sigma^+}=\frac{4}{f^2}\ell_4^r+ h_{\pi^+\Sigma^+} \ ; \label{eq:apisig2param}\end{aligned}$$ $$\begin{aligned} a_{\pi^+\Xi^0}=\frac{1}{4 \pi}\frac{m_\Xi}{m_\pi+m_\Xi} \bigg[ -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2}{\mathrm{C}}_{\pi^+\Xi^0} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Xi^0} \bigg],\qquad h'_{\pi^+\Xi^0}=\frac{4}{f^2}\ell_4^r + h_{\pi^+\Xi^0} \ , \label{eq:apixi2param}\end{aligned}$$ where $\ell_4^r$ is the LEC which governs the pion mass dependence of $f_\pi$ [@Colangelo:2001df]. Note that the chiral logs have canceled, and in this form, valid to order $m_\pi^3$ in the chiral expansion, the scattering lengths have a simple polynomial dependence on $m_\pi$. Taking the standard value $f=122.9$ MeV [@Colangelo:2001df; @Mai:2009ce] and refitting the LECs yields the results tabulated in Table \[tab:LECpisu2II\]. The extrapolated $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths are given in Table \[tab:scattLpisigsu2II\]. These results are clearly consistent with what was found in the two-flavor extrapolation with the chiral logarithm explicit. Fig. \[fig:errellipseU\] shows the 68% and 95% confidence interval error ellipses in the $h$-${\mathrm{C}}$ plane for both ${\pi^+\Sigma^+}$ and ${\pi^+\Xi^0}$. Exploring the full 95% confidence interval error ellipse in the $h$-${\mathrm{C}}$ plane yields $$\begin{aligned} a_{\pi^+\Sigma^+}&=& -0.197 \pm 0.011~{\rm fm} \ ;\\ a_{\pi^+\Xi^0}&=& -0.102 \pm 0.004~{\rm fm} \ . \label{eq:MPU}\end{aligned}$$ Comparison of these determinations with those of Eq. (\[eq:MP2\]) gives an estimate of the systematic error due to truncation of the chiral expansion at order $m_\pi^3$.
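Since the polynomial forms above are linear in the LECs, exploring the error ellipse is equivalent to propagating the correlated $({\mathrm{C}}, h')$ uncertainties through the formula exactly. The following is a schematic sketch with an illustrative (not fitted) covariance matrix:

```python
import math

def sigma_a(cov, m_pi, f, m_B, pref):
    """Propagate a 2x2 LEC covariance through
    a = pref * m_B/(m_pi+m_B) * (m_pi/f^2) * (-1 + C*m_pi + h*m_pi^2).

    The formula is linear in (C, h), so sigma_a^2 = g . cov . g exactly."""
    k = pref * (m_B / (m_pi + m_B)) * (m_pi / f**2)
    g = (k * m_pi, k * m_pi**2)   # (da/dC, da/dh)
    var = sum(g[i] * cov[i][j] * g[j] for i in range(2) for j in range(2))
    return math.sqrt(var)

# Illustrative covariances for (C, h'), GeV-based units; not our fitted values
cov_corr = ((0.04, -0.015), (-0.015, 0.01))  # anticorrelated, as in the ellipses
cov_diag = ((0.04, 0.0), (0.0, 0.01))        # same diagonal, no correlation
m_pi, f, m_Sigma = 0.13957, 0.1229, 1.192    # GeV; f is the chiral-limit decay constant
pref = 1.0 / (2.0 * math.pi)
s_corr = sigma_a(cov_corr, m_pi, f, m_Sigma, pref)
s_diag = sigma_a(cov_diag, m_pi, f, m_Sigma, pref)
```

In this toy example the anticorrelation between ${\mathrm{C}}$ and $h'$ makes the propagated uncertainty smaller than the naive uncorrelated combination, which is why scanning the full ellipse, rather than combining the individual LEC errors, is the appropriate procedure.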
We have also “pruned” the data; that is, we have redone all fits omitting the heaviest mass ensemble. While this procedure inflates the errors, we see very little shift in the central values. 0.2cm 0.2cm ![The 68% (light) and 95% (dark) confidence interval error ellipses for fits for the $\pi^+\Sigma^+$ (left), and $\pi^+\Xi^0$ (right) processes using Eqs. (\[eq:apisig2param\]) and (\[eq:apixi2param\]).[]{data-label="fig:errellipseU"}](ellX_Pi_Sigma_SU2.eps "fig:"){width="0.49\linewidth"}![The 68% (light) and 95% (dark) confidence interval error ellipses for fits for the $\pi^+\Sigma^+$ (left), and $\pi^+\Xi^0$ (right) processes using Eqs. (\[eq:apisig2param\]) and (\[eq:apixi2param\]).[]{data-label="fig:errellipseU"}](ellX_Pi_Xi_SU2.eps "fig:"){width="0.49\linewidth"} In order to plot the scattering length versus $m_\pi$, we define $$\begin{aligned} \overline{a}_{\pi^+\Sigma^+}=a_{\pi^+\Sigma^+}\left(\frac{m_\pi+m_\Sigma}{m_\Sigma} \right) =\frac{1}{2\pi}\left( -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2} {\mathrm{C}}_{\pi^+\Sigma^+} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Sigma^+} \right) \ ; \label{eq:abarpisigSU2}\end{aligned}$$ $$\begin{aligned} \overline{a}_{\pi^+\Xi^0}=a_{\pi^+\Xi^0}\left(\frac{m_\pi+m_\Xi}{m_\Xi} \right)=\frac{1}{4\pi}\left( -\frac{m_\pi}{f^2} + \frac{m_\pi^2}{f^2}{\mathrm{C}}_{\pi^+\Xi^0} + \frac{m_\pi^3}{f^2} h'_{\pi^+\Xi^0} \right) \ . \label{eq:abarpixiSU2}\end{aligned}$$ In Fig. \[fig:aSU2\] we plot the scattering lengths versus the pion mass. The shaded bands in these plots correspond to the standard error in the determination of the LECs, as given in Table \[tab:LECpisu2II\]. Additional systematic errors arising from the specific lattice formulation that we employ are discussed in detail in Ref. [@Beane:2007xs], and are expected to be well encompassed by our error bars. As discussed in section \[sec:finvol\], there is a systematic error in extracting the scattering length from the phase shift. 
We find that range corrections affect the scattering length at the 5% level for $\pi^+\Sigma^+$, and at the 1% level for $\pi^+\Xi^0$. Finally, we reiterate that there are unquantified systematic errors due to finite-volume and lattice-spacing effects; however, these errors are likely encompassed by our quoted errors.

Conclusions {#sec:conc}
===========

In this paper we have presented the first fully-dynamical lattice QCD calculation of meson-baryon scattering. While the phenomenologically most interesting case of pion-nucleon scattering involves annihilation diagrams, and therefore requires more resources than we currently have available, we have calculated the ground-state energies of $\pi^+\Sigma^+$, $\pi^+\Xi^0$, $K^+p$, $K^+n$, and $\overline{K}{}^0 \Xi^0$, which involve no annihilation diagrams. An analysis of the scattering lengths of these two-body systems using HB$\chi$PT has led us to conclude that the three-flavor chiral expansion does not converge over the range of light quark masses that we investigate. While the kaon-baryon scattering lengths appear perturbative at NLO, a comparison of NNLO with NLO calls into question the convergence of the three-flavor chiral expansion. Therefore, we do not quote values for the kaon-baryon scattering lengths at the physical point. On the other hand, the $\pi^+\Sigma^+$ and $\pi^+\Xi^0$ scattering lengths appear to have a well-controlled chiral expansion in two-flavor HB$\chi$PT. Our results, $a_{\pi^+\Sigma^+}=-0.197\pm0.017$ fm and $a_{\pi^+\Xi^0}=-0.098\pm0.017$ fm, deviate from the LO (current algebra) predictions at the one- and two-sigma level, respectively. We look forward to confirmation of these predictions from other lattice QCD calculations and possibly from future experiments. The HB$\chi$PT analyses performed in this work support a general observation about convergence in the three-flavor chiral expansion, at least for the processes studied here.
As the pion masses considered in this lattice calculation are comparable to the physical kaon mass, the distinct convergence patterns of the two- and three-flavor chiral expansions found in this work are suggestive that the breakdown in the three-flavor case is not due to the relative largeness of the strange-quark mass as compared to the light quark masses, but rather due to some other enhancement in the coefficients of the loop contributions, possibly related to a scaling with powers of $n_f$, the number of flavors. While in this paper we have not considered the lowest-lying baryon decuplet, one interesting process for future study is the $\pi^-\Omega^-$ system. It does not involve disconnected diagrams since the pions have no valence quarks with the same flavor as the $\Omega^-$ constituents. It has been argued that there is a bound state [@Wang:2006jg] in this channel, and therefore, it would be of interest to determine whether this state appears bound on the lattice at the available quark masses. Acknowledgments =============== We thank U.G. Meißner for useful discussions, and R. Edwards and B. Joo for help with the QDP++/Chroma programming environment [@Edwards:2004sx] with which the calculations discussed here were performed. We gratefully acknowledge the computational time provided by NERSC (Office of Science of the U.S. Department of Energy, No. DE-AC02-05CH11231), the Institute for Nuclear Theory, Centro Nacional de Supercomputación (Barcelona, Spain), Lawrence Livermore National Laboratory, and the National Science Foundation through Teragrid resources provided by the National Center for Supercomputing Applications and the Texas Advanced Computing Center. Computational support at Thomas Jefferson National Accelerator Facility and Fermi National Accelerator Laboratory was provided by the USQCD collaboration under [*The Secret Life of a Quark*]{}, a U.S. Department of Energy SciDAC project ([ http://www.scidac.gov/physics/quarks.html]{}). 
The work of MJS was supported in part by the U.S. Dept. of Energy under Grant No. DE-FG03-97ER4014. The work of KO and WD was supported in part by the U.S. Dept. of Energy contract No. DE-AC05-06OR23177 (JSA) and DOE grant DE-FG02-04ER41302. KO and AWL were supported in part by the Jeffress Memorial Trust, grant J-813 and DOE OJI grant DE-FG02-07ER41527. The work of SRB and AT was supported in part by the National Science Foundation CAREER grant No. PHY-0645570. Part of this work was performed under the auspices of the US DOE by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48. The work of AP was partly supported by the EU contract FLAVIAnet MRTN-CT-2006-035482, by the contract FIS2008-01661 from MEC (Spain) and FEDER and by the Generalitat de Catalunya contract 2005SGR-00343. [99]{} S. R. Beane, T. C. Luu, K. Orginos, A. Parreño, M. J. Savage, A. Torok and A. Walker-Loud, Phys. Rev.  D [**77**]{}, 014505 (2008) \[arXiv:0706.3026 \[hep-lat\]\]. S. R. Beane, P. F. Bedaque, T. C. Luu, K. Orginos, E. Pallante, A. Parreño and M. J. Savage, Phys. Rev.  D [**74**]{}, 114503 (2006) \[arXiv:hep-lat/0607036\]. S. R. Beane, T. C. Luu, K. Orginos, A. Parreño, M. J. Savage, A. Torok and A. Walker-Loud \[NPLQCD Collaboration\], Phys. Rev.  D [**77**]{}, 094507 (2008) \[arXiv:0709.1169 \[hep-lat\]\]. S. R. Beane, W. Detmold, T. C. Luu, K. Orginos, M. J. Savage and A. Torok, Phys. Rev. Lett.  [**100**]{}, 082004 (2008) \[arXiv:0710.1827 \[hep-lat\]\]. W. Detmold, M. J. Savage, A. Torok, S. R. Beane, T. C. Luu, K. Orginos and A. Parreno, Phys. Rev.  D [**78**]{}, 014507 (2008) \[arXiv:0803.2728 \[hep-lat\]\]. W. Detmold, K. Orginos, M. J. Savage and A. Walker-Loud, Phys. Rev.  D [**78**]{}, 054514 (2008) \[arXiv:0807.1856 \[hep-lat\]\]. For a recent review, see V. Bernard, Prog. Part. Nucl. Phys.  [**60**]{}, 82 (2008) \[arXiv:0706.0312 \[hep-ph\]\]. S. R. Beane, P. F. Bedaque, K. Orginos and M. J. Savage, Phys. Rev. Lett.  
[**97**]{}, 012001 (2006) \[arXiv:hep-lat/0602010\]. S. R. Beane, P. F. Bedaque, T. C. Luu, K. Orginos, E. Pallante, A. Parreño and M. J. Savage \[NPLQCD Collaboration\], Nucl. Phys.  A [**794**]{}, 62 (2007) \[arXiv:hep-lat/0612026\]. S. R. Beane [*et al.*]{}, arXiv:0903.2990 \[hep-lat\]. G. P. Lepage, “The Analysis Of Algorithms For Lattice Field Theory,” Invited lectures given at TASI’89 Summer School, Boulder, CO, Jun 4-30, 1989. D. B. Kaplan, M. J. Savage and M. B. Wise, Nucl. Phys.  B [**534**]{}, 329 (1998) \[arXiv:nucl-th/9802075\]. See, for instance, R. Babich, R. Brower, M. Clark, G. Fleming, J. Osborn and C. Rebbi, PoS [**LATTICE2008**]{}, 160 (2008) \[arXiv:0901.4569 \[hep-lat\]\]. E. E. Jenkins and A. V. Manohar, “Baryon chiral perturbation theory using a heavy fermion Lagrangian,” Phys. Lett.  B [**255**]{}, 558 (1991). Y. R. Liu and S. L. Zhu, Phys. Rev.  D [**75**]{}, 034003 (2007) \[arXiv:hep-ph/0607100\]. Y. R. Liu and S. L. Zhu, Eur. Phys. J.  C [**52**]{}, 177 (2007) \[arXiv:hep-ph/0702246\]. N. Kaiser, Phys. Rev.  C [**64**]{}, 045204 (2001) \[Erratum-ibid.  C [**73**]{}, 069902 (2006)\] \[arXiv:nucl-th/0107006\]. M. Mai, P. C. Bruns, B. Kubis and U. G. Meißner, arXiv:0905.2810 \[hep-ph\]. D. B. Kaplan and A. E. Nelson, preprint HUTP-86/A023; Phys. Lett.  B [**175**]{} (1986) 57; Phys. Lett.  B [**192**]{}, 193 (1987); Nucl. Phys.  A [**479**]{}, 273 (1988); Nucl. Phys.  A [**479**]{}, 285 (1988); M. Lu, M. B. Wise and M. J. Savage, Phys. Lett.  B [**337**]{}, 133 (1994) \[arXiv:hep-ph/9407260\]. H. C. Schroder [*et al.*]{}, Phys. Lett.  B [**469**]{}, 25 (1999). H. C. Schroder [*et al.*]{}, Eur. Phys. J.  C [**21**]{}, 473 (2001). A. D. Martin, Nucl. Phys.  B [**179**]{}, 33 (1981). M. Fukugita, Y. Kuramashi, M. Okawa, H. Mino and A. Ukawa, Phys. Rev.  D [**52**]{}, 3003 (1995) \[arXiv:hep-lat/9501024\]. G. w. Meng, C. Miao, X. n. Du and C. Liu, Int. J. Mod. Phys.  A [**19**]{}, 4401 (2004) \[arXiv:hep-lat/0309048\]. S. Weinberg, Phys. Rev. 
Lett.  [**17**]{}, 616 (1966). B. C. Tiburzi, Phys. Rev.  D [**72**]{}, 094501 (2005) \[arXiv:hep-lat/0508019\]. J. W. Chen, D. O’Connell and A. Walker-Loud, JHEP [**0904**]{}, 090 (2009) \[arXiv:0706.0035 \[hep-lat\]\]. J. W. Chen, D. O’Connell, R. S. Van de Water and A. Walker-Loud, Phys. Rev.  D [**73**]{}, 074510 (2006) \[arXiv:hep-lat/0510024\]. J. W. Chen, D. O’Connell and A. Walker-Loud, Phys. Rev.  D [**75**]{}, 054501 (2007) \[arXiv:hep-lat/0611003\]. S. R. Beane, K. Orginos and M. J. Savage, Int. J. Mod. Phys.  E [**17**]{}, 1157 (2008) \[arXiv:0805.4629 \[hep-lat\]\]. K. Huang and C. N. Yang, Phys. Rev.  [**105**]{}, 767 (1957); H. W. Hamber, E. Marinari, G. Parisi and C. Rebbi, Nucl. Phys. B [**225**]{}, 475 (1983); M. L[ü]{}scher, Commun. Math. Phys.  [**105**]{}, 153 (1986); M. L[ü]{}scher, Nucl. Phys. B [**354**]{}, 531 (1991). S. R. Beane, P. F. Bedaque, A. Parreño and M. J. Savage, Phys. Lett.  B [**585**]{}, 106 (2004) \[arXiv:hep-lat/0312004\]. D. B. Kaplan, Phys. Lett.  B [**288**]{}, 342 (1992) \[arXiv:hep-lat/9206013\]. Y. Shamir, Phys. Lett.  B [**305**]{}, 357 (1993) \[arXiv:hep-lat/9212010\]. Y. Shamir, Nucl. Phys.  B [**406**]{}, 90 (1993) \[arXiv:hep-lat/9303005\]. V. Furman and Y. Shamir, Nucl. Phys.  B [**439**]{}, 54 (1995) \[arXiv:hep-lat/9405004\]. Y. Shamir, Phys. Rev.  D [**59**]{}, 054506 (1999) \[arXiv:hep-lat/9807012\]. K. Orginos, D. Toussaint and R. L. Sugar, Phys. Rev. D [**60**]{}, 054503 (1999). K. Orginos and D. Toussaint, Phys. Rev. D [**59**]{}, 014501 (1999). C. W. Bernard [*et al.*]{}, Phys. Rev. D [**64**]{}, 054506 (2001). A. Hasenfratz and F. Knechtli, Phys. Rev. D [**64**]{}, 034504 (2001). T. A. DeGrand, A. Hasenfratz and T. G. Kovacs, Phys. Rev. D [**67**]{}, 054501 (2003). T. A. DeGrand, Phys. Rev. D [**69**]{}, 014504 (2004). M. Creutz, arXiv:hep-lat/0603020. C. Bernard, Phys. Rev.  D [**73**]{}, 114503 (2006) \[arXiv:hep-lat/0603011\]. C. Bernard, M. Golterman, Y. Shamir and S. R. Sharpe, Phys. Lett.  
B [**649**]{}, 235 (2007) \[arXiv:hep-lat/0603027\]. M. Creutz, Phys. Lett.  B [**649**]{}, 241 (2007). C. Bernard, M. Golterman and Y. Shamir, Phys. Rev.  D [**73**]{}, 114511 (2006) \[arXiv:hep-lat/0604017\]. C. Bernard, M. Golterman and Y. Shamir, PoS [**LAT2006**]{}, 205 (2006) \[arXiv:hep-lat/0610003\]. M. Creutz, Phys. Lett.  B [**649**]{}, 230 (2007) \[arXiv:hep-lat/0701018\]. M. Creutz, arXiv:0704.2016 \[hep-lat\]. S. Dürr, C. Hoelbling and U. Wenger, Phys. Rev. D [**70**]{}, 094502 (2004). S. Dürr and C. Hoelbling, Phys. Rev. D [**71**]{}, 054501 (2005) \[arXiv:hep-lat/0411022\]. S. Dürr and C. Hoelbling, Phys. Rev.  D [**74**]{}, 014513 (2006) \[arXiv:hep-lat/0604005\]. A. Hasenfratz and R. Hoffmann, Phys. Rev.  D [**74**]{}, 014511 (2006) \[arXiv:hep-lat/0604010\]. Y. Shamir, Phys. Rev.  D [**75**]{}, 054503 (2007) \[arXiv:hep-lat/0607007\]. S. R. Sharpe, PoS [**LAT2006**]{}, 022 (2006) \[arXiv:hep-lat/0610094\]. G. Dahlquist and Å. Björck, *Numerical Methods*, 1st ed., Prentice-Hall, 1974. S. R. Beane [*et al.*]{}, arXiv:0905.0466 \[hep-lat\]. C. Michael, Nucl. Phys. B [**259**]{}, 58 (1985). M. Luscher and U. Wolff, Nucl. Phys.  B [**339**]{}, 222 (1990). W. Detmold and M. J. Savage, Nucl. Phys. A [**743**]{}, 170 (2004) \[arXiv:hep-lat/0403005\]. S. He, X. Feng and C. Liu, JHEP [**0507**]{}, 011 (2005) \[arXiv:hep-lat/0504019\]. S. R. Beane, P. F. Bedaque, K. Orginos and M. J. Savage \[NPLQCD Collaboration\], Phys. Rev.  D [**73**]{}, 054503 (2006) \[arXiv:hep-lat/0506013\]. K. Orginos and A. Walker-Loud, Phys. Rev.  D [**77**]{}, 094505 (2008) \[arXiv:0705.0572 \[hep-lat\]\]. H. W. Lin and K. Orginos, Phys. Rev.  D [**79**]{}, 034507 (2009) \[arXiv:0712.1214 \[hep-lat\]\]. G. Colangelo, J. Gasser and H. Leutwyler, Nucl. Phys.  B [**603**]{}, 125 (2001) \[arXiv:hep-ph/0103088\]. W. L. Wang, F. Huang, Z. Y. Zhang, Y. W. Yu and F. Liu, Eur. Phys. J.  A [**32**]{}, 293 (2007) \[arXiv:nucl-th/0612007\]. R. G. Edwards and B. 
Joo \[SciDAC Collaboration and LHPC Collaboration and UKQCD Collaboration\], Nucl. Phys. Proc. Suppl.  [**140**]{}, 832 (2005) \[arXiv:hep-lat/0409003\]. [^1]: Here the signal is the Monte Carlo estimate of the quantum correlation function evaluated on the lattice, while the noise represents the statistical fluctuations in the correlation function. [^2]: A recent high-statistics study of baryon correlation functions on anisotropic clover lattices has found that the exponential decay with time of signal/noise occurs only [*asymptotically*]{} in time, and therefore, the signal/noise problem in baryon correlation functions is not nearly as severe as previously thought [@Beane:2009ky]. [^3]: The $\pi^+\Xi^0$ and $\overline{K}{}^0\Sigma^+$ systems have the same quantum numbers, and therefore require a mixed channel analysis in order to extract the $\overline{K}{}^0\Sigma^+$ scattering length. This is discussed in Section \[sec:MChAm\]. [^4]: In order to be consistent with the meson-baryon literature, we have chosen to use the “particle physics” definition of the scattering length, as opposed to the “nuclear physics” definition, which is opposite in sign. [^5]: Computed by the LHP collaboration for the coarse ensembles. [^6]: Here we use the standard notation, LO = leading order, NLO = next-to-leading order and so on.
--- abstract: 'In this work, we present an effective field theory to describe a two-component Fermi gas near a $d$-wave interaction resonance. The effective field theory is renormalizable by matching with the low energy $d$-wave scattering phase shift. Based on the effective field theory, we derive universal properties of the Fermi gas by the operator product expansion method. We find that beyond the contacts defined by adiabatic theorems, the asymptotic expressions of the momentum distribution and the Raman spectroscopy involve two extra contacts which provide additional information about correlations of the system. Our formalism sets the stage for further explorations of many-body effects in a $d$-wave resonant Fermi gas. Finally we generalize our effective field theory for interaction resonances of arbitrary higher partial waves.' author: - Pengfei Zhang - Shizhong Zhang - Zhenhua Yu title: 'Effective theory and universal relations for Fermi gases near a $d$-wave interaction resonance ' --- *Introduction.* Correlations of $d$-wave symmetry are of fundamental interest in modern physics. One outstanding example is the $d$-wave Cooper pairing in high-$T_c$ superconductors which provides a paradigmatic case of strongly correlated electron systems [@HTC]. In cold atom systems, strong $d$-wave correlations can also be generated close to a $d$-wave Feshbach resonance, as has been demonstrated experimentally in Cr [@Pfau; @Gorceix]. While it is generally believed that, compared with $s$-wave resonances, atomic gases close to higher partial wave resonances suffer more rapid atom loss, recent spectroscopic measurements around a $p$-wave Feshbach resonance indicate that quasi-equilibrium states of such systems exist and their universal properties can be investigated [@Thywissen]. Theoretically, however, many-body physics with resonant $d$-wave interactions has rarely been studied and, in particular, an appropriate minimal model is still lacking. 
In this work, we consider a two-component Fermi gas near a $d$-wave interaction resonance. We construct an effective low-energy field theory, the bare coupling constants of which are renormalized by matching with the $d$-wave scattering phase shift $\cot\delta(k)=-1/(Dk^5)-1/(vk^3)-1/(Rk)$. The super volume $D$, the effective volume $v$ and the effective range $R$ are the minimal set of parameters that is needed to parametrize the inter-fermion interactions. Furthermore, we use the effective theory, combined with the operator product expansion (OPE) method, to derive universal properties of the Fermi gas when the average inter-particle distance is much larger than the range $r_0$ associated with the inter-fermion interaction. We find that the universal behaviour of the system is governed by five quantities, three of which are related to the variation of the system energy with respect to the three $d$-wave scattering parameters, analogous to the contacts defined in the $s$- and $p$-wave cases [@Tan2008; @Braaten2008a; @Zhang2009; @Werner2009; @Braaten2010; @Valiente2011; @Valiente2012; @Ueda2015; @Yu2015; @Qi2016; @Cui2016]. However, we find that the sub-leading terms of the tails of the momentum distribution and Raman spectroscopy involve two new contacts, which further characterise the correlations of the system at short distances. Our effective field theory provides a minimal model for studying other many-body physics of Fermi gases near a $d$-wave resonance. We show that the $d$-wave contacts reveal much richer correlation structures than the $s$-wave case. Finally we generalize our formalism for resonant interactions to arbitrary higher partial waves. *Effective field theory.* To describe the low energy degrees of freedom close to a $d$-wave interaction resonance, we adopt a Lagrangian field theory and require that the Lagrangian density obey the following symmetry requirements: (1) Rotation symmetry. 
(2) Galilean invariance such that the scattering of two fermions in vacuum does not depend on their center of mass momentum. In addition, we aim to establish a *local* effective field theory, which should be *renormalizable* in the low energy limit in terms of the minimal set of scattering parameters $D,v,R$, describing the $d$-wave scattering phase shift. The Lagrangian density of the effective field theory that we construct for the system up to a momentum cutoff $\Lambda$ is given by $$\begin{aligned} \mathcal{L}= &\sum_{i=1}^{2}\psi_{i}^{\dagger}\left(i\partial_{t}+\dfrac{\nabla^{2}}{2M}\right)\psi_{i}+\sum_{m=-\ell}^{\ell}\bar g(d_{\ell m}^{\dagger}\mathcal Y_{m}+h.c.) \nonumber\\ +&\eta\sum_{m=-\ell}^{\ell} d_{\ell m}^{\dagger}\left[i\partial_{t}+\dfrac{\nabla^{2}}{4M}+\bar z\left(i\partial_{t}+\dfrac{\nabla^{2}}{4M}\right)^{2}-\bar\nu\right]d_{\ell m} \label{L}\end{aligned}$$ where $\ell=2$ and the operator $\mathcal Y_{m}$ is given by $$\begin{aligned} \mathcal Y_{m}=\dfrac{1}{4}\sum_{a,b=x,y,z}&C^{m}_{ab}[(\partial_{a}\psi_{1})(\partial_{b}\psi_{2})-(\partial_{a}\partial_{b}\psi_{1})\psi_{2}\notag \\ &+(\partial_{b}\psi_{1})(\partial_{a}\psi_{2})-\psi_{1}(\partial_{a}\partial_{b}\psi_{2})]. \label{Y}\end{aligned}$$ The field operator $\psi_{i}$ is the annihilation operator for fermions in state $|i\rangle$. $M$ is the mass of the fermions. We take $\hbar=1$ throughout. The dimer fields $d_{\ell m}$ of azimuthal quantum number $m$ mediate the $d$-wave interaction between the two fermions, which we assume to be isotropic. $C^{m}_{ab}$ are the Clebsch-Gordan coefficients when transforming $k_{i}k_{j}/k^{2}$ to the spherical harmonics $\sqrt{4\pi}Y_{2m}(\hat{k})$. In terms of $a_{i,\mathbf k}$ and $b_{\ell m,\mathbf k}$, the Fourier transforms of the operators $\psi_{i}$ and $d_{\ell m}$, the fermion-dimer coupling in the Lagrangian $L=\int d{\bf r}\mathcal{L}$ \[the second term in Eq. 
(\[L\])\] takes the form $$\begin{aligned} L_{fd}=\bar g\sqrt{\frac{4\pi}V}\sum_{m=-\ell}^\ell\sum_{\mathbf p,\mathbf k} [k^\ell Y_{\ell m}(\hat k)b^\dagger_{\ell m,\mathbf p}a_{1,\frac{\mathbf p}{2}+\mathbf k} a_{2,\frac{\mathbf p}{2}-\mathbf k} +h.c.]\label{Lfd},\end{aligned}$$ where $V$ is the volume of the system. Since we focus on the effects of the $d$-wave resonance, we neglect possible background scatterings of either $s$- or $p$-wave symmetry, and those due to direct couplings between the fermions. The term proportional to $\eta=\pm 1$ describes the energy of a single dimer, with $\bar\nu$ being its detuning. Unlike the case for $p$-wave scattering, an extra term proportional to the bare coupling constant $\bar z$ is constructed in order to renormalize the effective range $R$ \[see Eq. (\[r\])\], while still respecting the Galilean invariance. As will be shown later, it is necessary to take $\eta=-1$ in order to achieve a renormalizable theory. The effective field theory in Eq. (\[L\]) differs from that for the $s$-wave and $p$-wave resonance models, and it is worthwhile to point out the differences. In the $s$-wave case, Kaplan was the first to use an $s$-wave dimer field $b_{00,\mathbf k}$ to describe the non-relativistic scattering between nucleons with a large $s$-wave ($\ell=0$) scattering length $a_s$ [@Kaplan1997]. In this case, the zero-range limit $\Lambda\to\infty$ is well defined with the choice $\eta=1$ and $\bar{z}=0$ by matching the scattering matrix with the $s$-wave phase shift expansion $k\cot\delta_s(k)=-1/a_s$. The same resonance model was constructed independently by Kokkelmans *et al.* for atoms close to an $s$-wave Feshbach resonance [@Kokkelmans2002], for which the dimer field $b_{00,\mathbf k}$ naturally represents the closed channel molecules. Different from the $s$-wave case, low-energy scattering in the $p$-wave channel is described by two parameters, $k^3\cot\delta_p(k)=-1/v_p-k^2/R_p$ [@Braaten2012p]. 
Here $v_p$ is the $p$-wave scattering volume and $R_p$ is the $p$-wave effective range. In this case, however, to obtain a renormalizable theory with finite $v_p$ and $R_p$ in the low energy limit, one has to take $\eta=-1$. This means that the free dimer field $b_{1m,\mathbf k}$ becomes a [*ghost*]{} field with negative norm [@Braaten2012p]. However, such negative norm is only relevant at a much higher energy, of order $\Lambda^2$, which is irrelevant for the low-energy physics described by the scattering phase shift $\delta_p(k)$. For the $d$-wave interaction resonance, it is first important to note that the low-energy scattering phase shift must be retained up to order $k^4$, namely $k^5\cot\delta_d(k)=-1/D-k^2/v-k^4/R$; the three interaction parameters $D$, $v$ and $R$ are the minimal set. This is because across the resonance, while the magnitude of $D$ can be tuned to be much larger than $r_0^5$, with $r_0$ the interaction range, $v/r_0^3$ and $R/r_0$ are typically of order unity. Taking the limit $v\to0$ and/or $R\to0$ would lead to the noninteracting limit, i.e., $\delta(k)\to0$, which cannot describe the original interacting system. In contrast, it is safe to take the zero limit of the expansion coefficients of order higher than $k^4$ in $k^5\cot\delta(k)$. Now, we note that in Eq. (\[L\]), the term $d_{\ell m}^{\dagger}(i\partial_{t})d_{\ell m}$ corresponds to the total energy of two scattering fermions, and the term $d_{\ell m}^{\dagger}(-\nabla^2/4M)d_{\ell m}$ corresponds to the center of mass energy. The combination $d_{\ell m}^{\dagger}(i\partial_{t}+\nabla^2/4M)d_{\ell m}$ thus corresponds to the relative scattering energy. As a result, we explicitly construct the extra term $\bar{z} d_{\ell m}^{\dagger}(i\partial_{t}+\nabla^2/4M)^2d_{\ell m}$ in Eq. (\[L\]) to match the $k^4$-dependence of $k^5\cot\delta_d(k)$ for $d$-wave resonances. Note that by construction, the Lagrangian Eq. (\[L\]) maintains explicitly the Galilean invariance. 
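The parameter counting above can be made concrete with a short numerical sketch. The script below (our own illustration, with illustrative values in units where $r_0=1$) evaluates the truncated phase shift $k^5\cot\delta_d(k)=-1/D-k^2/v-k^4/R$ and checks the claim that, close to resonance ($1/D\to0$), sending $v\to0$ drives $\delta_d(k)\to0$, i.e. toward the noninteracting limit:

```python
import numpy as np

def cot_delta_d(k, D, v, R):
    # truncated d-wave expansion: k^5 cot(delta_d) = -1/D - k^2/v - k^4/R
    return (-1.0/D - k**2/v - k**4/R) / k**5

def delta_d(k, D, v, R):
    # phase shift modulo pi; small whenever |cot(delta_d)| is large
    return np.arctan(1.0 / cot_delta_d(k, D, v, R))

k = 0.1  # momentum well below the interaction scale 1/r0 = 1
for v in (1.0, 1e-2, 1e-4):                       # shrinking the effective volume v ...
    print(v, abs(delta_d(k, D=1e6, v=v, R=1.0)))  # ... suppresses the phase shift
```

The same suppression occurs for $R\to0$, through the $k^4/R$ term.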
The renormalizability of Eq. (\[L\]) is manifested by calculating the $T$-matrix, $T({\bf P},{\bf k},{\bf k}',\Omega)$, of scattering between two fermions with relative incoming (outgoing) momentum $2{\bf k}$ ($2{\bf k}'$) and total momentum ${\bf P}$. Due to the Galilean invariance of Eq. (\[L\]), one only needs to calculate in the center of mass frame, and the $T$-matrix is given by $$T_m({\bf 0}, {\mathbf{k}},{\mathbf{k'}}, \Omega)=-4\pi\bar g^2 k^4Y_{2m}(\hat{\bf k})Y^*_{2m}(\hat{\bf k}')\mathcal D({\bf 0},\Omega),$$ where $|{\bf k}|=|{\bf k}'|$ due to energy conservation and $\hat{\bf k}={\bf k}/|{\bf k}|$ and $\hat{\bf k}'={\bf k'}/|{\bf k'}|$. $\mathcal{D}({\bf P},\Omega)$ is the full dimer propagator, given in Fig. \[dia1\](a) $$\begin{aligned} &\mathcal D^{-1}({\bf P},\Omega)\nonumber\\ &=\bar{\mathcal{D}}^{-1}({\bf P},\Omega)-\frac{\bar g^2}{2\pi^2}\int_0^\Lambda dq\frac{q^6}{\Omega-P^2/4M-q^2/M},\end{aligned}$$ where $\bar{\mathcal{D}}({\bf P},\Omega)$ is the bare dimer propagator given by $$\begin{aligned} \bar{\mathcal{D}}(P,\Omega)=\frac{E_{p,+}-E_{p,-}}{\eta\bar z}\left(\frac1{\Omega-E_{p,+}}-\frac1{\Omega-E_{p,-}}\right),\end{aligned}$$ with $E_{p,\pm}=P^2/4M-(1\mp\sqrt{1+4\bar\nu\bar z})/2\bar z$ the dimers’ normal mode energies. In the case $1+4\bar\nu\bar z>0$, there always exists one branch of $\bar{\mathcal{D}}(P,\Omega)$ with negative weight corresponding to the presence of *ghost* fields [@Braaten2012], [*irrespective*]{} of the sign of $\eta$. The appearance of ghost fields is inevitable due to the requirement to renormalize not only $v$ but also $R$ for $d$-wave interactions \[see Eqs. (\[v\]) and (\[r\])\] [@Braaten2012p]. In the case $1+4\bar\nu\bar z<0$, the poles of $\bar{\mathcal{D}}(P,\Omega)$ move away from the real axis into the complex plane, which by itself seems problematic. However, the low energy observables predicted by the full coupled effective field theory remain valid (see below). 
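The negative-weight branch can be exhibited directly from the partial-fraction form of $\bar{\mathcal D}$ above. The snippet below (our own sketch, with illustrative bare parameters satisfying $1+4\bar\nu\bar z>0$ and $M=1$, $P=0$) computes the normal-mode energies $E_{p,\pm}$ and the weights of the two poles, confirming that exactly one weight is negative for either sign of $\eta$:

```python
import math

def dimer_poles(nu, z, eta, P=0.0, M=1.0):
    """Normal-mode energies E_{p,+/-} and pole weights of the bare dimer
    propagator, following the partial-fraction form given in the text."""
    disc = math.sqrt(1.0 + 4.0*nu*z)          # requires 1 + 4*nu*z > 0
    E_plus = P**2/(4*M) - (1.0 - disc)/(2*z)
    E_minus = P**2/(4*M) - (1.0 + disc)/(2*z)
    w_plus = (E_plus - E_minus)/(eta*z)       # weight of the pole at E_plus
    return (E_plus, w_plus), (E_minus, -w_plus)

for eta in (+1, -1):
    (Ep, wp), (Em, wm) = dimer_poles(nu=1.0, z=1.0, eta=eta)
    print(eta, wp, wm)   # one of the two weights is negative for either eta
```

Since the two residues are equal and opposite, one branch is always a ghost, regardless of $\eta$, as stated above.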
In Table \[tab\], we summarize the main differences between our $d$-wave effective field theory and the $s$- and $p$-wave cases. [m[1.2cm]{} m[0.8 cm]{} m[2 cm]{} m[0.8 cm]{} m[0.8 cm]{} m[1.8 cm]{}]{} & $\ell$ & minimal parameters & $\eta$ & $\bar{z}$ & [ghost field]{}\ [$s$-wave]{} & $0$ & $a_s$ & $1$ & 0 & No\ [$p$-wave]{} & $1$ & $v_p, R_p$ & $-1$ & 0 & Yes\ [$d$-wave]{} & $2$ & $D, v, R$ & $-1$ & $\neq 0$ & Yes\ Matching $T_m({\bf 0}, k\hat{{\bf k}},k\hat{{\bf k}}', k^2/M+i0)$ with $\cot\delta_d(k)=-1/Dk^5-1/vk^3-1/Rk$ in the limit $k\to 0$, we find the renormalization conditions: $$\begin{aligned} \frac1{D} &=-\eta\frac{4\pi \bar\nu}{\bar g^2 M}+\frac{2\Lambda^5}{5\pi},\label{d}\\ \frac1{v} &=\eta\frac{4\pi }{\bar g^2 M^2}+\frac{2\Lambda^3}{3\pi},\label{v}\\ \frac1{R} &=\eta\frac{4\pi \bar z}{\bar g^2 M^3}+\frac{2\Lambda}\pi.\label{r}\end{aligned}$$ To keep values of $D$, $v$ and $R$ finite while taking the limit $\Lambda\to\infty$, we require $\eta=-1$. Otherwise, if $\eta=1$, Eq. (\[v\]) gives $|v|<3\pi/2\Lambda^3$, which approaches zero as $\Lambda\to\infty$. In fact, it turns out to be impossible to construct a purely fermionic model with contact inter-fermion interactions which reproduces the correct $d$-wave low energy scattering amplitude with finite parameters $v$ and $R$ in the limit $\Lambda\to\infty$. Thus it is crucial to introduce the dimer field with the concomitant appearance of the ghost field which, however, does not alter the low energy physics. The applicable regime of our effective field theory can be analysed from the pole structure of $T_m$ in terms of the renormalized parameters $$\begin{aligned} &T_m({\bf 0}, k{\mathbf{\hat k}},k{\mathbf{\hat k'}}, \Omega)\nonumber\\ &=-\frac{16\pi^2 k^4Y_{2m}(\mathbf{\hat k})Y^*_{2m}(\mathbf{\hat k'})/M}{1/D+M\Omega/v+(M\Omega)^2/R+i(M\Omega)^{5/2}}.\label{rt}\end{aligned}$$ For simplicity, let us consider the limit $1/D\to 0^+$. 
The real pole of $T_m$ at $\Omega\to0^-$ with positive weight $\sim v$ corresponds to a physical two-fermion bound state approaching threshold. However, since typically $v\sim r_0^3$ and $R\sim r_0$, there are other complex poles at energies $|\Omega|\sim1/Mr_0^2$, which apparently violate the unitarity of the $S$-matrix. The origin of these unphysical poles is the truncation of $\cot\delta_d(k)$. However, as long as we are only interested in energy scales much smaller than $1/Mr_0^2$, as we are in the following, our effective field theory should give physically valid results. ![Feynman diagrams for: (a) the $T$-matrix for two fermions; (b) the matrix element of $\psi^{\dagger}_i(\mathbf{R}+\mathbf{r}/{2})\psi_i(\mathbf{R}-\mathbf{r}/{2})$; (c) the matrix element of dimer bilinears; (d) the diagram for the Raman spectrum. In these diagrams, the wavy lines represent the propagators for the bare dimer fields, the solid lines represent the propagators for the bare fermion fields and the crosses represent the operators which are inserted.[]{data-label="dia1"}](diagram1.pdf){width="3"} *$D$-wave contacts.* Effective field theory has served as an ideal formalism to elucidate the universal aspects of quantum gases [@Nishida2006; @Braaten2008b]; in particular, it enables the derivation of universal relations involving the so-called contacts using the operator product expansion (OPE) [@Braaten2008a; @Braaten2010; @Wilson; @Braaten2008b; @Braaten2008c; @Son2010; @Braaten2011; @Hofmann2011; @Zwerger2011; @Braaten2012; @Goldberger2012]. This is an operator relation for the product of two operators at small separation [@Wilson; @Peskin] $$\begin{aligned} {O}_{i}\left(\mathbf{R}+\dfrac{\mathbf{r}}{2}\right){O}_{j}\left(\mathbf{R}-\dfrac{\mathbf{r}}{2}\right) =\sum_{l} f^{ij}_{l}(\mathbf{r}){O}_{l}(\mathbf{R}) \label{ope} \end{aligned}$$ where $O_i$ are the [*local*]{} operators and $f^{ij}_l(\mathbf r)$ are the expansion functions. 
A similar expansion can also be carried out in the time domain. OPE is an ideal tool to explore short-range physics, $r_0\ll r\ll n^{-1/3}$ in a field theory context. Here $n$ is the average density. In the case of $d$-wave interactions, we first define three contact densities (operators) as the derivatives of the Lagrangian density $\mathcal L$ with respect to $D^{-1}$, $v^{-1}$ and $R^{-1}$, by using Eqs. (\[d\]) to (\[r\]) $$\begin{aligned} \frac{\hat{\mathcal C}_D}{M} &\equiv\frac{\delta\mathcal L}{\delta (D^{-1})}=\frac{M\bar g^2}{4\pi} \sum_m d^\dagger_{\ell m} d_{\ell m},\label{cd}\\ \frac{\hat{\mathcal C}_v}{M} &\equiv \frac{\delta\mathcal L}{\delta (v^{-1})} =\frac{M^2\bar g^2}{4\pi}\sum_m d^\dagger_{\ell m}\left(i\partial_t+\frac{\nabla^2}{4M}\right) d_{\ell m},\label{cv}\\ \frac{\hat{\mathcal C}_R}{M} &\equiv \frac{\delta\mathcal L}{\delta (R^{-1})}=\frac{M^3\bar g^2}{4\pi}\sum_m d^\dagger_{\ell m}\left(i\partial_t+\frac{\nabla^2}{4M}\right)^2 d_{\ell m}.\label{cr}\end{aligned}$$ Note that we have used the equation of motion satisfied by $d_{\ell m}$ to obtain the concise expression of Eq. (\[cv\]). While $\hat{\mathcal C}_D$ is proportional to the total dimer density, $\hat{\mathcal C}_v$ and $\hat{\mathcal C}_R$ can be considered as proportional to the ones weighted by the powers of the internal energy of the dimers. A similar structure has been found for $p$-wave contacts [@Yu2015]. In addition, as we will see from the tails of the momentum distribution and the Raman spectroscopy, it is also useful to introduce two extra $d$-wave contact densities as $$\begin{aligned} \frac{\hat{\mathcal C}_{D,P}}{M} &\equiv\frac{M^2\bar g^2}{4\pi}\sum_m d^\dagger_{\ell m}\left(-\frac{\nabla^2}{4M}\right) d_{\ell m}\label{cdp},\\ \frac{\hat{\mathcal C}_{v,P}}{M} &\equiv\frac{M^3\bar g^2}{4\pi}\sum_m d^\dagger_{\ell m}\left(i\partial_t+\frac{\nabla^2}{4M}\right)\left(-\frac{\nabla^2}{4M}\right) d_{\ell m}\label{cvp},\end{aligned}$$ which, compared with Eqs. 
(\[cd\]) and (\[cv\]), are further weighted by the kinetic energy of the dimers, and encapsulate additional information about correlations at short distances. The spatial integrals of the expectation values of the contact densities are defined as the $d$-wave contacts: $C_D=\int d\mathbf r \langle\hat{\mathcal C}_D\rangle$, $C_v=\int d\mathbf r \langle\hat{\mathcal C}_v\rangle$, $C_R=\int d\mathbf r \langle\hat{\mathcal C}_R\rangle$, $C_{D,P}=\int d\mathbf r\langle\hat{\mathcal C}_{D,P}\rangle$, and $C_{v,P}=\int d\mathbf r \langle\hat{\mathcal C}_{v,P}\rangle$. From Eqs. (\[cd\]-\[cr\]), one can write down the adiabatic theorems, $$\frac{\partial F}{\partial \alpha^{-1}}=-\frac{C_\alpha}{M};~~\alpha=D, v, R,\label{df}$$ where $F$ is the free energy of the system. To illustrate the use of the effective field theory, we now derive some universal relations between the introduced contacts and various physical observables. *Short distance expansion.* The tails of the momentum distribution can be extracted from the one-body density matrix $\rho_i(\mathbf R,\mathbf r)=\langle\psi_i^{\dagger}(\mathbf{R}+\mathbf{r}/2)\psi_i(\mathbf{R}-\mathbf{r}/2)\rangle$ and can be measured experimentally by the time-of-flight technique [@Jin2010; @Jin2014]. To relate $\rho_i(\mathbf R,\mathbf r)$ to the $d$-wave contacts, we calculate the OPE by matching the matrix elements of operators from an incoming state $|I\rangle$ with two fermions of different species having momentum $\mathbf P/2+k \hat{\mathbf{k}}$ and $\mathbf P/2-k \hat{\mathbf{k}}$ to an outgoing state $|F\rangle$ with two fermions having momentum $\mathbf P/2+k \hat{\mathbf{k}}'$ and $\mathbf P/2-k \hat{\mathbf{k}}'$. The total energy of the fermion pair is $E=P^2/4M+k^2/M$. Since we are interested in the rotationally invariant case, we will average over the direction of the total momentum $\mathbf P$. The case without rotational invariance can be calculated similarly. 
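As a toy illustration of the adiabatic theorem (\[df\]), consider a single shallow dimer near resonance. Its binding energy $E_B=\kappa^2/M$ follows from the pole of the renormalized $T$-matrix (\[rt\]): on the bound-state branch $\sqrt{M\Omega}=i\kappa$ the denominator vanishes when $-\kappa^5+\kappa^4/R-\kappa^2/v+1/D=0$, so $\kappa^2\approx v/D$ near threshold, and with $F=-E_B$ the theorem predicts $C_D=M\,\partial E_B/\partial D^{-1}\approx v$. The sketch below (our own numerical check, in illustrative units $\hbar=M=r_0=1$) verifies this by finite differences:

```python
import numpy as np

v, R, M = 1.0, 1.0, 1.0      # illustrative units (hbar = r0 = 1)

def binding_energy(inv_D):
    # bound-state condition: -kappa^5 + kappa^4/R - kappa^2/v + 1/D = 0;
    # the shallow dimer is the smallest positive real root kappa
    coeffs = [-1.0, 1.0/R, 0.0, -1.0/v, 0.0, inv_D]
    roots = np.roots(coeffs)
    kappa = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0.0)
    return kappa**2 / M

x, dx = 1e-6, 1e-8           # 1/D close to resonance, finite-difference step
# adiabatic theorem with F = -E_B:  C_D = -M dF/d(1/D) = M dE_B/d(1/D)
C_D = M * (binding_energy(x + dx) - binding_energy(x - dx)) / (2.0*dx)
print(C_D)                   # close to v for a shallow dimer
```

The choice of the bound-state branch of $(M\Omega)^{5/2}$ and the identification $C_D\approx v$ near threshold are our own reading of Eq. (\[rt\]) together with Eq. (\[df\]), not results quoted from the text.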
The matrix element of $\rho_i$ is given by the diagram shown in Fig. \[dia1\](b) and the result is $$\begin{aligned} & \langle F|\rho_i(\mathbf R,\mathbf r)|I\rangle=4\pi M^2\bar g^{4}k^4 \sum_{m}Y_{2m}(\hat{\mathbf k})Y^{*}_{2m}(\hat{\mathbf k}')\mathcal D^{2}(P,E)\nonumber\\ &\times\left[\delta(\mathbf{r})+\dfrac{k^2}{2\pi r}-\dfrac{3r(k^4+P^2k^2/18)}{8\pi}\right]+{\rm const.}+o(\mathbf r).\label{rho}\end{aligned}$$ Likewise, we calculate the matrix elements of the contact densities according to the diagrams shown in Fig. \[dia1\](a, b). We find $$\begin{aligned} \langle F|\hat{\mathcal C}_D|I\rangle &=M^2\bar g^{4}k^4\sum_m Y_{2m}(\hat{\mathbf k})Y^{*}_{2m}(\hat{\mathbf k}')\mathcal D^{2}(P,E),\label{ncd}\\ \langle F|\hat{\mathcal C}_v|I\rangle &=k^2\langle F|\hat{\mathcal C}_D|I\rangle,\label{ncv}\\ \langle F|\hat{\mathcal C}_R|I\rangle &=k^4\langle F|\hat{\mathcal C}_D|I\rangle,\label{ncr}\\ \langle F|\hat{\mathcal C}_{D,P}|I\rangle &=P^2\langle F|\hat{\mathcal C}_D|I\rangle/4,\label{ncdk}\\ \langle F|\hat{\mathcal C}_{v,P}|I\rangle &=P^2k^2\langle F|\hat{\mathcal C}_D|I\rangle/4.\label{ncvk}\end{aligned}$$ After Fourier transforming Eq. (\[rho\]) and matching with Eqs. (\[ncd\]) to (\[ncvk\]), we find that the momentum distribution $n_i(\mathbf q)$ of the $i$th species has a tail in the large $q$-limit ($n^{1/3}\ll q\ll 1/r_0$) $$\begin{aligned} n_{i}(\mathbf q)=\frac{1}{V}\left[\dfrac{C_{D}}{2\pi^{2}}+\dfrac{C_{v}}{\pi^{2}q^{2}}+\dfrac{9C_{R}+2C_{v,P}}{6\pi^{2}q^{4}}\right],\label{tail}\end{aligned}$$ whose magnitude depends on the $d$-wave contact densities. The presence of the additional quantity $C_{v,P}$, which can not be derived from the adiabatic theorems (\[df\]), in the momentum tail can be understood in the following way. Let us consider a single pair of interacting fermions. In the center of mass frame of the pair where $C_{v,P}$ is zero according to Eqs. 
(\[cvp\]) and (\[tail\]), the momentum tail $n_{\rm com}(\mathbf q)$ involves only $C_\alpha$ for $\alpha=D,v,R$. However, when we switch to a reference frame moving with a relative velocity $\mathbf u$, the momentum tail of the pair in this new frame should be $n(\mathbf q)=n_{\rm com}(\mathbf q-m\mathbf u)$. Expansion of $n_{\rm com}(\mathbf q-m\mathbf u)$ to order $1/q^4$ leads to an extra term $\sim u^2 C_v$ in $n(\mathbf q)$, which is exactly the generally nonzero $C_{v,P}$ term in Eq. (\[tail\]) in this case. Note that Galilean invariance guarantees that $C_D$ and $C_v$ have the same values in different reference frames \[cf. Eq. (\[df\])\]. Quantities similar to $C_{v,P}$ have been introduced for $p$-wave interactions in three dimensions [@Yu2015; @Peng2016x; @Yi2016x]. The tails of the momentum distribution $n_{i}(\mathbf q)$ seem to yield a divergent number of fermions. Actually, by the $U(1)$ gauge invariance of Eq. (\[L\]), the conserved total particle number is given by $$\begin{aligned} \hat{N}=&\int d\mathbf r\Big\{\sum _{i=1,2}\psi_{i}^{\dagger}\psi_{i}\nonumber\\ -&\sum_{m}(d_{m}^{\dagger}\left[1+\bar z \left(2i\partial_{t}+{\nabla^2}/{2M}\right)\right]d_{m}+h.c.)\Big\}.\end{aligned}$$ Using the renormalization relations (\[d\]), (\[v\]) and (\[r\]), one can verify that the divergent part of $n_{i}(\mathbf q)$ at large $q$ is cancelled by the dimer terms; the dimer terms can be considered as counterterms to the fermion densities. Note that the factor $\bar z \left(2i\partial_{t}+{\nabla^2}/{2M}\right)$ is due to the expansion of the bare dimer fields in terms of their normal modes. *Short distance and time expansion.* The single-particle spectral function, which reveals fundamental properties of an interacting many-body system, such as pairing and pseudo-gap phenomena, can be measured using Raman spectroscopy in atomic gases [@Gaebler2010; @Feld2011]. 
When two Raman lasers of frequency $\omega_1$ and $\omega_2$ and wave-vector $\mathbf k_1$ and $\mathbf k_2$ are applied, atoms can be excited from the initial internal state $|2\rangle$ to the final internal state $|3\rangle$ by absorbing energy $\omega=|\omega_1-\omega_2|$ and momentum $\mathbf q=\mathbf k_1-\mathbf k_2$. The resultant number of atoms transferred to state $|3\rangle$ is, by the Fermi golden rule, proportional to the rate $$\begin{aligned} I_{\rm Ra}(\mathbf{q},\omega)=&-\dfrac{1}{\pi}{\rm Im}\Pi_{\rm Ra}(\mathbf{q},\omega), \\ \Pi_{\rm Ra}(\mathbf{q},\omega)=&-iV\int dtd\mathbf{r} \,e^{i\omega t-i\mathbf{q}\cdot\mathbf{r}}\langle T \mathcal Q_{23}(\mathbf{r},t)\mathcal Q_{23}^{\dagger}(\mathbf{0},0)\rangle,\end{aligned}$$ with $\mathcal Q_{23}(\mathbf{r},t)\equiv \psi^{\dagger}_{3}(\mathbf{r},t)\psi_{2}(\mathbf{r},t)$. By calculating the OPE of $\mathcal Q_{23}(\mathbf{r},t)\mathcal Q_{23}^{\dagger}(\mathbf{0},0)$ in both the time and space domain, we find for $\omega>\epsilon_q\equiv q^2/2M$: $$\begin{aligned} &\frac\pi M I_{\rm Ra}(\mathbf{q},\omega) ={\left(M\omega-\dfrac{q^2}{4}\right)^{1/2}{C}_{D}}-\dfrac{q^2C_{D,P}}{3(4M\omega-q^2)^{3/2}}\nonumber \\ &+\left[\dfrac{q}{\sqrt{4M\omega-q^2}}+4\sinh^{-1}\left(\dfrac{q}{\sqrt{4M\omega-2q^2}}\right)\right]\frac{{C}_{v}}q\nonumber \\ &+\dfrac{2q^2(7q^4-40q^2M\omega+60M^2\omega^2)}{3(2M\omega-q^2)^2(4M\omega-q^2)^{5/2}}C_{v,P}\nonumber \\&+\dfrac{q^4-20q^2M\omega+60M^2\omega^2}{(2M\omega-q^2)^2(4M\omega-q^2)^{3/2}}{C}_{R}.\label{ra}\end{aligned}$$ For $\epsilon_q>\omega>\epsilon_q/2$, $I_{\rm Ra}(\mathbf{q},\omega)$ is given by Eq. (\[ra\]) with the factor $\sinh^{-1}[q/\sqrt{4M\omega-2q^2}]$ replaced by $\cosh^{-1}[q/\sqrt{-4M\omega+2q^2}]$. $I_{\rm Ra}(\mathbf{q},\omega)=0$ when $\omega<\epsilon_q/2$. In the limit $q\to 0$, $I_{\rm Ra}(\mathbf{0},\omega)$ gives the radio-frequency response and involves only $C_v, C_D$ and $C_R$. The presence of $C_{D,P}$ and $C_{v,P}$ in Eq. 
(\[ra\]) can also be understood from a Galilean covariance argument similar to the one given below Eq. (\[tail\]). [*Discussion*]{}. The construction of the effective field theory Eq. (\[L\]) for $d$-wave resonance suggests a general procedure for resonances of arbitrary higher partial waves. For a two-component Fermi gas with short-range interactions, the phase shift in the $\ell$-th scattering channel can be written as $k^{2\ell+1}\cot\delta_\ell(k)=-\sum_{\alpha=0}^{\ell} k^{2\alpha}/a_{\ell\alpha}+O(k^{2\ell+2})$ in the low energy limit. To reproduce the phase shift, we only need to generalize the dimer field term in Eq. (\[L\]) to $$\mathcal{L}_d=\sum_{m=-\ell}^{\ell} \sum_{\alpha=0}^{\ell}d_{\ell m}^{\dagger}\bar z_{\ell \alpha}\left(i\partial_{t}+\dfrac{\nabla^{2}}{4M}\right)^{\alpha}d_{\ell m},$$ and assume $L_{fd}$ to be of the form of Eq. (\[Lfd\]) with the factor $\bar g\sqrt{4\pi/V}$ replaced by $4\pi/\sqrt{MV}$, which amounts to a rescaling of the dimer field $d_{\ell m}$. The relation between the parameters $\{\bar z_{\ell \alpha}\}$ and the physical scattering parameters $\{a_{\ell\alpha}\}$ can be established similarly by matching the scattering $T$-matrix to that of $k^{2\ell+1}\cot\delta_\ell(k)$. One finds $$\begin{aligned} \frac1{a_{\ell\alpha}}=\bar z_{\ell\alpha}M^\alpha+\frac2{\pi}\frac{\Lambda^{2(\ell-\alpha)+1}}{2(\ell-\alpha)+1},\end{aligned}$$ for $0\le\alpha\le\ell$. For fixed $\{a_{\ell\alpha}\}$, the zero range limit $\Lambda\to\infty$ is attainable only if $\bar z_{\ell\alpha}$ are all negative. Our formalism sets the stage for the exploration of universal aspects of both few-body and many-body physics close to a higher partial wave resonance. Further important questions remain to be investigated, including the effects of long-range and multi-body interactions. *Acknowledgment.* We thank Hui Zhai, Ling-Fong Li, Zheyu Shi and Yusuke Nishida for helpful discussions. 
This work is supported by Tsinghua University Initiative Scientific Research Program, NSFC Grant No. 11474179. SZ is supported by Hong Kong Research Grants Council (General Research Fund, HKU 17306414 and Collaborative Research Fund, HKUST3/CRF/13G) and the Croucher Innovation Awards.
--- abstract: | Cameron-Liebler sets of $k$-spaces were introduced recently in [@Ferdinand.]. We list several equivalent definitions for these Cameron-Liebler sets, generalizing known results about Cameron-Liebler line sets in $\operatorname{PG}(n,q)$ and Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(2k+1,q)$. We also present a classification result.\ author: - 'A. Blokhuis, M. De Boeck, J. D’haeseleer' title: 'Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$' --- **Keywords**: Cameron-Liebler set, Grassmann graph. **MSC 2010 codes**: 51E20, 05B25, 05E30, 51E14, 51E30. Introduction ============ In [@CL] Cameron and Liebler introduced specific line classes in $\operatorname{PG}(3,q)$, while investigating the orbits of the projective group $\operatorname{PGL}(n+1,q)$. These line sets $\mathcal{L}$ have the property that every line spread $\mathcal{S}$ in $\operatorname{PG}(3,q)$ has the same number of lines in common with $\mathcal{L}$. Many equivalent definitions for these sets of lines are known. An overview of the equivalent definitions can be found in [@phdDrudge Theorem $3.2$].\ After a large number of results regarding Cameron-Liebler sets of lines in the projective space $\operatorname{PG}(3, q)$, Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(2k+1, q)$ [@CLkclas] and Cameron-Liebler line sets in $\operatorname{PG}(n, q)$ [@phdDrudge] were defined. In addition, this research motivated the definition and investigation of Cameron-Liebler sets of generators in polar spaces [@CLpol] and Cameron-Liebler classes in finite sets [@CLset]. In fact, Cameron-Liebler sets can be introduced for any distance-regular graph. This has been done in the past under various names: Boolean degree $1$ functions, completely regular codes of strength $0$, ... We refer to the introduction of [@Ferdinand.] for an overview. Note that the definitions do not always coincide, e.g. 
for polar spaces.\ One of the main reasons for studying Cameron-Liebler sets is that there are several equivalent definitions for them, some algebraic, some geometrical (combinatorial) in nature. In this paper we investigate Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$. In Section $2$ we give several equivalent definitions for these Cameron-Liebler sets of $k$-spaces. Several properties of these Cameron-Liebler sets are given in the third section.\ The main question, independent of the context where Cameron-Liebler sets are investigated, is always the same: for which values of the parameter $x$ do Cameron-Liebler sets exist, and what are the examples corresponding to a given parameter $x$?\ For the Cameron-Liebler line sets, classification results and non-trivial examples were discussed in [@CL6; @CL; @CL10; @phdDrudge; @CL19; @CL191; @CL20; @CL21; @CL22; @CL25; @CL26; @CL33]. The strongest classification result is given in [@CL26], which proves that there exists a constant $c>0$ such that there are no Cameron-Liebler line sets in $\operatorname{PG}(3,q)$ with parameter $2<x<cq^{4/3}$. In [@phdDrudge; @CL6; @CL19; @CL20] the constructions of two non-trivial Cameron-Liebler line sets with parameters $x=\frac{q^2+1}{2}$ and $x=\frac{q^2-1}{2}$ were given. Classification results for Cameron-Liebler sets of generators in polar spaces were given in [@CLpol], and for Cameron-Liebler classes of sets a complete classification was given in [@CLset]. Regarding the Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(2k+1,q)$, the classification results are described in [@Klaus; @CLkclas].\ If $q \in \{2,3,4,5\}$, a complete classification is known for Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$, see [@Ferdinand.]. 
In this article the authors show that the only Cameron-Liebler sets in this context are the trivial Cameron-Liebler sets.\ In the last section, we use the properties from Section \[sec3\] to give the following classification result: there is no Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q), n>3k+1$ with parameter $2\leq x\leq q^{\frac{n}{2}-\frac{k^2}{4}-\frac{3k}{4}-\frac{3}{2}}(q-1)^{\frac{k^2}{4}-\frac{k}{4}+\frac{1}{2}}\sqrt{q^2+q+1}$. The characterization theorem ============================ Note first that we will always work with projective dimensions and that vectors are regarded as column vectors. Let $\Pi_k$ be the collection of $k$-subspaces in $\operatorname{PG}(n,q)$ for $0 \leq k \leq n$ and let $A$ be the incidence matrix of the points and the $k$-spaces of $\operatorname{PG}(n,q)$: the rows of $A$ are indexed by the points and the columns by the $k$-spaces.\ We define $A_i$ as the incidence matrix of the relation $R_i=\{ (\pi,\pi')| \pi,\pi' \in \Pi_k, \dim(\pi \cap \pi') = k-i\}, 0\leq i\leq k+1$. These relations $R_0, R_1, \dots, R_{k+1}$ form the Grassmann association scheme $J_q(n+1,k+1)$. Remark that $A_0 = I$ and $\sum_{i=0}^{k+1} A_i = J$ where $I$ and $J$ are the identity matrix and all-one matrix respectively. We denote the all-one vector by $j$. Note that the Grassmann graph for $k$-spaces in $\operatorname{PG}(n,q)$ has incidence matrix $A_1$.\ It is known that there is an orthogonal decomposition $V_0 \perp V_1 \perp \dots \perp V_{k+1}$ of $\mathbb{R}^{\Pi_k}$ in common eigenspaces of $A_0,A_1,\dots, A_{k+1}$. In the following lemmas and theorems, we denote the disjointness matrix $A_{k+1}$ also by $K$ since the corresponding graph is a Kneser graph. 
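As a concrete sanity check of this setup, the relations $R_i$ can be built explicitly in the smallest interesting case, lines in $\operatorname{PG}(3,2)$ (so $n=3$, $k=1$): the matrices $A_0, A_1, A_2$ then satisfy $A_0=I$ and $\sum_i A_i = J$, and they commute, as they must in an association scheme. The following brute-force sketch (pure Python; all function names are ours, not from any library) verifies this.

```python
from itertools import combinations

# Points of PG(3,2): nonzero vectors of F_2^4, encoded as the integers 1..15.
points = list(range(1, 16))

# Lines of PG(3,2) (2-dim subspaces of F_2^4): the line through distinct
# points p and q consists of p, q and p XOR q.
lines = sorted({frozenset({p, q, p ^ q}) for p, q in combinations(points, 2)},
               key=sorted)
assert len(lines) == 35  # the Gaussian binomial [4 choose 2]_2

def rel(l1, l2):
    """Index i of the relation R_i, i.e. i = k - dim(l1 cap l2) with k = 1.
    Two lines of PG(3,2) share 3, 1 or 0 points, giving i = 0, 1, 2."""
    return {3: 0, 1: 1, 0: 2}[len(l1 & l2)]

# Build A_0, A_1, A_2 as 35x35 0/1 matrices.
A = [[[1 if rel(l1, l2) == i else 0 for l2 in lines] for l1 in lines]
     for i in range(3)]

def matmul(X, Y):
    return [[sum(X[r][t] * Y[t][c] for t in range(35)) for c in range(35)]
            for r in range(35)]

# A_0 = I and A_0 + A_1 + A_2 = J.
assert all(A[0][r][c] == (1 if r == c else 0)
           for r in range(35) for c in range(35))
assert all(sum(A[i][r][c] for i in range(3)) == 1
           for r in range(35) for c in range(35))
# The relation matrices commute, as required for an association scheme.
assert matmul(A[1], A[2]) == matmul(A[2], A[1])
# Row sums: 18 other lines meet a given line, 16 are disjoint from it
# (16 = q^{(k+1)^2} [n-k choose k+1]_q = 2^4, matching the disjointness count).
assert all(sum(row) == 18 for row in A[1])
assert all(sum(row) == 16 for row in A[2])
print("association scheme checks passed")
```

The same construction works verbatim for any $J_q(n+1,k+1)$ small enough to enumerate, with the dictionary in `rel` replaced by a computation of $\dim(\pi\cap\pi')$.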
For more information about the Grassmann schemes we refer to [@BCN Section $9.3$] and [@meagen Section $9$].\ We will use the *Gaussian binomial coefficient* ${{\genfrac{[}{]}{0pt}{}{a}{b}}}_q$ for $a,b\in \mathbb{N}$ and prime power $q \geq 2$: $$\begin{aligned} {{\genfrac{[}{]}{0pt}{}{a}{b}}}_q = {\frac{(q^{a}-1)\cdots (q^{a-b+1}-1)}{(q^{b}-1)\cdots (q-1)}}.\end{aligned}$$ The Gaussian binomial coefficient ${{\genfrac{[}{]}{0pt}{}{a}{b}}}_q$ is equal to the number of $b$-dimensional subspaces of the vector space $\mathbb{F}^a_q$, or in the projective context, the number of $(b-1)$-spaces in the projective space $\operatorname{PG}(a-1,q)$. If the field size $q$ is clear from the context, we will write ${{\genfrac{[}{]}{0pt}{}{a}{b}}}$ instead of ${{\genfrac{[}{]}{0pt}{}{a}{b}}}_q$. The following counting result will be used several times in this article. \[lemmadisjunct\] The number of $j$-spaces disjoint to a fixed $m$-space in $\operatorname{PG}(n,q)$ equals $q^{(m+1)(j+1)}{\genfrac{[}{]}{0pt}{}{n-m}{j+1}}$. To end the introduction of this section, we give the definition of a $k$-spread and a partial $k$-spread of $\operatorname{PG}(n,q)$. A *partial $k$-spread* of $\operatorname{PG}(n,q)$ is a collection of $k$-spaces which are mutually disjoint. A *$k$-spread* in $\operatorname{PG}(n,q)$ is a partial $k$-spread in $\operatorname{PG}(n,q)$ that partitions the point set of $\operatorname{PG}(n,q)$. Remark that a $k$-spread of $\operatorname{PG}(n,q)$ exists if and only if $k+1$ divides $n+1$, and necessarily contains $\frac{q^{n+1}-1}{q^{k+1}-1}$ elements ([@segre2]). Before we start proving some equivalent definitions for a Cameron-Liebler set of $k$-spaces, we give some lemmas and definitions that we will need in the characterization Theorem \[theodef\]. \[eigenvallem\] Consider the Grassmann scheme defined by $J_q(n+1,k+1)$. 
The eigenvalue $P_{ji}$ of the distance-$i$ relation for $V_j$ is given by: $$\begin{aligned} \label{eigenval} P_{ji} = \sum\limits_{s=\max{(0,j-i)}}^{\min{(j,k+1-i)}} (-1)^{j+s}{\genfrac{[}{]}{0pt}{}{j}{s}}\begin{bmatrix}n-k+s-j \\ n-k-i\end{bmatrix} \begin{bmatrix} k+1-s \\ i\end{bmatrix} q^{i(i+s-j)+\frac{(j-s)(j-s-1)}{2}}.\end{aligned}$$ \[lemma2\] If $P_{1i}, i\geq 1,$ is the eigenvalue of $A_i$ corresponding to $V_j$, then $j=1$. We need to prove that $P_{1i} \neq P_{ji}$ for $q$ a prime power and $j>1$. We will first introduce $\phi_i(j) = \max\left\{a\mid q^a|P_{ji} \right\}$, which is the exponent of $q$ in the factorization of $P_{ji}$. Note that it is sufficient to show that $\phi_i(j)$ is different from $\phi_i(1)$ for all $i\geq 1$ and all $j>1$. By Lemma \[eigenvallem\] we see that ${\phi_i(j)}= \min\left\{i(i+s-j)+\frac{(j-s)(j-s-1)}{2}|\max\{0,j-i\} \leq s \leq \min\{j,k+1-i\}\right\}$ unless two or more terms whose power of $q$ has this minimal exponent sum to zero. If $s$ is the integer in $\{ \max\{0,j-i\},\dots, \min\{j,k+1-i\}\}$ closest to $j-i-\frac{1}{2}$, then $f_{ij}(s)=i(i+s-j)+\frac{(j-s)(j-s-1)}{2}$ is minimal. - If $j \leq i$, we see that $f_{ij}(s)$ is minimal for $s=0$. Then we find ${\phi_i(j)}=\frac{j^2}{2}-(i+\frac{1}{2})j+i^2$. We see that for a fixed $i$, $\phi_i(j-1)>\phi_i(j)$ for $j\leq i$. Note that the minimal value for $f_{ij}(s)$ is reached for only one $s$. - If $j \geq i$, we see that $f_{ij}(s)$ is minimal for $s=j-i$. Then we find ${\phi_i(j)}={\binom{i}{2}}$. Again we note that the minimal value for $f_{ij}(s)$ is reached for only one $s$. 
We can conclude the following inequality for a given $i\geq 1$: $$\begin{aligned} \phi_i(1)>\phi_i(2)>\dots >\phi_i(i)=\phi_i(i+1)=\dots=\phi_i(k+1)\end{aligned}$$ This implies the statement for $i\neq 1$.\ For $i = 1$ we have $P_{11}=-{\genfrac{[}{]}{0pt}{}{k+1}{1}}+{\genfrac{[}{]}{0pt}{}{n-k}{1}}{\genfrac{[}{]}{0pt}{}{k}{1}}q$ and $P_{j1}=-{\genfrac{[}{]}{0pt}{}{j}{1}}{\genfrac{[}{]}{0pt}{}{k-j+2}{1}}+{\genfrac{[}{]}{0pt}{}{n-k}{1}}{\genfrac{[}{]}{0pt}{}{k+1-j}{1}}q$, so we can see that they are different if $j \neq n+1$. This is always true since $j\in \{1,\dots ,k+1\}$ and $k<n$. Note that for $j\geq 1$ it was already known that $|P_{ji}| \leq |P_{1i}|$. This weaker result was given in [@brouwer Proposition $5.4(ii)$]. \[lemmaaantaldisjunct\] Let $\pi$ be a $k$-dimensional subspace in $\operatorname{PG}(n,q)$ with $\chi_\pi$ the characteristic vector of the set $\{\pi \}$. Let $\mathcal{Z}$ be the set of all $k$-subspaces in $\operatorname{PG}(n,q)$ disjoint from $\pi$ with characteristic vector $\chi_\mathcal{Z}^{}$, then $$\begin{aligned} \chi_\mathcal{Z}^{} -q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix} \left(\begin{bmatrix} n\\k\end{bmatrix}^{-1} j -\chi_\pi \right) \in \ker(A).\end{aligned}$$ Let $v_\pi$ be the incidence vector of $\pi$ with its positions corresponding to the points of $\operatorname{PG}(n,q)$. Note that $A\chi_\pi = v_\pi$. We have that $A\chi_\mathcal{Z}^{} = q^{k^2+k}{\genfrac{[}{]}{0pt}{}{n-k-1}{k}}(j-v_\pi)$ since $\mathcal{Z}$ is the set of all $k$-subspaces disjoint to $\pi$ and every point not in $\pi$ is contained in $q^{k^2+k}{\genfrac{[}{]}{0pt}{}{n-k-1}{k}}$ $k$-spaces skew to $\pi$ (see Lemma \[lemmadisjunct\]). 
The lemma now follows from $$\begin{aligned} &\chi_\mathcal{Z}^{} -q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix} \left(\begin{bmatrix} n\\k\end{bmatrix}^{-1} j -\chi_\pi \right) \in \ker(A)\\ \Leftrightarrow\quad & A\chi_\mathcal{Z}^{} =q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix} \left(\begin{bmatrix} n\\k\end{bmatrix}^{-1} Aj -A\chi_\pi \right)\;. \qedhere\end{aligned}$$ A *switching set* is a partial $k$-spread $\mathcal{R}$ for which there exists a partial $k$-spread $\mathcal{R}'$ such that $\mathcal{R} \cap \mathcal{R}' = \emptyset$, and $\cup \mathcal{R} = \cup \mathcal{R}'$, in other words, $\mathcal{R}$ and $\mathcal{R}'$ have no common members and cover the same set of points. We say that $\mathcal{R}$ and $\mathcal{R}'$ are a *pair of conjugate switching sets*. The following lemma is a classical result in design theory. \[lemmaAfullrow\] Let $D$ be a $2$-design with incidence matrix $M$; then $M$ has full row rank. The following lemma gives the relation between the common eigenspaces $V_0$ and $V_1$ of the matrices $A_i,i \in \{0,\dots, k+1\}$ and the row space of the matrix $A$. For the proof we refer to [@meagen Theorem 9.1.4]. \[lemmaAgelijkaanV\_0V\_1\] For the Grassmann scheme $J_q(n+1,k+1)$ we have that $\operatorname{Im}(A^T)=V_0 \perp V_1$ and $V_0 = \langle j \rangle$. We now combine and generalize Theorem $3.2$ in [@phdDrudge] and Theorem $3.7$ in [@CLkclas] to give several equivalent definitions for a Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q)$. \[theodef\] Let $\mathcal{L}$ be a non-empty set of $k$-spaces in $\operatorname{PG}(n,q), n\geq 2k+1$ with characteristic vector $\chi$, and let $x$ be such that $|\mathcal{L}|=x\begin{bmatrix} n\\k\end{bmatrix}$. Then the following properties are equivalent. 1. $\chi \in$ $\operatorname{Im}(A^T)$. 2. $\chi \in (\ker(A))^\perp$. 3. 
For every $k$-space $\pi$, the number of elements of $\mathcal{L}$ disjoint from $\pi$ is $(x-\chi(\pi)){\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}$. 4. The vector $v=\chi -x\frac{q^{k+1}-1}{q^{n+1}-1} j$ is a vector in $V_1$. 5. $\chi \in V_0 \perp V_1$. 6. For an $i\in \{1, \dots, k+1\}$ and a given $k$-space $\pi$, the number of elements of $\mathcal{L}$ meeting $\pi$ in a $(k-i)$-space is given by: $$\begin{aligned} \begin{cases} \left( (x-1) \frac{q^{k+1}-1}{q^{k-i+1}-1}+q^{i}\frac{q^{n-k}-1}{q^{i}-1}\right) q^{i(i-1)}\begin{bmatrix} n-k-1 \\ i-1 \end{bmatrix} \begin{bmatrix} k\\i \end{bmatrix}& \mbox{if } \pi \in \mathcal{L}\\ x \begin{bmatrix} n-k-1 \\ i-1\end{bmatrix} \begin{bmatrix} k+1 \\ i \end{bmatrix}q^{i(i-1)} & \mbox{if }\pi \notin \mathcal{L} \end{cases}\;. \end{aligned}$$ 7. For every pair of conjugate switching sets $\mathcal{R}$ and $\mathcal{R}'$, we have that $|\mathcal{L} \cap \mathcal{R}| = |\mathcal{L} \cap \mathcal{R}'|$. If $\operatorname{PG}(n,q)$ has a $k$-spread, then the following property is equivalent to the previous ones. 8. $|\mathcal{L}\cap \mathcal{S}| = x$ for every $k$-spread $\mathcal{S}$ in $\operatorname{PG}(n,q)$. We first prove that properties $1,2,3,4,5$ are equivalent by proving the following implications: - $1 \Leftrightarrow 2$: This follows since $\operatorname{Im}(B^T)$ $= (\ker(B))^\perp$ for every matrix $B$. - $2 \Rightarrow 3$: We assume that $\chi \in (\ker(A))^\perp$. Let $\pi \in \Pi_k$ and let $\mathcal{Z}$ be the set of $k$-spaces disjoint to $\pi$. 
By Lemma \[lemmaaantaldisjunct\], we know that $$\begin{aligned} &\chi_\mathcal{Z}^{} -q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix} \left(\begin{bmatrix} n\\k\end{bmatrix}^{-1} j -\chi_\pi \right) \in \ker(A)\\ \Leftrightarrow & \ \chi_\mathcal{Z}^{} \cdot \chi -q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix} \left(\begin{bmatrix} n\\k\end{bmatrix}^{-1} j \cdot \chi -\chi_\pi \cdot \chi \right)=0 \\ \Leftrightarrow & \ |\mathcal{Z}\cap \mathcal{L}| -q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix} \left(\begin{bmatrix} n\\k\end{bmatrix}^{-1} |\mathcal{L}| - \chi(\pi) \right)=0 \\ \Leftrightarrow & \ |\mathcal{Z}\cap \mathcal{L}| =(x-\chi(\pi)) q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix}\;.\end{aligned}$$ The last equality shows that the number of elements of $\mathcal{L}$ disjoint from $\pi$ is $(x-\chi(\pi)) q^{k^2+k}\begin{bmatrix}n-k-1\\k\end{bmatrix}$. - $3 \Rightarrow 4$: By expressing proposition $3$ in vector notation, we find that $K\chi = (xj-\chi){\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}$ and since $Kj = q^{(k+1)^2}{\genfrac{[}{]}{0pt}{}{n-k}{k+1}}j$ we see that $v=\chi -x\frac{q^{k+1}-1}{q^{n+1}-1} j$ is an eigenvector of $K$: $$\begin{aligned} Kv=& \ K\left(\chi -x\frac{q^{k+1}-1}{q^{n+1}-1} j\right) \\ =& \ (x j - \chi){\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} - x\frac{q^{k+1}-1}{q^{n+1}-1}q^{(k+1)^2}\begin{bmatrix} n-k \\ k+1 \end{bmatrix} j\\ =& \ {\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k} \left( x j -\chi -x \frac{q^{n+1}-q^{k+1}}{q^{n+1}-1} j \right)\\ =& \ -{\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k} \left( \chi -x \frac{q^{k+1}-1}{q^{n+1}-1} j \right) \\ =& \ P_{1,k+1} v\;.\end{aligned}$$ By using Lemma \[lemma2\] for $i=k+1$, we know that $v\in V_1$. - $4 \Rightarrow 5$: This follows since $V_0=\langle j\rangle$ (see Lemma \[lemmaAgelijkaanV\_0V\_1\]). - $5 \Rightarrow 1$: This follows from Lemma \[lemmaAgelijkaanV\_0V\_1\]. 
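The eigenvalue formula of Lemma \[eigenvallem\], and in particular the value $P_{1,k+1}=-q^{k^2+k}{\genfrac{[}{]}{0pt}{}{n-k-1}{k}}$ used in the implication $3 \Rightarrow 4$, can be checked numerically. The following sketch (pure Python; the function names are ours) transcribes $P_{ji}$ and verifies, for a few small cases, that $P_{j0}=1$, that $\sum_{i=0}^{k+1}P_{ji}=0$ for $j\geq 1$ (as $\sum_i A_i = J$), the valency $P_{0i}$, and the distinctness claim of Lemma \[lemma2\].

```python
def gauss(a, b, q):
    """Gaussian binomial coefficient [a choose b]_q (0 if b < 0 or b > a)."""
    if b < 0 or b > a:
        return 0
    num = den = 1
    for t in range(b):
        num *= q ** (a - t) - 1
        den *= q ** (t + 1) - 1
    return num // den

def P(j, i, n, k, q):
    """Eigenvalue P_{ji} of the distance-i relation of J_q(n+1,k+1) on V_j,
    transcribed from Lemma [eigenvallem]."""
    total = 0
    for s in range(max(0, j - i), min(j, k + 1 - i) + 1):
        total += ((-1) ** (j + s) * gauss(j, s, q)
                  * gauss(n - k + s - j, n - k - i, q)
                  * gauss(k + 1 - s, i, q)
                  * q ** (i * (i + s - j) + (j - s) * (j - s - 1) // 2))
    return total

for (n, k, q) in [(3, 1, 2), (5, 2, 2), (5, 2, 3), (7, 2, 2)]:
    for j in range(k + 2):
        assert P(j, 0, n, k, q) == 1                       # A_0 = I
        if j >= 1:                                         # sum_i A_i = J
            assert sum(P(j, i, n, k, q) for i in range(k + 2)) == 0
    # valency of A_i: P_{0i} = [n-k choose i][k+1 choose i] q^{i^2}
    for i in range(k + 2):
        assert P(0, i, n, k, q) == \
            gauss(n - k, i, q) * gauss(k + 1, i, q) * q ** (i * i)
    # value used in 3 => 4: P_{1,k+1} = -q^{k^2+k} [n-k-1 choose k]
    assert P(1, k + 1, n, k, q) == -q ** (k * k + k) * gauss(n - k - 1, k, q)
    # Lemma [lemma2]: P_{1i} is attained only on V_1
    for i in range(1, k + 2):
        for j in range(2, k + 2):
            assert P(1, i, n, k, q) != P(j, i, n, k, q)
print("eigenvalue checks passed")
```

For $n=3$, $k=1$ this reproduces the familiar spectrum of the Grassmann graph of lines in $\operatorname{PG}(3,q)$: $P_{11}=q^2-1$ and $P_{1,2}=-q^2$.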
Now we show that the properties $6,7$ and $8$ are also equivalent to the other properties by showing the following implications. - $4 \Rightarrow 6$: The matrix $A_i$ corresponds to the relation $R_i$. This implies that $(A_i \chi)_\pi$ gives the number of $k$-spaces in $\mathcal{L}$ that intersect $\pi$ in a $(k-i)$-space. $$\begin{aligned} A_i \chi =& A_iv+x\frac{q^{k+1}-1}{q^{n+1}-1}A_i j= P_{1i}v+x\frac{q^{k+1}-1}{q^{n+1}-1}P_{0i}j \\ =& \left( - \begin{bmatrix}n-k-1 \\i-1 \end{bmatrix}\begin{bmatrix}k+1 \\ i\end{bmatrix}q^{i(i-1)}+\begin{bmatrix}n-k \\ i\end{bmatrix}\begin{bmatrix}k \\i \end{bmatrix}q^{i^2} \right) \left(\chi - x\frac{q^{k+1}-1}{q^{n+1}-1} j\right) \\&+ x\frac{q^{k+1}-1}{q^{n+1}-1} \begin{bmatrix}n-k\\i\end{bmatrix} \begin{bmatrix}k+1 \\ i\end{bmatrix} q^{i^2} j\\ =& \left( \begin{bmatrix}n-k \\ i\end{bmatrix}\begin{bmatrix} k\\i \end{bmatrix}q^{i^2}-\begin{bmatrix}k+1 \\ i\end{bmatrix}\begin{bmatrix} n-k-1 \\i-1 \end{bmatrix}q^{i(i-1)} \right) \chi \\ &+ x{\frac{q^{k+1}-1}{q^{n+1}-1}}q^{i(i-1)}\left( {\genfrac{[}{]}{0pt}{}{n-k-1}{i-1}}{\genfrac{[}{]}{0pt}{}{k+1}{i}}-{\genfrac{[}{]}{0pt}{}{n-k}{i}}{\genfrac{[}{]}{0pt}{}{k}{i}}q^i + {\genfrac{[}{]}{0pt}{}{n-k}{i}}{\genfrac{[}{]}{0pt}{}{k+1}{i}}q^i \right) j\\ =& \left( \begin{bmatrix}n-k \\ i\end{bmatrix}\begin{bmatrix} k\\i \end{bmatrix}q^{i^2}-\begin{bmatrix}k+1 \\ i\end{bmatrix}\begin{bmatrix} n-k-1 \\i-1 \end{bmatrix}q^{i(i-1)} \right) \chi \\ &+ x{\frac{q^{k+1}-1}{q^{n+1}-1}}q^{i(i-1)}{\genfrac{[}{]}{0pt}{}{n-k-1}{i-1}} {\genfrac{[}{]}{0pt}{}{k}{i}} \left( {\frac{q^{k+1}-1}{q^{k-i+1}-1}}-{\frac{q^{n-k}-1}{q^{i}-1}}q^i + {\frac{q^{n-k}-1}{q^{i}-1}}{\frac{q^{k+1}-1}{q^{k-i+1}-1}}q^i \right) j\\ =& \left( \begin{bmatrix}n-k \\ i\end{bmatrix}\begin{bmatrix} k\\i \end{bmatrix}q^{i^2}-\begin{bmatrix}k+1 \\ i\end{bmatrix}\begin{bmatrix} n-k-1 \\i-1 \end{bmatrix}q^{i(i-1)} \right) \chi + x\begin{bmatrix}n-k-1 \\ i-1\end{bmatrix}\begin{bmatrix}k+1\\i\end{bmatrix}q^{i(i-1)}j\end{aligned}$$ Remark 
that this proves the implication for every $i \in \{1,\dots, k+1\}$. - $6 \Rightarrow 4$: We follow the approach of Lemma $3.5$ in [@CLkclas], where we look for an eigenvalue of $A_i$, and we define $\beta_i = x\begin{bmatrix}k+1 \\ i\end{bmatrix}\begin{bmatrix}n-k-1\\i-1\end{bmatrix}q^{i(i-1)}$.\ From property $6$ we know that $$\begin{aligned} A_i \chi &= x\begin{bmatrix}k+1 \\ i\end{bmatrix}\begin{bmatrix}n-k-1\\i-1\end{bmatrix}q^{i(i-1)} (j-\chi) + \left( (x-1) \frac{q^{k+1}-1}{q^{k-i+1}-1}+q^{i}\frac{q^{n-k}-1}{q^{i}-1}\right) q^{i(i-1)}\begin{bmatrix} n-k-1 \\ i-1 \end{bmatrix} \begin{bmatrix} k\\i \end{bmatrix} \chi \\ &=\left( \begin{bmatrix}n-k \\ i\end{bmatrix}\begin{bmatrix} k\\i \end{bmatrix}q^{i^2}-\begin{bmatrix}k+1 \\ i\end{bmatrix}\begin{bmatrix} n-k-1 \\i-1 \end{bmatrix}q^{i(i-1)} \right) \chi + x\begin{bmatrix}n-k-1 \\ i-1\end{bmatrix}\begin{bmatrix}k+1\\i\end{bmatrix}q^{i(i-1)}j\\ &=P_{1i} \chi + \beta_i j\;.\end{aligned}$$ Then we can see that $v_i = \chi + \frac{\beta_i}{P_{1i} - P_{0i}}j$ is an eigenvector for $A_i$ with eigenvalue $P_{1i}$: $$\begin{aligned} A_i\left(\chi + \frac{\beta_i}{P_{1i} - P_{0i}}j\right) =& P_{1i} \chi + \beta_i j + \frac{\beta_i}{P_{1i} - P_{0i}}P_{0i} j \\ =& P_{1i} \left(\chi + \frac{\beta_i}{P_{1i} - P_{0i}}j\right). \end{aligned}$$ By Lemma \[lemma2\] we know that $\chi + \frac{\beta_i}{P_{1i} - P_{0i}}j = \chi -x\frac{q^{k+1}-1}{q^{n+1}-1} j \in V_1$. We now show that property $8$ is equivalent to the others if $\operatorname{PG}(n,q)$ has a $k$-spread. - $2 \Rightarrow 8$: Let $\mathcal{S}$ be a $k$-spread in $\operatorname{PG}(n,q)$ and $\chi_\mathcal{S}^{}$ its characteristic vector. Then $\chi_\mathcal{S}^{}-{{\genfrac{[}{]}{0pt}{}{n}{k}}}^{-1}j \in \ker(A)$. 
Since $\chi \in (\ker(A))^\perp$, this implies that $0=\chi\cdot \left(\chi_\mathcal{S}^{}-{{\genfrac{[}{]}{0pt}{}{n}{k}}}^{-1}j \right) = |\mathcal{L}\cap \mathcal{S}|-|\mathcal{L}|{{\genfrac{[}{]}{0pt}{}{n}{k}}}^{-1}$, so $|\mathcal{L}\cap \mathcal{S}| = |\mathcal{L}|{{\genfrac{[}{]}{0pt}{}{n}{k}}}^{-1} =x$. - $8 \Rightarrow 3$: Suppose that $\operatorname{PG}(n,q)$ contains $k$-spreads. We know that the group $\operatorname{PGL}(n+1,q)$ acts transitively on pairs of disjoint $k$-spaces. Let $n_i$, for $i=1,2$, be the number of $k$-spreads that contain $i$ fixed pairwise disjoint $k$-spaces. This number only depends on $i$, and not on the chosen $k$-spaces.\ Let $\pi$ be a fixed $k$-space. The number of couples $(\pi',\mathcal{S})$, with $\mathcal{S}$ a spread that contains $\pi$ and $\pi'$, is equal to $q^{(k+1)^2}{\genfrac{[}{]}{0pt}{}{n-k}{k+1}} \cdot n_2 = n_1 \cdot \left(\frac{q^{n+1}-1}{q^{k+1}-1}-1\right)$, which implies that $n_1/n_2 = q^{k(k+1)}{\genfrac{[}{]}{0pt}{}{n-k-1}{k}}$.\ By counting the number of couples $(\pi',\mathcal{S})$, with $\mathcal{S}$ a spread that contains $\pi$ and $\pi'$, and where $\pi'\in \mathcal{L}$, we find that the number of $k$-spaces in $\mathcal{L}$ disjoint to a fixed $k$-space $\pi$ is given by $(x-\chi(\pi))n_1/n_2 = (x-\chi(\pi))q^{k(k+1)}{\genfrac{[}{]}{0pt}{}{n-k-1}{k}}$. To end this proof, we show that property $7$ is equivalent with the other properties. - $2 \Rightarrow 7$: Let $\chi_\mathcal{R}^{}$ and $\chi_{\mathcal{R}'}^{}$ be the characteristic vectors of the pair of conjugate switching sets $\mathcal{R}$ and $\mathcal{R}'$ respectively. As $\mathcal{R}$ and $\mathcal{R}'$ cover the same set of points, we find: $\chi_\mathcal{R}^{}-\chi_{\mathcal{R}'}^{} \in \ker(A)$. 
This implies that $0=\chi \cdot (\chi_\mathcal{R}^{}-\chi_{\mathcal{R}'}^{})=\chi \cdot \chi_\mathcal{R}^{}-\chi \cdot \chi_{\mathcal{R}'}^{}$, so that $\chi \cdot \chi_\mathcal{R}^{}=|\mathcal{L} \cap \mathcal{R}| = |\mathcal{L} \cap \mathcal{R}'|=\chi \cdot \chi_{\mathcal{R}'}^{}$. - $7 \Rightarrow 1$: We first show that property $7$ implies the other properties if $n=2k+1$. For any two $k$-spreads $\mathcal{S}_1,\mathcal{S}_2$ the sets $\mathcal{S}_1 \setminus \mathcal{S}_2$ and $\mathcal{S}_2 \setminus \mathcal{S}_1 $ form a pair of conjugate switching sets. So $|\mathcal{L} \cap (\mathcal{S}_1 \setminus \mathcal{S}_2)|=|\mathcal{L}\cap (\mathcal{S}_2 \setminus \mathcal{S}_1 )|$, which implies that $|\mathcal{L} \cap \mathcal{S}_1|=|\mathcal{L}\cap \mathcal{S}_2|=c$.\ Now we prove that this constant $c$ equals $x=|\mathcal{L}|{\genfrac{[}{]}{0pt}{}{2k+1}{k}}^{-1}$. Let $n_i$, for $i=0,1$, be the number of $k$-spreads containing $i$ fixed pairwise disjoint $k$-spaces. This number only depends on $i$, and not on the chosen $k$-spaces. The number of couples $(\pi,\mathcal{S})$, with $\mathcal{S}$ a spread that contains $\pi$, is equal to ${\genfrac{[}{]}{0pt}{}{2k+2}{k+1}} \cdot n_1 = n_0 \cdot \frac{q^{2k+2}-1}{q^{k+1}-1}$, which implies that $n_0/n_1 = {\genfrac{[}{]}{0pt}{}{2k+1}{k}}$.\ By counting the number of couples $(\pi,\mathcal{S})$, with $\mathcal{S}$ a spread that contains $\pi$, and where $\pi\in \mathcal{L}$, we find that the number of $k$-spaces in $\mathcal{L}\cap \mathcal{S}$ equals $|\mathcal{L}| n_1/n_0=|\mathcal{L}| {\genfrac{[}{]}{0pt}{}{2k+1}{k}} ^{-1}=x$. This implies proposition $8$ and hence proposition $1$. Now we prove that implication $7 \Rightarrow 1$ also holds if $n>2k+1$. Given a subspace $\tau$ in $\operatorname{PG}(n,q)$, we will use the notation $A_{|\tau}$ for the submatrix of $A$ consisting of the rows corresponding with the points of $\tau$ and the columns corresponding with the $k$-spaces in $\tau$. 
We know that the matrix $A_{|\tau}$ has full row rank by Lemma \[lemmaAfullrow\].\ Let $\Pi$ be a $(2k+1)$-dimensional subspace in $\operatorname{PG}(n,q)$. By proposition $7$, we know that for every two $k$-spreads $\mathcal{R},\mathcal{R}'$ in $\Pi$, we have $|\mathcal{L}\cap \mathcal{R}|=|\mathcal{L}\cap \mathcal{R}'|$ since $\mathcal{R}\setminus \mathcal{R}'$ and $\mathcal{R}'\setminus \mathcal{R}$ are conjugate switching sets. This implies that $\chi_{\mathcal{L}|\Pi}^{} \in \operatorname{Im}\left(A_{|\Pi}^T\right)$ by the arguments above applied to the $(2k+1)$-space $\Pi$. So, there is a linear combination of the rows of $A_{|\Pi}$ equal to $\chi_{\mathcal{L}|\Pi}$. This linear combination is unique since $A_{|\Pi}$ has full row rank.\ Now we want to show that a linear combination of the rows of $A$ giving $\chi_\mathcal{L}^{}$ is uniquely determined by the vectors $\chi_{\mathcal{L}|\Pi}$, with $\Pi$ going over all $(2k+1)$-spaces in $\operatorname{PG}(n,q)$. We show, for every two $(2k+1)$-spaces $\Pi,\Pi'$, that the coefficients of a row corresponding to a point in $\Pi \cap \Pi'$ in the linear combination of $\chi_{\mathcal{L}|\Pi}$ and $\chi_{\mathcal{L}|\Pi'}$ are equal. Suppose $\chi_{\mathcal{L}|\Pi}^{} = a_1r_1+a_2r_2+\dots +a_kr_k+a_{k+1}r_{k+1}+\dots +a_lr_l$ and $\chi_{\mathcal{L}|\Pi'}^{} = b_{k+1}r_{k+1}+\dots +b_{l}r_{l}+b_{l+1}r_{l+1}+\dots +b_sr_s$, where $r_1, \dots, r_k, \dots, r_l$ and $r_{k+1}, \dots, r_l, \dots, r_s$ are the rows corresponding with the points of $\Pi$ and $\Pi'$, respectively. Remark that we only look at the columns corresponding with the $k$-spaces in $\Pi$ and $\Pi'$, respectively.\ We now look at the space $\Pi \cap \Pi'$, and at the corresponding columns in $A$. Recall that $A_{|\Pi\cap\Pi'}$ also has full row rank, so the linear combination that gives $\chi_{\mathcal{L}|(\Pi\cap\Pi')}^{}$ is unique, and equal to the ones corresponding with $\Pi$ and $\Pi'$, restricted to $\Pi \cap \Pi'$. This proves that $a_i=b_i$ for $k+1 \leq i \leq l$. 
Here we also used the fact that the entry in $A$ corresponding with a point of $\Pi \setminus \Pi'$ or $\Pi' \setminus \Pi$ and a $k$-space in $\Pi \cap \Pi'$ is zero.\ By using all $(2k+1)$-spaces, we see that $\chi_\mathcal{L}^{}$ is uniquely defined, and by construction $\chi_\mathcal{L}^{} \in \operatorname{Im}(A^T)$. Note that we only used that proposition $7$ holds for conjugate switching sets inside a $(2k+1)$-dimensional subspace. A set $\mathcal{L}$ of $k$-spaces in $\operatorname{PG}(n,q)$ that fulfills one of the statements in Theorem \[theodef\] (and consequently all of them) is called a *Cameron-Liebler set of $k$-spaces* in $\operatorname{PG}(n,q)$ with parameter $x=|\mathcal{L}|{{\genfrac{[}{]}{0pt}{}{n}{k}}}^{-1}$. Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$ were introduced before in [@Ferdinand.], as we mentioned in the introduction. Remark that the definition we present here is consistent with the definition in [@Ferdinand.], since the definition given in that article is statement $5$ of the previous theorem. Note that the parameter $x$ of a Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q)$ is not necessarily an integer, while the parameter of Cameron-Liebler line sets in $\operatorname{PG}(3,q)$ and the parameter of Cameron-Liebler sets of generators in polar spaces are integers. We end this section by showing an extra property of Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$. 
Let $\mathcal{L}$ be a Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q)$. Then we find the following equality for every point $P$ and every $i$-dimensional subspace $\tau$ with $P\in \tau$ and $i\geq k+1$: $$\begin{aligned} |[P]_k \cap \mathcal{L}|+\frac{{\genfrac{[}{]}{0pt}{}{n-1}{k}}(q^k-1)}{{\genfrac{[}{]}{0pt}{}{i-1}{k}}(q^i-1)}|[\tau]_k \cap \mathcal{L}|=\frac{{\genfrac{[}{]}{0pt}{}{n-1}{k}}}{{\genfrac{[}{]}{0pt}{}{i-1}{k}}}|[P,\tau]_k \cap \mathcal{L}|+\frac{q^k-1}{q^n-1}| \mathcal{L}|\;.\end{aligned}$$ Here $[P]_k$, $[\tau]_k$ and $[P,\tau]_k$ denote the set of all $k$-subspaces through $P$, the set of all $k$-subspaces in $\tau$ and the set of all $k$-subspaces in $\tau$ through $P$, respectively. Let $\chi_{[P]}$, $\chi_{[\tau]}$ and $\chi_{[P,\tau]}$ be the characteristic vectors of $[P]_k$, $[\tau]_k$ and $[P,\tau]_k$, respectively, and define $$v=\chi_{[P]}+\frac{{\genfrac{[}{]}{0pt}{}{n-1}{k}}(q^k-1)}{{\genfrac{[}{]}{0pt}{}{i-1}{k}}(q^i-1)} \chi_{[\tau]}-\frac{{\genfrac{[}{]}{0pt}{}{n-1}{k}}}{{\genfrac{[}{]}{0pt}{}{i-1}{k}}} \chi_{[P,\tau]} -\frac{q^k-1}{q^n-1}j\;.$$ By calculating $(Av)_{P'}$ for every point $P'$, we see that $Av=0$. This implies that $v\in \ker(A)$. Let $\chi$ be the characteristic vector of $\mathcal{L}$. By property 2 in Theorem \[theodef\] we know that $\chi \in (\ker(A))^\perp$, so by calculating $\chi \cdot v$ the lemma follows. For $k=1$, Drudge showed in [@phdDrudge] that this property is an equivalent definition for a Cameron-Liebler line set in $\operatorname{PG}(n,q)$. Properties of Cameron-Liebler sets of k-spaces in PG(n,q) {#sec3} ========================================================= We start with some properties of Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$ that can easily be proved. 
\[basislemma4\] Let $\mathcal{L}$ and $\mathcal{L}'$ be two Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$ with parameters $x$ and $x'$ respectively, then the following statements are valid. 1. $0 \leq x \leq {\frac{q^{n+1}-1}{q^{k+1}-1}}$. 2. The set of all $k$-spaces in $\operatorname{PG}(n,q)$ not in $\mathcal{L}$ is a Cameron-Liebler set of $k$-spaces with parameter ${\frac{q^{n+1}-1}{q^{k+1}-1}}-x$. 3. If $\mathcal{L} \cap \mathcal{L}' = \emptyset$ then $\mathcal{L} \cup \mathcal{L}'$ is a Cameron-Liebler set of $k$-spaces with parameter $x+x'$. 4. If $\mathcal{L}' \subseteq \mathcal{L}$ then $\mathcal{L} \setminus \mathcal{L}'$ is a Cameron-Liebler set of $k$-spaces with parameter $x-x'$. We present some examples of Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$. \[voorbeeldCL\] The set of all $k$-spaces through a point $P$ is a Cameron-Liebler set of $k$-spaces with parameter $1$ since the characteristic vector of this set is the row of $A$ corresponding to the point $P$. We will call this set of $k$-spaces the *point-pencil through $P$*.\ By property 3 in Theorem \[theodef\], we can see that the set of all $k$-spaces in a fixed hyperplane is a Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q)$ with parameter $\frac{q^{n-k}-1}{q^{k+1}-1}$. Remark that this parameter is not an integer if $k+1 \nmid n+1$, or equivalently, if $\operatorname{PG}(n,q)$ does not contain a $k$-spread. In [@Klaus] several properties of Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(2k+1,q)$ were given. We will first generalize some of these results to use them in Section \[nieuwhfdst\]. \[driedisjunct\] Let $\pi$ and $\pi'$ be two disjoint $k$-subspaces in $\operatorname{PG}(n,q)$ with $\Sigma = \langle \pi,\pi' \rangle$, and let $P$ be a point in $\Sigma \setminus (\pi \cup \pi')$ and $P'$ be a point not in $\Sigma$. 
Then there are $W(q,n,k)$ $k$-spaces disjoint to $\pi$ and $\pi'$, there are $W_\Sigma(q,n,k)$ $k$-spaces disjoint to $\pi$ and $\pi'$ through $P$ and there are $W_{\bar{\Sigma}}(q,n,k)$ $k$-spaces disjoint to $\pi$ and $\pi'$ through $P'$.\ Here, $W(q,n,k), W_{{\Sigma}}(q,n,k), W_{\bar{\Sigma}}(q,n,k)$ are given by: $$\begin{aligned} W(q,n,k)&= \sum_{i=-1}^k W_i(q,n,k) \\ W_\Sigma(q,n,k) &= \frac{1}{(q^{k+1}-1)^2} \sum_{i=0}^k W_i(q,n,k) (q^{i+1}-1) \\ W_{\bar{\Sigma}}(q,n,k) &= \frac{1}{q^{n+1}-q^{2k+2}} \sum_{i=-1}^{k-1} W_i(q,n,k) (q^{k+1}-q^{i+1}) \\ W_i(q,n,k)&= \begin{cases} q^{2k^2+k+ \frac{3i^2}{2}-\frac{i}{2}-3ik}{\genfrac{[}{]}{0pt}{}{n-2k-1}{k-i}}{\genfrac{[}{]}{0pt}{}{k+1}{i+1}}\prod_{j=0}^i (q^{k-j+1}-1) & \text{if } i \geq 0\\ q^{2(k+1)^2}{\genfrac{[}{]}{0pt}{}{n-2k-1}{k+1}} & \text{if } i=-1 \end{cases}\;.\end{aligned}$$ To count the number of $k$-spaces $\pi''$, that are disjoint to $\pi$ and $\pi'$, we first count the number of possible intersections $\pi'' \cap \Sigma$.\ We count the number of $i$-spaces in $\Sigma$, disjoint to $\pi$ and $\pi'$ by double counting $((P_0,P_1,\dots, P_i),\sigma_i)$. Here $\sigma_i$ is an $i$-space in $\Sigma$, disjoint to $\pi$ and $\pi'$, and the points $P_0,P_1, \dots, P_i$ form a basis of $\sigma_i$. For the ordered basis $(P_0,P_1, \dots, P_i)$ we have $\prod_{j=0}^{i} \frac{q^{2j}(q^{k-j+1}-1)^2}{q-1}$ possibilities since there are ${\genfrac{[}{]}{0pt}{}{2k+2}{1}}-2{\genfrac{[}{]}{0pt}{}{k+j+1}{1}}+{\genfrac{[}{]}{0pt}{}{2j}{1}}=\frac{q^{2j}(q^{k-j+1}-1)^2}{q-1}$ possibilities for $P_j$ if $P_0,P_1,\dots,P_{j-1}$ are given.\ By a similar argument, we find that the number of ordered bases $(P_0,P_1, \dots, P_i)$ for a given $\sigma_i$ is $\prod_{j=0}^{i} \frac{q^{j}(q^{i-j+1}-1)}{q-1}$. 
In this way we find that the number of $i$-spaces in $\Sigma$, disjoint to $\pi$ and $\pi'$ is given by: $$\begin{aligned} \frac{\prod_{j=0}^{i} \frac{q^{2j}(q^{k-j+1}-1)^2}{q-1}}{\prod_{j=0}^{i} \frac{q^{j}(q^{i-j+1}-1)}{q-1}} =\prod_{j=0}^{i}\frac{ q^{j}(q^{k-j+1}-1)^2}{q^{i-j+1}-1}= q^{\binom{i+1}{2}}{\genfrac{[}{]}{0pt}{}{k+1}{i+1}}\prod_{j=0}^i(q^{k-j+1}-1).\end{aligned}$$ Now we count, for a given $i$-space $\sigma_i$ in $\Sigma$, the number of $k$-spaces $\pi''$ through $\sigma_i$ such that $\pi'' \cap \Sigma = \sigma_i$. This equals the number of $(k-i-1)$-spaces in $\operatorname{PG}(n-i-1,q)$, disjoint to a $(2k-i)$-space. This number is $q^{(k-i)(2k-i+1)}{\genfrac{[}{]}{0pt}{}{n-2k-1}{k-i}}$ by Lemma \[lemmadisjunct\]. By this lemma we also see that the number of $k$-spaces disjoint to $\Sigma$ is given by $q^{(k+1)(2k+2)}{\genfrac{[}{]}{0pt}{}{n-2k-1}{k+1}}$. This implies that $W_i(q,n,k), -1 \leq i\leq k$, is the number of $k$-spaces disjoint to $\pi$ and $\pi'$, and intersecting $\Sigma$ in an $i$-space.\ Now we have enough information to count the number of $k$-spaces disjoint to $\pi$ and $\pi'$: $$\begin{aligned} W(q,n,k)&=\sum_{i=-1}^k W_i(q,n,k)\;.\end{aligned}$$ We use the same arguments to calculate $W_\Sigma(q,n,k)$ and $W_{\bar{\Sigma}}(q,n,k)$. 
By double counting $(P, \pi'')$ with $\pi''$ a $k$-space through $P\in \Sigma$ disjoint to $\pi$ and $\pi'$ and double counting $(P', \pi'')$ with $\pi''$ a $k$-space through $P'\notin \Sigma$ disjoint to $\pi$ and $\pi'$, we find: $$\begin{aligned} \left( {\genfrac{[}{]}{0pt}{}{2k+2}{1}} -2{\genfrac{[}{]}{0pt}{}{k+1}{1}}\right) \cdot W_\Sigma(q,n,k) &= \sum_{i=0}^k W_i(q,n,k) \cdot {\genfrac{[}{]}{0pt}{}{i+1}{1}} \text{ and} \\ \left( {\genfrac{[}{]}{0pt}{}{n+1}{1}} -{\genfrac{[}{]}{0pt}{}{2k+2}{1}}\right) \cdot W_{\bar{\Sigma}}(q,n,k) &= \sum_{i=-1}^{k-1} W_i(q,n,k) \cdot \left({\genfrac{[}{]}{0pt}{}{k+1}{1}}-{\genfrac{[}{]}{0pt}{}{i+1}{1}} \right)\;.\end{aligned}$$ This implies: $$\begin{aligned} W_\Sigma(q,n,k) =& \frac{1}{(q^{k+1}-1)^2} \sum_{i=0}^k W_i(q,n,k) (q^{i+1}-1) \\ W_{\bar{\Sigma}}(q,n,k) =& \frac{1}{q^{n+1}-q^{2k+2}} \sum_{i=-1}^{k-1} W_i(q,n,k)(q^{k+1}-q^{i+1}). \qedhere\end{aligned}$$ From now on we denote $W_i(q,n,k), W_\Sigma (q,n,k)$ and $W_{\bar{\Sigma}}(q,n,k)$ by $W_i, W_\Sigma$ and $W_{\bar{\Sigma}}$ if the dimensions $n$, $k$ and the field size $q$ are clear from the context. \[lemmas1s2d1d2\] Let $\mathcal{L}$ be a Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q)$ with parameter $x$. 1. For every $\pi \in \mathcal{L}$, there are $s_1$ elements of $\mathcal{L}$ meeting $\pi$. 2. For skew $\pi, \pi'\in \mathcal{L}$ and a spread $\mathcal{S}_0$ in $\Sigma = \langle \pi,\pi' \rangle$, there exist exactly $d_2$ subspaces in $\mathcal{L}$ that are skew to both $\pi$ and $\pi'$ and there exist $s_2$ subspaces in $\mathcal{L}$ that meet both $\pi$ and $\pi'$. 
Here, $d_2$, $s_1$ and $s_2$ are given by: $$\begin{aligned} d_2(q,n,k,x,\mathcal{S}_0) &= (W_\Sigma-W_{\bar{\Sigma}})|\mathcal{S}_0 \cap \mathcal{L}|-2W_\Sigma+x W_{\bar{\Sigma}}\\ s_1(q,n,k,x) &= x{\genfrac{[}{]}{0pt}{}{n}{k}}-(x-1){\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}\\ s_2(q,n,k,x,\mathcal{S}_0) &= x{\genfrac{[}{]}{0pt}{}{n}{k}}-2(x-1){\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k} +d_2(q,n,k,x,\mathcal{S}_0)\;,\\\end{aligned}$$ where $W_\Sigma$ and $W_{\bar{\Sigma}}$ are given by Lemma \[driedisjunct\]. 3. Define $d'_2(q,n,k,x) = (x-2)W_\Sigma$ and $s'_2(q,n,k,x) = x{\genfrac{[}{]}{0pt}{}{n}{k}} -2(x-1){\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} +d'_2(q,n,k,x)$. If $n>3k+1$, then $d_2(q,n,k,x,\mathcal{S}_0) \leq d'_2(q,n,k,x)$ and $s_2(q,n,k,x,\mathcal{S}_0) \leq s'_2(q,n,k,x)$ for every spread $\mathcal{S}_0$ in $\Sigma$. <!-- --> 1. This follows directly from Theorem \[theodef\]$(3)$ and $|\mathcal{L}|=x{\genfrac{[}{]}{0pt}{}{n}{k}}$. 2. Let $\chi_\pi$ and $\chi_{\pi'}$ be the characteristic vectors of $\{\pi\}$ and $\{\pi'\}$, respectively, let $\mathcal{Z}$ be the set of all $k$-spaces in $\operatorname{PG}(n,q)$ disjoint to $\pi$ and $\pi'$, and let $\chi_\mathcal{Z}$ be its characteristic vector. Furthermore, let $v_\pi$ and $v_{\pi'}$ be the incidence vectors of $\pi$ and $\pi'$, respectively, with their positions corresponding to the points of $\operatorname{PG}(n,q)$. Note that $A\chi_\pi = v_\pi$ and $A\chi_{\pi'} = v_{\pi'}$. By Lemma \[driedisjunct\] we know the numbers $W_\Sigma$ and $W_{\bar{\Sigma}}$ of $k$-spaces disjoint to $\pi$ and $\pi'$, through a point $P$, if $P\in \Sigma$ and $P \notin \Sigma$ respectively. Let $\mathcal{S}_0$ be a $k$-spread in $\Sigma$ and let $v_\Sigma$ be the incidence vector of $\Sigma$ (as a point set).
We find: $$\begin{aligned} A\chi_\mathcal{Z} &=W_\Sigma(v_\Sigma-v_\pi-v_{\pi'})+W_{\bar{\Sigma}} (j-v_{\Sigma} ) \\ &=W_\Sigma(A\chi_{\mathcal{S}_0}^{}-A\chi_\pi-A\chi_{\pi'})+W_{\bar{\Sigma}} \left({\genfrac{[}{]}{0pt}{}{n}{k}}^{-1}Aj-A \chi_{\mathcal{S}_0}\right)\\ \Leftrightarrow\qquad&\chi_\mathcal{Z}-W_\Sigma(\chi_{\mathcal{S}_0}-\chi_\pi-\chi_{\pi'})-W_{\bar{\Sigma}} \left({\genfrac{[}{]}{0pt}{}{n}{k}}^{-1}j- \chi_{\mathcal{S}_0}\right) \in \ker(A).\end{aligned}$$ We know that the characteristic vector $\chi$ of $\mathcal{L}$ is included in $(\ker(A))^\perp$. This implies: $$\begin{aligned} &&\chi_\mathcal{Z} \cdot \chi &=W_\Sigma(\chi_{\mathcal{S}_0}\cdot\chi-\chi(\pi)-\chi(\pi'))+W_{\bar{\Sigma}} (x- \chi_{\mathcal{S}_0}\cdot\chi) \\ &\Leftrightarrow & |\mathcal{Z}\cap \mathcal{L}| &=W_\Sigma(|\mathcal{S}_0\cap \mathcal{L}|-2)+W_{\bar{\Sigma}} (x- |\mathcal{S}_0\cap \mathcal{L}|) \\ &\Leftrightarrow & |\mathcal{Z}\cap \mathcal{L}| &=(W_\Sigma-W_{\bar{\Sigma}})|\mathcal{S}_0\cap \mathcal{L}|-2W_\Sigma+x W_{\bar{\Sigma}}\;, \end{aligned}$$ which gives the formula for $d_2(q,n,k,x,\mathcal{S}_0)$. The formula for $s_2(q,n,k,x,\mathcal{S}_0)$ follows from the inclusion-exclusion principle. 3. Suppose $\Sigma$ is a $(2k+1)$-space in $\operatorname{PG}(n,q)$, and $\mathcal{S}_0$ is a $k$-spread in $\Sigma$ such that $|\mathcal{S}_0 \cap \mathcal{L}|> x$. By definition $1$ in Theorem \[theodef\] we know that the characteristic vector $\chi$ of $\mathcal{L}$ can be written as $\sum_{P \in \operatorname{PG}(n,q)} x_P r_P^T$ for some $x_{P}\in\R$ where $r_{P}$ is the row of $A$ corresponding to the point $P$. Let $\chi_{\pi}$ be the characteristic vector of the set $\{\pi\}$ with $\pi$ a $k$-space, then $\chi_{\pi} \cdot \chi=\sum_{P \in \pi} x_P$ equals $1$ if $\pi \in \mathcal{L}$ and $0$ if $\pi \notin \mathcal{L}$.
As $\chi \cdot j = |\mathcal{L}| = x{\genfrac{[}{]}{0pt}{}{n}{k}}$, we find that $\sum_{P \in \operatorname{PG}(n,q)} x_P = x$.\ If $|\mathcal{S}_0 \cap \mathcal{L}|> x$, then $\chi \cdot \chi_{\mathcal{S}_0} = \sum_{P\in \Sigma}x_P >x$. This implies that $\sum_{P \in \operatorname{PG}(n,q) \setminus \Sigma} x_P = \sum_{P \in \operatorname{PG}(n,q) } x_P - \sum_{P \in \Sigma} x_P$ is negative. As $n>3k+1$, there exists a $k$-space $\tau$ in $\operatorname{PG}(n,q)$, disjoint to $\Sigma$, with $\chi_{\tau} \cdot \chi = \sum_{P\in \tau} x_P$ negative, which gives a contradiction.\ It follows that $|\mathcal{S}_0 \cap \mathcal{L}|\leq x$. Since this is true for every spread $\mathcal{S}_0$ in every $(2k+1)$-space in $\operatorname{PG}(n,q)$, the statement holds. Note that we will use the upper bounds $d'_2(q,n,k,x)$ and $s'_2(q,n,k,x)$ instead of $d_2(q,n,k,x,\mathcal{S}_0)$ and $s_2(q,n,k,x,\mathcal{S}_0)$ respectively, since they are independent of the chosen spread $\mathcal{S}_0$. The following lemma is a generalization of Lemma $2.4$ in [@Klaus]. \[lemmaklaus\] Let $c,n,k$ be nonnegative integers with $n>2k+1$ such that $$\begin{aligned} (c+1)s_1-\binom{c+1}{2}s'_2 > x{\genfrac{[}{]}{0pt}{}{n}{k}}\;.\end{aligned}$$ Then no Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q)$ with parameter $x$ contains $c+1$ mutually skew subspaces. Assume that $\operatorname{PG}(n,q)$ has a Cameron-Liebler set $\mathcal{L}$ of $k$-spaces with parameter $x$ that contains $c+1$ mutually disjoint subspaces $\pi_0,\pi_1,\dots,\pi_c$. Lemma \[lemmas1s2d1d2\] shows that $\pi_i$ meets at least $s_1(q,n,k,x)-i s_2(q,n,k,x)$ elements of $\mathcal{L}$ that are skew to $\pi_0, \pi_1, \dots,\pi_{i-1}$. Hence $x{\genfrac{[}{]}{0pt}{}{n}{k}} = |\mathcal{L}| \geq (c+1) s_1-\sum_{i=0}^c i s_2 \geq (c+1) s_1-\sum_{i=0}^c i s'_2$, which contradicts the assumption.
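Since Lemma \[driedisjunct\] is a purely counting statement, it can be sanity-checked by brute force in the smallest interesting case, $\operatorname{PG}(4,2)$ with $k=1$. The sketch below is our own illustration (the helper names `gauss` and `W_formulas` and the chosen subspaces are not from the text): it enumerates all lines of $\operatorname{PG}(4,2)$, fixes two disjoint lines $\pi$ and $\pi'$, and compares the brute-force counts with $W$, $W_\Sigma$ and $W_{\bar{\Sigma}}$.

```python
from itertools import combinations
from math import prod

def gauss(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q."""
    if k < 0 or k > n:
        return 0
    num = prod(q ** (n - i) - 1 for i in range(k))
    den = prod(q ** (i + 1) - 1 for i in range(k))
    return num // den

def W_formulas(q, n, k):
    """W, W_Sigma and W_Sigma-bar of Lemma [driedisjunct], built from the W_i."""
    ws = {-1: q ** (2 * (k + 1) ** 2) * gauss(n - 2 * k - 1, k + 1, q)}
    for i in range(k + 1):
        e = 2 * k * k + k + (3 * i * i - i) // 2 - 3 * i * k
        ws[i] = (q ** e * gauss(n - 2 * k - 1, k - i, q)
                 * gauss(k + 1, i + 1, q)
                 * prod(q ** (k - j + 1) - 1 for j in range(i + 1)))
    W = sum(ws.values())
    W_S = sum(ws[i] * (q ** (i + 1) - 1)
              for i in range(k + 1)) // (q ** (k + 1) - 1) ** 2
    W_Sbar = sum(ws[i] * (q ** (k + 1) - q ** (i + 1))
                 for i in range(-1, k)) // (q ** (n + 1) - q ** (2 * k + 2))
    return W, W_S, W_Sbar

# Brute force in PG(4,2): points are the nonzero vectors of GF(2)^5
# (encoded as 5-bit integers); a line is a triple {a, b, a^b}.
pts = range(1, 32)
lines = {frozenset((a, b, a ^ b)) for a, b in combinations(pts, 2)}
pi, pi2 = frozenset((1, 2, 3)), frozenset((4, 8, 12))  # <e0,e1>, <e2,e3>
disj = [L for L in lines if not (L & pi) and not (L & pi2)]
P_in, P_out = 5, 16  # e0+e2 lies in Sigma = <e0,...,e3>; e4 lies outside

W, W_S, W_Sbar = W_formulas(2, 4, 1)
print(len(lines), len(disj), W,
      sum(1 for L in disj if P_in in L), W_S,
      sum(1 for L in disj if P_out in L), W_Sbar)
```

For $(q,n,k)=(2,4,1)$ the brute-force counts and the formulas agree: $155$ lines in total, $78$ of them disjoint to both $\pi$ and $\pi'$, $10$ of those through the chosen point of $\Sigma$, and $9$ through the chosen point outside $\Sigma$.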
Classification result {#nieuwhfdst} ===================== In this section, we will list some classification results for Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$. First note that a Cameron-Liebler set of $k$-spaces with parameter $0$ is the empty set.\ In the following lemma we start with the classification for the parameters $x \in \ ]0,1[ \ \cup \ ]1,2[$. \[lemmatussen012\] There are no Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$ with parameter $x \in \ ]0,1[ $, and if $n\geq3k+2$, there are no Cameron-Liebler sets of $k$-spaces with parameter $x \in \ ]1,2[ $. Suppose there is a Cameron-Liebler set $\mathcal{L}$ of $k$-spaces with parameter $x\in \ ]0,1[$. Then $\mathcal{L}$ is not the empty set, so suppose $\pi \in \mathcal{L}$. By definition $3$ in Theorem \[theodef\] we find that the number of $k$-spaces in $\mathcal{L}$ disjoint to $\pi$ is negative, which gives a contradiction.\ Suppose there is a Cameron-Liebler set $\mathcal{L}$ of $k$-spaces with parameter $x \in \ ]1,2[$. By definition $3$ in Theorem \[theodef\], we know that there are at least two disjoint $k$-spaces $\pi,\pi'\in \mathcal{L}$. By Lemma \[lemmas1s2d1d2\]$(2,3)$ we know that there are $d_2 \leq d_2'$ elements of $\mathcal{L}$ disjoint to $\pi$ and $\pi'$. Since $d_2'$ is negative, we find a contradiction. We continue with a classification result for Cameron-Liebler $k$-sets with parameter $x=1$, where we will use the following result, the so-called Erdős-Ko-Rado theorem for projective spaces. \[EKR1\] If $\mathcal{L}$ is a set of pairwise intersecting $k$-spaces in $\operatorname{PG}(n,q)$ with $n\geq2k+1$, then $|\mathcal{L}| \leq {\genfrac{[}{]}{0pt}{}{n}{k}}$, and equality holds if and only if $\mathcal{L}$ either consists of all $k$-spaces through a fixed point, or $n = 2k+1$ and $\mathcal{L}$ consists of all $k$-spaces in a fixed hyperplane.
\[xgelijkaanee\] Let $\mathcal{L}$ be a Cameron-Liebler set of $k$-spaces with parameter $x=1$ in $\operatorname{PG}(n,q)$, $n\geq2k+1$. Then $\mathcal{L}$ is a point-pencil or $n=2k+1$ and $\mathcal{L}$ is the set of all $k$-spaces in a hyperplane of $\operatorname{PG}(2k+1,q)$. The theorem follows immediately from Lemma \[EKR1\] since, by Theorem \[theodef\]$(3)$, we know that $\mathcal{L}$ is a family of pairwise intersecting $k$-spaces. We continue this section by showing that there are no Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$, $n\geq3k+2$, with parameter $2\leq x\leq q^{\frac{n}{2}-\frac{k^2}{4}-\frac{3k}{4}-\frac{3}{2}}(q-1)^{\frac{k^2}{4}-\frac{k}{4}+\frac{1}{2}} \sqrt{q^2+q+1}$. For this classification result, we will use the following theorem. \[theomussche\] Let $k\geq 1$ be an integer. If $q\geq3$ and $n\geq 2k+2$, or if $q=2$ and $n\geq 2k+3$, then any family $\mathcal{F}$ of pairwise intersecting $k$-subspaces of $\operatorname{PG}(n,q)$ with $\cap_{F \in \mathcal{F}} F = \emptyset$ has size at most ${\genfrac{[}{]}{0pt}{}{n}{k}} -q^{k^2+k}{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} + q^{k+1}$. To ease notation, we denote $q^{\frac{n}{2}-\frac{k^2}{4}-\frac{3k}{4}-\frac{3}{2}}(q-1)^{\frac{k^2}{4}-\frac{k}{4}+\frac{1}{2}} \sqrt{q^2+q+1}$ by $f(q,n,k)$.\ Recall that the set of all $k$-spaces in a hyperplane in $\operatorname{PG}(n,q)$ is a Cameron-Liebler set of $k$-spaces with parameter $x=\frac{q^{n-k}-1}{q^{k+1}-1}$ (see Example \[voorbeeldCL\]) and note that $f(q,n,k) \in \mathcal{O}(\sqrt{q^{n-2k}})$ while $\frac{q^{n-k}-1}{q^{k+1}-1} \in \mathcal{O}(q^{n-2k-1})$. We start with some lemmas.
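To see how far the classification range $2 \le x \le f(q,n,k)$ stays below the parameter of the hyperplane example, one can simply tabulate both quantities. The snippet below is our own illustration (the function names are ours, and $f$ is evaluated in floating point):

```python
def f(q, n, k):
    """The bound f(q,n,k) from the text, evaluated in floating point."""
    return (q ** (n / 2 - k * k / 4 - 3 * k / 4 - 1.5)
            * (q - 1) ** (k * k / 4 - k / 4 + 0.5)
            * (q * q + q + 1) ** 0.5)

def hyperplane_parameter(q, n, k):
    """Parameter of the set of all k-spaces in a hyperplane (Example above)."""
    return (q ** (n - k) - 1) / (q ** (k + 1) - 1)

for q, n, k in [(3, 5, 1), (2, 8, 2), (5, 9, 2)]:
    print(q, n, k, f(q, n, k), hyperplane_parameter(q, n, k))
```

For instance, for $(q,n,k)=(3,5,1)$ one gets $f=\sqrt{26}\approx 5.1$ against a hyperplane parameter of $10$, and the gap widens quickly as $n$ grows, in line with the $\mathcal{O}(\sqrt{q^{n-2k}})$ versus $\mathcal{O}(q^{n-2k-1})$ comparison above.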
\[ongelijkheid\] For $n\geq 2k+2$, we have: $$\begin{aligned} &{\genfrac{[}{]}{0pt}{}{n}{k}}> {\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}>W_\Sigma\;.\\ \text{If also } k\geq2, \ \ &{\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}>q^{nk-k^2} + q^{nk-k^2-1} +q^{nk-k^2-2}\;.\end{aligned}$$ The first inequality follows since ${\genfrac{[}{]}{0pt}{}{n}{k}}$ is the number of $k$-spaces through a point in $\operatorname{PG}(n,q)$, ${\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}$ is the number of $k$-spaces through a point disjoint from a $k$-space not through that point, and $W_\Sigma$ is the number of $k$-spaces through a point and disjoint from two given $k$-spaces not through that point.\ The second inequality, for $k>1$, follows from $$\begin{aligned} {\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}&=\left(\prod_{i=0}^{k-3} \left( \frac{q^{n-k-1-i}-1}{q^{k-i}-1}\right) \right)\left(\frac{q^{n-2k+1}-1}{q^{}-1}\frac{q^{n-2k}-1}{q^2-1} \right) q^{k^2+k}\\ &> q^{(n-2k-1)(k-2)} (q^{n-2k}+q^{n-2k-1}+q^{n-2k-2})q^{n-2k-2} q^{k^2+k}\\ &= q^{nk-k^2} + q^{nk-k^2-1} +q^{nk-k^2-2}\;.\qedhere\end{aligned}$$ \[lemmaongelijkheidklauspargroterdan1\] Let $\mathcal{L}$ be a Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q)$, $n\geq3k+2$, with parameter $2\leq x\leq f(q,n,k)$, then $\mathcal{L}$ cannot contain $x+1$ mutually disjoint $k$-spaces. 
This follows from Lemma \[lemmaklaus\], with $c=x \geq 2$: $$\begin{aligned} &(x+1)s_1 - \binom{x+1}{2}s'_2 > x{\genfrac{[}{]}{0pt}{}{n}{k}} \\ \Leftrightarrow\quad& (x^2+x){\genfrac{[}{]}{0pt}{}{n}{k}} - (x^2-1) {\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k} -\frac{x^3+x^2}{2}{\genfrac{[}{]}{0pt}{}{n}{k}} +(x^3-x){\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k}-\frac{x^2+x}{2}d'_2 > x{\genfrac{[}{]}{0pt}{}{n}{k}}\\ \Leftrightarrow\quad& \frac{x^2-x^3}{2}{\genfrac{[}{]}{0pt}{}{n}{k}} + (x^3-x^2-x+1){\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} > \left(\frac{x^3-x^2}{2}-x\right)W_\Sigma\end{aligned}$$ As ${\genfrac{[}{]}{0pt}{}{n}{k}} \geq {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}$ by the first inequality in Lemma \[ongelijkheid\], the following inequality is sufficient. $$\begin{aligned} \left(\frac{x^3-x^2}{2}-x+1\right) {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} > \left(\frac{x^3-x^2}{2}-x\right)W_{\Sigma}\;.\end{aligned}$$ Since $n\geq3k+2$, we find, by the first inequality in Lemma \[ongelijkheid\] and since $\frac{x^3-x^2}{2}-x+1=(x-1)\left(\frac{x^2}{2}-1\right)>0$ for $x\geq 2$, that the above inequality always holds. \[lemmainequality\] If $x\leq f(q,n,k)$ and $n\geq3k+2$, then $$\begin{aligned} {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} -(x-2)s'_2 > \max \left\{x {\genfrac{[}{]}{0pt}{}{n}{k}} -x {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k},{\genfrac{[}{]}{0pt}{}{n}{k}} - {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}+q^{k+1}\right\}\;. \end{aligned}$$ For $k>1$, we will prove the following inequalities: $$\begin{aligned} &{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} -(x-2)s'_2 > x {\genfrac{[}{]}{0pt}{}{n}{k}} -x {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} > {\genfrac{[}{]}{0pt}{}{n}{k}} - {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}+q^{k+1}\;.\end{aligned}$$ To prove the first inequality, we show that $x\leq f(q,n,k)$ implies it. 
The first inequality is equivalent to $$\begin{aligned} (2x^{2}-5x+5){\genfrac{[}{]}{0pt}{}{n-k-1}{k}}q^{k^2+k} - (x^2-x){\genfrac{[}{]}{0pt}{}{n}{k}} -(x-2)^{2}W_{\Sigma} >0\end{aligned}$$ Since $W_\Sigma \leq {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}$, the following inequality is sufficient: $$\begin{aligned} &(x^2-x+1){\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} -(x^2-x){\genfrac{[}{]}{0pt}{}{n}{k}} >0\\ \Leftrightarrow\quad& {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} > (x^2-x) \left( {\genfrac{[}{]}{0pt}{}{n}{k}} -{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} \right)\end{aligned}$$ Given a hyperplane $\alpha$ in $\operatorname{PG}(n,q)$, the number of $(k-1)$-spaces in $\alpha$ meeting a fixed $k$-space $\pi$ in $\alpha$ equals ${\genfrac{[}{]}{0pt}{}{n}{k}} -{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}$ by Lemma \[lemmadisjunct\]. We know that this number is smaller than the number of points $Q \in \pi$ times the number of $(k-1)$-spaces in $\alpha$ through $Q$. This implies that $$\begin{aligned} {\genfrac{[}{]}{0pt}{}{n}{k}} -{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} \leq {\genfrac{[}{]}{0pt}{}{k+1}{1}}{\genfrac{[}{]}{0pt}{}{n-1}{k-1}} \leq {\genfrac{[}{]}{0pt}{}{k+1}{1}}{\frac{(q^{n-1}-1)\cdots (q^{n-k+1}-1)}{(q^{k-1}-1)\cdots (q^{}-1)}} \leq \frac{q^{nk-\frac{k^2}{2}-n+\frac{3k}{2}+1}}{(q-1)^{\frac{k^2}{2}-\frac{k}{2}+1}}\;.\end{aligned}$$ By the second inequality in Lemma \[ongelijkheid\] we also know that ${\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} \geq q^{nk-k^2}+q^{nk-k^2-1}+q^{nk-k^2-2}$, which gives that the following inequality is sufficient: $$\begin{aligned} (x^2-x) < q^{n-\frac{k^2}{2}-\frac{3k}{2}-3}(q-1)^{\frac{k^2}{2}-\frac{k}{2}+1}(q^2+q+1)\;.\end{aligned}$$ The inequality $(x^2-x) \leq \left(x-\frac{1}{2}\right)^{2}$ implies that $$\begin{aligned} x-\frac{1}{2}< q^{\frac{n}{2}-\frac{k^2}{4}-\frac{3k}{4}-\frac{3}{2}}(q-1)^{\frac{k^2}{4}-\frac{k}{4}+\frac{1}{2}}\sqrt{q^2+q+1}\end{aligned}$$ is sufficient, which is a direct consequence of
$x\leq f(q,n,k)$. We prove the second inequality in a similar way. We have $$\begin{aligned} &x {\genfrac{[}{]}{0pt}{}{n}{k}} -x {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} > {\genfrac{[}{]}{0pt}{}{n}{k}} - {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}+q^{k+1}\\ \Leftrightarrow\quad &(x-1) \left( {\genfrac{[}{]}{0pt}{}{n}{k}} -{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} \right)> q^{k+1}\end{aligned}$$ The number of $(k-1)$-spaces in a hyperplane $\alpha$ of $\operatorname{PG}(n,q)$ meeting a $k$-space $\pi$ in $\alpha$ equals ${\genfrac{[}{]}{0pt}{}{n}{k}} -{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}$ by Lemma \[lemmadisjunct\]. This number is larger than the number of $(k-1)$-spaces in $\alpha$ meeting a $k$-space $\pi$ exactly in one point, which equals ${\genfrac{[}{]}{0pt}{}{k+1}{1}} {\genfrac{[}{]}{0pt}{}{n-k-1}{k-1}}q^{k^2-k}$, also by Lemma \[lemmadisjunct\]. We find that $$\begin{aligned} & (x-1) {\genfrac{[}{]}{0pt}{}{k+1}{1}} {\genfrac{[}{]}{0pt}{}{n-k-1}{k-1}}q^{k^2-k}> q^{k+1}\end{aligned}$$ is sufficient. This last inequality is true since $x\geq 2$ and ${\genfrac{[}{]}{0pt}{}{k+1}{1}}q^{k^2-k} > q^{k^2}>q^{k+1}$; here we needed that $k>1$.\ To end this proof, we only have to show the inequalities for $k=1$ and $n\geq5$. 
First we look at the inequality $$\begin{aligned} &{\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2} -(x-2)s'_2 > {\genfrac{[}{]}{0pt}{}{n}{1}} - {\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2}+q^{2} \\ \Leftrightarrow\quad&(2x^{2}-6x+6){\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2}-(x^{2}-2x+1){\genfrac{[}{]}{0pt}{}{n}{1}}>q^{2}-(x-2)^{2}W_{\Sigma}\;.\end{aligned}$$ Again, since $W_\Sigma \leq {\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2}$, the following inequalities are sufficient: $$\begin{aligned} &(x^2-2x+2){\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2} -(x^2-2x+1){\genfrac{[}{]}{0pt}{}{n}{1}} >q^2 \\ \Leftrightarrow\quad &{\genfrac{[}{]}{0pt}{}{n-2}{1}}q^2-q^2 > (x-1)^2 \left({\genfrac{[}{]}{0pt}{}{n}{1}}-{\genfrac{[}{]}{0pt}{}{n-2}{1}}q^2 \right) \\ \Leftrightarrow\quad &(x-1)^2<\frac{q^n-q^3}{q^2-1} \\ \Leftrightarrow\quad &x\leq \sqrt{\frac{q^{n}-q^{3}}{q^{2}-1}}+1\;.\end{aligned}$$ Since $\sqrt{\frac{q^{n}-q^{3}}{q^{2}-1}} > \sqrt{q^{n-2}-q^{n-5}} = f(q,n,1)$ we proved the first inequality for $k=1$. Now we look at the second inequality. 
$$\begin{aligned} &{\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2} -(x-2)s'_2 > x{\genfrac{[}{]}{0pt}{}{n}{1}} - x{\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2}\\ \Leftrightarrow \quad & (2x^2-5x+5){\genfrac{[}{]}{0pt}{}{n-2}{1}} q^{2} -(x^2-x){\genfrac{[}{]}{0pt}{}{n}{1}} >(x^2-4x+4) W_\Sigma \\ \Leftrightarrow \quad & (2x^2-5x+5)(q^n-q^2) -(x^2-x)(q^n-1) >(x^2-4x+4)(q^n-2q^2+q) \\ \Leftrightarrow \quad & x^2 +(3q-1)x-\frac{q^n+3q^2-4q}{q-1}<0 \end{aligned}$$ As $x\leq f(q,n,1)\leq \sqrt{q^{n-2}-q^{n-5}}$ the following inequality is sufficient: $$\begin{aligned} (q^{n-2} -q^{n-5}) +(3q-1)\sqrt{q^{n-2}-q^{n-5}}-\frac{q^n+3q^2-4q}{q-1} <0\;.\end{aligned}$$ Since $\sqrt{q^{n-2}-q^{n-5}}<q^{\frac{n}{2}-1}$ and $\frac{q^n+3q^2-4q}{q-1}= 4q+\sum^{n-1}_{i=2}q^{i}$ the following inequality is also sufficient: $$\begin{aligned} 0>q^{n-2}-q^{n-5}+3q^{\frac{n}{2}} -q^{\frac{n}{2}-1} -\sum^{n-1}_{i=2}q^{i}-4q=3q^{\frac{n}{2}}-q^{n-1}-q^{n-5}-q^{\frac{n}{2}-1}-\sum^{n-3}_{i=2}q^{i}-4q\;.\end{aligned}$$ For $n\geq5$ we can see that the inequality above holds since $3q^{\frac{n}{2}} < q^{n-1}+q^{n-3}$. \[lemmaLcontainspoint-pencil\] If $\mathcal{L}$ is a Cameron-Liebler set of $k$-spaces in $\operatorname{PG}(n,q), n\geq3k+2$, with parameter $2\leq x\leq f(q,n,k)$, then $\mathcal{L}$ contains a point-pencil. Let $\pi$ be a $k$-space in $\mathcal{L}$. By Theorem \[theodef\](3), we find $(x-1){\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}$ $k$-spaces in $\mathcal{L}$ disjoint to $\pi$. Within this collection, we find, by Lemma \[lemmaongelijkheidklauspargroterdan1\], at most $x-1$ spaces $\sigma_1,\sigma_2,\dots, \sigma_{x-1}$ that are mutually skew. By the pigeonhole principle, we find an $i$ so that $\sigma_i$ meets at least ${\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}$ elements of $\mathcal{L}$ that are skew to $\pi$.
We denote this collection of $k$-spaces disjoint to $\pi$ and meeting $\sigma_i$ in at least one point by $\mathcal{F}_{i}$.\ Now we want to show that $\mathcal{F}_{i}$ contains a family of pairwise intersecting subspaces. For every $\sigma_j \neq \sigma_i$, we find at most $s'_2$ elements that meet $\sigma_i$ and $\sigma_j$. In this way, we find at least ${\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} -(x-2)s_2 \geq {\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} -(x-2)s'_2$ elements of $\mathcal{L}$, meeting $\sigma_i$, disjoint to $\pi$ and disjoint to $\sigma_j$ for all $j\neq i$. We denote this subset of $\mathcal{F}_{i}\subseteq\mathcal{L}$ by $\mathcal{F}'_{i}$. This collection $\mathcal{F}'_{i}$ of $k$-spaces is a set of pairwise intersecting $k$-spaces: if two elements $\alpha, \beta$ in $\mathcal{F}'_{i}$ were disjoint, then $ (\{ \sigma_1,\dots,\sigma_{x-1}\} \setminus \{\sigma_i\})\cup \{\alpha, \beta,\pi \}$ would be a collection of $x+1$ pairwise disjoint elements of $\mathcal{L}$, which is impossible by Lemma \[lemmaongelijkheidklauspargroterdan1\].\ By Lemma \[lemmainequality\] we have ${\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k} -(x-2)s'_2 > {\genfrac{[}{]}{0pt}{}{n}{k}}-{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}+q^{k+1}$ since $2\leq x\leq f(q,n,k)$. This implies that $\cap_{F\in \mathcal{F}'_{i}} F$ is not empty by Theorem \[theomussche\]; let $P$ be the point contained in $\cap_{F\in \mathcal{F}'_{i}} F$. We conclude that $\mathcal{F}'_{i}$ is part of the point-pencil through $P$.\ We now show that $\mathcal{L}$ contains the whole point-pencil through $P$. If $\gamma\notin \mathcal{L}$ is a $k$-space through $P$, then $\gamma$ meets at least ${\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}-(x-2)s'_2 > x{\genfrac{[}{]}{0pt}{}{n}{k}}-x{\genfrac{[}{]}{0pt}{}{n-k-1}{k}} q^{k^2+k}$ elements of $\mathcal{F}'_{i} \subseteq \mathcal{L}$, where the inequality follows from Lemma \[lemmainequality\]. This contradicts Theorem \[theodef\](3).
We see that $\mathcal{L}$ contains the point-pencil through $P$. There are no Cameron-Liebler sets of $k$-spaces in $\operatorname{PG}(n,q)$, $n\geq3k+2$, with parameter $2\leq x\leq q^{\frac{n}{2}-\frac{k^2}{4}-\frac{3k}{4}-\frac{3}{2}}(q-1)^{\frac{k^2}{4}-\frac{k}{4}+\frac{1}{2}}\sqrt{q^2+q+1}$. We prove this result by induction on the parameter $x$. By Lemma \[lemmaLcontainspoint-pencil\] we know that $\mathcal{L}$ contains the point-pencil $[P]_k$ through a point $P$. By Lemma \[basislemma4\](4) $\mathcal{L} \setminus [P]_k$ is a Cameron-Liebler set of $k$-spaces with parameter $(x-1)$, which by the induction hypothesis (in case $x-1\geq2$) or by Lemma \[lemmatussen012\] (in case $1<x-1<2$) does not exist, or which contains a point-pencil (in case $x-1=1$) by Lemma \[xgelijkaanee\]. In the former case there is an immediate contradiction; in the latter case $\mathcal{L}$ should contain two disjoint point-pencils of $k$-spaces, a contradiction. Acknowledgements {#acknowledgements .unnumbered} ---------------- The research of Jozefien D’haeseleer is supported by the FWO (Research Foundation Flanders). [1]{} A. Blokhuis, A.E. Brouwer, A. Chowdhury, P. Frankl, T. Mussche, B. Patkós, T. Szőnyi, *A Hilton-Milner Theorem for Vector Spaces.* Electron. J. Combin. 17, 2010. A.E. Brouwer, A.M. Cohen, A. Neumaier, *Distance-Regular Graphs.* Springer Verlag, Berlin, 1989. A.E. Brouwer, S.M. Cioabǎ, F. Ihringer, M. McGinnis, *The smallest eigenvalues of Hamming graphs, Johnson graphs and other distance-regular graphs with classical parameters*. arXiv:1709.09011, 2017. A.A. Bruen, K. Drudge, *The construction of Cameron-Liebler line classes in $\operatorname{PG}(3,q)$.* Finite Fields Appl., 5(1):35–45, 1999. P.J. Cameron, R.A. Liebler, *Tactical decompositions and orbits of projective groups.* Linear Algebra Appl., 46:91–102, 1982. J. De Beule, J. Demeyer, K. Metsch, M. Rodgers, *A new family of tight sets in $Q^+(5,q)$.* Des. Codes Cryptogr., 78(3):655–678, 2016. M. De Boeck, M. Rodgers, L. Storme, A.
Švob, *Cameron-Liebler sets of generators in finite classical polar spaces.* arXiv:1712.06176, 2017, (submitted). M. De Boeck, L. Storme, A. Švob, *The Cameron-Liebler problem for sets.* Discrete Math., 339(2):470–474, 2016. K. Drudge, *Extremal sets in projective and polar spaces*. PhD thesis, University of Western Ontario, 1998. J. Eisfeld, *The eigenspaces of the Bose-Mesner algebras of the association schemes corresponding to projective spaces and polar spaces.* Des. Codes Cryptogr., 17:129–150, 1999. T. Feng, K. Momihara, Q. Xiang, *Cameron-Liebler line classes with parameter $x=\frac{q^2-1}{2}$.* J. Combin. Theory Ser. A, 133:307–338, 2015. Y. Filmus, F. Ihringer, *Boolean degree 1 functions on some classical association schemes.* arXiv:1801.06034, 2018, (submitted). W.N. Hsieh, *Intersection theorems for systems of finite vector spaces.* Discrete Math., 12:1–16, 1975. A.L. Gavrilyuk, I. Matkin, *Cameron-Liebler line classes in $\operatorname{PG}(3,5)$.* arXiv:1803.10442, 2018. A.L. Gavrilyuk, I. Matkin, T. Penttila, *Derivation of Cameron-Liebler line classes.* Des. Codes Cryptogr., doi:10.1007/s10623-017-0338-4, 6pp., 2017. A.L. Gavrilyuk, K. Metsch, *A modular equality for Cameron-Liebler line classes.* J. Combin. Theory Ser. A, 127:224–242, 2014. A.L. Gavrilyuk, I.Y. Mogilnykh, *Cameron-Liebler line classes in $\operatorname{PG}(n,4)$.* Des. Codes Cryptogr., 73(3):969–982, 2014. C. Godsil, K. Meagher, *Erdős-Ko-Rado Theorems: Algebraic Approaches.* Cambridge Studies in Advanced Mathematics, vol 149. Cambridge Univ. Press, 2016. C. Godsil, M. Newman, *Eigenvalue bounds for independent sets.* J. Combin. Theory Ser. B, 98(4):721–734, 2008. K. Metsch, *A gap result for Cameron-Liebler $k$-classes.* Elsevier, 2017. K. Metsch, *The non-existence of Cameron-Liebler line classes with parameter $2<x<q$.* Bull. Lond. Math. Soc., 42(6):991–996, 2010. K. Metsch, *An improved bound on the existence of Cameron-Liebler line classes.* J. Combin. Theory Ser.
A, 121:89–93, 2014. M. Newman, *Independent sets and eigenspaces.* PhD thesis, University of Waterloo, 2004. M. Rodgers, *Cameron-Liebler line classes.* Des. Codes Cryptogr., 68:33–37, 2013. M. Rodgers, L. Storme, A. Vansweevelt, *Cameron-Liebler $k$-classes in $\operatorname{PG}(2k+1,q)$.* Combinatorica, doi:10.1007/s00493-016-3482-y, 15 pp., 2017. B. Segre, *Lectures on Modern Geometry (with an appendix by L. Lombardo-Radice).* Consiglio Nazionale delle Ricerche, Monografie Mathematiche. Edizioni Cremonese, Roma, 479 pp., 1961. B. Segre, *Teoria di Galois, fibrazioni proiettive e geometrie non desarguesiane.* Ann. Mat. Pura Appl. 64(4):1–76, 1964.
--- author: - | Francesco Vissani\ Deutsches Elektronen-Synchrotron, DESY\ Notkestraße 85, D-22603 Hamburg, Germany, and\ International Centre for Theoretical Physics, ICTP\ Strada Costiera 11, 34100 Trieste, Italy\ E-mail: title: 'Signal of neutrinoless double beta decay, neutrino spectrum and oscillation scenarios' --- Information on neutrino parameters =================================== Massive neutrinos and $0\nu 2\beta$ decay ----------------------------------------- Atmospheric neutrino data can be interpreted in terms of a dominant $\nu_\mu-\nu_\tau$ oscillation channel, although a sub-dominant channel $\nu_\mu-\nu_{\rm e}$ is not excluded [@subdmix]. The latter may be due to a $\nu_e$ component of the heaviest (lightest) neutrino state $\nu_3$ ($\nu_1$) for spectra with “normal” (“inverted”) hierarchy–our definition of “hierarchy” is discussed in section \[secdef\]. Several possibilities are open for the interpretation of the solar neutrino data, depending on the frequencies of oscillation and mixings. Hence, the indications for massive neutrinos are strong. However, there is quite limited knowledge of the [*neutrino mass spectrum itself*]{}, and particularly of the lightest neutrino mass. The search for $0\nu 2\beta$ decay can shed light on this important issue. The bound of 0.2 eV obtained in [@2b0nuexp] on the parameter $${\cal M}_{{\rm ee}}=| \sum_i U_{{\rm e}i}^2\ m_i | \label{eq1}$$ is considerably smaller than the mass scales probed by present studies of $\beta$-decay, or those inferred in cosmology [@PDG]. In eq. (\[eq1\]), the non-negative quantities $m_i,$ $i=1,2,3...N$ are the neutrino masses ($m_{i+1}\ge m_i$); the complex quantities $U_{\ell i},$ $\ell={\rm e},\mu,\tau...,$ are the elements of the mixing matrix, which relates the flavor eigenstates to the mass eigenstates: $\nu_\ell(x)=\sum_i U_{\ell i}\, \nu_i(x).$ Hence, ${\cal M}_{{\rm ee}}$ can be thought of as (the absolute value of) the ee$-$entry of the neutrino mass matrix.
Let us recall that, besides the $(N-1)(N-2)/2$ phases relevant to neutrino oscillations, there are still $N-1$ physical phases in the lepton sector that have no analogy in the quark sector, and arise from the Majorana structure of the neutrino mass matrix. Notice that both the amplitudes [*and*]{} the phases of the elements of the mixing matrix $U_{{\rm e}i}$ are relevant in determining the size of ${\cal M}_{{\rm ee}}.$ Extremal values of ${\cal M}_{{\rm ee}}$ for $0\nu2\beta$ decay --------------------------------------------------------------- We obtain in this section the extremal values of ${\cal M}_{\rm ee}$ under arbitrary variations of the phases, keeping fixed the neutrino masses $m_i$ and the “mixing elements”[^1] $|U_{{\rm e}i}^2|.$ The maximum value of ${\cal M}_{\rm ee}$ is simply: $${\cal M}_{\rm ee}^{max} = \sum_i |U_{{\rm e}i}^2 |\ m_i . \label{eq2}$$ The minimum value can be written as: $${\cal M}_{\rm ee}^{min} = {\rm max}\{\ 2\ |U_{{\rm e}i}^2|\ m_i- {\cal M}_{\rm ee}^{max} ,\ \ 0 \ \}. \label{eq3}$$ To demonstrate this formula, let us consider the absolute value of the sum of three complex numbers: $r=|z_1+z_2+z_3|.$ We want to minimize $r$ by keeping fixed $|z_i|,$ namely, by varying the phases. Let us define the quantities $r_{1,2,3}$ and $q_{1,2,3}$ as: $r_1=|z_1| - |z_2| - |z_3|,$ $q_1=|z_1|-|z_2+z_3|,$ and similar eqs., but permuting the indices for $r_{2,3}$ and $q_{2,3}.$ Notice that [*at most*]{} one of the $r_i$'s is positive. Assuming that $r_1>0,$ it is simple to show that $r^{min}=r_1;$ in fact, using the triangle inequality twice, we get $r\ge | q_1 |=q_1 \ge r_1.$ Similar considerations hold if $r_2>0,$ or $r_3>0.$ The last case has $r_i\le 0$ for $i=1,2,3.$ If one of the $r_i$'s is zero, then $r^{min}=0,$ hence we need to consider the case when $r_i<0$ for all $i$'s. In this case, the quantity $q_1$ goes from negative, when the phases of $z_2$ and $z_3$ are equal, to positive, when these phases are opposite.
By continuity, a phase choice exists such that $q_1=0.$ Since by proper choice of the phase of $z_1$ we can get $r=|q_1|,$ we conclude that, again, $r^{min}=0.$ In conclusion, the general case is covered by the formula: $r^{min}=\mbox{max}\{r_i,\ 0 \}.$ This is equivalent to eq. (\[eq3\]), after noticing that $r_i=2 |z_i|-\sum_{j=1}^3 |z_j|.$ The generalization of these results to $N$ neutrinos is quite simple: the same argument applies to the sum of $N$ complex numbers. However, we will be concerned only with the case of three neutrinos in the rest of the work. The previous two equations give the extremal values of ${\cal M}_{\rm ee},$ once the neutrino spectrum [*and*]{} the mixing elements are known. Such extremal values are important, being independent of the complex phases. The information we get from the experimental upper bound is ${\cal M}_{\rm ee}^{bound} \ge {\cal M}_{\rm ee}^{min};$ the information we could get from a positive signal, instead, is ${\cal M}_{\rm ee}^{signal} \in [{\cal M}_{\rm ee}^{min},\ {\cal M}_{\rm ee}^{max}].$ In the following it will be shown how to use and represent ${\cal M}_{\rm ee}^{min}$ and ${\cal M}_{\rm ee}^{max},$ and what we can learn about them assuming specific neutrino spectra, and scenarios of neutrino oscillations. Representation of ${\cal M}_{\rm ee}^{min}$ and ${\cal M}_{\rm ee}^{max}$ ========================================================================= We introduce and discuss in this section a graphical representation of the values of ${\cal M}_{\rm ee}^{min}$ and ${\cal M}_{\rm ee}^{max}.$ For this purpose we will make reference to fig. \[f:1\], where the representation of ${\cal M}_{\rm ee}^{min}$ is displayed, for an illustrative choice of the neutrino spectrum: $m_3=2\ m_2$ and $m_2=2\ m_1.$ In order to fix the ideas, we point out from the beginning the two essential features of fig.
\[f:1\]: (1) the value of $ {\cal M}_{\rm ee}$ at the vertices, namely the masses of the neutrinos $m_i$; (2) the position of the inner triangle (also determined by the masses of the neutrinos). Let us begin by recalling some basic facts. The three mixing elements $|U_{{\rm e}i}^2|$ are constrained by the unitarity condition $ \sum_i |U_{{\rm e}i}^2|=1.$ This condition can be represented by using the inner region of an equilateral triangle with unit height, where the distance from the $i^{th}$ side represents the value of $|U_{{\rm e}i}^2|,$ see fig. \[f:1\] (this triangle was first used in [@1sttri] to analyze solar neutrino oscillations). To exemplify the use of the triangle, let us consider two special cases: (a) When $\nu_{\rm e}$ is an equal admixture of the three mass eigenstates, we have $|U_{{\rm e}i}^2|=1/3.$ This point is represented by the barycentre of the equilateral triangle of fig. \[f:1\]. (b) When $\nu_{\rm e}$ coincides with the mass eigenstate $\nu_1,$ we have $|U_{{\rm e}1}^2|=1,$ and the other two mixing elements are zero. This point is represented by the $1^{st}$ vertex (by definition, the $1^{st}$ vertex is opposite to the $1^{st}$ side, denoted by the label $|U_{{\rm e}1}^2|$ in fig. \[f:1\], [*etc.*]{}). From eq. (\[eq3\]), $ {\cal M}_{\rm ee}^{min}$ is [*zero*]{} in the inner triangular region represented in fig. \[f:1\]. The vertices of this inner triangle are given by: $$|U_{{\rm e}1}^2|/|U_{{\rm e}2}^2|=m_2/m_1 \mbox{ when } |U_{{\rm e}3}^2|=0 , \label{eq4}$$ and by the two additional equations obtained by the replacement $3\leftrightarrow 1,$ and $3\leftrightarrow 2.$ The condition $|U_{{\rm e}3}^2|=0$ in eq. (\[eq4\]) tells us that we are on the $3^{rd}$ (lower) side of the unitarity triangle of fig. \[f:1\]. At the $i^{th}$ vertex of the unitarity triangle ${\cal M}_{\rm ee}^{min}={\cal M}_{\rm ee}=m_i,$ as is clear from eq. (\[eq3\]), and as illustrated in fig. \[f:1\].
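As a numerical sanity check on eqs. (\[eq2\])–(\[eq3\]) and on the vertex construction of eq. (\[eq4\]), the following Python sketch compares the analytic extrema with a brute-force scan over the two relative Majorana phases, and verifies that the inner-triangle vertices lie on the boundary of the ${\cal M}_{\rm ee}^{min}=0$ region. The masses and mixing elements in the first example are invented, purely illustrative numbers; the second example uses the spectrum of fig. \[f:1\] ($m_3=2\,m_2=4\,m_1$).

```python
import numpy as np

def mee_extrema(m, u2):
    """Analytic extrema of M_ee, eqs. (eq2)-(eq3), with moduli a_i = |U_ei^2| m_i."""
    a = np.asarray(u2) * np.asarray(m)
    return max(2.0 * a.max() - a.sum(), 0.0), a.sum()

# Illustrative (invented) masses in eV and mixing elements obeying unitarity.
m = np.array([0.01, 0.012, 0.05])
u2 = np.array([0.05, 0.05, 0.9])
mee_min, mee_max = mee_extrema(m, u2)

# Brute-force check: scan the two relative Majorana phases on a fine grid.
phi = np.linspace(0.0, 2.0 * np.pi, 721)
p2, p3 = np.meshgrid(phi, phi)
a = u2 * m
r = np.abs(a[0] + a[1] * np.exp(1j * p2) + a[2] * np.exp(1j * p3))
print(mee_min, r.min(), mee_max, r.max())   # analytic and scanned extrema agree

def inner_vertex(k, m):
    """Vertex of the M_ee^min = 0 region lying on side k (u_k = 0).

    From eq. (eq4): the two nonzero elements satisfy u_i m_i = u_j m_j,
    i.e. u_i is proportional to 1/m_i, normalized by unitarity."""
    u = np.zeros(3)
    idx = [i for i in range(3) if i != k]
    w = 1.0 / np.asarray(m)[idx]
    u[idx] = w / w.sum()
    return u

mv = np.array([1.0, 2.0, 4.0])   # the spectrum of fig. 1: m3 = 2 m2 = 4 m1
for k in range(3):
    uv = inner_vertex(k, mv)
    vmin, _ = mee_extrema(mv, uv)
    print(k, uv, vmin)           # each vertex sits on the M_ee^min = 0 boundary
```

The scan reproduces both extrema, and at each vertex the combination $2\max_i a_i - \sum_i a_i$ vanishes, confirming that these points border the cancellation region.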
The value of ${\cal M}_{\rm ee}^{min}$ decreases linearly when moving from one vertex toward the inner triangle. In fact, $ {\cal M}_{\rm ee}^{min}$ is non-zero only close to the vertices of the unitarity triangle (assuming $m_1>0$). This concludes the illustration of fig. \[f:1\]. The unitarity triangle can also be used to represent the maximum possible value of ${\cal M}_{\rm ee}.$ Quite simply, ${\cal M}_{\rm ee}^{max}$ is the function of the mixing elements $|U_{{\rm e}i}^2|$ that interpolates linearly among the values ${\cal M}_{\rm ee}=m_i$ taken at the vertices of the unitarity triangle, as is clear from eq. (\[eq2\]). However, since ${\cal M}_{\rm ee}^{max}$ is just the sum of positive contributions (eq. (\[eq2\])), the analysis of ${\cal M}_{\rm ee}^{max}$ is nearly trivial. Phenomenology of oscillations and $0\nu 2\beta$ {#secdef} =============================================== We discuss now the $0\nu 2\beta$ signal assuming some specific spectra, and scenarios of oscillation, using the graphical representation introduced above. We take advantage of the indications from atmospheric and solar neutrinos, which can be accounted for in terms of two different frequencies of neutrino oscillations, related to the mass differences squared $\Delta m_{atm}^2$ and $\Delta m_{\odot}^2$ ($\Delta m_{atm}^2\gg \Delta m_{\odot}^2$). We consider the following three cases: - Case \[${\cal N}$\]: “normal” hierarchy, $m_1\ll (\Delta m_{atm}^2)^{1/2}$; - Case \[${\cal I}$\]: “inverted” hierarchy, $m_1\ll (\Delta m_{atm}^2)^{1/2}$; - Case \[${\cal D}$\]: “normal” and “inverted” hierarchies, $m_1\gg (\Delta m_{atm}^2)^{1/2}$; from these cases, it will also be easy to understand the “intermediate” situations when $m_1\sim (\Delta m_{atm}^2)^{1/2}.$ With the term “hierarchy” (either “normal” or “inverted”) we refer to [*the mass differences squared*]{} (see eqs.  (\[defnh\]) and (\[defih\]) below)[^2].
We assume that the electronic admixture in atmospheric neutrinos is sub-dominant [@subdmix], and use for the mass splittings $\Delta m_{atm}^2$ and $ \Delta m_{\odot}^2$ the values suggested by the phenomenology. For solar neutrino solutions we use the terminology of [@bks], which we will recall in the following. A similar study has been performed in reference [@fmn], with the goal of extracting information on the mixing angles, knowing ${\cal M}_{{\rm ee}}$ and the neutrino spectrum. For other recent works oriented toward the phenomenology, see [@more]. Case [\[${\cal N}$\]]{}: “normal” hierarchy, [$m_1\ll (\Delta m_{atm}^2)^{1/2}$]{} {#sec-n} --------------------------- What is the expected value of ${\cal M}_{{\rm ee}}$ for a neutrino spectrum with “normal” hierarchy: $$m_3^2-m_2^2=\Delta m^2_{atm}\gg m_2^2-m_1^2=\Delta m^2_{\odot}, \label{defnh}$$ assuming, to begin with, that $m_1$ is negligible? For the values of $\Delta m^2_{\odot}$ suggested by the MSW [@MSW] small mixing angle solution of the solar neutrino problem (SMA) or vacuum oscillation (VO), the only important contribution to the $0\nu 2\beta$ decay rate comes from the heaviest eigenstate: ${\cal M}_{{\rm ee}}\approx |U_{{\rm e}3}^2| m_3.$ It is [*possible*]{} to have a comparable contribution from the second eigenstate assuming MSW solutions of the solar neutrino problem with large mixing angle (LMA): $\delta {\cal M}_{{\rm ee}} |_{\odot} = |U_{\rm e2}^2|\ m_2 \approx 4\times 10^{-3}$ eV (using $\Delta m^2_{\odot}\approx 10^{-4}$ eV$^2$ and $|U_{\rm e2}^2|\approx 0.4$). This is of the same size as the contribution from the heaviest eigenstate, $\delta {\cal M}_{{\rm ee}} |_{atm}=|U_{\rm e3}^2|\ m_3,$ if $|U_{\rm e3}^2|\approx 0.1 $ and $\Delta m^2_{atm}\approx 2\times 10^{-3}$ eV$^2.$ We conclude that, if future experiments searching for the $0\nu 2\beta$ transition prove that $${\cal M}_{{\rm ee}} > 10^{-2} \mbox{ eV} , \label{normsig}$$ the hypothesis of a spectrum with “normal” hierarchy and very small $m_1$ will be disfavoured [@[b1]][^3].
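The order-of-magnitude comparison above can be reproduced in a few lines, using the same input values quoted in the text ($m_1$ negligible, so $m_2\approx(\Delta m^2_{\odot})^{1/2}$ and $m_3\approx(\Delta m^2_{atm})^{1/2}$):

```python
import numpy as np

# Input values as quoted in the text (eV^2 and dimensionless).
dm2_sol, u_e2_sq = 1e-4, 0.4    # LMA solar solution
dm2_atm, u_e3_sq = 2e-3, 0.1    # atmospheric splitting

d_sol = u_e2_sq * np.sqrt(dm2_sol)   # |U_e2^2| m2, solar contribution
d_atm = u_e3_sq * np.sqrt(dm2_atm)   # |U_e3^2| m3, atmospheric contribution
print(d_sol, d_atm)   # both ~ 4e-3 eV: the two contributions are comparable
```

Both contributions come out near $4\times 10^{-3}$ eV, so neither can be neglected in the LMA case.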
The function ${\cal M}_{{\rm ee}}^{min}$ is represented in fig. \[f:2\] for two different values of $\Delta m^2_{\odot}:$ $10^{-4}$ eV$^2$ in the $1^{st}$ plot, and $10^{-5}$ eV$^2$ in the $2^{nd}$ (we assumed $\Delta m^2_{atm}=2\times 10^{-3}$ eV$^2$). Notice that assuming $m_1=0$ the inner triangle of fig. \[f:1\] degenerates into a line (for much smaller values of $\Delta m^2_{\odot},$ say for VO, the line practically coincides with the side $U_{{\rm e}3}=0$). Recalling that the inner triangle corresponds to the region where ${\cal M}_{{\rm ee}}^{min}=0,$ we appreciate from fig. \[f:2\] the crucial dependence of the $0\nu2\beta$ transition rate on the parameter $|U_{{\rm e}3}^2|.$ Let us now increase the size of $m_1,$ keeping $m_1 \ll (\Delta m^2_{atm})^{1/2}\approx m_3.$ The (degenerate) inner triangle in fig. \[f:2\] becomes an [*obtuse*]{} isosceles triangle when $m_1\approx m_2 \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$} (\Delta m^2_{\odot})^{1/2},$ with the base parallel to the $3^{rd}$ side, where $U_{{\rm e}3}=0.$ A complete suppression of the $0\nu 2\beta$ transition can take place for those solutions of the solar neutrino problem that fall in this inner triangle, and for this reason, the most important conclusion is unchanged: The size of $|U_{{\rm e}3}^2|$ is very important in determining whether the case ${\cal M}_{{\rm ee}}^{min}=0$ is possible or not. More precisely, this mixing element has to be compared with the height of the triangle, $\sim m_1/m_3$ (see fig. \[f:1\]). Incidentally, we notice the simple formula $${\cal M}_{{\rm ee}}\approx |m_1 + |U_{{\rm e}3}^2|\ m_3\ e^{i \varphi}| \ \ \ \mbox{ where } \varphi={\rm arg}[\, U_{{\rm e}3}^2/U_{{\rm e}1}^2\, ] \label{smanorm}$$ valid for the SMA case, which illustrates that ${\cal M}_{{\rm ee}}\approx 0$ is possible when $m_1/m_3 \approx |U_{{\rm e}3}^2|$ ($m_3\approx (\Delta m^2_{atm})^{1/2}$ under the present hypotheses) and the phases of $U_{{\rm e}3}^2$ and $U_{{\rm e}1}^2$ are opposite.
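The cancellation mechanism of eq. (\[smanorm\]) is easy to visualize numerically. In the sketch below the mixing element $|U_{\rm e3}^2|$ is an invented illustrative value, $m_3$ is fixed by the atmospheric splitting, and $m_1$ is tuned exactly to the cancellation condition $m_1/m_3=|U_{\rm e3}^2|$:

```python
import numpy as np

# Illustration of eq. (smanorm): M_ee ≈ |m1 + |U_e3^2| m3 e^{i phi}|.
m3 = np.sqrt(2e-3)        # ~ 0.045 eV, from dm2_atm = 2e-3 eV^2
u_e3_sq = 0.05            # assumed (invented) sub-dominant mixing element
m1 = u_e3_sq * m3         # tuned to the cancellation condition m1/m3 = |U_e3^2|

phi = np.linspace(0.0, np.pi, 181)
mee = np.abs(m1 + u_e3_sq * m3 * np.exp(1j * phi))
print(mee[0], mee[-1])    # maximal (= 2 m1) at phi = 0, vanishes at phi = pi
```

For opposite phases ($\varphi=\pi$) the two terms cancel exactly, while for aligned phases they add up to $2m_1$, spanning the full range between the extrema.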
Case [\[${\cal I}$\]]{}: “inverted” hierarchy, [$m_1\ll (\Delta m_{atm}^2)^{1/2}$]{} {#sec-i} ------------------------------------------------------------------------------------ Let us assume a spectrum with “inverted” hierarchy, namely $$m_3^2-m_2^2=\Delta m^2_{\odot} \ll m_2^2-m_1^2=\Delta m^2_{atm}, \label{defih}$$ and suppose, to begin with, that $m_1$ is negligible. In this case, since the sub-dominant mixing element is $|U_{{\rm e}1}^2|,$ we can obtain large maximum values [@[b2]]: $${\cal M}_{{\rm ee}}^{max} \approx (\Delta m^2_{atm})^{1/2} =(3 \mbox{ to } 9) \times 10^{-2} \mbox{ eV.} \label{invsig}$$ This could be close to the present bound [@2b0nuexp], if the nuclear matrix elements also take the highest values allowed by present uncertainties, $\sim 2-3$ [@simk]. Under these hypotheses, ${\cal M}_{{\rm ee}}^{min}$ can be (close to) zero only if $|U_{{\rm e}2}^2|$ is very close to $|U_{{\rm e}3}^2|,$ the contribution from $|U_{{\rm e}1}^2|$ being irrelevant. In a graphical representation like that of fig. \[f:2\], this corresponds to the fact that the inner triangle almost coincides with the bisector $|U_{{\rm e}2}^2| = |U_{{\rm e}3}^2|$ (the “small” mixing element $|U_{{\rm e}1}^2|$ is represented by the distance from the $1^{st}$ (right) side). Let us increase the size of $m_1,$ keeping $m_1 \ll (\Delta m^2_{atm})^{1/2}\approx m_3.$ The inner triangle is, under this assumption, [*acute*]{} isosceles, the base being parallel to the side $U_{{\rm e}1}=0,$ and with length $\sim m_1/m_3\times 2/\sqrt{3}.$ Hence, only those solutions of the solar neutrino problem which have almost maximal mixing angles (VO, averaged oscillations and perhaps LMA) fall in the region where the $0\nu 2\beta$ transition rate may be strongly suppressed. In the case of SMA, since $|U_{\rm e3}^2|$ is small by assumption (and $|U_{\rm e1}^2|$ is not large) we have simply: $${\cal M}_{{\rm ee}}\approx m_2\approx (m_1^2 + \Delta m^2_{atm})^{1/2}.
\label{smainverted}$$ Hence, ${\cal M}_{{\rm ee}}\approx 0$ is impossible if the SMA solution is correct. Quite generally, in the case of “inverted” hierarchy, it is less likely that ${\cal M}_{{\rm ee}}^{min}$ is zero. Case [\[${\cal D}$\]]{}: “nearly degenerate” spectrum, [$m_1\gg (\Delta m_{atm}^2)^{1/2}$]{} {#sec-d} -------------------------------------------------------------------------------------------- The largest values of ${\cal M}_{{\rm ee}}$ (up to the experimental bound) are attained for a “nearly degenerate” neutrino spectrum [@[1]; @[2]]. The maximum value is simply ${\cal M}_{{\rm ee}}^{max}=m_1+{\cal O}(\Delta m^2/m_1),$ $m_1$ playing the role of the offset of the mass spectrum. The corresponding minimum value, ${\cal M}_{{\rm ee}}^{min}/m_1={\rm max}\{ 2 |U_{{\rm e}i}^2|-1,\ 0 \},$ is represented in fig. \[f:3\] assuming “normal” hierarchy of the mass differences (eq. (\[defnh\])); ${\cal O}(\Delta m^2/m_1^2)$ terms have been neglected. From this figure it is visible that, to properly interpret the results of $0\nu 2\beta$ decay studies (and possibly, to exclude the inner region in the $1^{st}$ plot, the one where ${\cal M}_{{\rm ee}}\ll m_1$ is [*possible*]{}) we need precise information on the mixing elements. This requires distinguishing among oscillation scenarios. The plots also illustrate the importance of quantifying the size of $|U_{\rm e3}^2|$ [@[1]; @[2]], [@subdmix]. Similar considerations apply when the mass differences have “inverted” hierarchy, eq. (\[defih\]) with $|U_{\rm e1}^2|$ playing the role of $|U_{\rm e3}^2|.$ Notice in particular that with approximate mass degeneracy the role of the sub-dominant mixing is almost the same for “normal” and “inverted” hierarchy; this should be contrasted with the conclusions for the cases \[${\cal N}$\] and \[${\cal I}$\], when $m_1\ll (\Delta m^2_{atm})^{1/2}$. In the particular case of the SMA solution, eq.
(\[smanorm\]) is still valid, with $m_1\approx m_3$ (and $U_{\rm e3}\to U_{\rm e1}$ for “inverted” hierarchy); hence, up to sub-dominant mixing terms, ${\cal M}_{{\rm ee}}\approx {\cal M}_{{\rm ee}}^{max} \approx m_1,$ and a complete cancellation is impossible. A complementary representation {#sec-cr} ------------------------------ In order to recapitulate and confirm the results obtained in this section, we present a complementary graphical representation. Supposing that the mixing elements are known with good precision, we can plot the range of values of ${\cal M}_{{\rm ee}}$ as a function of the only residual parameter: The mass of the lightest neutrino[^4]. This is done in fig. \[f:4\], where we assume the mass splittings $\Delta m^2_{atm}=2\times 10^{-3}$ eV$^2$ and $\Delta m^2_{\odot}=10^{-4}$ eV$^2$ for “normal” and “inverted” hierarchy. The mixing $|U_{\rm e3}^2|$ (resp. $|U_{\rm e1}^2|$) with the heaviest (resp. lightest) state is $0, 2, 4$ and $6 \times 10^{-2}$ in the four types of curves, going from inner to outer ones. We fixed $|U_{{\rm e}2}^2|=0.4$ (resp. $|U_{{\rm e}3}^2|=0.4$), which corresponds roughly to an LMA solution. The figure confirms the conclusions obtained in section \[sec-n\] for the case \[${\cal N}$\], about the importance of $\Delta m^2_{\odot},$ and of the small mixing element $|U_{\rm e3}^2|.$ For the case \[${\cal I}$\], instead, $|U_{\rm e1}^2|$ and $\Delta m^2_{\odot}$ are less important, in agreement with the discussion in section \[sec-i\]. This representation emphasizes that even a [*null*]{} experimental result may provide very important information on the massive neutrino parameters: In fact, an experimental bound ${\cal M}_{{\rm ee}} \raisebox{-.4ex} {\rlap{$\sim$}} \raisebox{.4ex}{$<$} 10^{-2}$ eV could rule out the assumption of “inverted” hierarchy, see the second plot of fig.
\[f:4\]; or, a bound on ${\cal M}_{{\rm ee}}$ at the $10^{-3}$ eV level could amount to a measurement of the lightest neutrino mass, see the first plot of the same figure. Unfortunately, the value of $m_1$ determined in this way depends strongly on the parameters of oscillation, since: $${\cal M}_{{\rm ee}}^{min}= \left|\ |U_{{\rm e}2}^2|\ (\Delta m^2_\odot)^{1/2} -|U_{{\rm e}3}^2| \ (\Delta m^2_{atm})^{1/2}\ \right|\ \ \ \ \mbox{for }m_1=0;$$ so that, even in the LMA case we are considering, it will be a real challenge to prove that $m_1\neq 0.$ Concluding remarks ================== On the case ${\cal M}_{{\rm ee}}\approx 0$ ------------------------------------------ We regarded ${\cal M}_{{\rm ee}}$ as a function of several parameters: the mixing elements, the squared mass splittings, the mass of the lightest neutrino and the complex phases. Following this approach, one may be led to wonder whether the cases when the rate is small as a consequence of cancellations among the various parameters are (in some sense) “natural”. We show here how the smallness can arise in a “natural” manner. Let us postulate that the neutrino mass matrix has a hierarchical structure, analogous to the structure of the Yukawa couplings of the charged fermions. In this case, we can expect that the “ee-entry” of the neutrino mass matrix ($={\cal M}_{{\rm ee}}$) is the smallest one, and also ${\cal M}_{{\rm ee}}\ll (\Delta m^2_{atm})^{1/2}.$ This is what happens in the two models of references [@yanaram], where: $${\cal M}_{{\rm ee}}\approx (\Delta m^2_{atm})^{1/2}\times (\sin\theta_C)^{2 n} ;$$ $\theta_C$ is the Cabibbo angle, and $n=2,3$ in the two models, respectively. The value of ${\cal M}_{{\rm ee}}$ in these models is rather small (see also [@yb]).
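The size of ${\cal M}_{\rm ee}$ in these hierarchical models is easy to quantify. The value $\sin\theta_C\approx 0.22$ used below is the standard Cabibbo-angle input (an assumption, not stated in the text), and $\Delta m^2_{atm}=2\times 10^{-3}$ eV$^2$ is the splitting used earlier:

```python
import numpy as np

# Order-of-magnitude value of M_ee in the hierarchical mass-matrix models:
# M_ee ~ sqrt(dm2_atm) * (sin theta_C)^(2n), with n = 2, 3.
sin_thc = 0.22                # assumed Cabibbo value
scale = np.sqrt(2e-3)         # sqrt(dm2_atm), ~ 0.045 eV
mee_n2 = scale * sin_thc ** 4 # n = 2
mee_n3 = scale * sin_thc ** 6 # n = 3
print(mee_n2, mee_n3)         # ~1e-4 eV and ~5e-6 eV: indeed "rather small"
```

With these inputs the two models give roughly $10^{-4}$ eV and $5\times 10^{-6}$ eV, far below the signal levels discussed above.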
Although the contribution from the third family is modest, LMA solutions with relatively large mass splittings are possible in this type of model [@large], which [*a priori*]{} may imply much larger values of ${\cal M}_{{\rm ee}},$ as remarked for the case of section \[sec-n\]. Thus, these models provide examples of cases when ${\cal M}_{{\rm ee}}$ is small as a consequence of cancellations among the various contributions. In another sense, the statement ${\cal M}_{{\rm ee}}\approx 0$ is surely “natural” in a standard model framework, since at the one-loop level the radiative corrections are tiny: $\sim y_e^2/(4\pi)^2\sim 5\times 10^{-14},$ where $y_e$ is the electron Yukawa coupling. What is the maximum value of ${\cal M}_{{\rm ee}}?$ --------------------------------------------------- Let us briefly summarize the results of section \[secdef\], about an aspect of importance for experimental searches: The maximum value of ${\cal M}_{{\rm ee}}$ that we can [*a priori*]{} expect. For given mixing elements, ${\cal M}_{{\rm ee}}^{max}$ [*increases*]{} in passing from the case discussed in section \[sec-n\] (case \[${\cal N}$\]) to section \[sec-i\] (case \[${\cal I}$\]), and finally to section \[sec-d\] (case \[${\cal D}$\]). Indeed, ${\cal M}_{{\rm ee}}^{max}$ reaches at most the $10^{-2}$ eV level in case \[${\cal N}$\], depending on the sub-dominant mixing element $|U_{{\rm e}3}^2|$ and on the scenario of oscillation (eq. (\[normsig\])); it can be of the order of 3 to $9\times 10^{-2}$ eV in the case \[${\cal I}$\], depending on the size of $\Delta m^2_{atm}$ (eq. (\[invsig\])); finally, ${\cal M}_{{\rm ee}}^{max}$ can be as large as the experimental upper limit of 0.2 eV in the case \[${\cal D}$\]. In this sense, the [*a priori*]{} hope of a positive experimental result increases when going from \[${\cal N}$\] to \[${\cal I}$\], and from \[${\cal I}$\] to \[${\cal D}$\][^5].
Studies of neutrino oscillations and search for $0\nu 2\beta$ decay ------------------------------------------------------------------- We have shown that the parameters of oscillations are closely related to the possible value of the $0\nu2\beta$ decay rate. However, the dependence on the type of spectrum is also essential. We summarize here some results of special interest (referring to the previous section for details): $\bullet$ For the small angle MSW solution, ${\cal M}_{\rm ee}$ is quite large for “inverted” hierarchy in the case $m_1\ll (\Delta m^2_{atm} )^{1/2},$ see eq. (\[smainverted\]); for “normal” hierarchy, we have instead eq. (\[smanorm\]), which is smaller than the previous case by a factor of $|U_{\rm e3}^2|$ when $m_1$ is small, and possibly even smaller (eq. (\[smanorm\])).\ $\bullet$ For the large mixing angle MSW solution, contributions from the “solar” frequency, of order $(\Delta m^2_{\odot} )^{1/2},$ are [*not*]{} negligible, and they may lead to cancellations (or enhancements) depending on the size of $|U_{\rm e3}^2|$ in the case of “normal” hierarchy (sections \[sec-n\] and \[sec-cr\]).\ $\bullet$ For the VO solution, and “normal” hierarchy, the dependence of ${\cal M}_{\rm ee}^{min}$ on $|U_{\rm e3}^2|$ is quite appreciable (section \[sec-n\]).\ $\bullet$ For “inverted” hierarchy, cancellations are not easy to obtain if $m_1$ is small in comparison with $(\Delta m^2_{atm} )^{1/2},$ except for solutions of the solar neutrino problem with almost maximal mixing angles (section \[sec-i\]).\ $\bullet$ The largest values of ${\cal M}_{\rm ee}$ are attained in the case of a “nearly degenerate” spectrum, $m_1\gg (\Delta m^2_{atm} )^{1/2}$ (section \[sec-d\]). In this extreme case, cancellations are possible especially for quite large mixing angle solutions, with relevant dependence on the size of the sub-dominant mixing, for both “normal” and “inverted” hierarchies.
Conclusions and perspectives ---------------------------- In this work, we discussed the interplay between the studies of neutrino oscillations and the search for $0\nu 2\beta$ decay. We introduced new graphical representations, aimed at clarifying the relations between the neutrino spectra, the scenarios of oscillations and the rate of the neutrinoless double beta decay. As for the perspectives, it should be noted that the present information on massive neutrinos is compatible with quite different oscillation scenarios and neutrino spectra. Future experiments aiming at a signal of the $0\nu 2\beta$ process above the $10^{-2}$ eV level [@genius] will have an important role in deciding among the alternative possibilities. I thank R. Barbieri, C. Giunti, M. Maris and A. Yu. Smirnov for useful discussions, and the Referee of the work for having suggested important improvements. Earlier accounts were presented in [@prevacc]. [99]{} G.L. Fogli, E. Lisi, A. Marrone and G. Scioscia, \[\];\ G.L. Fogli, talk at the $V\!\!\, I\!\!\, I\!\!\, I$ Int. Workshop on “Neutrino Telescopes”, Venice, Feb. 99. For most recent results (Heidelberg-Moscow experiment):\ L. Baudis et al., [*Limits on the Majorana neutrino mass in the 0.1 eV range*]{}, . See for instance C. Caso [*et al*]{}, ; PDG internet site at: <http://pdg.lbl.gov/>. G. L. Fogli, E. Lisi and D. Montanino, \[\]. J. N. Bahcall, P. I. Krastev and A. Yu. Smirnov, \[\]. T. Fukuyama, K. Matsuda and H. Nishiura, \[\]; \[\]. V. Barger and K. Whisnant, [*Majorana Neutrino Masses from Neutrinoless Double Beta Decay and Cosmology*]{}, ;\ C. Giunti, [*Neutrinoless double-beta decay with three or four neutrino mixing,*]{} . L. Wolfenstein, ;\ S. P. Mikheyev and A. Yu. Smirnov, \[\]; . S. M. Bilenkii, A. Bottino, C. Giunti and C. W. Kim, \[\]; see also second paper in [@[1]]. D. O. Caldwell and R. Mohapatra, ;\ S. T. Petcov and A. Yu. Smirnov, \[\];\ A. S. Joshipura, . A. Faessler and F.
Šimkovic, , reviewed the nuclear physics aspects of the $0\nu 2\beta$ decay. Calculations of the matrix elements are compared in table $V$ therein. S. M. Bilenkii, C. Giunti, C. W. Kim and S. T. Petcov, \[\]. H. Minakata and O. Yasuda, \[\];\ F. Vissani, [*A study of the scenario with nearly degenerate Majorana neutrinos*]{}, (compare also with H. Georgi and S. L. Glashow, [*Neutrinos on Earth and in the Heavens*]{}, ). J. Sato and T. Yanagida, \[\];\ N. Irges, S. Lavignac and P. Ramond, \[\]. W. Buchmüller and T. Yanagida, \[\]. F. Vissani, \[\]. J. Hellmig and H. V. Klapdor-Kleingrothaus, *Z. Physik* [**A 359**]{} (1997) [351]{} \[\];\ H. V. Klapdor-Kleingrothaus, . A. Dighe, S. Pastor and A. Yu. Smirnov, [*The physics of relic neutrinos*]{}, , proceedings of the “Relic Neutrino” workshop, Trieste, Sep. 98;\ F. Vissani, IC/99/36, contributed work for “Sixth Topical Seminar on Neutrino and Astro-Particle Physics”, San Miniato, May 99. [^1]: In the following, we will always use the term “mixing elements” for the absolute values of the elements of the mixing matrix. [^2]: In order to simplify the connection with the phenomenology, we use a definition of “hierarchy” that is relevant to neutrino oscillations, which involves [*just*]{} the mass differences squared. Notice that sometimes in the literature, “hierarchy” is used in reference to the neutrino spectrum itself. [^3]: Alternatively, one should postulate a different origin of the $0\nu 2\beta$ decay. [^4]: In practice, this representation will be useful when the parameters of oscillation are known reliably. [^5]: On the contrary, one might argue that the case \[${\cal N}$\] is more likely than \[${\cal I}$\], and the latter more likely than \[${\cal D}$\], again on the basis of an analogy between the neutrino spectrum and the spectra of the charged fermions.
--- abstract: 'The problem of weakly correlated electrons on a square lattice is studied theoretically. A simple renormalization group scheme for the angle–resolved weight $Z(\theta)$ of the quasiparticles at the Fermi surface is presented and applied to the Hubbard model. Upon reduction of the cutoff the Fermi surface is progressively destroyed from the van Hove points toward the zone diagonals. Due to the renormalized $Z(\theta)$, divergences of both antiferromagnetic and superconducting correlation functions are suppressed at the critical scale, where the interactions diverge.' author: - Dražen Zanchi title: 'Angle–Resolved Loss of Landau Quasiparticles in 2D Hubbard Model' --- LPTHE/01-17 April 2001 Understanding of the one–particle spectrum of strongly correlated systems near the metal–insulator transition is an extremely difficult task, particularly if one wants to construct a microscopic theory. One standard example is the pseudogap regime of the HTC superconductors. ARPES measurements [@ARPES_psgap] showed that the Fermi surface is destroyed by correlations. This happens first near the van Hove points, where the one-particle spectrum develops the characteristic 2-peak structure. The remaining parts of the Fermi surface, often called Fermi patches, get progressively narrower around the Brillouin zone diagonals as the temperature decreases. Regions around the van Hove points contain a non–Fermi liquid with a pseudogap and other signatures of strong correlations such as flat bands.[@flat_band] In other words, the pseudogap has a form similar to the absolute value of the $d_{x^2-y^2}$–superconducting (SC) order–parameter. This picture of the pseudogap is in agreement with STM results [@tunel_psgap] as well, regardless of the details of how these results are interpreted. The above experiments still do not reveal much about the origins of the pseudogap, namely whether it is simply the signature of a liquid of pre–formed pairs or something much richer in fluctuations.
In fact it is known that the antiferromagnetic (AF) fluctuations are also strong in the pseudogap regime.[@neutrons] If we assume that AF and SC fluctuations are somehow [*together*]{} the major reason for the strong renormalization of the one–particle selfenergy, and for the consequent partial destruction of the Fermi surface, then a many–body analysis of the one–particle propagator can be done in a controlled way. In fact the weak coupling theory easily reproduces AF and SC fluctuations from particle–hole (p-h) and particle–particle (p-p) loop–logarithms. Even if the coupling in realistic HTC systems is of the order of the Fermi energy (i.e. intermediate–to–strong), the weak–coupling theory already contains the observed two–particle correlations. In this paper I answer the question of how the angle–resolved quasiparticle weight $Z(\theta)$ is renormalized by strong and coupled AF and SC fluctuations, and what the main consequences of the renormalized $Z(\theta)$ are for the characteristic angle–resolved two–particle correlation functions. The renormalization of the quasiparticle weight in 2D was recently studied by Kishine and Yonemitsu.[@Kishine99] To calculate the renormalization of $Z$ resolved in position along a flat Fermi surface they used a two–loop selfenergy expansion with the two–loop–renormalized vertex. The results show clearly that the flatness of the Fermi surface induces the suppression of the quasiparticle residue and that this effect is anisotropic. In the present work I consider the whole square Fermi surface of the Hubbard model. For this purpose I employ the N–patch renormalization group theory.
After recent theoretical studies from several groups it emerged that the N–patch model describes in a systematic and controlled way weakly correlated electrons near half–filling, and explains the major aspects of HTC-s.[@these; @doucot_97; @ZS_prb_00; @Halboth_00; @Honerkamp_01; @Tsai_Marston] Until now the RG analysis of the N–patch model has been done only on the level of the two–particle scattering amplitudes or, in field–theoretical jargon, of the four–point vertex $U(K_1,K_2,K_3)$. The analysis of the renormalization group flow of $U$ as a function of the three patch indices $i_1,i_2,i_3$ gave several important results. Typically the amplitudes $U(i_1,i_2,i_3)$ diverge at some interaction– and doping–dependent critical energy scale $\Lambda_c$. For the case of the Hubbard model, we distinguish two main renormalization regimes [@ZS_prb_00], the [*parquet regime*]{} and the [*BCS regime*]{}. In the parquet regime ($|\mu| < \Lambda$) both particle-particle and particle-hole propagators have strong contributions to the beta–function due to the van Hove singularities and the Fermi surface nesting. In this regime, provided $\Lambda \rightarrow \Lambda_c$ and neglecting the selfenergy corrections, both SC and AF tendencies are strong and build [*divergent*]{} correlation functions $\chi ^{SC}$ and $\chi ^{AF}$. The dominant component of the antiferromagnetic susceptibility is of the $s$–type and the dominant component of the superconducting one is of the $d_{x^2-y^2}$–type. Both the static compressibility $\chi _c$ and the homogeneous magnetic susceptibility $\chi _s$ go to zero as the cutoff approaches its critical value $\Lambda_c$.[@these; @Halboth_00; @Honerkamp_01] Consequently $\Lambda_c$ is the energy scale of the crossover between the strange metal and the strongly correlated regime with a gap or pseudogap.
A question arising from already existing results on the Hubbard model [@these; @ZS_prb_00] and on its extensions [@Halboth_00; @Honerkamp_01; @Tsai_Marston] is the following: if one wants to interpret the critical scale $\Lambda_c$ (or temperature) as the energy (temperature) $T^*$ for the onset of the pseudogap, why, then, are not all signatures of the pseudogap reproduced? This means in particular that $\chi ^{AF}$ and $\chi ^{SC}$ should be finite and not diverging. It thus becomes necessary to calculate the correlation functions with the corrections due to the one–particle selfenergy. At stronger doping ($|\mu| > \Lambda$) nesting properties get weaker, so that eventually the only remaining renormalization channel is the superconducting (p-p) one. This is the BCS regime. There the superconductivity is simply BCS–like, with the coupling constants and the angular profile of the order parameter determined at higher scales by a parquet–like flow, where $\Lambda$ was larger than the chemical potential. Equivalent to the RG approach is the fast parquet theory [@DzYak; @Zheleznyak], where the $\theta$ variable (the continuum version of the patch index) is called the fast variable, in addition to the cutoff logarithm, called the slow variable. In two dimensions the parquet integro–differential equations always have mobile pole solutions, i.e. the AF and SC fluctuations decouple from each other. This seems to be in disagreement with the RG results, where we detected only immobile poles, the type of solution in which all scattering amplitudes develop the pole at the same scale $\Lambda_c$. The question of the agreement between the two theories is still controversial. However, the results of De Abreu and Douçot [@doucot_00] indicate that the mobile pole solution is dominant only in the very vicinity of $\Lambda_c$ and that the fixed pole solution is an [*intermediate*]{} solution, valid over several decades of energy scale.
The width of the “very vicinity” characterized by the mobile poles depends on the coupling constant, so that for reasonable and not too weak $U_0$ the final regime is so close to $\Lambda_c$ that the couplings are already too strong and out of reach of a weak coupling theory. Consequently the real physical interpretation can be given only to the immobile pole regime. We will concentrate on this “intermediate” regime in which all fluctuations are coupled and at least behave as if having an immobile pole. We suppose that the electronic Green function has the form $$\label{GF_Z} G_l(K)=\frac{Z_l(\theta)}{i\omega-\xi({\bf k})}\; .$$ $Z_l(\theta)$ is the angle–resolved, scale-dependent quasiparticle weight and $\xi({\bf k})$ is the tight–binding dispersion. The formalism keeps the notation introduced in ref. [@ZS_prb_00]. The form (\[GF\_Z\]) contains two main approximations. The first one is to keep track only of the renormalization of the coherent part of the propagator. The second approximation is to assume that the spectrum $\xi({\bf k})$ remains non–renormalized. This assumption implies that we ignore the flow of the Fermi surface (FS) and of the Fermi velocity. Because of the particle–hole symmetry the flow of the FS is zero at half–filling, where we can expect that our form of the Green function is closer to reality than in the imperfectly nested (non–half filled) case. The flow equation for $Z(\theta)$ is derived from the general and exact one–loop RG equation for the complete selfenergy $\Sigma(K)$ given in [@ZS_prb_00]. Let us suppose that we are at some scale $l$, and that we know the propagator (\[GF\_Z\]).
We integrate out a further shell $dl$ and ask for the effective two–point vertex $\Gamma _2$ in the effective action $S(l+dl)$; it is $$\label{Gamma_2} \Gamma_2(l+dl)=Z_l^{-1}(\theta)(i\omega-\xi({\bf k}))+d\Sigma_l(K)\; .$$ To find $Z_{l+dl}$ we expand $d\Sigma_l(K)$ to first order in $i\omega$ to obtain $$\label{noviZ} Z_{l+dl}=Z_l(1-Z_l\partial_{i\omega}d\Sigma) \; .$$ The differential equation for $Z$ follows immediately: $$\label{DlZ} \partial _lZ_l(\theta)=-Z_l^2(\theta)\, \partial _{i\omega}\left[ \partial _l \Sigma(K)\right] \arrowvert _{\xi=i\omega=0 } \; .$$ Only the terms of $\partial_l\Sigma$ which are linear in energy contribute. These are just the terms which are marginal upon zeroth order scaling in Shankar’s sense.[@Shankar] We will look for these terms. The equation for $\partial _l\Sigma$ can be written as $$\label{DlS} \partial _l\Sigma(\theta,\epsilon,\omega)=\frac{\Lambda}{(2\pi)^2}\int \frac{d\omega'}{2\pi}\sum_{\nu}\int{\cal J}_\nu(\theta',\Lambda)d\theta'\; G_l(\theta',\omega',\nu\Lambda){\cal D}_l(K,K'_{\nu}) \; ,$$ where ${\cal D}_l=2F_l-\tilde{F}_l$; $F_l$ and $\tilde{F}_l$ are energy–momentum–dependent forward and backward scattering processes at the scale $l$, related to the effective interaction in such a way that $F_l(K_1,K_2)=U(K_1,K_2,K_1)$ and $\tilde{F}_l(K_1,K_2)=U(K_1,K_2,K_2)$. ${\cal J}_\nu(\theta,\Lambda)$ is the angle–resolved density of states at energy $\xi=\nu\Lambda$. There is another, approximate but physically justified way to decompose ${\cal D}$. In fact, we will [*suppose*]{} that ${\cal D}_l$ can be written as a sum of p-p and p-h terms: $$\label{Decomp} {\cal D}_l(K,K')={\cal D}_l^{pp}(K+K')+{\cal D}_l^{ph}(K-K')\; .$$ The p-p part of the propagator ${\cal D}$ depends only on the total energy–momentum $Q_{pp}=(\omega _{pp},{\bf q}_{pp})\equiv K+K'$ while the p-h part depends only on the energy–momentum transfer $Q_{ph}=(\omega _{ph}, {\bf q}_{ph})\equiv K-K'$.
As usual we drop the irrelevant part of the dependence on ${\bf q}_{pp}$ and ${\bf q}_{ph}$. For that purpose we note that both momenta can be written in the form $$\label{both_q} {\bf q}={\bf q}^{(0)}(\theta,\theta')+{\bf q}^{(1)}(\theta,\theta',\xi,\xi')\; ,$$ where ${\bf q}$ stands either for ${\bf q}_{pp}$ or for ${\bf q}_{ph}$, ${\bf q}^{(0)}(\theta,\theta')$ is the value of [**q**]{} when both momenta ${\bf k}$ and ${\bf k}'$ are at the Fermi surface, while ${\bf q}^{(1)}$ is the correction due to the non–zero energies $\xi$ and $\xi'$. By standard scaling arguments we can drop ${\bf q}^{(1)}$ in the limit $\Lambda/\epsilon_F\rightarrow 0$, because the cutoff is imposed on the momenta. A similar argument cannot be used for $\omega _{pp}$ and $\omega _{ph}$, because the integral in (\[DlS\]) runs over all frequencies independently of the actual cutoff $\Lambda (l)$. We are therefore left with the locally dispersionless phononic propagators $$\label{Dpp} {\cal D}_l^{pp}(\theta,\theta',Q_{pp})\approx {\cal D}_l^{pp}(\theta,\theta', i\omega+i\omega')= 2F_l^{pp} (\theta,\theta',i\omega+i\omega')-\tilde{F}_l^{pp}(\theta,\theta',i\omega+i\omega')$$ and $$\label{Dph} {\cal D}_l^{ph}(\theta,\theta',Q_{ph})\approx {\cal D}_l^{ph}(\theta,\theta', i\omega-i\omega')= 2F_l^{ph} (\theta,\theta',i\omega-i\omega')-\tilde{F}_l^{ph}(\theta,\theta',i\omega-i\omega')\; .$$ In these expressions we made the same pp-ph decomposition of the forward and backward amplitudes as we did with ${\cal D}$ in eq.(\[Decomp\]). The next step is to reconstitute the $i\omega$ dependence from the cutoff dependence. This can be done with logarithmic precision by simply replacing $\Lambda$ with $i\omega$.
The frequency derivatives in eq.(\[DlZ\]) (acting only on the $F$–parts) are then readily calculated: $$\label{Diom} \partial _{i\omega}F_l^{pp,ph} (\theta,\theta',i\omega\pm i\omega')|_{i\omega=0}=\pm \frac{1}{i\omega'}\partial _l F_l^{pp,ph} (\theta,\theta')\; ,$$ and equivalently for the backward amplitudes $\tilde{F}_l^{pp,ph}$. The frequency–independent quantities $\partial _lF_l^{pp}(\theta,\theta')$ and $\partial _lF_l^{ph}(\theta,\theta')$ are the p-p and p-h parts of the $\beta$–function of the N–patch model, with the appropriate configurations of the external momenta: $$\label{SveBete} \begin{array}{llll} {\partial _lF_l^{pp}(\theta,\theta')=\beta _{pp}\{ U,U\} (\theta,\theta',\theta)} \\ {\partial _l\tilde{F}_l^{pp}(\theta,\theta')=\beta _{pp}\{ U,U\} (\theta,\theta',\theta')} \\ \partial _lF_l^{ph}(\theta,\theta')=-[ X\beta _{ph}\{ XU,XU\} ] (\theta,\theta',\theta) \\ \partial _l\tilde{F}_l^{ph}(\theta,\theta')=[ 2\beta _{ph}\{ U,U\} -\beta _{ph}\{ U,XU\}- \beta _{ph}\{ XU,U\}](\theta,\theta',\theta')\; , \end{array}$$ where all $\beta$–functions are given in ref.[@ZS_prb_00], but here evaluated with the dressed Green functions (\[GF\_Z\]). Notice that the forward scattering has finite p-h contributions only from the ZS’ channel (1=3), while only the ZS channel (1=4) contributes to the backward scattering. This means that we neglect the contributions at zero momentum transfer; they are somewhat delicate, but have no logarithmic part, so they can safely be dropped.
We can also get rid of the $Z$ factors in the $\beta$–functions of eq.(\[SveBete\]) by rescaling the fermions at every step of the RG such that $$\label{resc_psi} \bar{\Psi}(\theta)Z_l^{-1}(\theta)\Psi(\theta)\rightarrow \bar{\Psi}(\theta)\Psi(\theta)$$ and re–defining the effective interaction $$\label{resc_U} U_l(1,2,3)\rightarrow[Z_l(1)Z_l(2)Z_l(3)Z_l(4)]^{-1/2}U_l(1,2,3)\; .$$ After the transformations (\[resc\_psi\]) and (\[resc\_U\]) the calculation of the $\beta$–functions to one–loop order is identical to the case with $Z=1$. Performing the $\omega '$ integral to logarithmic precision, the flow equation for $Z(\theta)$ becomes $$\label{RG_polelog} \partial _l\log{Z}_l(\theta)=\frac{1}{(2\pi)^2}\int d\theta'\; {\cal J}_- (\theta',\Lambda)\eta_l(\theta,\theta')\equiv \eta_l(\theta)\; ,$$ where $$\eta_l(\theta,\theta')\equiv \partial_l\{ 2[F_l^{pp}-F_l^{ph}]- \tilde{F}_l^{pp}+\tilde{F}_l^{ph} \} (\theta,\theta')$$ with the $\partial_l F$–terms given by eqs.(\[SveBete\]) and calculated with the [*bare*]{} Green functions, as in ref.[@ZS_prb_00]. The generalization of eq.(\[RG\_polelog\]) to finite temperatures is obtained simply by replacing ${\cal J}_-$ with $\sum _{\nu} (-\nu) {\cal J}_{\nu}n_F(\nu\Lambda)$. Taking the 1D “limit” of eq.(\[RG\_polelog\]) is simple and instructive: instead of N patches we now have two patches, $\theta=R$ (right) and $\theta=L$ (left). The independent scattering amplitudes at non-rational filling are $F(RL)=g_2=U(RLR)$ and $\tilde{F}(RL)=g_1=U(RLL)$. We drop $g_4=U(RRR)$ from consideration because it has no logarithmic renormalization. The $\theta$–integrals reduce to sums over two points, so that it is easy to reproduce the well–known result $\eta=-\frac{1}{4\pi^2 v_F^2}(g_1^2-g_1g_2+g_2^2)$ [@Q1D_Bourbonnais]. This is the Luttinger liquid exponent. In two dimensions the angle–resolved $\eta _l(\theta)$ can also be associated with some non-Landau (non–Fermi liquid) behavior.
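The 1D check can also be done numerically. The sketch below is ours, not part of the original calculation: it Euler-integrates the standard one–loop g-ology flow, $\partial_l\tilde g_1=-\tilde g_1^2$ and $\partial_l\tilde g_2=-\tilde g_1^2/2$ with $\tilde g_i=g_i/(\pi v_F)$ (an assumption taken from the standard 1D literature, not derived in the text), together with $\partial_l\log Z=\eta=-\frac{1}{4}(\tilde g_1^2-\tilde g_1\tilde g_2+\tilde g_2^2)$, i.e. the exponent quoted above in dimensionless units.

```python
import numpy as np

def flow_1d(g1, g2, l_max=20.0, dl=1e-3):
    """Euler-integrate the one-loop 1D g-ology flow (couplings in units
    of pi*v_F) together with d(log Z)/dl = -(g1^2 - g1*g2 + g2^2)/4."""
    log_z = 0.0
    for _ in range(int(l_max / dl)):
        log_z += -(g1**2 - g1 * g2 + g2**2) / 4.0 * dl
        dg1 = -g1**2 * dl            # backscattering flow
        dg2 = -0.5 * g1**2 * dl      # forward flow; g1 - 2*g2 is invariant
        g1, g2 = g1 + dg1, g2 + dg2
    return g1, g2, np.exp(log_z)

# Repulsive case: g1 flows to zero, g2 stays finite, and Z is suppressed
g1, g2, z = flow_1d(0.5, 0.5)
```

With $g_1\to 0$ the exponent tends to a constant, so $Z\sim e^{-|\eta| l}$ vanishes as a power of the cutoff, as expected for a Luttinger liquid.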
We see that $\eta $ becomes finite if the forward and backward amplitudes have a logarithmic flow over a wide range of $\theta$–space $(\Delta \theta \sim 1)$. Another possibility for a finite $\eta$ arises when the Fermi surface is close to the van Hove singularities. “Close” means that the distance between the Fermi level and the van Hove singularity is comparable to or smaller than the scale $\Lambda _c$ at which the interactions start to flow strongly. We will now calculate the renormalization of $Z_l(\theta)$ in the 2D Hubbard model at half–filling, from the knowledge of the scale dependence of the patch–dependent interaction $U_l(1,2,3)$. In the RG equations of the previous section the discretization of $\theta$ is done as described in ref.[@ZS_prb_00]. In the present case the Fermi surface is square, so that there are two mechanisms for the suppression of the quasiparticle residues; both of the above conditions are fulfilled: (i) the forward and backward amplitudes have logarithmic flows for any configuration $(\theta,\theta')$ in which the two angles are at opposite sides of the Fermi surface, so that the available phase space is indeed large; (ii) the van Hove singularities are at the Fermi surface and are [*nested*]{}. One can alternatively imagine a Fermi surface with non-nested van Hove singularities and nested parts elsewhere; such a model would be even closer to the realistic situation in some high–$T_c$ compounds. For the sake of rigor, we will nevertheless restrict ourselves to the Hubbard model. ![The evolution of the angle–resolved quasiparticle weight on the Fermi surface. The lines are for $l\equiv\log{(4t/\Lambda)} =$ 3.; 4.; 4.4; 4.95; 5.11; 5.17; 5.20. The critical scale is $l_c\approx 5.204$.[]{data-label="slika_z"}](./z_theta.eps "fig:"){width="8cm"} The result is shown in fig.\[slika\_z\]. The figure shows $Z(\theta_i)$ with $0\leq\theta_i\leq\pi/2$ at 9 equidistant points.
Different lines correspond to different values of the scaling parameter as it approaches the critical scale, at which the couplings diverge. The settings are the same as in ref.[@ZS_prb_00]: the Fermi surface was discretized into 32 patches and the initial interaction is $U_0/(4t)=0.333$. One sees that the Fermi surface is destroyed first at the van Hove points, and then the regions of the FS destroyed by correlations grow larger and larger. This kind of flow is compatible with the interpretation that $\Lambda_c$ is not the critical temperature for some symmetry breaking, but merely the scale at which coherent quasiparticles cease to exist at the Fermi surface, giving way to a gapped or pseudogapped liquid. The magnitude of the (pseudo-)gap is largest at the van Hove points and smallest on the diagonals of the Brillouin zone. The whole gap function $\Delta(\theta)$ then scales as $\Lambda_c\times f(\theta)$, where $f(\theta)$ is a function with the symmetry of the absolute value of the $d_{x^2-y^2}$–harmonic. This is the angle–resolved (pseudo-)gap, responsible for the correlation–induced angle–resolved localization. In other words, the electrons near the van Hove points become much less mobile than those near the diagonals. A similar scenario has been proposed by the Zürich group [@Honerkamp_01], even without concrete calculations of the quasiparticle weight. The question of antiferromagnetism and superconductivity remains to be clarified. Let us discuss this problem by taking into account the scale dependent $Z(\theta)$ in the flow equations for the susceptibilities $\chi ^{AF}(\theta,\theta')$ and $\chi ^{SC}(\theta,\theta')$.
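The qualitative shape of fig.\[slika\_z\] can be mimicked by a deliberately crude toy model (ours, not the actual N–patch computation): take a single effective coupling with a one–loop pole, $g(l)=g_0/(1-g_0 l)$, and let it suppress $Z(\theta)$ through $\partial_l\log Z(\theta)=-g(l)^2\,w(\theta)$, with an assumed angular weight $w(\theta)=\cos^2 2\theta$ that is maximal at the van Hove points $\theta=0,\pi/2$ and vanishes on the zone diagonal $\theta=\pi/4$.

```python
import numpy as np

def z_flow_toy(thetas, g0=0.3, dl=1e-3, stop=0.98):
    """Toy angle-resolved flow: a coupling g(l) = g0/(1 - g0*l), diverging
    at l_c = 1/g0, suppresses Z(theta) fastest where the assumed weight
    w(theta) = cos^2(2*theta) is largest (the van Hove points)."""
    lc = 1.0 / g0
    w = np.cos(2.0 * thetas) ** 2
    log_z = np.zeros_like(thetas)
    l = 0.0
    while l < stop * lc:
        g = g0 / (1.0 - g0 * l)
        log_z -= g**2 * w * dl
        l += dl
    return np.exp(log_z)

thetas = np.linspace(0.0, np.pi / 2.0, 9)  # 9 equidistant points, as in the figure
Z = z_flow_toy(thetas)
```

Close to $l_c$ the weight collapses at the van Hove points while it remains of order unity near the diagonal, reproducing the trend of the figure.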
Following the procedure given in ref.[@ZS_prb_00] and dressing the electronic propagators with $Z$–factors we get $$\label{flow_chi} \dot{\chi}_l^{\delta}(\theta_1,\theta_2)=\frac{1}{Z_l(\theta_1)Z_l(\theta_2)} \oint d\theta\; \tilde{z}_{l}^{\delta}(\theta_1,\theta ) D_l^{\delta}(\theta ) \tilde{z}_{l}^{\delta}(\theta ,\theta_2) \; .$$ This equation has the same structure as the one in ref.[@ZS_prb_00], with two modifications. First, we skip the retardation effects, replacing $l_{\delta}$ simply by $l$, because we are at half–filling. Second, the quantity $\tilde{z}_{l}^{\delta}(\theta_1,\theta )$, which plays the role of a triangular vertex, is somewhat modified. Its flow equation reads: $$\label{flow_z} \left[ \partial _l-\eta(\theta_1)-\eta(\theta_2)\right] \tilde{z}_l^{\delta}(\theta_1,\theta_2)=-\oint d\theta \; \tilde{z}_{l}^{\delta}(\theta_1,\theta ) D_l^{\delta}(\theta ) V_{l}^{\delta}(\theta ,\theta_2) \; .$$ The quantity $\tilde{z}_l^{\delta}(\theta_1,\theta_2)$ is defined by $$\tilde{z}_l^{\delta}(\theta_1,\theta_2)\equiv Z_l(\theta_1) {z}_l^{\delta}(\theta_1,\theta_2)Z_l(\theta_2)$$ so that the initial conditions for $\tilde{z}_l^{\delta}$ and for ${z}_l^{\delta}$ are the same. After discretization we integrate equations (\[flow\_chi\]) and (\[flow\_z\]) numerically. Fig.\[susc\] shows the flow of the dominant eigenvalues of the susceptibilities $\delta=AF$ and $\delta=SC$ near the divergence of the scattering amplitudes. The thin line represents both (degenerate) susceptibilities for $U=0$. Including only the one–loop vertex renormalization we find a strong enhancement and, as far as our numerics can tell, even divergence of both the AF and SC susceptibilities. If we also include the one–particle–weight renormalization, both susceptibilities are radically reduced and lose their divergent behavior.
On the other hand, the flow of the compressibility $\chi _c$ and of the magnetic susceptibility $\chi _{\sigma}$ is not affected by the $Z$–renormalization, because of the Ward identities.[@Metzner_Ward] All the above results support the statement that what happens at the energy scale $\Lambda _c$ is a flow towards a state with a spin– and charge–gap or pseudogap, insulating and without AF or SC ordering. ![Scale dependence of the dominant components of both antiferromagnetic and superconducting susceptibilities at half–filling, A: due to the renormalized vertex only and B: due to the renormalized vertex and quasiparticle weight. The thin line represents the bare susceptibility.[]{data-label="susc"}](./susc.eps "fig:"){width="8cm"} We finish this discussion with a few words about the effects of doping. The two regimes, [*parquet*]{} and BCS, exist also in the scaling properties of $Z_l(\theta)$. In the whole parquet regime we expect behavior governed by the proximity to half–filling, so that the present results can be applied at all energies larger than the chemical potential $|\mu|$. Upon doping the nesting becomes more and more imperfect, the p-h logarithm loses its divergence, the critical scale is more and more suppressed, and the p-h and p-p channels become progressively less coupled. Eventually, at strong enough doping and low enough $\Lambda$, one is in the BCS regime, where the effective physics is described by the 2D BCS theory, with a renormalized and $\theta$–dependent interaction and quasiparticle weight. The anomalous dimension $\eta(\theta)$ goes to zero because (i) the range $\Delta\theta$ over which the forward and backward amplitudes flow strongly due to the BCS diagram scales with the cutoff, and (ii) the van Hove singularities lie outside the cutoff.
The critical temperature for the onset of superconductivity is [*not*]{} affected by the renormalized $Z$, but the magnitude and angular dependence of the superconducting order parameter [*are*]{} dressed by $Z(\theta)$. The symmetry of the gap remains $d_{x^2-y^2}$. To summarize, I have proposed a simple renormalization group theory for the angle–dependent destruction of the Fermi surface in the Hubbard model. The results offer a theoretical understanding of the angle–dependent Fermi surface truncation in the cuprate superconductors in terms of the scattering of electrons on the low–energy collective excitations of [*both*]{} particle–particle and particle–hole types. The theory, based on the N–patch model, is in essence a controlled weak–coupling procedure that keeps track of the dependence of the effective interaction and of the one–particle spectral weight on the position of the particles at the Fermi surface. As one approaches the critical scale $\Lambda _c$ the quasiparticle weight goes to zero first near the van Hove points, and the effect progresses toward the Brillouin zone diagonals as one lowers the temperature. Dressing the flow equations for the AF and SC response functions with the one–particle weight factors results in a dramatic reduction of correlations of both types. The strongly correlated state just below $\Lambda_c$ is gapped or pseudogapped and without any long–range order. The critical scale $\Lambda_c$ is interpreted as the pseudogap temperature $T^*$ found in the cuprate superconductors. I am grateful to Serguei Brazovskii, Benoit Douçot, Benedikt Binz, Claude Bourbonnais, Nicolas Dupuis, and Sebastien Dusuel for important discussions and comments. Laboratoire de Physique Théorique et Hautes Energies, Universités Paris VI Pierre et Marie Curie – Paris VII Denis Diderot, is supported by CNRS as Unité Mixte de Recherche, UMR7589. M. R. Norman [*et al.*]{}, Nature (London) [**392**]{}, 157 (1998). Z.-X.
Shen [*et al.*]{}, Science [**267**]{}, 343 (1995). Ch. Renner [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 149 (1998). R. M. Birgenau [*et al.*]{}, Phys. Rev. B [**39**]{}, 2868 (1989); J. Rossat-Mignod [*et al.*]{}, Physica (Amsterdam) [**169B**]{}, 58 (1991). J. Kishine and K. Yonemitsu, Phys. Rev. B [**59**]{}, 14823 (1999). D. Zanchi, Ph.D. thesis, Université Paris-Sud (1996). F. Vistulo de Abreu and B. Douçot, Europhys. Lett. [**38**]{}, 533 (1997). D. Zanchi and H. J. Schulz, Phys. Rev. B [**61**]{}, 13609 (2000). C. J. Halboth and W. Metzner, Phys. Rev. B [**61**]{}, 7364 (2000). C. Honerkamp, M. Salmhofer, N. Furukawa, and T. M. Rice, Phys. Rev. B [**63**]{}, 35109 (2001). Shan-Wen Tsai and J. B. Marston, cond-mat/0010355, 0010300. I. E. Dzyaloshinskii and V. M. Yakovenko, Sov. Phys. JETP [**67**]{}, 844 (1988). A. T. Zheleznyak, V. M. Yakovenko, and I. E. Dzyaloshinskii, Phys. Rev. B [**55**]{}, 3200 (1997). F. Vistulo de Abreu and B. Douçot, cond-mat/0007463. R. Shankar, Rev. Mod. Phys. [**66**]{}, 129 (1994). See for instance: C. Bourbonnais and L. G. Caron, Int. J. Mod. Phys. B [**5**]{}, 1033 (1991). W. Metzner, C. Castellani, and C. Di Castro, Adv. in Phys. [**47**]{}, 317 (1998).
--- abstract: 'We report six   observations of  during and after the outburst of Sep 23 1998. The outburst flux is lower than the quiescent flux in the entire observed energy band (0.1–10 keV), in agreement with earlier observations. The  spectra are fitted with two-temperature plasma and cooling flow spectral models. These fits show a clear spectral evolution in  for the first time in : the hard  turn-up after the outburst is reflected in the emission measure and the temperature. Moreover, during outburst the 1.5–10 keV flux decreases significantly. We argue that this is not consistent with the constant flux during a  outburst observation made eight years earlier. We conclude from this observation that there are significant differences between outburst  lightcurves of .' author: - 'H.W. Hartmann$^1$' - 'P.J. Wheatley$^2$' - 'J. Heise$^1$' - 'J.A. Mattei$^3$' - 'F. Verbunt$^4$' date: 'Received ; accepted ' title: The  spectra of VW Hydri during the outburst cycle --- Introduction {#intro} ============  from dwarf novae arise very near the white dwarf, presumably in a boundary layer between the white dwarf and the accretion disk surrounding it. Information on the properties of the  emitting gas as a function of the mass transfer rate through the accretion disk is provided by observations through the outburst cycle of dwarf novae. It may be hoped that such observations help to elucidate the nature of the  emission in cataclysmic variables, and by extension in accretion disks in general.  is a dwarf nova that has been extensively studied during outbursts and in quiescence, at wavelengths from optical to hard . It is a dwarf nova of the SU UMa type, i.e. in addition to ordinary dwarf nova outbursts it occasionally shows brighter and longer outbursts, which are called superoutbursts. Ordinary outbursts of  occur every 20–30 d and last 3–5 d; superoutbursts occur roughly every 180 d and last 10–14 d (Bateson [@bateson77]).
A multi-wavelength campaign combining data obtained with , , the International Ultraviolet Explorer, and by ground-based optical observers covered three ordinary outbursts, one superoutburst, and the three quiescent intervals between these outbursts (Pringle et al. [@pringle87], Van Amerongen et al. [@amerongen87], Verbunt et al. [@verbunt87], Polidan, Holberg [@polidan87], van der Woerd & Heise [@woerd87]). The   data show that the flux in the 0.05–1.8 keV range decreases during the quiescent interval; the flux evolution at lower energies and at higher energies (1–6 keV) is compatible with this, but the count rates provided by   are insufficient to show it independently. Folding the  data of three outbursts showed that a very soft component appears early in the outbursts and decays faster than the optical flux (Wheatley et al. [@wheatley96]). The  Position Sensitive Proportional Counter (PSPC) and Wide Field Camera (WFC) covered a dwarf nova outburst of  during the  All Sky Survey (Wheatley et al. [@wheatley96]). The PSPC data show that the flux in the 0.1–2.5 keV range is lower during outburst. The  data showed no significant difference between the outburst and quiescent  spectra. The best constraints on the quiescent  spectrum are obtained by combining the  WFC data from the All Sky Survey with data from  PSPC and   pointings. A single-temperature fit is not acceptable; the sum of two optically thin plasma spectra, at temperatures of 6 keV and 0.7 keV, is somewhat better. The spectrum of a plasma which cools from 11 keV, with emission measures at lower temperatures proportional to the cooling time, provides an acceptable fit to the spectrum in the 0.05–10 keV energy range (Wheatley et al. [@wheatley96]). In this paper we report on a series of  observations of , which cover an ordinary outburst and a substantial part of the subsequent quiescent interval.
The observations and data reduction are described in Sect.2, the results in Sect.3, and a discussion and comparison with earlier work is given in Sect.4. Observations and data reduction {#obs} ===============================  is monitored at optical wavelengths by the American Association of Variable Star Observers (AAVSO). On Sep 23 1998 the optical magnitude of   started to decrease. The outburst lasted for 5–6 days and reached a peak magnitude of 9.2. This outburst served as a trigger for a sequence of six observations by  between Sep 24 and Oct 18. As a result we have obtained one  observation during outburst and five observations during quiescence. Since  appears as an on-axis source, the Low Energy Concentrator Spectrometer (LECS, Parmar et al. [@parmar97]) source counts are extracted from a circular region with a 35 pixel radius centered on the source. We use the Sep 1997 LECS response matrices, centered at the mean raw pixel coordinates (130,124), for the channel-to-energy conversion and to fold the model spectra when fitting them to the data. The combined Medium Energy Concentrator Spectrometer (MECS2 and MECS3, Boella et al. [@boella97]) source counts are extracted from a circular region with a 4 (30 pixel) radius. The September 1997 MECS2 and MECS3 response matrices, added together, have been used. The background has been subtracted using an annular region around the source region, with inner and outer radii of 35 and 49.5 pixels for the LECS and 30 and 42.5 pixels for the MECS. We ignore the data of the High Pressure Gas Scintillation Proportional Counter (HPGSPC, Manzo et al. [@manzo97]) and the Phoswitch Detection System (PDS, Frontera et al. [@frontera97]), since their background-subtracted spectra have a very low signal-to-noise ratio. The LECS and MECS data products are obtained by running the  Data Analysis System pipeline (Fiore et al. [@fiore99]).
We rebin the energy channels of all four instruments to ${1 \over 3}\times \mbox{FWHM}$ of the spectral resolution and require a minimum of 20 counts per energy bin to allow the use of the chi-squared statistic. The total LECS and MECS net exposure times are 82.5 ks and 181.4 ks, respectively. The factor 2.2 between the LECS and MECS exposure times is due to the non-operability of the LECS on the day side of the Earth. Results ======= [crrrrr]{} Obs. & Date & LECS exp. (ks) & LECS rate (cts s$^{-1}$) & MECS exp. (ks) & MECS rate (cts s$^{-1}$)\ 1 & 24–26/09/1998 & 34.1 & 0.024(3) & 76.6 & 0.016(2)\ 2 & 27–28/09/1998 & 7.2 & 0.098(8) & 20.2 & 0.109(8)\ 3 & 3–4/10/1998 & 9.1 & 0.089(8) & 17.4 & 0.094(6)\ 4 & 10–11/10/1998 & 9.4 & 0.072(7) & 20.0 & 0.075(5)\ 5 & 12/10/1998 & 11.1 & 0.072(6) & 21.6 & 0.085(5)\ 6 & 17–18/10/1998 & 11.6 & 0.077(6) & 25.6 & 0.078(5)\ Total & & 82.5 & & 181.4 &\ In Fig. \[lc\] we show the optical lightcurve of  at the time of our  observations, provided by the [*American Association of Variable Star Observers*]{} and the [*Variable Star Network*]{}. These optical observations show that our first  observation was obtained during an ordinary outburst that peaked on Sep 24, whereas observations 2–6 were obtained in quiescence. The last ordinary outburst preceding our first  observation was observed by the AAVSO to peak on Sep 8; the first outburst observed after our last   observation was a superoutburst that started on Nov 5 and lasted until Nov 19. Lightcurve {#rate_evolution} ---------- In Fig. \[lc\] we also show the count rates detected with the  LECS and MECS. For the latter instrument we show the count rates separately for the full 1.5–10 keV energy range and for the hard 5–10 keV range only. In both LECS and MECS the count rate is lower during the outburst than in quiescence.
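The minimum-counts grouping described above can be sketched as follows. This is our illustration, not the actual pipeline implementation: it shows only the ≥20 counts criterion (the prior rebinning to $1/3\times$FWHM is omitted), and the function name is hypothetical.

```python
def group_min_counts(counts, min_counts=20):
    """Group adjacent spectral channels until every bin contains at least
    `min_counts` counts, as required for the chi-squared statistic.
    Returns half-open (start, stop) channel ranges."""
    bins, start, acc = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            bins.append((start, i + 1))
            start, acc = i + 1, 0
    if acc > 0 and bins:              # fold an underfilled tail into the last bin
        bins[-1] = (bins[-1][0], len(counts))
    return bins

channels = [5, 10, 7, 30, 2, 1, 25, 3]
groups = group_min_counts(channels)   # [(0, 3), (3, 4), (4, 8)]
```

Every resulting bin then carries enough counts for the Gaussian approximation underlying the chi-squared statistic.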
In quiescence the count rate decreases significantly between our second and third (only in the MECS data), and between the third and fourth observations (both LECS and MECS data), but is constant after that (see Table \[table1\]). The MECS count rate decreases during our first observation, when  was in outburst, as is shown in more detail in Fig. \[decay\]. This decrease can be described as exponential decline $N_{\rm ph}\propto e^{-t/\tau}$ with $\tau\simeq 1.1$d. The count rates in the LECS are compatible with the same decline, but the errors are too large for an independent confirmation. The count rates at lower energies, 0.1–1.5keV, are compatible with both a constant value and the exponential decay during our first observation. Spectral fits ------------- [cllllllll]{}\ Obs. & T$_1$ & T$_2$ & E.M.$_1$ & E.M.$_2$ & L$_1$ & L$_2$ & ${\rm n_H}$ & $\chi^2$ (d.o.f.)\ & keV & keV & $10^{52}\mbox{ cm}^{-3}$ & $10^{52}\mbox{ cm}^{-3}$ & $10^{30} \mbox{erg s}^{-1}$ & $10^{30} \mbox{erg s}^{-1}$ & $10^{19}\mbox{cm}^{-2}$ &\ 1 & $0.68_{-0.11}^{+0.10}$ & $3.2_{-0.4}^{+0.6}$ & $2.0_{-0.5}^{+0.5}$ & $6.6_{-0.7}^{+0.7}$ & $0.7_{-0.2}^{+0.2}$ & $1.4_{-0.2}^{+0.3}$ & $4^*$ & 65 (61)\ 2 & $ 0.9_{-0.3}^{+0.8}$ & $3.7_{-0.3}^{+1.0}$ & $3_{-2}^{+11}$ & $40_{-10}^{+3}$ & $1.0_{-0.7}^{+4.2}$ & $9_{-3}^{+2}$ & $4^*$ & 92 (70)\ 3 & $ 1.3_{-0.3}^{+0.4}$ & $6.1_{-1.3}^{+2.1}$ & $8_{-5}^{+6}$ & $27_{-5}^{+5}$ & $2.0_{-1.4}^{+1.8}$ & $7_{-2}^{+3}$ & $4^*$ & 103 (75)\ 4 & $ 1.2_{-0.3}^{+0.3}$ & $6.0_{-1.5}^{+2.3}$ & $7_{-4}^{+5}$ & $23_{-4}^{+5}$ & $1.9_{-1.3}^{+1.5}$ & $6_{-2}^{+2}$ & $4^*$ & 75 (66)\ 5 & $ 1.2_{-0.3}^{+0.3}$ & $6.5_{-1.1}^{+1.6}$ & $5_{-3}^{+3}$ & $25_{-3}^{+3}$ & $1.4_{-0.8}^{+1.0}$ & $6.7_{-1.4}^{+1.6}$ & $4^*$ & 84 (74)\ 6 & $ 1.6_{-0.4}^{+0.4}$ & $6.5_{-1.8}^{+2.8}$ & $12_{-7}^{+5}$ & $20_{-5}^{+6}$ & $2.5_{-1.8}^{+1.4}$ & $5_{-2}^{+3}$ & $4^*$ & 78 (77)\ 3–6 & $1.28_{-0.16}^{+0.16}$ & $6.0_{-0.7}^{+0.9}$ & $7_{-2}^{+3}$ & $25_{-2}^{+2}$ & $1.9_{-0.7}^{+0.8}$ & 
$6.3_{-0.9}^{+1.0}$ & $4_{-2}^{+3}$ & 149 (113)\ \ Obs. & ${\rm T}_{\rm low}$ & ${\rm T}_{\rm high}$ & & & ${\rm n_H}$ & $\chi^2$ (d.o.f.)\ & keV & keV & & & $10^{19}\mbox{cm}^{-2}$ &\ 1 & $<0.4$ & $4.5^{+0.4}_{-0.6}$ & & & $4^*$ & 76 (62)\ 2 & $1.0^{+0.5}_{-0.3}$ & $6.8^{+1.1}_{-0.5}$ & & & $4^*$ & 92 (71)\ 3–6 & $0.66^{+0.18}_{-0.08}$ & $9.9^{+0.8}_{-0.8}$ & & & $4^*$ & 153 (115)\ [$^*$Fixed parameter, cf. the value obtained from the combined observations 3–6]{} We have made spectral fits to the combined MECS and LECS data for each of the six separate  observations and computed the luminosities assuming a distance of 65 pc to  (see Warner [@warner87]). As expected on the basis of earlier work, described in the introduction, we find that the observed spectra cannot be fitted with a single-temperature plasma. The combination of spectra of optically thin plasmas at two different temperatures does provide acceptable fits. The parameters of these fits are listed in Table \[fitres2\], and their variation between the separate observations is illustrated in Fig. \[param\]. The need for a two-temperature fit is illustrated in Figs. \[2comp\_err\] and \[smooth\] for the outburst spectrum of observation 1 and for the quiescent spectrum of the combined observations 3–6: the low-temperature component is required to explain the excess flux near 1 keV. The Fe-K emission line near $6.70\pm 0.05\mbox{ keV}$ is clearly present in our data, and is due to hydrogen– or helium–like iron from the hot component of the plasma. The LECS data in observations 3–6 are poorly fitted above $\sim 5$ keV, which is probably due to calibration uncertainties of the instrument (Fiore et al. [@fiore99]). We fix $n_{\rm H}$ at $4\times 10^{19}\mbox{ cm}^{-2}$, the best-fit value of the combined observations 3–6. (Fixing $n_{\rm H}$ at $6\times 10^{17}\mbox{ cm}^{-2}$, which was found by Polidan et al.
([@polidan90]), does not change the fit parameters, except for the chi-squared values of observations 2, 3 and 3–6, which become slightly worse: 98, 111 and 158, respectively.) The temperature of both the cool and the hot component of the two-temperature plasma is higher during quiescence than during the outburst, increasing from 0.7 keV and 3.2 keV in outburst to 1.3 keV and 6 keV in quiescence, respectively. The temperatures immediately after outburst – in our second observation – are intermediate between those of outburst and quiescence. The emission measure (i.e. the integral of the square of the electron density over the emission volume, $\int n_{\rm e}^2dV$) of both the cool and the hot component of the two-temperature plasma is also higher in quiescence; immediately after outburst the emission measure of the hot component is higher than during the later phases of quiescence. The temperatures and emission measures of the two-temperature plasma are constant, within the errors, in the later phases of quiescence covered by our observations 3–6. For that reason, we have also fitted the combined data of these four observations to obtain better constraints on the fit parameters (see Table \[fitres2\]). Note that the decrease of the count rate between observations 3 and 4, mentioned in Sect. \[rate\_evolution\], is significant even though it is not reflected in the emission measures and luminosities of the two components separately. This is due to the combined spectral fitting of the LECS and the MECS, since the decrease in count rate is less significant for the LECS. Moreover, the errors on the count rates are much smaller than those on the emission measures ($\la 10\%\mbox{ and }\ga 20\%$, respectively). [cllll]{} Obs.
& T$_2$ & E.M.$_2$ & MECS & PSPC\ & keV & $10^{52}\mbox{ cm}^{-3}$ & cts s$^{-1}$ & cts s$^{-1}$\ 1a & $3.6_{-0.7}^{+1.3}$ & $8.4_{-1.3}^{+1.3}$ & $0.022_{-0.003}^{+0.003}$ & $0.35_{-0.03}^{+0.03}$\ 1b & $3.0_{-0.6}^{+1.3}$ & $5.4_{-0.8}^{+0.8}$ & $0.013_{-0.002}^{+0.002}$ & $0.23_{-0.02}^{+0.02}$\ We fit the first 31 ks and the next 46 ks of the outburst spectrum (1a and 1b) separately. Both fits are good, with $\chi^2<1$. From the fit results we compute the MECS and  PSPC count rates. The results are shown in Table \[vwh\_splitobs1\]. We have only indicated the temperature and emission measure of the hot component, since the cool component produces its iron line emission outside the MECS bandwidth and does not have a large impact upon the continuum emission. Note from Table \[vwh\_splitobs1\] that the decay in count rate is entirely due to the decrease of the emission measure. To compare our observations with the results obtained by Wheatley et al. ([@wheatley96]) we next consider the cooling flow model (cf. Mushotzky, Szymkowiak [@mushotzky88]) for our observations 1, 2 and 3–6. In this model the emission measure at each temperature is constrained by the requirement that it be proportional to the cooling time of the plasma. The results of the fits are shown in Table \[fitres2\]. Note that these results are not better than the two-temperature model fits. Due to the poor statistics of the LECS outburst observation we cannot constrain the lower temperature limit. The MECS is not sensitive to this temperature regime at all. A contour plot of the upper and lower temperature limits for the combined quiescent observations 3–6 is shown in Fig. \[contour36\]. The boundaries of the low temperature in Fig. \[contour36\] are entirely determined by the Fe-L and Fe-M line emission; for a low temperature of $\la0.35$ keV the contributions to the line flux integrated over all higher temperatures exceed the observed line flux.
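The exponential decline of the outburst count rate quoted earlier ($N_{\rm ph}\propto e^{-t/\tau}$ with $\tau\simeq 1.1$ d) can be obtained with a simple log-linear least-squares fit. The sketch below uses synthetic, noiseless data with illustrative numbers, not the actual MECS lightcurve.

```python
import numpy as np

def fit_exp_decay(t, rate):
    """Least-squares fit of rate = N0 * exp(-t / tau) in log space;
    returns (N0, tau)."""
    slope, intercept = np.polyfit(t, np.log(rate), 1)
    return np.exp(intercept), -1.0 / slope

# Synthetic decay with tau = 1.1 d (illustrative numbers only)
t = np.linspace(0.0, 2.0, 12)          # days
rate = 0.03 * np.exp(-t / 1.1)         # cts/s
n0, tau = fit_exp_decay(t, rate)
```

In practice one would weight the points by their Poisson errors; for noiseless synthetic data the fit recovers $\tau$ essentially exactly.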
For a low temperature of $\ga1.2$ keV not enough line flux is left in the model. The boundaries of the high temperature are determined by the continuum slope; for a high temperature of $\la8.5$ and $\ga11.5$ keV the model spectrum is respectively too soft and too hard to fit the data. Comparison with previous  observations {#vwh_discussion} ====================================== Time variability {#vwh_timevar} ---------------- We predict the  count rates of  during outburst and quiescence from the two-temperature fits to the observed  flux (see Table \[fitres2\]). Here we adopt $n_{\rm H}=6\times 10^{17}\mbox{ cm}^{-2}$ (Polidan et al. [@polidan90]), since  is probably more sensitive to $n_{\rm H}$ than . The predicted count rates during outburst and quiescence are 0.31 and $0.87\mbox{ cts s}^{-1}$. The observed  count rates are 0.4 and $1.26\mbox{ cts s}^{-1}$, respectively (Belloni et al. [@belloni91]; Wheatley et al. [@wheatley96]). Both predictions differ from the observations by a factor of $\sim$0.75. From Fig. \[decay\] we observe a decrease in the MECS count rate by a factor of $\ga 4$ during outburst. This is inconsistent with the constant 0.4 $\mbox{cts s}^{-1}$ observed by the  PSPC during outburst (Wheatley et al. [@wheatley96]). Using the LECS data during the outburst in a bandwidth (0.1–1.5 keV) comparable to that of the  PSPC, we cannot discriminate observationally between a constant flux and the exponential decay observed by the MECS. However, our spectral fits to the data require that the 0.1–2.5 keV flux decreases in tandem with the hard flux. Thus the difference between the  PSPC and the  MECS lightcurves during outburst may be due either to variations between individual outbursts or to the different spectral bandwidths of the observing instruments. The predicted decay of the count rate significantly exceeds the range allowed by the ROSAT observations of the Nov 1990 outburst.
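Predicting the count rate of one instrument from a model fitted to another, as done above, amounts to folding the model photon spectrum through that instrument's response. A schematic version (toy 3-energy-bin response with purely illustrative numbers; real OGIP-style responses are channel-by-energy matrices read from FITS files):

```python
import numpy as np

def predict_count_rate(model_photons, rmf, arf):
    """Fold a model photon spectrum (photons cm^-2 s^-1 per energy bin)
    through the effective area (arf, cm^2) and the channel-by-energy
    redistribution matrix (rmf) to get predicted counts/s per channel."""
    return rmf @ (model_photons * arf)

model = np.array([1e-2, 5e-3, 1e-3])       # photons cm^-2 s^-1 per bin
arf = np.array([80.0, 120.0, 60.0])        # cm^2
rmf = np.array([[0.9, 0.2, 0.0],           # each column sums to <= 1
                [0.1, 0.8, 1.0]])
rate = predict_count_rate(model, rmf, arf)
total = rate.sum()
```

The same fitted model folded through two different responses then yields directly comparable predicted rates for the two instruments.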
We interpret the time variability of the count rate shown in Figs. \[decay\] and \[param\] mainly as a change in the amount of gas in the inner disk that emits keV photons. At the end of the outburst, while the inner disk is still predominantly optically thick, the mass accretion rate onto the white dwarf is decreasing. As a result, the amount of hot optically thin gas drops gradually. This is observed in Fig. \[decay\]. The transition to a predominantly optically thin inner disk occurs just before observation 2. As a result the amount of optically thin emitting material in the disk increases strongly. This is shown by the increase of the emission measure of the hot component in Fig. \[param\], observation 2, which even peaks above the quiescent value. The settling of the accretion rate towards quiescence is shown in Fig. \[param\], observations 3–6, for both the temperature and the emission measure. In contrast to the emission measure, the temperature of the hot component increases only gradually throughout observations 1–6, as it reflects the slowly decreasing accretion rate rather than the amount of optically thin emitting material in the disk.

Spectral variability {#vwh_specvar}
--------------------

Both a two-temperature plasma model and a cooling flow model fit the spectrum of our BeppoSAX observations of VW Hyi better than a one-temperature model. The contribution of the cool component lies mainly in the presence of strong Fe-L line emission around 1 keV. The hot component contributes the continuum and the Fe-K line emission at $\sim 6.7$ keV. Adding a soft atmospheric component in the form of a $\la 10$ eV blackbody model does not improve our fits. This blackbody component, reported by Van der Woerd et al. ([@woerd86]) and Van Teeseling et al. ([@teeseling93]), is too soft to be detected by the BeppoSAX LECS.
Based upon the $\chi^2$-values, the BeppoSAX observation of VW Hyi does not discriminate between a continuous temperature distribution (the cooling flow model) and a discrete temperature distribution (the two-component model) of the X-ray emitting region. Wheatley et al. ([@wheatley96]) derive a lower and upper temperature of $\la 0.53\mbox{ and }11^{+3}_{-2}$ keV, respectively, for a cooling flow fit to the combined ROSAT PSPC and Ginga LAC data during quiescence. These temperatures are consistent with our cooling flow fits to BeppoSAX data during quiescence; there is a small overlap between the 2 and 3$\sigma$ contours shown in Fig. 6 of Wheatley et al. and the contours of our Fig. \[contour36\].

Conclusions {#vwh_conclusions}
===========

BeppoSAX does not discriminate between a continuous (cooling flow) and a discrete temperature distribution. Our observation of a decreasing count rate, followed by a constant count rate during quiescence, is in contradiction with the disk instability models. These models predict a slightly increasing mass transfer onto the white dwarf, which must show up as an [*increase*]{} in the X-ray flux. [*Ad hoc*]{} modifications to disk instability models, such as interaction of the inner disk with a magnetic field of the white dwarf (Livio, Pringle [@livio92]), evaporation of the inner disk (Meyer, Meyer-Hofmeister [@meyer94]), or irradiation of the inner disk by the white dwarf (King [@king97]), are possibly compatible with the decrease of the ultraviolet flux (e.g. Van Amerongen et al. [@amerongen90]) and the X-ray flux during quiescence. If we assume a continuous temperature distribution, the upper temperature limit of our quiescence spectrum is consistent with the observations by Wheatley et al. ([@wheatley96]). The cooling flow model requires an accretion rate of $3\times 10^{-12}\mbox{ M}_\odot\mbox{ yr}^{-1}$ to explain the X-ray luminosity late in quiescence. A similar result is obtained when we convert the luminosity derived from the two-temperature model to an accretion rate.
Any outburst model must accommodate this accretion rate. The BeppoSAX MECS observes a significant decrease in the count rate during outburst. Our simulations show a similar decrease for the ROSAT PSPC, which would have been significantly detected. The fact that the ROSAT count rate during outburst was constant (Wheatley et al. [@wheatley96]) and the results from our cooling flow model fits suggest that the outburst of Sep 24 1998 behaved differently from the outburst of Nov 3 1990.

This work has been supported by funds of the Netherlands Organization for Scientific Research (NWO).

Bateson F.M. 1977, N.Z.J.Sci 20, 73\
Belloni T., Verbunt F., Beuermann K. et al., 1991, A&A 246, L44\
Boella G., Chiappetti L., Conti G. et al., 1997, A&AS 122, 327\
Fiore F., Guainazzi M., Grandi P. 1999, [*Cookbook for BeppoSAX NFI spectral analysis*]{}\
Frontera F., Costa E., Dal Fiume D. et al., 1997, A&AS 122, 357\
King A. 1997, MNRAS 288, L16\
Livio M., Pringle J.E. 1992, MNRAS 259, 23P\
Manzo G., Giarrusso S., Santangelo A. et al., 1997, A&AS 122, 341\
Meyer F., Meyer-Hofmeister E. 1994, A&A 288, 175\
Mushotzky R.F., Szymkowiak A.E. 1988, in: [*Cooling flows in clusters and galaxies*]{}, ed: Fabian A.C., Kluwer Dordrecht, the Netherlands, 53\
Parmar A.N., Martin D.D.E., Bavdaz M. et al., 1997, A&AS 122, 309\
Polidan R.S., Holberg J.B. 1987, MNRAS 225, 131\
Polidan R.S., Mauche C.W., Wade R.A. 1990, ApJ 356, 211\
Pringle J.E., Bateson F.M., Hassall B.J.M. et al., 1987, MNRAS 225, 73\
Van Amerongen S., Damen E., Groot M., Kraakman H., Van Paradijs J. 1987, MNRAS 225, 93\
Van Amerongen S., Kuulkers E., van Paradijs J. 1990, MNRAS 242, 522\
Van der Woerd H., Heise J., Bateson F. 1986, A&A 156, 252\
Van der Woerd H., Heise J. 1987, MNRAS 225, 141\
Van Teeseling A., Verbunt F., Heise J. 1993, A&A 270, 159\
Verbunt F., Hassall B.J.M., Pringle J.E., Warner B., Marang F. 1987, MNRAS 225, 113\
Warner B. 1987, MNRAS 227, 23\
Wheatley P.J., Verbunt F., Belloni T. et al., 1996, A&A 307, 137
---
abstract: 'The Linear Search Problem is studied from the viewpoint of Hamiltonian dynamics. For the specific, yet representative case of exponentially distributed position of the hidden object, it is shown that the optimal orbit follows an unstable separatrix in the associated Hamiltonian system.'
address: |
    Department of Mathematics\
    University of Illinois\
    Urbana, IL 61801
author:
- 'Y. Baryshnikov, V. Zharnitsky'
title: Search on the Brink of Chaos
---

Introduction {#sec-1}
============

The Linear Search Problem has a venerable history, going back to R. Bellman (’63) and A. Beck (’64). They looked into the following question:

> An object is placed at a point $\HH$ on the real line, according to a known probability distribution. A [*search plan*]{} (or [*trajectory*]{}) is a sequence $\xx=\{x_i\}_{i=1}^{\infty}$ with $\ldots -x_4<-x_2<0<x_1<x_3<\ldots$ (or $\ldots -x_3<-x_1<0<x_2<x_4<\ldots$). A search is performed by a [*searcher*]{} walking alternately to the points of the search plan, starting at $0$, until the point $\HH$ is found. The total distance traveled until the point is found is $L(\xx, \HH)$, and the [*cost*]{} of the search plan $\xx$ is given by $$E(\xx)=\ex \, [L(\xx, \HH)].$$ The task is to find the plan $\xx$ minimizing $E(\xx)$.

We are therefore in the [*average case*]{} analysis situation. The search problem has also been studied in theoretical computer science, see [*e.g.*]{} [@reif], where it is called the cow-path problem. There have been many interesting generalizations, such as search on rays, rendezvous, search with turn cost, etc. [@revenge; @newman; @alpern99]. Finally, there is some recent work in connection with robotics, see [*e.g.*]{} [@bretl].
Background on Linear Search Problem {#sec-1.1}
-----------------------------------

This Linear Search Problem was studied mostly by Anatole Beck and his coauthors in a series of papers in which they analyzed in great detail the archetypal case of normally distributed $\HH$ (see [@franck; @beck; @rides; @gal; @lim]). It turned out that the candidates for optimal trajectories form a 1-parametric family (parameterized by the length of the first excursion $|x_1|$). Using careful analysis, Beck further reduced the choice of the candidates to just two initial points, one of which turned out to be the best by numerics. On the nature of these initial points, [@beck] stated:

> ...we opine that this is a question whose answer will not shed much mathematical light.

This note aims at uncovering the underlying geometric structure of the Linear Search Problem. Specifically, we argue that the correct framework here is that of [*Hamiltonian dynamics*]{}, especially where hyperbolicity of the underlying dynamics can be deployed. In our geometric picture the mysterious two points naturally appear at the intersection of a separatrix (that is present in the associated Hamiltonian system) with the curve of initial turning points. To this end we analyze in detail a one-sided version of the Linear Search Problem, which we describe next. The original problem considered by Beck is addressed from the same viewpoint in the appendix. We restrict our proofs mostly to the exponentially distributed position $\HH$: this is done primarily to keep the presentation succinct and clear. In the appendix we demonstrate that our approach, with small modifications, works for some other distributions, [*e.g.*]{} the one-sided Gaussian. We believe that even more general classes of distributions can also be analyzed; this will be done in a follow-up paper.

Half-line problem {#sec-1.2}
-----------------

We concentrate here on a [*one-sided gatherer*]{} version of the search problem.
Here, the hiding object $\HH$ is located on the [*half-line*]{} $\Real_+$, according to some (known) probability distribution. One searches for $\HH$ according to the [*plan*]{} $$\xx=\{0=x_0<x_1<x_2<\ldots < x_k < \ldots \},$$ and stops after the step $n=n(\xx,\HH)$ [*iff*]{} the point $\HH\in(x_{n-1}, x_n]$. One can think of a [*gatherer*]{} who mindlessly collects anything on the way, bringing the loot to the origin, where the results are analyzed (in contrast to the [*searcher*]{}, who stops as soon as the sought-after object is found). As in the original version, one needs to minimize the average cost of the search, which in our case is given by $$\label{eq:cost} E(\xx)=\ex [L(\xx,\HH)] =\ex \left [ \sum_{k=1}^{n(\xx,\HH)} x_k \right ].$$ ![Linear Search Problem: two-sided searcher on the left, one-sided gatherer on the right. The costs of the indicated plans, given the positions of the hidden objects, are shown by darker shade.[]{data-label="fig:lsp"}](plan_beck "fig:"){width=".47\textwidth"} ![Linear Search Problem: two-sided searcher on the left, one-sided gatherer on the right. The costs of the indicated plans, given the positions of the hidden objects, are shown by darker shade.[]{data-label="fig:lsp"}](plan_uni "fig:"){width=".47\textwidth"} [Note that the search problem on the line reduces to the above one-sided version if there are two searchers who have to meet at the origin with the found object. If the underlying probability distribution is symmetric, then they have to search in opposite directions, returning to the origin after each excursion. ]{}

Motivation
----------

One-sided linear search appears naturally in quite a few applications. The initial motivation was the problem of search in unstructured Peer-to-Peer storage systems, analyzed in [@yulik_oper], where the relevance of Hamiltonian dynamics was first noticed.
In such an unstructured network, one sequentially floods some (hop-)vicinity of a node, see Figure \[network\], with a request for an item, setting the Time-to-Live at some limit, until the item is found. The cost of a plan is the total number of queries at all nodes of the network, representing the per-query overhead. ![Search for an object in a Peer-to-Peer unstructured network. The object is found after the 3rd flooding.[]{data-label="network"}](p2p.pdf){width=".45\textwidth"} Further applications include [*robotic search*]{}, where one deals with programming a robot of low sensing and computational capabilities, unable to recognize the objects it collects. Also the problem of efficient eradication of unwanted phenomena (say, irradiation of a tumor) can be mapped onto our model.

Outline of the results {#sec-1.3}
----------------------

We start with a general discussion of the one-sided search problem, showing in particular that the natural necessary condition of optimality implies that the optimal plan should satisfy a three-term recurrence, the [*variational recursion*]{} (a discrete analogue of the Euler-Lagrange equations). This reduces the dimension of the phase space, but also introduces Hamiltonian dynamics. We analyze in detail a “self-similar” case of homogeneous tail distribution function, also called a Pareto distribution, and see that the phase space is split naturally into chaotic and monotonicity regions, divided by a [*separatrix*]{}. The Hamiltonian dynamics associated with the variational recursion is then studied. We set the stage for a general distribution, but mostly constrain our proofs to the case of the [*exponentially distributed position $\HH$ of the object*]{}, i.e. to the case of $$f(x):=\prob(\HH>x)=\exp(-x).$$ We prove that the optimal trajectories should start at the separatrix[^1]. On the other hand, the plans satisfying the variational recursion are represented by a one-dimensional curve.
The intersection of the separatrix with the curve gives two candidates for the starting position, mirroring the situation in the original setting of Beck [*et al*]{}’s papers. We conclude with several open questions. Occasionally, we use several standard notions from the theory of dynamical systems; for definitions we refer to [@katok].

Basic properties {#sec-2}
================

Basic notions {#sec-2.1}
-------------

The input into the search algorithm is a [*plan*]{}, or a [*trajectory*]{} $$\xx=\{x_0=0, x_1, \ldots \, x_k, \ldots \}, x_k\geq 0, x_k \rar \infty,$$ that is, an unbounded sequence of turning points. Below we list some simple properties of the cost functional (\[eq:cost\]): \[prp:basic1\] The cost of a plan is given by $$\label{eq:cost-functional} E(\xx)=\sum_{k=1}^{\infty} x_k \, \Prob( \HH >x_{k-1}) = \sum_{k=1}^{\infty} x_k f(x_{k-1}).$$ Any optimal search plan is strictly monotone. In other words, if a plan $\xx = \{x_n\}$ is not strictly increasing, there is a naturally modified strictly monotone plan $\tilde \xx = \{ \tilde x_n\}$ such that $E(\tilde \xx) < E(\xx)$. The contribution to the average cost is the length of an excursion times the probability that this excursion will have to occur: $$E(\xx,\HH)=\sum_k x_k \cdot \one( \HH>x_{k-1})$$ which implies (\[eq:cost-functional\]). Now, assume that a plan ${\bf x}$ is not strictly monotone. Consider a modified plan $\tilde \xx$, where the turning points preventing strict monotonicity are removed. Then, as can be verified by straightforward estimates, $E(\tilde \xx) < E(\xx)$. \[prp:basic2\] If the position of the object is [*known*]{}, then the expected cost of its recovery, $L=\ex[\HH]$, is a lower bound on the cost of any trajectory: $$E({\bf x}) \geq L.$$ There exists a plan of cost at most $4L+\epsilon$ (thus finite if $L$ is).
First, note that the sum $$E({\bf x}) = \sum_{k=0}^{\infty} x_{k+1}f(x_k)$$ is bounded below by the integral $$\int_0^{\infty}f(x)dx = L.$$ Next, observe that $$L = \ex [\HH] = -\int_0^{\infty} x \cdot f^{\prime}(x) dx = \int_0^{\infty} f(x) dx,$$ by definition and using integration by parts once. Then, using monotonicity of $f$ we estimate this integral from below: $$L = \int_0^{\infty} f(x) dx = \sum_{k=0}^{\infty} \int_{x_k}^{x_{k+1}} f(x) dx \geq \sum_{k=0}^{\infty} (x_{k+1}-x_k)f(x_{k+1}).$$ Evaluating the expression on the right over the geometric sequence $x_0 = 0, x_k = A\cdot 2^{k-1}$ $( k = 1, 2, \ldots)$, we have $$L \geq \frac 1 4 \sum_{k=0}^{\infty} f(x_{k+1})x_{k+2}.$$ Adding $x_1= A$ to both sides, we obtain $$4L + A \geq E({\bf x}),$$ which proves the claim since $A$ can be taken arbitrarily small. If the tail distribution function is continuously differentiable (or even Lipschitz) on $[0,\infty)$, then the optimal trajectory does exist. In particular, one need not consider bi-infinite trajectories $\{ 0 < \ldots < x_{-2}< x_{-1}<x_1<x_2 < \ldots\}$. This is an extension of the corresponding result for the two-sided search, see [*e.g.*]{} [@beck_exist]. However, for completeness, we give an independent proof in the next section. The Lipschitz property is essential, as was also observed by Beck and Franck, since one can construct an example for which no sequence with finitely many terms near zero is optimal. In other words, there is no first turning point; see the example in the next section.

Variational recursion
---------------------

Optimality of a sequence implies a local condition. Assume the tail distribution function $f(x)=\Prob(\HH>x)$ is differentiable. If the plan $\xx$ is optimal, then the terms $\{x_k\}$ satisfy the [*variational recursion*]{}: $$\label{eq:var_rec} f(x_{n-1})+x_{n+1} f'(x_{n})=0.$$ It is immediate if one notices that the cost depends on $x_k$ via only two terms, $f(x_{k-1}) x_k$ and $f(x_k) x_{k+1}$.
This allows us to find $x_{n+1}$ as a function of $x_{n-1}, x_n$, $$x_{n+1}=-\frac{f(x_{n-1})}{f'(x_n)}$$ and to reconstruct the whole optimal plan from its first two points, $x_0=0$ and $x_1$. In fact, it is useful to think of $\{x_k\}_{k=0,1,\ldots}$ as iterations of the mapping $\vr:\Real_+^2\to\Real_+^2$ given by $$\vr:(x,y)\mapsto (y,-f(x)/f'(y))$$ (which we will still be referring to as the [*variational recursion*]{}).

Existence of an optimal sequence {#sec:exist_min}
================================

For the two-sided (Beck-Bellman) search problem, the existence of the optimal search plans was shown in [@franck; @beck_exist], and some improvements appeared in the subsequent papers. For completeness, we supply the existence proof for the one-sided case, as we consider in detail the associated nonlinear map. Recall the cost functional $$E({\bf x}) = \sum_{k=0}^{\infty} x_{k+1}f(x_k)$$ and formulate the minimization problem: $$\begin{aligned} \label{eq:minprob} \hspace{10mm} E_0 = {\rm inf} \left \{ E({\bf x}), {\bf x} =( \ldots ,x_{-2}, x_{-1}, x_0, x_1, x_2, ..., x_k, ...), x_j > 0, j \in {\mathbb N}, x_k \rightarrow \infty \right \}.\end{aligned}$$ Note that we do not restrict the sequence to have a first term; we will prove below that an optimal sequence does have one. On the other hand, if $f(x)$ does not vanish for any $x\geq 0$ there can be no other density points for an optimal plan, for otherwise the cost would be infinite. Clearly, $E_0 \geq 0$, since $E\geq 0$. By definition of the infimum, there exists a minimizing sequence $\{ {\bf x}^{(n)}\}$ such that $$E({\bf x}^{(n)}) \rar E_0.$$ The goal is to show that there is a convergent subsequence such that ${\bf x}^{(n_k)} \rar {\bf x}^{*}$ and $E({\bf x}^{(n_k)})\rar E_0.$ Assume $f$ is Lipschitz and $f(x)\neq 0$ for any $x\geq 0$.
In the minimization problem (\[eq:minprob\]), there exist two positive monotone sequences, $\{ a_k \}_{k=0}^{\infty}, \{ b_k \}_{k=0}^{\infty}$, such that $a_k < b_k$, $a_k \rightarrow \infty$, $b_k \rightarrow \infty$ and there is a minimizing subsequence $\{ \xx^{(n)}\}$ such that $a_k < x_k^{(n)} < b_k $. First, we note that $E_0 \leq 4 L$ is a bounded quantity, see the previous section. To prove existence of $\{ b_k \}$, we first observe that any minimizing sequence must satisfy $E({\bf x}^{(n)}) \leq 2E_0$, for sufficiently large $n$. Thus, $x_1 \leq 2E_0 = b_1$ and then $x_2 f(x_1) \leq 2E_0$. Therefore, $$x_2 \leq \frac{2E_0}{f(x_1)} \leq \frac{2E_0}{f(2E_0)} .$$ We then define $b_2 = 2E_0/f(2E_0)$. Proceeding by induction, $$b_{k+1} = 2E_0/f(b_k(E_0)), \label{eq:beb}$$ we obtain the desired sequence. Note that the sequence is strictly monotone, as $$x f(x) < L \leq E_0 < 2E_0,$$ and therefore the mapping cannot have a fixed point $x=2E_0/f(x)$. Thus, the sequence $\{ b_k \}$ monotonically grows to infinity and it bounds the corresponding terms of the minimizing sequence. To establish the lower bounding sequence we prove[^2] Assume $f$ is Lipschitz and let ${\bf x}$ be a monotone, possibly bi-infinite, sequence of turning points. If $x_m < 1/(2C_L)$, then the modified sequence $\tilde \xx$, with all $x_j (j < m)$ removed, will have lower cost. Rewrite $$E(\xx) = \sum_k x_{k+1} f(x_k) = \ldots + x_{m-1} f(x_{m-2}) + x_m f(x_{m-1}) + x_{m+1} f(x_m) + \ldots$$ and the modified sequence $$E(\tilde \xx) = x_m + x_{m+1}f(x_m) + \ldots.$$ We need to show $$x_m < \ldots + x_{m-1} f(x_{m-2}) + x_m f(x_{m-1}).$$ Rearranging some terms, we get $$\frac{1-f(x_{m-1})}{x_{m-1}} < \ldots + \frac{f(x_{m-2})}{x_m}.$$ The left-hand side is bounded by the Lipschitz constant $C_L$ and the right-hand side is bounded from below by $1/x_m - C_L$. Therefore, by choosing $x_m< 1/(2C_L)$, we obtain the desired result.
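As an illustration (ours, not from the paper), the upper-bound recursion $b_{k+1}=2E_0/f(b_k)$ can be iterated numerically for the exponential tail $f(x)=e^{-x}$ used later in the paper; the value of $E_0$ below is an arbitrary placeholder, and the point is only that the $b_k$ grow monotonically without bound:

```python
import math

f = lambda x: math.exp(-x)  # a non-vanishing Lipschitz tail distribution
E0 = 0.5                    # illustrative placeholder for the optimal cost

b = [2 * E0]                # b_1 = 2*E0 bounds x_1
for _ in range(3):          # a few steps; the b_k blow up very fast
    b.append(2 * E0 / f(b[-1]))   # b_{k+1} = 2*E0 / f(b_k)

# here x*f(x) = x*exp(-x) <= 1/e < 2*E0 for all x, so the recursion has
# no fixed point and the sequence is strictly increasing:
assert all(u < v for u, v in zip(b, b[1:]))
```

With these choices the recursion reduces to $b_{k+1}=e^{b_k}$, so the bound grows as a tower of exponentials, which is why only a few steps are taken before floating-point overflow.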
Therefore, an optimal sequence of turning points is one-sided and there is at most one point in the interval $[0,\delta]$, $\delta=1/(2C_L)$. Then, we let $a_0 = 0$ and $a_1 = \delta$. Now, the sequence $\{a_k\}$ can be constructed using monotonicity $a_{k+1}\geq a_k$ and the fact that there are finitely many terms in any interval of, say, unit size: $[\delta, \delta+1], [\delta+1, \delta+2]$, etc.\ Monotonicity has been proved in the previous section, by showing that from a nonmonotone sequence, by deleting the appropriate terms, we obtain a strictly monotone sequence with smaller cost. There exists a converging subsequence, ${\bf x}^{(n)}\rightarrow {\bf x}^{(*)}$, where ${\bf x}^{(*)}$ is strictly monotone and $x^{(*)}_k \rightarrow \infty$. The cost function converges: $E({\bf x}^{(n)}) \rightarrow E({\bf x}^{*})=E_0$. Fix $N>0$ and let ${\mathbb P}_N \xx = (x_1, x_2, ..., x_N)$. For the minimizing sequence $\xx^{(n)}$, let $\xx^{(n_1)}$ be a subsequence for which $x^{(n_1)}_1 \rightarrow x_1^*$. Take a subsequence of this subsequence, so that ${\mathbb P}_2 {\bf x}^{(n_2)} \rightarrow \{ x_1^*, x_2^* \} $. Proceeding further and using the diagonal subsequence ${\bf x}^{(n_k),k}$, we obtain a convergent subsequence, which we will still denote by ${\bf x}^{(n)}\rightarrow {\bf x}^*$. The limit $\xx^*$ is a monotone sequence by construction. It must also be strictly monotone, for if not, [*i.e.*]{} if some terms are equal, we already know from the previous section that by removing repeated terms the cost is decreased, which contradicts the sequence being minimizing. Now, to prove the second part of the theorem, let $E^N({\bf x})$ denote the $N$-th partial sum. Fix $N>0$ to be sufficiently large, and observe that $E^N ({\bf x}^{(n)}) \rightarrow E^{N}({\bf x}^*)$ just by continuity. Because of the lower bounding sequence $\{ a_k\}$, we can take $N$ so large that $x^{(n)}_N$ and $x_N^*$ are larger than any fixed number.
Consider now the remainders $$E({\bf x}^{(n)}) - E^N ({\bf x}^{(n)}), \,\,\,\, E({\bf x}^*) - E^N({\bf x}^*),$$ which are arbitrarily small. Indeed, $$E({\bf x}) - E^N ({\bf x}) = x_{N+1} f(x_N) + x_{N+2} f(x_{N+1})+\ldots$$ and since the sequence is minimizing we can estimate the remainder by choosing, e.g. $x_{N+k}= 2^{k+1} x_{N-1}$. Next, using an argument similar to the one used in Proposition \[prp:basic2\], we obtain the bound $$E({\bf x}) - E^N ({\bf x}) \leq 4 \int_{x_{N-1}}^{\infty} f(x) dx.$$ The same bound holds for the other remainder. Thus, taking $x_{N-1}$ large enough, we can make the remainders arbitrarily small. This implies the convergence $E({\bf x}^{(n)}) \rightarrow E({\bf x}^{*})=E_0$. Next we demonstrate that the Lipschitz condition is necessary. Indeed, without it we can construct an example with no initial turning point:\ \ [**Example with singularity**]{}. If the tail distribution function is not Lipschitz then the sequence may fail to have a first turning point. Here, we present a simple example of one-sided search. Let $f(x)=1-\sqrt{x}$ and assume the search is done on the unit interval $[0,1]$. It is also possible to modify this example to the infinite ray $(0,\infty)$ by changing $f(x)$ outside of any neighborhood of $0$ so that it does not vanish anywhere. Suppose the optimal sequence is given by a one-sided sequence $\{ 0< x_1 < x_2 < x_3 < \ldots\}$ with the cost $$E({\bf x}) = x_1 + x_2 (1-\sqrt{x_1}) + x_3 (1-\sqrt{x_2}) + \ldots.$$ Let us insert another point $x_0: 0 < x_0 < x_1$; then the cost of the modified sequence is given by $$E({\bf \tilde x}) = x_0 + x_1 (1-\sqrt{x_0}) + x_2 (1-\sqrt{x_1}) + \ldots.$$ Comparing them, we find that the cost of the modified sequence is lower if and only if $$x_0 +x_1 (1-\sqrt{x_0}) < x_1 \Leftrightarrow \sqrt{x_0} < x_1 \Leftrightarrow x_0 < x_1^2.$$ The latter inequality can always be achieved. Therefore, the optimal sequence does not have an initial turning point.
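A minimal numerical check of the final inequality (our illustration; the helper `head_cost` is a hypothetical name comparing only the terms of the cost affected by the insertion): for $f(x)=1-\sqrt{x}$, prepending any $x_0 < x_1^2$ strictly lowers the cost, with equality exactly at the threshold $x_0 = x_1^2$.

```python
import math

f = lambda x: 1.0 - math.sqrt(x)  # non-Lipschitz tail on [0, 1]

def head_cost(x0, x1):
    # the only terms of E that change when x0 is inserted before x1:
    # x0 + x1 * f(x0), to be compared against the original term x1
    return x0 + x1 * f(x0)

x1 = 0.25
for x0 in (x1**2 / 2, x1**2 / 10, x1**2 / 100):
    assert head_cost(x0, x1) < x1        # x0 < x1^2 lowers the cost
assert head_cost(x1**2, x1) == x1        # equality at the threshold
```

Since a cheaper insertion exists below every candidate first point, the infimum is not attained by any plan with a first turning point, exactly as argued above.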
Pareto distribution {#sec-2.2} =================== In this section we present an explicit example which illustrates our general approach: the optimal plan of the search problem belongs to an invariant manifold (separatrix) of the associated Hamiltonian map. Cost functional --------------- Consider a Pareto type tail distribution (analogous to that of [@reif]) $$\begin{aligned} f(x) &=& x^{-\alpha} \,\,\, {\rm if} \,\,\ x \geq 1, \nonumber \\ f(x) &=& 1 \,\,\, \,\,\,\,\,\, \,\, {\rm if} \,\,\ 0< x < 1, \nonumber \end{aligned}$$ where we assume that $\alpha > 1$ in order to have a bounded expected value. We will use the notation, exceptionally, $x_0=1$, which makes formulas look simpler. Note that $x_0=1$ does not correspond to an actual turning point. The expected cost is given by $$E({\bf x}) = x_1 + f(x_1) x_2 + f(x_2) x_3 + ... = \sum_{n=0}^{\infty} \frac{x_{n+1}}{x_n^\al}.$$ The variational recursion reads in this case $$x_{k+1} = \frac{1}{\al}\frac{x_k^{\al+1}}{x_{k-1}^{\al}}$$ or equivalently $$\frac{x_{k+1}}{x_k^{\al}} = \frac{1}{\al}\frac{x_{k}}{x_{k-1}^{\al}} = \frac{1}{\al^k}\frac{x_{1}}{x_{0}^{\al}}= \frac{1}{\al^k} x_1.$$ Therefore, for the sequences generated by the variational recursion, with $x_1=x$, we can immediately compute the cost $$E(\xx) = \sum_{n=0}^{\infty} \al^{-n} x_1 = x_1 \frac{\al}{\al -1},$$ as a function of the initial condition $x_1=x$. This expression indicates that $x_1$ should be as small as possible, provided the sequence satisfies the constraints of monotonicity and unbounded growth. From the sequence definition, we have $$\frac{x_{k+1}}{x_k}= \frac{1}{\al} \left ( \frac{x_k}{x_{k-1}}\right )^{\al}$$ or denoting the ratios by $r_k=x_k/x_{k-1}$, $$r_{k+1}= \al^{-1}r_k^{\al}, \,\,\, r_1 = x_1.$$ Defining $w_k=r_k \al^{-\frac{1}{\al-1}}$ gives $$w_{k+1} = w_k^{\al}.$$ We clearly need to take $w_1 \geq 1$, so that the ratios would not go to zero and the sequence $x_k$ would be monotone. 
However, since we need $x_1$ to be as small as possible, we take $w_1 = 1$, resulting in $x_1 = r_1 = \al^{\frac{1}{\al-1}}$. Therefore, the minimal cost is given by $$\begin{aligned} \label{eq:excost} E_0 = \frac{\al\cdot \al^{\frac{1}{\al-1}}}{\al -1}= \frac{\al^{\frac{\al}{\al-1}}}{\al -1},\end{aligned}$$ and the optimal sequence is given by $$x_k = \al^{\frac{k}{\al-1}}.$$ In a particular case of $\alpha=2$, the optimal sequence is given by geometric series $x_k = 2^k$. Hamiltonian dynamics -------------------- The global structure of the dynamics defined by the variational recurrence in this homogeneous problem is shown on the Figure \[fig:homo\]. Here we draw the invariant curves for the trajectories defined by $\vr$: the iterations of a point $(x_k, x_{k+1})$ found on one of these curves, stays on it forever. The red (thick) line corresponds to the optimal trajectory. ![Phase portrait for the variational recursion for the homogeneous distribution. There are two regions: above the line $x_{k+1}=\alpha^{1/(\alpha-1)}x_k$, where all the orbits monotonically grow and below, where all the orbits lose monotonicity eventually.[]{data-label="fig:homo"}](homogen.pdf){width=".5\textwidth"} The qualitative dynamics in this case can be summarized as follows: - There is a region of initial values $x_1$ where the variational recursion stops making sense: the iterates become non-monotone. We will call this region [*chaotic*]{}[^3]. - The optimal initial value is on the boundary of the chaotic set. - The growth of the optimal plan (exponential) is far slower than the growth for generic initial values outside the chaotic region (where it is super-exponential). The sequences can be represented as solutions of the two dimensional nonlinear map $$\begin{aligned} x_{k+1} &= r_{k+1}x_k \\ r_{k+1} &= \frac{1}{\al} r_k^{\al}.\end{aligned}$$ The ray $r=r^* = \al^{\frac{1}{\al-1}}$ is invariant. Above this ray $r=r^*$, the orbits go rapidly to infinity. 
The orbits below $r=r^*$ are not monotone: $r_k$ monotonically decreases to zero, so while $x_k$ may grow at first, once $r_k$ becomes less than 1, $x_k$ will be decreasing.

Exponential tail distribution
=============================

In this section we analyze in detail the prototypical case of the exponential distribution. While this case is sufficiently simple to allow complete understanding, the Hamiltonian dynamics is no longer integrable. Therefore, the methods that we develop should apply to other cases of interest.

Variational recursion
---------------------

We now consider several key properties of the variational recursion $\vr:(x,y)\mapsto (y,-f(x)/f'(y))$. One of the basic observations is that it preserves an area form: The mapping $\vr$ preserves the area form $\omega=f'(x) dx\wedge dy$. This is a rather general fact: for any recursion obtained by extremization of the functional $$E(\xx) = \sum_{k=0}^{\infty} F(x_k, x_{k+1}),$$ the 2-form $\frac{\partial^2 F}{\partial x \partial y} dx\wedge dy$ is invariant with respect to the associated two-dimensional mapping. It is possible to give explicitly the coordinates in which the variational recursion $\vr$ is [*Hamiltonian*]{}: if we use $(s,y)$, where $s=f(x)$, in lieu of $(x,y)$, then $$\vr:(s,y)\mapsto (f(y),-s/f'(y));$$ it maps $[0,1]\times\Real_+$ into itself and preserves the [*Lebesgue area*]{} $ds\wedge dy$. We will be referring to this coordinate system as [*standard*]{}. In the standard coordinates, the variational recursion for the exponentially distributed $\HH$ (i.e. for $f(x)=\exp(-x)$) is given by $$\vr:(s,y)\mapsto (e^{-y},se^y).$$ Further, one can see that $\vr$ has a unique stationary point, $s=e^{-1}, y=1$. One can verify that this fixed point is elliptic. ![Several orbits of the variational recursion for the exponential distribution. The solid curve separates the chaotic region from the monotonicity region. The region of interest is located to the left of the vertical line $x=1$.
The monotone orbits outside of the chaotic region are not present as they are rapidly mapped to infinity. []{data-label="fig:expo"}](expo_phase.pdf){height="3in"} Cost functional and cost function {#sec-2.4} --------------------------------- We already know that the optimal plan can be found only among the trajectories satisfying the variational recursion. We will set $x_0=0$; under this assumption the trajectories (not necessarily increasing) satisfying the variational recursion are parameterized by the first non-zero term $x_1:=x$. We will be denoting the corresponding family of trajectories as $\xx_\vr(x)=\{x_0=0, x_1=x, x_2=x_2(x),\ldots\}$. For the exponentially distributed $\HH$, the first few terms of the family $\xx_\vr(x)$ are given by $x_1=x; x_2=e^x; x_3=e^{e^x-x}$ and so on.\ [**Notation:**]{} We will use the term [*cost functional*]{} for (\[eq:cost-functional\]), defined on the space of all trajectories $\xx$, while reserving the term [*cost function*]{} for the restriction of the functional $E$ to the one-parametric curve $\xx_\vr(x)$ of solutions to variational recursion, denoting the cost function by $E(x):=E(\xx_\vr(x))$.\ For exponentially distributed $\HH$, the cost [*function*]{} is finite on monotonic trajectories. Indeed, in this case, unless growing without bound, the trajectory should converge to the only fixed point of the variational recursion, which is impossible as it is an elliptic point. If for some $K$, $x_K>1$, then for $k>K$, $$x_{k+1}-x_k=\ln{x_{k+2}}\geq \ln{x_K}>0,$$ and $x_k$ grows at least as an arithmetic progression, implying the convergence of $$E(\xx)=\sum_{k=0}^{\infty} x_{k+1}\exp(-x_k)=\sum_{k=0}^{\infty}\exp(-x_{k-1}).$$ Now, as the cost function $E(x)$ is a function of one variable, and we established that the optimal trajectory should be one of the family $\xx_\vr(x)$, it might appear that the rest is straightforward: to find the minimum of $E(x)$ over the starting point $x_1=x$. 
However, if we take the formal derivative $$\frac{dE}{dx}=\sum_{k=0}^{\infty} \frac{d}{dx}\left ( x_{k+1}(x)f(x_k(x)) \right ),$$ we will see that all the terms vanish identically (precisely because $\xx_\vr(x)=\{x_1(x),x_2(x),\ldots\}$ satisfies the variational recursion). It might appear that $E(x)$ should be a constant! However, we already computed $E(x)$ in an example in section \[sec-2.2\], and know that this is not the case. The reason for this calamity is, of course, the fallacious differentiation of an infinite sum of differentiable functions with wildly growing $C^1$ norms. However, if we consider the [*approximants*]{} $$E^K(x)=\sum_{k=0}^K x_{k+1} f(x_k),$$ they can be differentiated term by term, yielding $$\label{eq:diff_CK} \frac{dE^K}{dx}(x)=f(x_K(x))\,\frac{dx_{K+1}}{dx}(x)$$ (by telescoping). As $E^K(x)$ approximates $E(x)$ to within $4 E_0 \, f(x_K)$, which uniformly converges to zero, the existence of a local minimum of $E(x)$ in an interval where $E$ is finite would imply that the approximants $E^K(x)$ have local minima in that interval, for all large enough $K$. Later we will use this observation to prove that the reduced cost function attains its optimum on the separatrix. Hamiltonian dynamics {#sec-3} ==================== Denote by $\phsp=\{1\geq s\geq 0, y\geq 0\}$ the phase space (in standard coordinates) on which the variational recursion acts. Chaotic and monotone regions {#sec-3.1} ---------------------------- The region $\monot_k$ of $k$-step monotonicity is defined as the collection of points in $\phsp$ such that the $k$-fold application of $\vr$ produces a monotonic (along the $y$ coordinate) sequence. The intersection of all $\monot_k$ is denoted by $\monot_\infty:=\cap_k \monot_k$ and is called the [*region of monotonicity*]{}. Its complement is called the [*chaotic region*]{}. The boundary $\separ$ of the monotonicity region is called the [ *separatrix*]{}.
It is not immediate that the separatrix is a [ *curve*]{}: the monotone and chaotic regions can have rather wild structure. However, we will see that the separatrix is indeed a smooth curve, and the relevant part of it can be represented as the graph of a function in appropriate coordinates. ![Invariant curve and iterated initial data in the exponential case in $(y,z)$ coordinates. The long curve is the separatrix. It corresponds to the solid curve in Figure \[fig:expo\]. The line segment with the end points $(0,0)$ and $(1,1)$ represents a one-parameter family of the initial turning points $x_1$. Note that the segment intersects the separatrix at exactly two points. These two points are the candidates for the optimal search sequence. The other curves are obtained by iterating the initial segment by the forward map. []{data-label="fig:exp_inv"}](inv_curve_iter2.pdf){width=".6\textwidth"} Existence of separatrix: exponential distribution ------------------------------------------------- The existence of the separatrix in the phase space for the exponentially distributed $\HH$ is proved by applying the standard Banach contraction mapping principle. We start by introducing more convenient coordinates in the phase space[^4] $(x,y)\rightarrow (y,z=y-x)$. Thus, $z_{k+1} = x_{k+1}-x_k$ “measures” the monotonicity of the orbits. In these new coordinates, the mapping $\vr$ is given by $$\label{eq:yz} \vr:(y,z)\mapsto (Y,Z)=(\exp{z}, \exp{z} -y).$$ The inverse map in these coordinates acts as $$\label{eq:yz-inv} \vrinv: (Y,Z)\mapsto (y,z)= (Y-Z, \ln{Y}).$$ The iterations (under $\vrinv$) of the boundary of the monotonicity region $\{Z>0\}$ result in curves $z=\phi_k(y)$, where the functions $\phi_k$ satisfy the recursion $$\phi_{k+1}(Y-\phi_k(Y))=\ln(Y),$$ or, equivalently, $$\phi_{k+1}(\eta)=\ln(\psi_k(\eta)),$$ where $\psi_k$ is defined as the inverse to $Y\mapsto Y-\phi_k(Y)$.
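For reference, the forward and inverse maps (\[eq:yz\]) and (\[eq:yz-inv\]) are mutually inverse; a minimal numerical check:

```python
import math

def R(y, z):
    """Forward map in (y, z) coordinates: (y, z) -> (e^z, e^z - y)."""
    return math.exp(z), math.exp(z) - y

def R_inv(Y, Z):
    """Inverse map: (Y, Z) -> (Y - Z, ln Y)."""
    return Y - Z, math.log(Y)

for y, z in [(0.3, 0.3), (2.0, 1.1), (5.0, 1.7)]:
    back = R_inv(*R(y, z))
    assert abs(back[0] - y) < 1e-12 and abs(back[1] - z) < 1e-12
```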
\[prp:separ\] The map $\phi_k\mapsto \phi_{k+1}$ defined above is a contraction in the space of continuously differentiable positive functions with bounded derivative $0< \phi^{\prime}(y)<1/2$ for $y \geq 4$. There is a continuous limit $\phi=\lim_{k\to\infty} \phi_k$, which solves the functional equation $$\phi(y-\phi(y))=\ln(y)$$ and satisfies the bound $|\phi(y)-\ln(y)|\leq 1$ on $y\in [4,\infty)$. By construction, the region below the separatrix $\separ$ (in $(y,z)$ coordinates) corresponds to the non-monotonic solutions of the variational recursion, and that above $\separ$ corresponds to monotonically increasing solutions. In other words, $\separ$ is indeed the boundary of $\monot_\infty$. Consider the inverse map (\[eq:yz-inv\]). It takes a graph $(y,\phi(y))$ into a graph $(y,\Phi(\phi)(y))$, where $$\Phi(\phi)(y) = \ln(w_{\phi}(y)),$$ where $w_{\phi}(y)$ solves the equation $$y = w_{\phi}(y) - \phi(w_{\phi}(y)).$$ We consider this mapping in the space of continuously differentiable functions $${\bf X} = \{ \phi \in C^1 (y_0,\infty), \phi (y)>0, 0< \phi^{\prime}(y) \leq 1/2 \}.$$ Note that at each iteration we have a well defined function $w=w_{\phi}(y)$ and that $w_{\phi}(y) > y$. Indeed, by the implicit function theorem, we need $\phi^{\prime}(w) \neq 1$, which we have since $\phi^{\prime}(y) \leq 1/2$ and $w_{\phi}(y) > y$. First, we show that we can iterate indefinitely: $$\Phi(\phi)(y) = \ln(w_{\phi}(y)) > \ln (y) > \ln (y_0) > 0,$$ if $y_0>1.$ Differentiating, we obtain $$\label{eq:aprider} \frac{d}{dy}\Phi(\phi)(y) = \frac{w_{\phi}^{\prime}(y)}{w_{\phi}(y)} = \frac{1}{w_{\phi}(y)} \cdot \frac{1}{1-\phi^{\prime}(w_{\phi}(y))} \leq \frac{2}{w_{\phi}(y)} \leq \frac{2}{y} \leq \frac{2}{y_0} \leq \frac 12,$$ if $y_0>4$. Also, since $w_{\phi}^{\prime}(y) > 0$, we have $$\frac{d}{dy}\Phi(\phi)(y)>0.$$ Now, we show that the mapping $\Phi$ is a contraction in the space of continuous functions.
Let $y \geq y_0$ and consider $$\begin{aligned} \label{eq:phidiff} |\Phi(\phi)(y) - \Phi(\psi)(y)| = |\ln(w_{\phi}(y))-\ln(w_{\psi}(y))| \end{aligned}$$ $$\leq \frac{1}{\min (w_{\phi}(y), w_{\psi}(y) )}\cdot |w_{\phi}(y)- w_{\psi}(y)| \leq \frac{1}{y_0} |w_{\phi}(y)- w_{\psi}(y)|.$$ Now, observe that $$|w_{\phi}(y)- w_{\psi}(y)| = | \phi(w_{\phi}(y))- \psi(w_{\psi}(y)) | \leq |\phi(w_{\phi}(y))-\phi(w_{\psi}(y))| + |\phi(w_{\psi}(y))-\psi(w_{\psi}(y))|$$ $$\leq \sup_{y\geq y_0} |\phi^{\prime}| \cdot |w_{\phi}(y)- w_{\psi}(y)| + \sup_{y\geq y_0} |\phi(y)-\psi(y)|.$$ Therefore, $$| w_{\phi}(y)- w_{\psi}(y) | \leq \frac{\sup_{y\geq y_0} |\phi(y)-\psi(y)|}{1- \sup_{y\geq y_0} |\phi^{\prime}|} \leq 2\sup_{y\geq y_0} |\phi(y)-\psi(y)|$$ and combining this inequality with (\[eq:phidiff\]), we obtain the contraction $$\sup_{y\geq y_0} |\Phi(\phi)(y) - \Phi(\psi)(y)| \leq \frac{2}{y_0} \sup_{y\geq y_0} |\phi(y)-\psi(y)| \leq \frac{1}{2} \sup_{y\geq y_0} |\phi(y)-\psi(y)|,$$ assuming again that $y_0>4$. As usual in the contraction argument, the distance between the initial guess $\phi_0(y) = \ln(y)$ and the limit $\phi(y)$ is bounded by $||\phi-\phi_0|| \leq 2||\phi_1-\phi_0||$. Consider $$|\phi_1(y)-\phi_0(y)| = | \ln(t(y)) - \ln(y)|,$$ where $y=t(y)-\ln(t(y))$ with $y\geq 4$. Thus, $$| \ln(t(y)) - \ln(y)| \leq \frac{1}{y} |t(y)-y| \leq \frac{1}{y} |t^{\prime}(y)-1|\cdot y = |t^{\prime}(y)-1| = \frac{1}{|t(y)-1|} ,$$ where we used the derivative of the inverse function. Since we assume that $y\geq 4$, which implies $t(y) > 2$, we have $$|\phi(y)-\phi_0(y)| \leq 2|\phi_1(y)-\phi_0(y)| \leq 1.$$ Now, we verify that the obtained separatrix is actually smooth. We need this property as we later prove that the cost function increases away from the separatrix. In fact, the separatrix is possibly an analytic function; see the appendix.
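The contraction can also be run numerically. Below is an illustrative Picard iteration (the grid, the tail extrapolation, and the inner fixed-point solve are our own implementation choices, not from the text) approximating the limit $\phi$ of the functional equation $\phi(y-\phi(y))=\ln(y)$:

```python
import math

Y0, Y1, N = 4.0, 400.0, 2000
grid = [Y0 + i * (Y1 - Y0) / N for i in range(N + 1)]

def interp(vals):
    """Piecewise-linear interpolant of `vals` on `grid`; beyond Y1 we
    substitute the leading-order tail phi(y) ~ ln(y) + ln(y)/y."""
    def phi(y):
        if y >= Y1:
            return math.log(y) + math.log(y) / y
        i = min(max(int((y - Y0) / (Y1 - Y0) * N), 0), N - 1)
        t = (y - grid[i]) / (grid[i + 1] - grid[i])
        return (1 - t) * vals[i] + t * vals[i + 1]
    return phi

def picard_step(phi):
    """One application of Phi: solve w - phi(w) = y (the inner iteration
    w <- y + phi(w) contracts since phi' < 1/2), then return ln(w)."""
    out = []
    for y in grid:
        w = y + 1.0
        for _ in range(40):
            w = y + phi(w)
        out.append(math.log(w))
    return out

vals = [math.log(y) for y in grid]       # initial guess phi_0 = ln
for _ in range(12):
    vals = picard_step(interp(vals))
phi = interp(vals)
```

On this grid the residual $\phi(y-\phi(y))-\ln(y)$ drops to the discretization-error level, and the bound $|\phi(y)-\ln(y)|\leq 1$ from the proposition is visible numerically.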
The separatrix is a continuously differentiable function on the interval $[13,\infty)$ satisfying the bound $$\frac{d}{dy} \phi(y) \leq \frac{2}{y}.$$ Now we consider the contraction in the space of continuously differentiable functions with the norm $$||\phi||_1:=\sup_{y\geq y_0} |\phi| + \sup_{y\geq y_0} |\phi^{\prime}|$$ and with the bound $$|\phi^{\prime \prime}(y)| \leq 1.$$ We will also use the notation $$||\phi||_0:=\sup_{y\geq y_0} |\phi|.$$ Using the definition of $\Phi(\phi)$ and of $w_{\phi}$, we calculate $$\Phi^{\prime \prime}(\phi)(y) = \frac{w^{\prime \prime}_{\phi}}{w_{\phi}} - \frac{(w^{\prime}_{\phi})^2}{w^2_{\phi}}$$ and $$w_{\phi}^{\prime \prime}= \frac{\phi^{\prime \prime}(w_{\phi})(w^{\prime}_{\phi})^2 }{1-\phi^{\prime}(w_{\phi})}.$$ Recalling that for $y_0 \geq 4$ we have $0<\phi^{\prime}(y)<1/2$ and $1< w_{\phi}^{\prime}(y) <2$, we get $$|w_{\phi}^{\prime \prime}(y)| \leq 8 |\phi^{\prime \prime}(y)| \leq 8.$$ Next, we have $$|\Phi^{\prime \prime}(\phi)(y)| \leq \frac{|w_{\phi}^{\prime \prime}(y)| }{y_0} + \frac{4}{y_0^2}.$$ Taking, e.g., $y_0=10$, we can ensure that the last expression is bounded by 1.
Now, we prove that we indeed have a contraction: $$||\Phi(\phi) - \Phi(\psi)||_1 = \sup_{y\geq y_0} | \Phi(\phi)(y) - \Phi(\psi)(y)| + \sup_{y\geq y_0} |\Phi^{\prime}(\phi)(y) - \Phi^{\prime}(\psi)(y) |.$$ We already know that $$\sup_{y\geq y_0} | \Phi(\phi)(y) - \Phi(\psi)(y)| \leq \frac{2}{y_0}||\phi-\psi||_0 \leq \frac{2}{y_0}||\phi-\psi||_1.$$ Now, we estimate $$|\Phi^{\prime}(\phi)(y) - \Phi^{\prime}(\psi)(y)| = \left | \frac{w_{\phi}^{\prime}}{w_{\phi}} - \frac{w_{\psi}^{\prime}}{w_{\psi}} \right | \leq \frac{ |w_{\psi}|\cdot | w_{\phi}^{\prime} - w_{\psi}^{\prime} | + |w_{\phi}^{\prime}|\cdot |w_{\psi}-w_{\phi}| }{|w_{\phi}||w_{\psi}|}.$$ Using the estimates obtained in the proof of Proposition \[prp:separ\], we have $$\frac{|w_{\phi}^{\prime}| }{|w_{\phi}||w_{\psi}|} \leq \frac{2}{y_0^2}$$ and $$|w_{\phi}(y)-w_{\psi}(y)| \leq 2 \sup_{y\geq y_0} |\phi(y) -\psi(y)|.$$ On the other hand, differentiating the identity $$y = w_{\phi}(y) - \phi(w_{\phi}(y))$$ and using triangle inequalities, we can estimate the difference $$| w_{\phi}^{\prime} - w_{\psi}^{\prime} | \leq |\phi^{\prime}(w_{\phi})|\cdot |w_{\phi}^{\prime} - w_{\psi}^{\prime}| + |w_{\psi}^{\prime}| ( |\phi^{\prime}(w_{\phi}) - \psi^{\prime}(w_{\phi})| + | \psi^{\prime}(w_{\phi})-\psi^{\prime}(w_{\psi}) | ).$$ The first difference on the right-hand side can be absorbed into the left-hand side, as we did in the proof of Proposition \[prp:separ\]. The second difference is estimated by $$| \phi^{\prime}(w_{\phi}) - \psi^{\prime}(w_{\phi}) | \leq ||\phi-\psi||_1$$ and the third one by $$|\psi^{\prime}(w_{\phi}) - \psi^{\prime}(w_{\psi})| \leq ||\psi^{''}||_0\cdot |w_{\phi}-w_{\psi}| \leq |w_{\phi}-w_{\psi}|,$$ where $|w_{\phi}-w_{\psi}|$ has been estimated in Proposition \[prp:separ\]. Combining these inequalities, we obtain $$|\Phi^{\prime}(\phi)(y) - \Phi^{\prime}(\psi)(y)| \leq \left ( \frac{12}{y_0} + \frac{4}{y_0^2} \right ) ||\phi-\psi||_1.$$ By taking sufficiently large $y_0$, e.g.
$y_0=13$, we obtain a contraction in $C^1$. Having established the continuous differentiability of $\phi$, the bound follows from the a priori estimate (\[eq:aprider\]). By iterating the inverse map, one can show that the separatrix is smooth on a larger interval $[1,\infty)$. Properties of the separatrix ---------------------------- - By construction, the region below the separatrix $\separ$ (in $(y,z)$ coordinates) corresponds to the non-monotonic solutions of the variational recursion, and that above $\separ$ corresponds to monotonically increasing solutions. In other words, $\separ$ is indeed the boundary of $\monot_\infty$. - Using the functional equation, it is possible to obtain a logarithmic series expansion of the function $\phi$ defining the separatrix near $y=\infty$ (the derivation can be found in the appendix): $$\phi(y)=\ln(y)+\frac{\ln(y)}{y}+\ldots$$ - In the standard coordinates, it is instructive to consider the separatrix as the stable invariant manifold of a topological saddle “at infinity”. The intuition behind this picture underlies the construction of the separatrix. Cost function and optimal trajectories {#sec-4} ====================================== To understand the properties of the cost function and its approximations $E^N(x)$ we will need a standard trick from hyperbolic dynamics. There it is used to find fragile objects (invariant foliations) from robust ones (invariant cones), see e.g. [@katok]. Consistent cone fields {#cons_fields} ----------------------- We will continue to work in $(y,z)$ coordinates. We will refer to a pair of nowhere collinear vector fields $(\eta(y,z),\xi(y,z))$ (or, rather, to the convex cone in the tangent spaces spanned by these vector fields) as the [*cone field*]{} $K_{(y,z)}$, and to the vector fields $\eta,\xi$ as the [*generators*]{} of $K_{(y,z)}$. We will say that the cone field $K_{(y,z)}$ is [ *consistent*]{} at $(y,z)$ if the variational recursion [**R**]{} maps it into itself, i.e.
$$D\vr K_{(y,z)}\subset K_{\vr(y,z)};$$ here $D\vr$ is the differential of $\vr$. For the exponential $\HH$, it is given in the coordinates $(y,z)$ by $$D \vr (y,z) = \begin{pmatrix} 0 & e^{z}\\ -1 & e^z \end{pmatrix}.$$ We will call a subset $A$ of the quadrant $\{y\geq 0, z\geq 0\}$ a [*$\vr$-stable set*]{} if it is mapped into itself, [*i.e.*]{} $\vr (A) \subset A$. The subset ${\bf A} = \{y \geq 0, z \geq \max(0,\phi(y))\}$ of the quadrant is an [**R**]{}-stable set. In other words, all the points in the positive quadrant and above the separatrix do not leave that region under the action of [**R**]{}. This statement follows from the invariance of the separatrix and the fact that the ray $\{y=0,z\geq 0\}$ and the segment $\{0\leq y\leq y^*,z=0\}$ are mapped inside ${\bf A}$, where $(y^*,0)$ is the point where the separatrix intersects the $y$-axis. Now we will construct an explicit consistent cone field for the exponential case. It is in fact just the constant field, spanned by the tangent vectors $\eta=(1,2)$ and $\xi=(2,1)$. A straightforward computation shows that in the region $\{z>\ln 4\}$ the cone field generated by $\eta$ and $\xi$ is consistent, and we deduce: In the region $z\geq \ln 4$ above the separatrix, which is a $\vr$-stable set, there exists a consistent cone field transversal to the vertical vector field $(0,1)$. Monotonicity of the cost function on intervals of regularity {#sec-4.1} ------------------------------------------------------------ Now we are ready to prove the key fact about the cost function $E(x)$. Consider the ray $\rr:=\{(t,t), 0<t<\infty\}$ of initial conditions for the variational recursion. We will say that $t_*$ is a regular point if some vicinity of $t_*$ in the ray $\rr$ belongs to the monotone region $\monot_\infty$. In other words, for the initial data $x_0=0, x_1=t$, where $t$ is close to $t_*$, the variational recursion generates an increasing trajectory, for which the cost function is a well defined function $E(x)$.
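The consistency of the constant cone field above is a finite check; here is a small numerical verification (illustrative, with sample values of $z$ chosen by us):

```python
import math

def dR(z):
    """Differential of R:(y,z) -> (e^z, e^z - y), as a 2x2 matrix."""
    e = math.exp(z)
    return ((0.0, e), (-1.0, e))

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def in_cone(v):
    """Cone spanned by eta=(1,2) and xi=(2,1): a > 0 and a/2 <= b <= 2a."""
    a, b = v
    return a > 0 and 0.5 * a <= b <= 2.0 * a

eta, xi = (1.0, 2.0), (2.0, 1.0)
for z in (math.log(4.0) + 1e-6, 1.5, 3.0, 6.0):
    M = dR(z)
    assert in_cone(apply(M, eta)) and in_cone(apply(M, xi))
```

The threshold $z>\ln 4$ is sharp for this particular cone: just below it, the image of $\xi$ leaves the cone.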
It turns out that $t_*$ [*cannot be a local extremum*]{} of $E(x)$. \[prp:momotonicity\] In $(y,z)$ coordinates, if the region above the separatrix supports a consistent cone field $K$, with $\eta$ being one of the generators, and $\eta$ is not $\vr$-invariant, then on any interval $I=(y_-, y_+)\subset \rr$ in the intersection of the ray of initial data with the monotone region $\monot_\infty$ the function $E(x)$ is monotone. Consider the partial sums $E^N(x)$ which approximate $E(x)$: $$\label{eq:approx} E^N(x)=\sum_{m=0}^N f(x_m)x_{m+1},$$ where the trajectory $\xx_\vr(x)$ solves the variational recursion. It is immediate that $E^N(x)$ is a smooth function of $x$, if $f(x)$ is. As the $E^N(x)$ converge pointwise to $E(x)$, non-monotonicity of $E$ on $I$ would imply that for some compact subinterval $J\subset I$, all the functions $E^N$ have a critical point on $J$ provided $N$ is sufficiently large. By (\[eq:diff\_CK\]), $$\frac{dE^N}{dx}=f(x_N)\frac{dx_{N+1}}{dx},$$ and criticality $\frac{dE^N}{dx}=0$ is possible only if ${dx_{N+1}}/{dx}=0$ at some point $x_*$ of $J$. As the $N$-th iteration of the initial point $(y,z)=(x_1, x_1-x_0)$ is $(x_{N+1}, x_{N+1}-x_N)$, the vanishing of ${dx_{N+1}}/{dx}$ means that in $(y,z)$ coordinates the $N$-th iteration by $D\vr$ of the tangent vector to the ray $\rr$ [ *is vertical*]{}. However, the line of the initial conditions is the diagonal $(y=t,z=t)$. Computer simulations, see Figure \[fig:exp\_inv\], show that after several iterates, the ray gets mapped into the cone field (above $z=\ln 4$). As $K$ is consistent above the separatrix, the iterations of these tangent vectors under $D\vr$ will still be in the interior of $K$, while the vertical vector field is a generator of $K$. Hence, ${dx_{N+1}}/{dx}$ cannot vanish on $J$, ensuring that $J$ cannot contain a local extremum of $E$.
Therefore, the cost function can only achieve its minimum at one of the points of intersection of the separatrix with the line of initial conditions. Simulations and optimal trajectories {#sec-4.2} ------------------------------------ In this section we present the results of a numerical computation of the cost function for the one-sided search problem. We also explain how our theory fits with these observations. Figure \[fig:cost\] shows the plots of the cost of the trajectories $\xx_\vr$ for the exponentially distributed $\HH$, evaluated at both chaotic and monotone trajectories. The simulation was stopped either when the trajectories increased beyond some large threshold, or after a fixed number of steps (the former trigger would correspond to monotone trajectories; the latter to chaotic ones). ![Numerically evaluated cost function $E(x)$ for exponentially distributed $\HH$. The right display also shows results for the chaotic region (stopped after a fixed number of iterations). The left display is a magnification of the right one, showing only the results over the region of monotonicity.[]{data-label="fig:cost"}](cost1.pdf "fig:"){height="2.5in" width=".48\textwidth"} ![Numerically evaluated cost function $E(x)$ for exponentially distributed $\HH$. The right display also shows results for the chaotic region (stopped after a fixed number of iterations). The left display is a magnification of the right one, showing only the results over the region of monotonicity.[]{data-label="fig:cost"}](cost2.pdf "fig:"){height="2.5in" width=".48\textwidth"} The monotonicity of the cost over the left and right intervals is apparent. The separatrix $\separ$ intersects the ray of initial conditions $\rr$ at two points, $x_+\approx 0.7465...$ and $x_-\approx 0.1954...$ (compare with Figure \[fig:expo\]). Between the points, the initial conditions are in the chaotic region.
The monotonicity of $E$ outside of the chaotic region means that one of the two initial values, $x_+$ or $x_-$, should generate the optimal trajectory. Numerically, $x_+$ wins: $E(x_+)\approx 2.3645<E(x_-)\approx 2.3861$. Conclusion {#sec-5} ========== We developed a geometric approach to the Linear Search Problem via discrete time Hamiltonian dynamics, which explains some of the hidden structure of the cost function. The rapid decay of the tail distribution function translates into hyperbolicity of the underlying Hamiltonian dynamics. The latter is defined by the variational recursion, which plays a key role in determining the characteristics of the optimal search trajectory. In particular, hyperbolicity implies the existence of a separatrix which divides the regular and chaotic regions, and the optimal search trajectory needs to start on the separatrix: the chaotic region cannot contain optimal orbits, while in the regular region the orbits farther away from the separatrix have higher cost (monotonicity of the cost function). While this scenario is proved in this note only for the specific case of the exponential tail distribution function, we anticipate that for other distributions with sufficiently fast decay, the same type of results, including the existence of the separatrix and monotonicity of the cost function in the region of monotonicity, will hold. Some of this hope is supported by partial results; see the appendix. We plan to return to these more general classes of distributions in a follow-up paper, where we also plan to address the phenomenon of separatrix slow-down (the growth of trajectories on the separatrix is slower than that in the interior of the region of monotonicity). There are other open questions arising in the context of the Hamiltonian dynamics based approach to the search problem. Extending the set of analyzed distributions to those with bounded support is a natural task.
We also expect that in the search on rays, where the corresponding Hamiltonian map is higher dimensional, hyperbolicity will also play an important role, and a higher dimensional separatrix (unstable manifold) can be found. It is expected that the optimal search plan would still be restricted to the unstable manifold. [100]{} S. Alpern, A. Beck, Asymmetric rendezvous on the line is a double linear search problem, Math. Oper. Res. 24 (1999), no. 3, 604–618. S. Alpern, S. Gal, The theory of search games and rendezvous. Springer 2003. S. Aubry, P.Y. Le Daeron, The discrete Frenkel-Kontorova model and its extensions, Physica D 8 (1983) 381–422. Y. Baryshnikov, E. Coffman, P. Jelenkovich, P. Momcilovic, D. Rubenstein, Flood search under California split rule, Oper. Res. Lett. 32 (2004) 199–206. R. Bellman, Problem 63-9\*, SIAM Review, 5(2), 1963. A. Beck, On the linear search problem, Isr. J. Math. 2 (1964) 221–228. A. Beck and M. Beck, Son of the linear search problem, Isr. J. Math. 48 (1984) 109–122. A. Beck, M. Beck, The revenge of the linear search problem, SIAM J. Control Optim. 30 (1992) no. 1, 112–122. A. Beck, M. Beck, The linear search problem rides again, Isr. J. Math. 53 (1986) 365–372. A. Beck, D.J. Newman, Yet more on the linear search problem, Isr. J. Math. 8 (1970) 419–429. W. Franck, An optimal search problem, SIAM Review Vol. 7, No. 4 (1965) 503–512. W.S. Lim, S. Alpern, A. Beck, Rendezvous search on the line with more than two players, Oper. Res. 45 (1997) no. 3, 357–364. I.R. De Pablo, A. Becker, T. Bretl, An optimal solution to the linear search problem for a robot with dynamics, Intelligent Robots and Systems (IROS), 2010. M.-Y. Kao, J.H. Reif, S.R. Tate, Searching in an Unknown Environment: An Optimal Randomized Algorithm for the Cow-Path Problem, Proceedings of SODA ’93, pp. 441–447. A. Katok, B. Hasselblatt, Introduction to the modern theory of dynamical systems, CUP, Cambridge, 1995.
Series expansions ================= The expansion near $x=\infty$ for the separatrix, given by $$\phi(x-\phi(x)) = \ln(x),$$ leads to the logarithmic series $$\phi(x) = \sum_{n=0}^{\infty} \frac{Q_n(\ln(x))}{x^n}.$$ The first three terms are given by $$\phi(x) = \ln (x) + \frac{\ln(x)}{x}+ \frac{1}{x^2} \left ( \frac 12 + \frac 34 \ln(x) -\frac 13 \ln^2(x) \right )+ ...$$ To justify this expansion, we need the following: the equation $x = t(x)-\ln t(x)$ has a smooth solution for sufficiently large $x$, $$t(x) = x + \ln x + O \left (\frac{\ln x}{x} \right ).$$ Let us write $$t(x) = x + \ln x + r(x)$$ and substitute in the equation. After some simplifications, we have $$r = \ln \left ( 1+\frac{\ln x}{x} +\frac{r(x)}{x} \right ).$$ Application of the contraction mapping principle to $r(x)$ gives the required error estimate. Now, we prove $$\phi(x) = \ln x +O \left ( \frac{\ln x}{x} \right ).$$ Consider the first two iterations by ${\bf R}^{-1}$ of $\phi_0:=(x=t, y=0)$, $$\phi_1:=(x=t, y=\ln t), \phi_2:= (x= t-\ln t, y = \ln t).$$ They can be represented as graphs $y=\phi_1(x),y=\phi_2(x)$ for sufficiently large $x$. Note that $\phi_1(x) = \ln (x)$, while $\phi_2(x) = \ln t(x)$, where $x=t(x)-\ln t(x)$. Now, using the above lemma, we estimate $$|\phi_2(x)-\phi_1(x)| = |\ln t(x) - \ln x| = |\ln \left ( x+\ln x + r(x) \right ) -\ln x|= \left | \ln \left ( 1+\frac{\ln x}{x} +\frac{r(x)}{x} \right ) \right | \leq C \, \frac{\ln x}{x}.$$ Applying the contraction mapping principle, we obtain the desired estimate $$|\phi(x) - \ln x| \leq C \, \frac{\ln x}{x}.$$ The mapping [**R**]{} restricted to the separatrix takes the form $$x_{n+1} = x_n + \ln(x_n) + O(\ln(x_n)/x_n).$$ The separatrix is given by $$\phi(x) = \ln(x) + \rho(x)$$ for $x\rightarrow \infty$, where $\rho(x) = O(\ln x/x)$ is a smooth function. Then, using the forward map representation $(x_{n+1},y_{n+1})= (\exp y_n, x_{n+1}-x_n)$, we have $$\ln(x_{n+1}) + \rho (x_{n+1}) = x_{n+1}-x_n.$$
Applying the implicit function theorem and estimating the error term, we obtain the result. The asymptotics of the mapping restricted to the separatrix is given by $$x_n = n(\ln(n) + \ln(\ln (n)))+r_n,$$ where $r_n$ is a sequence satisfying $$|r_{n+1}-r_n| \leq C.$$ Substituting the expansion of $x_n$ into the recurrence relation $$x_{n+1} = x_n + \ln(x_n) + O(\ln(x_n)/x_n),$$ after some cancellations we obtain that $r_{n+1}-r_n = O(1)$, which implies the result. Two-sided Gaussian distribution: Beck-Bellman problem ===================================================== We consider the two-sided search on the real line with a Gaussian probability distribution function, as in the original Beck-Bellman problem, and we show numerically that the same canonical structure persists: a separatrix intersecting the curve of initial turning points. ![Invariant curve and iterated initial data in Beck’s problem. The long curve is the invariant manifold. The other two bent curves are the 1st and 2nd forward iterates of the initial data. The initial data itself is not shown because continuation of the separatrix in that region is computationally too difficult.[]{data-label="2pointsbeck"}](beck_gauss.pdf){width="100mm"} The difference relation obtained in [@beck] is given by $$(x_n + x_{n+1})\phi(x_n) = G(x_n)+G(x_{n-1}),$$ where $$\phi(t) = \frac{1}{\sqrt{2\pi}}e^{-t^2/2}, \,\,\,\, G(x) = \int_x^{\infty}\phi(t) dt.$$ The actual turning points are $(-1)^n x_n,$ while $x_n \geq 0$. For MATLAB computations, we use $${\erfc(x)} = \frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-t^2} dt$$ and the inverse function called ${\rm erfcinv}$.
Using the relation $$G(x) = \frac{1}{2}\erfc (x/\sqrt{2}),$$ the finite difference relation takes the form $$(x_{n+1}+x_{n})\phi(x_n) = \frac{1}{2}( \erfc (x_n/\sqrt 2)+\erfc (x_{n-1}/\sqrt 2)).$$ Now, using $y_{n+1}=x_{n+1}-x_n$, we have $$\begin{aligned} x_{n+1}=\frac{1}{2\phi(x_n)} ( {\erfc} (x_n/\sqrt 2 )+{\erfc} ((x_n-y_n)/\sqrt 2) ) - x_n.\end{aligned}$$ We will also use the inverse map, which takes the form $$\begin{aligned} &x_{n+1} = x_n-y_n \nn \\ &y_{n+1} = x_{n+1}-\sqrt{2} \, {\rm erfcinv}\, ( 2\phi(x_{n+1})(x_n+x_{n+1}) - {\erfc}(x_{n+1}/\sqrt{2})). \nn\end{aligned}$$ In this case, the initial data is given by the line segment $x_1=y_1 =t$. Gaussian tail distribution. One-sided search. ============================================= In this section we verify that the contraction mapping principle can be used to establish the existence of the separatrix for the one-sided search problem with a Gaussian tail distribution. In this case $f(x) = e^{-x^2}$, so that the second order difference relation is given by $$x_{n+1}= \frac{1}{2x_n} \, e^{\, x_n^2-x_{n-1}^2}.$$ Let $y_{n+1}=x_{n+1}^2-x_n^2$; then we have $$\begin{aligned} &x_{n+1} =\frac{1}{2x_n}e^{y_n} \nn \\ &y_{n+1} =x_{n+1}^2-x_n^2. \nn\end{aligned}$$ We will also need the inverse map $$\begin{aligned} &x_n = \sqrt{x_{n+1}^2-y_{n+1}} \nn \\ &y_n = \ln {(2x_n x_{n+1})}. \nn\end{aligned}$$ In this case, the initial data is given by the quadratic curve $$y=x^2 \qquad (x=t,\ y=t^2).$$ Now, we show that the contraction principle can be extended to the Gaussian case.
There exists an invariant manifold containing a graph $y=h(x)$ on $x\in [x_0,\infty)$ with $$|h(x)-\ln (2x^2)|< 1.$$ We set up the contraction mapping $$\Phi(\phi)(x) = \ln(2 z_{\phi}(x) x),$$ where $$z^2_{\phi}(x) -\phi(z_{\phi}(x)) = x^2.$$ Let $${\bf X} = \{ \phi \in C^1 (x_0,\infty), \phi(x)>0, 0< \phi^{\prime}(x) \leq 1/2 \}.$$ By applying the same argument as in the exponential case, we can ensure that $\Phi$ leaves ${\bf X}$ invariant if we take as the initial guess $\phi_0(x) = \ln(2x^2)$. To establish the contraction, consider $$|\Phi(\phi)(x) - \Phi(\psi)(x)| = |\ln(2xz_{\phi}(x)) - \ln(2xz_{\psi}(x))| =$$ $$|\ln(z_{\phi}(x)) - \ln(z_{\psi}(x))| \leq \frac{1}{\min(z_{\phi}(x),z_{\psi}(x))} |z_{\phi}(x)-z_{\psi}(x)|.$$ ![Invariant curve and iterated initial data. The longer curve is the invariant manifold. The two other curves are iterated initial turning points.[]{data-label="2ptonesidegauss"}](oneside_gauss.pdf){width="100mm"} Using the identity $$z_{\phi}^2(x) - z_{\psi}^2(x) = \phi(z_{\phi}(x)) - \psi(z_{\psi}(x)),$$ and the fact that $z_{\phi}(x) \geq x$, we have $$|z_{\phi}(x)-z_{\psi}(x)| \leq \frac{1}{z_{\phi}(x)+z_{\psi}(x)}| \phi(z_{\phi}(x)) - \psi(z_{\psi}(x))|$$ $$\leq \frac{1}{2x} ( |\phi(z_\phi(x)) - \psi(z_{\phi}(x))|+ |\psi(z_{\phi}(x)) - \psi(z_{\psi}(x))| )$$ $$\leq \frac{1}{2x} \left ( ||\phi-\psi|| + ||\psi^{\prime}||\cdot |z_{\phi}(x)-z_{\psi}(x)| \right ).$$ Combining the terms, we have $$|z_{\phi}(x)-z_{\psi}(x)| \leq \frac{1}{2x - ||{\psi}^{\prime}||}\, ||\phi-\psi||$$ and then $$|\Phi(\phi)(x) - \Phi(\psi)(x)| \leq \frac{1}{2x} \cdot \frac{1}{2x - ||\psi^{\prime}||} \, ||\phi-\psi||.$$ Since we have assumed the bound $0<{\psi}^{\prime}<1/2$, taking $x\geq 1$, we obtain the contraction $$|\Phi(\phi)(x) - \Phi(\psi)(x)| \leq \frac{1}{3} \, ||\phi-\psi||.$$ [^1]: This connection between energy minimizing orbits and invariant sets is reminiscent of the Aubry-Mather theory [@aubry]. There, energy minimization is used to prove the existence of the so-called Aubry-Mather sets.
Here we proceed in the other direction: we establish an invariant set in order to find minimal “energy” orbits. [^2]: We use the notation $C_L$ for the Lipschitz constant. [^3]: Although the dynamics is not really chaotic in this particular case, we will see that this is rather the exception. [^4]: Recall that $(x,y)$ represents the successive points of the trajectory $(x_k,x_{k+1})$.
--- abstract: 'The Universe is filled with relic neutrinos, remnants from the Leptonic Era. Since the formation of galaxies started, gravitation has modified the Fermi-Dirac momentum distribution of these otherwise decoupled particles. Decelerated neutrinos moving toward the field-free regions between galaxies could violate the Pauli principle. The fermion degeneracy pressure resulting from this leads to an accelerated motion of galaxies away from one another. We show that this model not only offers a natural explanation for the accelerated expansion of the Universe, but also allows a straightforward calculation of the Hubble constant and the time-evolution of this constant. Moreover, it sets a lower limit for the (average) neutrino mass. For the latter, we find $m_\nu > 0.25$ eV/$c^2$ (95% C.L.). PACS numbers: 14.60.Pq, 98.80.Es' --- **On the Accelerated Expansion of the Universe** Richard WIGMANS *Department of Physics, Texas Tech University, Lubbock TX 79409-1051, USA* (Submitted to Phys. Rev. Lett. on September 1, 2004) The experimental observation [@Rie98; @Per99] that distant Type Ia Supernovae are dimmer than expected seems to lead to the inevitable conclusion that the rate at which the Universe expands has increased since the time when these stellar explosions occurred, 5 - 10 billion years ago. Current cosmological reviews [@PDG04] ascribe the responsibility for this phenomenon to anti-gravitational action associated with “dark energy”. However, the nature of this energy and, therefore, the meaning of the non-zero cosmological constant which is needed in the equations that describe the evolution of the Universe are a mystery. In this letter, we argue that there is another scenario that may explain the experimental observations. This scenario does not invoke new forces or unknown forms of energy. It is based on a well-known phenomenon, the [*fermion degeneracy pressure*]{}, a consequence of Pauli’s Exclusion Principle.
This degeneracy pressure is responsible for a variety of astrophysical phenomena, such as the characteristics of White Dwarfs and neutron stars. Whereas the fermions involved in these objects are electrons and neutrons, the ones responsible for the degeneracy pressure discussed here are neutrinos. The proposed scenario requires that neutrinos have masses of $\sim 1$ eV/$c^2$. According to the Big Bang model, large numbers of neutrinos and antineutrinos have been around since the earliest stages of the evolving Universe. Since the decoupling that marked the end of the Leptonic Era, the wavelengths of these relic (anti-)neutrinos have been expanding in proportion to the size of the Universe. Their present spectrum is believed to be a momentum-redshifted relativistic Fermi–Dirac distribution, and the number of particles per unit volume in the momentum bin $(p,p+dp)$ is given by $$N(p) dp~=~{8\pi{p^2 dp}\over{h^3 [\exp (pc/kT_{\nu}) + 1]}}{\bigl({g_\nu\over 2}\bigr)} \label{numom}$$ where $g_\nu$ denotes the number of neutrino helicity states [@TG]. This momentum spectrum is depicted in Figure \[nuspec\]. The distribution is characterized by a temperature $T_\nu$, which is somewhat lower than that of the relic photons, which were reheated when the electrons and positrons decoupled. Since $(T_\nu/T_\gamma)^3 = 4/11$ and $T_\gamma = 2.725 \pm 0.001$ K [@COBE], $T_{\nu}$ is expected to be 1.95 K. For a neutrino temperature of 1.95 K, the Fermi momentum ($p_F = kT_\nu/c$) is $1.68 \cdot 10^{-4}$ eV/$c$, or $9.0\cdot 10^{-32}$ kg.m.s$^{-1}$. The present density of these Big Bang relics is estimated at $\sim$ 220 cm$^{-3}$, for each (Dirac) neutrino flavor [@boehm], nine orders of magnitude larger than the density of baryons in the Universe. It is important to realize that, depending on their mass, these relic neutrinos might be very [*nonrelativistic*]{} at the current temperature.
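The quoted numbers follow from the standard relations; as a back-of-the-envelope check (an illustrative script with textbook constant values, not part of the original letter):

```python
K_B   = 8.617333e-5      # Boltzmann constant [eV/K]
C     = 2.99792458e8     # speed of light [m/s]
G     = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30         # solar mass [kg]

T_gamma = 2.725                                  # CMB temperature [K]
T_nu = T_gamma * (4.0 / 11.0) ** (1.0 / 3.0)     # relic-neutrino temperature, ~1.95 K
p_F = K_B * T_nu                                 # Fermi momentum p_F c, ~1.68e-4 eV
v_F = p_F * C / 1e3                              # Fermi velocity for m = 1 eV/c^2, ~50 km/s
a = G * (1e11 * M_SUN) / (1e22) ** 2             # deceleration 300 kpc from a galaxy, ~1e-13 m/s^2
```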
Since they decoupled, their momenta have been reduced by a factor of $10^{10}$, from 1 MeV/$c$ to $10^{-4}$ eV/$c$. If their rest mass were 1 eV/$c^2$, then their Fermi velocity ($v_F = p_F/m_\nu$) would thus be $1.68 \cdot 10^{-4} c$, or only 50 km/s. The experimental upper limit on the mass of the electron antineutrino was recently determined at 2.2 eV/$c^2$ (95% C.L.), from a study of the electron spectrum of $^3$H decay [@Mainz]. The experimental results on atmospheric and solar neutrinos obtained by the Super-Kamiokande [@SuperK] and SNO [@SNO] Collaborations suggest that neutrinos do have a non-zero rest mass. A conservative interpretation of the experimental results is that at least one of the neutrino mass eigenvalues is larger than 0.04 eV/$c^2$. There is no experimental information that rules out a neutrino rest mass of the order of 0.1 – 1 eV/$c^2$. During the radiation era, the neutrino spectrum was only affected by the gradual expansion, with $p$ and $T_{\nu}$ inversely proportional to the evolving distance scale. However, when gravity became the dominant force in the Universe and stars and galaxies started to form, important changes were about to take place. Since neutrinos are subject to gravitational forces, their spectra were affected. The neutrino velocities either increased or decreased as a result of gravitational acceleration or deceleration, depending on the direction of motion of the particles with respect to the (dominant) gravitational source. The effects of this extended over intergalactic distances, to regions far away from these sources. For example, a neutrino moving away from our galaxy ($M \approx 10^{11} M_\odot$) at a distance of 300 kpc ($10^{22}$ m) experiences a gravitational deceleration of $\sim 10^{-13}$ m.s$^{-2}$. 
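Both numbers in this paragraph follow from elementary estimates. The sketch below (ours, assuming a point-mass galaxy) reproduces them:

```python
G     = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30           # solar mass, kg
c     = 2.99792458e8       # speed of light, m/s
eV    = 1.602176634e-19    # J per eV

# Gravitational deceleration 300 kpc (~1e22 m) from a 1e11 M_sun galaxy.
M_gal = 1e11 * M_sun
r     = 1e22               # metres, as rounded in the text
a     = G * M_gal / r**2   # m/s^2

# Fermi velocity for m_nu = 1 eV/c^2 and p_F = 1.68e-4 eV/c.
m_nu = 1.0 * eV / c**2     # kg
p_F  = 1.68e-4 * eV / c    # kg m/s
v_F  = p_F / m_nu          # = 1.68e-4 c

print(f"a   = {a:.1e} m/s^2")        # ~1e-13
print(f"v_F = {v_F/1e3:.0f} km/s")   # ~50
```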
If the initial velocity of this neutrino was 50 km/s, then over a period of 5 billion years (the relevant time scale for the issues discussed in this letter) it would be slowed down by about a factor of 2 as a result of this deceleration. The velocity distribution of the relic neutrinos has thus gradually changed in the non-uniform gravitational fields that resulted from baryon clustering and galaxy formation. To understand the potential problems that may be caused by this, it is important to realize that the relic neutrinos form a degenerate fermion gas, at $T_\nu = 1.95$ K. Especially at momenta $p \ll p_F$, almost all available quantum states are occupied (Figure \[nufree\]). The expansion of the Universe does not change that fact, since the neutrino momenta are inversely proportional to the Universal scale and the maximum allowed fermion density evolves in proportion to $p^3$. The present situation is thus not new. The Universe has [*always*]{} been filled with a degenerate neutrino gas. The neutrinos that have lost the largest fraction of their momentum by gravitational deceleration are those whose velocities were close to the escape velocity when galaxy formation started. They tend to concentrate in the low-field regions surrounding the center-of-mass of galaxy clusters. If they have been sufficiently decelerated, then there may not be enough quantum states there to accommodate them. A constant or decelerated Hubble expansion does not prevent, alleviate or solve this problem. So what happens when gravitationally decelerated neutrinos cannot find a quantum state to fit into? Any attempt to squeeze more fermions into the available volume than allowed by the limits set by the Pauli principle creates a pressure that prevents this from happening. In the case of White Dwarfs and neutron stars, this pressure prevents the gravitational collapse of these objects. 
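As a rough check of the factor-of-2 slowdown quoted above: taking the $\sim 10^{-13}$ m s$^{-2}$ deceleration at face value and, crudely, holding it constant over the full 5 billion years (our simplification, not the letter's integration) already gives a velocity loss of the right order:

```python
# Crude check, assuming the 300 kpc deceleration stays constant.
a  = 1.3e-13               # m/s^2, from the point-mass estimate above
t  = 5e9 * 3.156e7         # 5 billion years in seconds
dv = a * t                 # velocity loss, m/s

print(f"dv = {dv/1e3:.0f} km/s out of an initial 50 km/s")
```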
[*In a cluster of galaxies, this fermion degeneracy pressure leads to an accelerated motion of the galaxies away from their common center-of-mass*]{}. Let us consider two galaxies of equal mass $M$, separated by a distance $2r$. The degeneracy pressure prevents decelerated neutrinos from violating the Pauli principle in the low-field region surrounding the center-of-mass point ($C$) halfway between these galaxies. This is accomplished by curving the space around $C$ such that the inertial reference frames of the galaxies responsible for the deceleration experience a compensating force in the direction away from $C$. The accelerated motion resulting from this is the [*only way*]{} to achieve that none of the ${\cal{O}}(10^{77})$ neutrinos populating the cubic Mpc surrounding a typical galaxy makes a “soft landing” in the region around $C$. In this way, gravitationally decelerated neutrinos which otherwise would range out in this “forbidden” region will always feel a force pushing them away from it. The acceleration is thus equal to the deceleration a particle in $C$ would feel in the absence of other galaxies. Since this acceleration, which we will call the Pauli acceleration in the following, takes place with respect to the center-of-mass ($a = GMr^{-2}$), it is four times larger than the gravitational deceleration of the two galaxies with respect to each other. Since they are separated by a distance $2r$, this deceleration is $a = -GM(2r)^{-2}$. The net result is thus that the galaxies move apart at an [*increasing*]{} speed, regardless of their mass or distance. Because it is necessitated by gravitationally decelerated relic neutrinos, the Pauli acceleration is a relatively recent phenomenon. It only started to play a role after the first galaxies had formed and the resulting non-uniform gravitational fields had sufficiently reduced the speed of some fraction of the relic neutrinos. 
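The factor-of-4 bookkeeping can be made explicit. This minimal sketch is ours; the mass and separation are the "standard" values used later in the letter.

```python
# Two equal-mass galaxies separated by 2r: the Pauli acceleration with
# respect to the centre-of-mass is G M / r^2, while the mutual
# gravitational deceleration is G M / (2r)^2, i.e. four times smaller.
G = 6.674e-11      # m^3 kg^-1 s^-2
M = 4e40           # "standard" galaxy mass used later in the letter, kg
r = 1.543e22       # 0.5 Mpc in metres

a_pauli   = G * M / r**2
a_gravity = G * M / (2 * r)**2

print(a_pauli / a_gravity)   # -> 4.0
```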
Based on the scenario described above, we have simulated the expansion, tracing it back to its beginnings. We have taken two galaxies separated by a distance ($R_0$) of 1 Mpc, typical for the present Universe. These galaxies move apart at a relative velocity of 73 km/s, the current value of the Hubble constant, $H_0$ [@PDG04]. We follow the history of this system back in time, in small steps (0.1% of the age of the Universe at each point). As we go back in time, the relative velocity decreases as a result of the Pauli acceleration, and as the galaxies get closer, this acceleration increases. Figure \[zmspeed\] shows some results of this study. The relative velocity of the two galaxies is shown as a function of the redshift $z$, which in this context is simply the ratio $R_0/R(-t)$. The results turned out to be quite sensitive to the chosen value of the galaxy mass. A change of a factor of 2 in this mass made a considerable difference for the $z$ value at which the expansion started to play a role. In the next round of simulations, we chose this mass according to the best current estimates [@PDG04], which have the total matter density at $\sim 25\%$ of the critical value ($\Omega_m = 0.25$). Therefore, $$\rho_m = {{3\Omega_m H^2}\over {8\pi G}} = 2.5\cdot 10^{-27}~~{\rm kg.m}^{-3}.$$ This gives for the total mass contained in a sphere with a diameter of 1 Mpc $\sim 4\cdot 10^{40}$ kg. Figure \[expan\] shows the results of our simulations for this choice of galaxy masses. The dashed and dotted lines describe the separate effects of the Pauli acceleration and the gravitational slowdown, the solid curve represents the combined effect. These results, which imply a net increase of the expansion velocity by $\sim 10\%$ in the period since $z = 1$, are in agreement with the Supernova data. These results also suggest that the Pauli acceleration started around $z = 6$, which is not unreasonable given the current ideas about the start of galaxy formation [@PDG04]. 
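The matter density and the "standard" galaxy mass quoted here follow directly from the stated $\Omega_m$ and $H_0$ (a quick cross-check of ours):

```python
import math

G   = 6.674e-11            # m^3 kg^-1 s^-2
Mpc = 3.086e22             # metres
H0  = 73e3 / Mpc           # Hubble constant in s^-1
Om  = 0.25                 # matter density relative to critical

# rho_m = 3 Omega_m H^2 / (8 pi G)
rho_m = 3 * Om * H0**2 / (8 * math.pi * G)   # kg/m^3

# Mass in a sphere of 1 Mpc diameter.
R = 0.5 * Mpc
M = rho_m * (4.0 / 3.0) * math.pi * R**3

print(f"rho_m = {rho_m:.2e} kg/m^3")   # ~2.5e-27
print(f"M     = {M:.1e} kg")           # ~4e40
```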
We can also follow the development in the reverse order and thus calculate the current expansion rate. Figure \[Hdnow\] shows this expansion rate, the current value of $H_0$, as a function of the redshift at which the Pauli acceleration started. The described expansion scenario has several interesting consequences for the present structure of the Universe. We mention two that seem to be confirmed by experimental observations: 1. The distribution of galaxies in the Universe is non-uniform. Field-free regions have the tendency to grow over time, since they push all galaxies in their vicinity away with increasing velocities. One should therefore expect large voids, surrounded by strings of galaxies. 2. Each galaxy is surrounded by a “halo” of gravitationally trapped neutrinos. This halo may extend over a distance of up to several hundred kpc. The density of available quantum states is proportional to $v_{\rm esc}^3 \propto r^{-3/2}$ and, therefore, the total mass in the galaxy plane may be expected to scale as $1/\sqrt{r}$. This halo may represent a substantial fraction of the dark matter. How large this fraction is depends on the neutrino rest mass. The scenario described in this letter of course hinges critically on the value of the neutrino mass. If this mass were too small, then the gravitational effects on the relic neutrino velocity would be insignificant. We have estimated the lower limit on $m_\nu$ as follows. In order to violate the Pauli principle, relic neutrinos would have to lose at least one third of their momentum through gravitational deceleration. This is illustrated by the arrow in Figure \[nufree\], which indicates what happens when half of the neutrinos in a certain momentum bin are decelerated (the other half are accelerated). Neutrinos with momenta larger than $\sim 1.5 p_F$ have to lose even more than half of their momentum. 
According to the neutrino spectrum described by Equation \[numom\] and shown in Figure \[nuspec\], 95% of the relic neutrinos have velocities larger than $0.82 v_F$. These neutrinos have to lose at least 40% of their momentum before they could trigger the fermion degeneracy pressure. Simulations of the type described above showed that when two galaxies with a standard mass of $4\cdot 10^{40}$ kg each move apart at a rate of 73 km/s, neutrinos with velocities up to 165 km/s (measured in the rest frame of one of the galaxies) may lose 40% or more of their momentum through gravitational deceleration. Therefore, the Fermi velocity of the neutrinos that may accomplish this feat must be smaller than 200 km/s. And since $m_\nu$ (in units of eV/$c^2$) equals $1.68\cdot 10^{-4} c/v_F$, this means that the (average) neutrino mass has to be larger than 0.25 eV/$c^2$ (95% confidence level). Fermion degeneracy pressure is a phenomenon that comes into action whenever the fermion density approaches the limits set by the Pauli Exclusion Principle. Until now, it has been exclusively associated with the extremely high temperatures and densities that characterize the interior of degenerate stellar objects. It is remarkable that the same phenomenon may also play a crucial role in the extremely cold and empty conditions of intergalactic space. We have shown that neutrino degeneracy pressure may lead to an accelerated expansion of the Universe, which would explain not only the Supernovae Type Ia data but also the current value of $H_0$. If this explanation is correct, then the expansion of the Universe will continue forever, since the driving principle is not going to go away. It also means that planned efforts to improve the neutrino mass sensitivity in studies of $^3$H decay to the 0.2 eV/$c^2$ level [@Mainz] may well pay off in the form of a precise measurement of this important parameter. 
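The final mass limit is simple arithmetic on the numbers above (our check):

```python
# m_nu (in eV/c^2) = 1.68e-4 * c / v_F, with v_F < 200 km/s required
# by the simulations described in the text.
c_km_s  = 2.99792458e5     # speed of light, km/s
p_F_eVc = 1.68e-4          # Fermi momentum in eV/c

v_F_max  = 200.0                         # km/s, upper limit from the text
m_nu_min = p_F_eVc * c_km_s / v_F_max    # eV/c^2

print(f"m_nu > {m_nu_min:.2f} eV/c^2")   # -> 0.25
```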
Acknowledgment {#acknowledgment .unnumbered} ============== The author would like to thank the Istituto Nazionale di Fisica Nucleare in Frascati, Italy, and in particular its director, Dr. Sergio Bertolucci, for their hospitality during a sabbatical stay and for the opportunity to think about the Universe in an inspiring environment. [99.]{} A.G. Riess et al. (High-Z Supernova Search), Astron. J. [**116**]{}, 1009 (1998). S. Perlmutter et al. (Supernova Cosmology Project), Astrophys. J. [**517**]{}, 565 (1999). S. Eidelman et al., Phys. Lett. [**B592**]{}, 1 (2004). S. Tremaine and J.E. Gunn, Phys. Rev. Lett. [**42**]{}, 407 (1979). J.C. Mather et al., Astrophys. J. [**512**]{}, 511 (1999). F. Boehm and P. Vogel, [*Physics of Massive Neutrinos*]{}, 2nd ed., Cambridge University Press, p. 221 (1992). A. Osipowicz et al., [*KATRIN: A next generation tritium $\beta$-decay experiment with sub-eV sensitivity for the $\nu_e$ mass*]{}, e-print archive hep-ex/0109033 (2001). Y. Fukuda et al., Phys. Rev. Lett. [**81**]{}, 1562 (1998). Q.R. Ahmad et al., Phys. Rev. Lett. [**87**]{}, 071301 (2001).
--- abstract: 'We introduce shower deconstruction, a method to look for new physics in a hadronic environment. The method aims to be a full information approach using small jets. It assigns to each event a number $\chi$ that is an estimate of the ratio of the probability for a signal process to produce that event to the probability for a background process to produce that event. The analytic functions we derive to calculate these probabilities mimic what full event generators like [Pythia]{} or [Herwig]{} do and can be depicted in a diagrammatic way. As an example, we apply this method to a boosted Higgs boson produced in association with a $Z$-boson and show that this method can be useful to discriminate this signal from the $Z$+jets background.' author: - 'Davison E. Soper' - Michael Spannowsky date: 2 August 2011 title: Finding physics signals with shower deconstruction --- Introduction ============ A central problem for data analysis at the Large Hadron Collider (LHC) is to find the signal for the production of a new heavy particle or particles against a background of jets produced by standard model processes that do not involve the sought heavy particle. Examples include searches for supersymmetric partners of the quarks and gluons and searches for the Higgs boson. While such searches focus on leptonic final states, most of the sought new physics resonances have a large branching ratio to hadrons. Thus, it is of great importance to be able to disentangle hadronically decaying particles with masses around the electroweak scale from large QCD backgrounds. The decay products of a new very heavy particle will appear in the detector as one or more jets. There may also be jets from initial state radiation. The jets will contain subjets. In this paper, we call the subjets [*microjets*]{}. They are defined with a standard jet algorithm but with a small effective cone size $R$. 
The pattern of microjets in events arising from the new particle decay will differ from the pattern of microjets in background events that do not involve new particles. One can take advantage of this difference to separate signal from background. In this paper, we propose a method for separating signal from background by analyzing the distribution of the microjets. This method has the potential to be effective in quite general circumstances. However, for a first application, we choose a process in which we are looking at the microjets contained in a larger jet that results from the decay of a heavy particle with large transverse momentum, that is, a highly boosted heavy particle. There are several methods already available for the analysis of the structure of the microjets produced by the decay of a highly boosted heavy particle. Two of these methods, trimming [@Krohn:2009th] and pruning [@Ellis:2009su; @Ellis:2009me], can be characterized as generic in that they have the potential to discover new physics signals even if one does not have in mind a particular new physics scenario. Other methods, including the one proposed here, are adapted to searches for particular new physics signals. These include mass drop with filtering and b-quark tagging [@Butterworth:2008iy], the matrix element method [@Kondo:1988yd; @Kondo:1991dw; @Fiedler:2010sg; @Alwall:2010cq], and the template overlap method [@Almeida:2010pa]. These last two methods bear some resemblance to the method proposed in this paper. One can also combine methods [@Soper:2010xk]. For further applications see Refs. [@Butterworth:2002tt; @Butterworth:2009qa; @Seymour:1993mx; @Thaler:2008ju; @Kaplan:2008ie; @Plehn:2009rk; @Chen:2010wk; @Falkowski:2010hi; @Kribs:2009yh; @Kribs:2010hp; @Plehn:2010st; @Bhattacherjee:2010za; @Hackstein:2010wk; @Englert:2010ud; @Katz:2010mr; @Almeida:2008yp; @Thaler:2010tr; @Kim:2010uj; @Kribs:2010ii; @Fan:2011jc; @Plehn:2011tf] and for a review see Ref. [@Abdesselam:2010pt]. 
The example that we consider in this paper is the production of a Higgs boson in association with a high transverse momentum $Z$-boson, where the $Z$-boson decays into $e^+ + e^-$ or $\mu^+ + \mu^-$ and the Higgs boson decays into $b + \bar b$. This example was analyzed in Ref. [@Butterworth:2008iy]. Since the Higgs boson recoils against a high transverse momentum $Z$-boson, the Higgs boson has a large transverse momentum and is easier to find than if it had low transverse momentum. Nevertheless, there is a large background to this process from standard model processes that do not involve the Higgs boson, so some ingenuity is required to separate the signal from the background. The idea of this paper is to define an observable $\chi$ that is a function of the observed configuration of the final state microjets in an event and distinguishes between a sought signal and the background. To do that, we define $\chi$ as the ratio of the probability that the microjet configuration observed would arise in a signal event to the probability that it would arise in a background event. We use a parton shower algorithm for this purpose. However, our parton shower algorithm is massively simplified compared to [Pythia]{} [@Pythia] or [Herwig]{} [@Herwig] in order that we can compute the probability for a given microjet configuration analytically. We call the method proposed here shower deconstruction. Overview and event selection ============================ As stated in the introduction, the idea of this paper is to define an observable $\chi$ that is a function of the configuration of the final state in an event and distinguishes between a sought signal and the background. The method that we propose is quite general, but in order to explain it with reasonable clarity, we need to consider a specific process. Our choice of process is guided by the desire to have a case that is relatively simple to explain. 
The example that we use is the search for the Higgs boson using the process $p+p \to H + Z + X$ where the $Z$-boson decays to $\mu^+ + \mu^-$ (or $e^+ + e^-$) while the Higgs boson $H$ decays to $b + \bar b$. We try to separate this from the background process $p+p \to {\it jets} + Z + X$ [@Butterworth:2008iy]. Event selection {#sec:EventSelection} --------------- We simulate an analysis of data by using events generated by [Pythia]{} [@Pythia]. In order to make the Higgs boson easier to find, we demand that the $Z$-boson against which it recoils has a large transverse momentum. Specifically, we select events consistent with a leptonically decaying $Z$-boson for which the leptons are central ($|y_l| <2.5$) and fairly hard ($p_{T,l} > 15~\rm{GeV} $). The invariant mass of the leptons is required to match the $Z$-boson mass, $$|m_{l^+l^-} - m_Z| < 10~\rm{GeV} \;\;. $$ The reconstructed $Z$-boson is required to be highly boosted in the transverse plane, $$\label{eq:pTcut} p_{T,l^+l^-} > p_{T,{\rm min}} \equiv 200~\rm{GeV} \;\;. $$ We next combine final state hadrons in simulated detector cells of size $0.1\times 0.1$ and adjust the absolute value of the momentum in each cell so that the four-momentum is massless. We remove cells with energy less than $0.5\ {\rm GeV}$. We then use these cells as input to the anti-$k_T$ jet-finding algorithm [@antiKT] with a large effective cone size, $R_{\mathrm{F}}= 1.2$. For the recombination of the jet constituents we use [Fastjet]{} [@Cacciari:2005hq]. We find the jet with the highest transverse momentum of all such jets in the event and require its transverse momentum to be larger than $p_{T,{\rm min}}$. This is the “fat jet.” Those selection cuts force the Higgs boson to recoil against the $Z$-boson with a large transverse momentum, so that the decay products of the Higgs boson are fairly well collimated. 
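The selection logic above can be sketched as a simple filter. This is our construction, not code from the analysis; the inputs are assumed to be a pre-reconstructed lepton pair and the hardest fat jet, and all names are illustrative.

```python
# Minimal sketch of the event selection: central, hard leptons; a lepton
# pair consistent with the Z mass; and both the Z candidate and the
# hardest fat jet boosted above p_T,min = 200 GeV.
M_Z    = 91.19    # GeV, nominal Z mass
PT_MIN = 200.0    # GeV, boost requirement

def passes_selection(lep_pair_mass, lep_pair_pt, leptons, fat_jet_pt):
    """leptons: list of dicts with rapidity 'y' and transverse momentum 'pt'."""
    # Central (|y| < 2.5), fairly hard (p_T > 15 GeV) leptons.
    if any(abs(l["y"]) >= 2.5 or l["pt"] <= 15.0 for l in leptons):
        return False
    # Invariant mass within 10 GeV of the Z mass.
    if abs(lep_pair_mass - M_Z) >= 10.0:
        return False
    # Boosted Z candidate and boosted fat jet.
    return lep_pair_pt > PT_MIN and fat_jet_pt > PT_MIN

ok = passes_selection(91.0, 250.0,
                      [{"y": 1.0, "pt": 120.0}, {"y": -0.5, "pt": 130.0}],
                      220.0)
print(ok)   # -> True
```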
We denote the cross section for signal events that pass these cuts by $\sigma_{\mathrm{MC}}({\mathrm{S}})$ and denote the cross section for background events that pass these cuts by $\sigma_{\mathrm{MC}}({\mathrm{B}})$. With some help from next-to-leading order calculations, we estimate [^1] $$\begin{split} \label{eq:sigmatotSB} \sigma_{\mathrm{MC}}({\mathrm{S}}) ={}& 1.57\ {\rm fb} \;\;, \\ \sigma_{\mathrm{MC}}({\mathrm{B}}) ={}& 2613\ {\rm fb} \;\;, \\ \frac{\sigma_{\mathrm{MC}}({\mathrm{S}})}{\sigma_{\mathrm{MC}}({\mathrm{B}})} ={}& \frac{1}{1664} \;\;. \end{split}$$ Our analysis makes use of events generated by a Monte Carlo event generator that we use and regard as an accurate representation of nature. We renormalize the event generator cross sections by constant factors for signal and background calculations so as to match the cross sections given in Eq. (\[eq:sigmatotSB\]). We will generally use “MC” subscripts to denote quantities calculated by a Monte Carlo event generator supplemented by some next-to-leading order information. As noted above, we use [Pythia]{} in our calculations; in Sec. \[sec:results\], we also present results using [Herwig]{}. Variables describing the final state {#sec:FinalStateVariables} ------------------------------------ In principle, the final state could be described by the momenta and flavors of all final state particles. However, we simplify this. First, we select events and use the anti-$k_T$ algorithm to define the “fat jet” that recoils against the $Z$-boson, as described above. We use the $k_T$ jet-finding algorithm [@kTjets] to group the fat jet into subjets, which we call microjets. We choose the effective cone size in the $k_T$ jet-finding algorithm to be $R = 0.15$. This size is chosen to correspond roughly to the angular resolution of calorimeter topological clusters in the ATLAS experiment and to be a little larger than the ATLAS calorimeter angular resolution of about 0.1 [@AtlasAngularResolution]. 
We do not want any of the microjets to be exactly massless, so we add $0.1\ {\rm GeV}$ to the energy of each microjet. Typically, the number of microjets found is between six and ten, but a few events have even more microjets. The computational time needed to analyze an event increases quite quickly with the number of microjets. Accordingly, we choose a number $N_{\rm max}$ with default value $N_{\rm max}=7$ and discard the lowest transverse momentum microjets if there are more than $N_{\rm max}$ microjets, keeping the $N_{\rm max}$ microjets that have the highest transverse momenta. In fact, we find that the lowest transverse momentum microjets carry little useful information: we have varied $N_{\rm max}$ between 5 and 9 and find that the statistical significance of the results that we obtain, as discussed in Sec. \[sec:conclusions\], increases only slowly with $N_{\rm max}$. The microjets found by this procedure are described, in part, by their momenta $\{p\}_N = \{p_1, \dots, p_N\}$, with $p_i^2 > 0$. For some microjets $j$, we also provide a $b$-quark tag, $t_j$. To qualify for a tag, the microjet must be among the three microjets in the event with the highest $p_T$ values and it must have $p_T > p_T^{\rm tag}$, where our default value is $p_T^{\rm tag} = 15\ {\rm GeV}$. For microjets $j$ that do not qualify for a tag we set $t_j = {\tt none}$. In the simplest implementation, one would take $t_j = {\rm T}$ if microjet $j$ contains a $b$ or $\bar b$ quark and otherwise define $t_j = {\rm F}$. We simulate $b$-tagging of microjets in experiment by using more realistic $b$-tagging for [Pythia]{} events: > $\bullet$ If any hadron in microjet $j$ contains a $b$ or $\bar b$ quark, then we set $t_{j} = {\mathrm{T}}$ with a probability $P({\mathrm{T}}|b)$ and $t_{j} = {\mathrm{F}}$ with a probability $1-P({\mathrm{T}}|b)$. 
> > $\bullet$ If no hadron in microjet $j$ contains a $b$ or $\overline b$ quark, then we set $t_{j} = {\mathrm{T}}$ with a probability $P({\mathrm{T}}|{{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}}b)$ and $t_{j} = {\mathrm{F}}$ with a probability $1-P({\mathrm{T}}|{{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}}b)$. Our default value for the $b$-tagging efficiency is $P({\mathrm{T}}|b) = 0.6$ while our default value for the mistag probability is $P({\mathrm{T}}|{{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}}b) = 0.02$ [@btags]. This procedure of defining microjets within the fat jet gives a somewhat “coarse grained” description of the part of the event that is of interest: the momenta and b-quark tags, $\{p,t\}_N = \{p_1, t_1; \dots; p_N,t_N\}$, of the microjets. Probabilities according to Monte Carlo event generator ------------------------------------------------------ We denote by $P_{\mathrm{MC}}(\{p,t\}_N|{\mathrm{S}})$ the probability that a signal event has a microjet configuration $\{p,t\}_N$, as determined by the Monte Carlo event generator that we use and regard as an accurate representation of nature:[^2] $$P_{\mathrm{MC}}(\{p,t\}_N|{\mathrm{S}}) = \frac{1}{\sigma_{\mathrm{MC}}({\mathrm{S}})}\, \frac{d\sigma_{\mathrm{MC}}({\mathrm{S}})}{d\{p,t\}_N} \;\;. $$ Similarly, we let the probability that a background event has a microjet configuration $\{p,t\}_N$ be $$P_{\mathrm{MC}}(\{p,t\}_N|{\mathrm{B}}) = \frac{1}{\sigma_{\mathrm{MC}}({\mathrm{B}})}\, \frac{d\sigma_{\mathrm{MC}}({\mathrm{B}})}{d\{p,t\}_N} \;\;. $$ We now seek an observable that does a good job of distinguishing signal events from background events. Our sought observable is to be a function $\chi(\{p,t\}_N)$ of the microjet configuration. It will also be a function of the parameters of the standard model, especially the mass $m_{H}$ of the Higgs boson. 
As a preliminary step, we define a quantity $\chi_{\mathrm{MC}}(\{p,t\}_N)$ by $$\chi_{\mathrm{MC}}(\{p,t\}_N) = \frac{P_{\mathrm{MC}}(\{p,t\}_N|{\mathrm{S}})}{P_{\mathrm{MC}}(\{p,t\}_N|{\mathrm{B}})} \;\;. $$ We would like to use $\chi_{\mathrm{MC}}(\{p,t\}_N)$ as our observable. In fact, if one considers that the Monte Carlo event generator is accurate and if one could construct $\chi_{\mathrm{MC}}$ as a function of $\{p,t\}_N$, then this could be considered to be the ideal observable. Why might one consider $\chi_{\mathrm{MC}}$ to be an ideal observable? To see this in the simplest context, let us suppose that we want to examine data using a cut: we accept events if $C(\{p,t\}_N) > 0$, where $C(\{p,t\}_N)$ is some function that we are at liberty to make up. The signal and background cross sections with this cut are $$\begin{split} \sigma_C({\mathrm{S}}) ={}& \int\! d\{p,t\}_N\ \Theta(C(\{p,t\}_N))\, \frac{d\sigma_{\mathrm{MC}}({\mathrm{S}})}{d\{p,t\}_N} \;\;, \\ \sigma_C({\mathrm{B}}) ={}& \int\! d\{p,t\}_N\ \Theta(C(\{p,t\}_N))\, \frac{d\sigma_{\mathrm{MC}}({\mathrm{B}})}{d\{p,t\}_N} \;\;. \end{split}$$ Choose a value $\sigma_C({\mathrm{S}})$ that we want for the signal cross section and require that the cut produce this value of signal cross section. With this constraint on the signal cross section, we will have the best statistical significance for a measurement if we make $\sigma_C({\mathrm{B}})$ as small as possible. Thus we seek to choose the cut so as to minimize $\sigma_C({\mathrm{B}})$ with $\sigma_C({\mathrm{S}})$ held constant. The solution to this problem is to choose $C(\{p,t\}_N)$ such that the surface $C(\{p,t\}_N) = 0$ is a surface of constant $\chi_{\mathrm{MC}}(\{p,t\}_N)$. That is, we should measure the cross section inside a cut defined by $$C(\{p,t\}_N) = \chi_{\mathrm{MC}}(\{p,t\}_N) - \chi_0 $$ for some $\chi_0$. 
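A one-dimensional toy of ours illustrates the point: for Gaussian signal and background densities, the likelihood ratio is monotone in $x$, so a cut on it retains most of the signal while rejecting most of the background.

```python
import math
import random

random.seed(1)

def p(x, mu):
    """Unit-width Gaussian density centred at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Toy: signal ~ N(2,1), background ~ N(0,1).  Then
# chi(x) = p_S(x)/p_B(x) = exp(2x - 2) is monotone in x,
# so a cut on chi is equivalent to a cut on x.
def chi(x):
    return p(x, 2.0) / p(x, 0.0)

sig = [random.gauss(2.0, 1.0) for _ in range(20000)]
bkg = [random.gauss(0.0, 1.0) for _ in range(20000)]

chi0 = chi(1.5)                     # accept events with chi > chi0
eff_s = sum(1 for x in sig if chi(x) > chi0) / len(sig)
eff_b = sum(1 for x in bkg if chi(x) > chi0) / len(bkg)
print(f"signal eff = {eff_s:.2f}, background eff = {eff_b:.2f}")
```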
If we make any small adjustment to this by removing an infinitesimal region with $\chi_{\mathrm{MC}}(\{p,t\}_N) > \chi_0$ from the cut and adding a region having the same signal cross section but with $\chi_{\mathrm{MC}}(\{p,t\}_N) < \chi_0$, we raise the total background cross section within the cut while keeping the signal cross section the same. Thus using contours of $\chi_{\mathrm{MC}}(\{p,t\}_N)$ to define our cut is the best that we can do. What value of $\chi_0$ should one choose? For a simple optimized cut based analysis with a given amount of integrated luminosity, one would choose $\chi_0$ so as to maximize the ratio of the expected number of signal events to the square root of the expected number of background events. We discuss this further in Sec. \[sec:results\]. Instead of using an optimized cut on $\chi_{\mathrm{MC}}$ to separate signal from background, one could imagine using a log likelihood ratio constructed from $\chi_{\mathrm{MC}}$. We do not discuss that method in this paper. Now we must face the fact that to construct $\chi_{\mathrm{MC}}(\{p,t\}_N)$, we would need two things: the differential cross section to find microjets $\{p,t\}_N$ in background events and then the differential cross section to find microjets $\{p,t\}_N$ in signal events. In each case, we would consider this differential cross section in a parton shower approximation to the full theory. Unfortunately for us, a parton shower produces $d\sigma_{\mathrm{MC}}({\mathrm{S}})/d\{p,t\}_N$ and $d\sigma_{\mathrm{MC}}({\mathrm{B}})/d\{p,t\}_N$ by producing Monte Carlo events at random according to these distributions. If we have 7 microjets described by 4 momentum variables each and we divide each of these 28 variables into 10 bins, then we have approximately $10^{28}/7! \approx 10^{24}$ total bins (accounting for the interchange symmetry among the 7 microjets). 
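The bin-counting estimate at the end of this argument is easy to verify:

```python
import math

# 28 momentum variables with 10 bins each, divided by the 7! permutations
# of the 7 interchangeable microjets.
bins = 10 ** 28 / math.factorial(7)
print(f"{bins:.1e} bins")   # ~2e24, i.e. of order 10^24
```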
The parton shower Monte Carlo event generator will fill these bins with events, but it will be a long time before we have of order 100 counts per bin in order to estimate $d\sigma_{\mathrm{MC}}({\mathrm{S}})/d\{p,t\}_N$ and $d\sigma_{\mathrm{MC}}({\mathrm{B}})/d\{p,t\}_N$ at each bin center. Thus it is not practical to calculate $\chi_{\mathrm{MC}}(\{p,t\}_N)$ numerically by generating Monte Carlo events. It is also not practical to calculate $\chi_{\mathrm{MC}}(\{p,t\}_N)$ analytically using the shower algorithms in <span style="font-variant:small-caps;">Pythia</span> or <span style="font-variant:small-caps;">Herwig</span>. These programs are very complicated, so that we have no hope of finding $P_{\mathrm{MC}}(\{p,t\}_N|{\mathrm{S}})$ and $P_{\mathrm{MC}}(\{p,t\}_N|{\mathrm{B}})$ for either of them. Probabilities according to simplified shower {#sec:probabilities} -------------------------------------------- What we need is an observable $\chi(\{p,t\}_N)$ that is an approximation to $\chi_{\mathrm{MC}}(\{p,t\}_N)$ such that we can calculate $\chi(\{p,t\}_N)$ analytically for any given $\{p,t\}_N$. For this purpose, we define a simple, approximate shower algorithm, which we will call the simplified shower algorithm. We let $P(\{p,t\}_N|{\mathrm{S}})$ and $P(\{p,t\}_N|{\mathrm{B}})$ be the probabilities to produce the microjet configuration $\{p,t\}_N$ in, respectively, signal and background events according to the simplified shower algorithm. Define $$\label{eq:chidef} \chi(\{p,t\}_N) = \frac{P(\{p,t\}_N|{\mathrm{S}})}{P(\{p,t\}_N|{\mathrm{B}})} \;\;. $$ This function, $\chi(\{p,t\}_N)$ without the “MC” subscript, is the observable that we use. We may call the calculation of $\chi(\{p,t\}_N)$ shower deconstruction. The parton state with $N$ microjets is a possible intermediate state in a parton shower. We seek to determine the probability that this intermediate state with parameters $\{p,t\}_N$ is generated. 
We try to build enough into the simplified shower to provide a reasonable approximation to QCD and the rest of the standard model. Furthermore, we can define the shower so that the deconstruction is as simple as we can make it, even if that means that the corresponding shower algorithm is not so practical as an event generator. For instance, an implementation of the simplified shower algorithm as an event generator might generate weighted events in a way that makes unweighting the events costly in computer time. Additionally, probability conservation might be only approximate, so that the generated weights for different outcomes do not sum exactly to one. No matter: we are not going to use the simplified shower algorithm to generate events anyway. Additionally, we can ignore any factors in $P(\{p,t\}_N|{\mathrm{S}})$ and $P(\{p,t\}_N|{\mathrm{B}})$ that are common between them for each $\{p,t\}_N$ since such factors cancel in $\chi$. Our construction will be far from perfect, but it can be useful nonetheless. We will use <span style="font-variant:small-caps;">Pythia</span> to measure the cross section $d\sigma_{\mathrm{MC}}({\mathrm{S}})/ d\log\chi$ for signal events to have a given value of $\chi$ and the corresponding cross section $d\sigma_{\mathrm{MC}}({\mathrm{B}})/ d\log\chi$ for background events to have this value of $\chi$. In Fig. \[fig:SandBvschi\], we show these two functions for the simplified shower as defined in the following sections. In this illustration, we see that increasing $\chi$ favors signal compared to background. ![ $d\sigma_{\mathrm{MC}}({\mathrm{B}})/ d\log\chi$ for background events (upper curve) and $d\sigma_{\mathrm{MC}}({\mathrm{S}})/ d\log\chi$ for signal events (lower curve) for samples of signal and background events generated by <span style="font-variant:small-caps;">Pythia</span>. We use the cuts described in Sec. \[sec:EventSelection\]. 
[]{data-label="fig:SandBvschi"}](SBvsChiAllTags.pdf){width="8.0cm"} There is another way to present the results in Fig. \[fig:SandBvschi\] that is more informative. Let us define integrated signal and background cross sections above a cut: $$\begin{split} \label{eq:sandbdef} s(\chi) ={}& \int_{\chi}^\infty\!d\bar\chi\ \frac{d\sigma_{\mathrm{MC}}({\mathrm{S}})}{d\bar\chi} \;\;, \\ b(\chi) ={}& \int_{\chi}^\infty\!d\bar\chi\ \frac{d\sigma_{\mathrm{MC}}({\mathrm{B}})}{d\bar\chi} \;\;. \end{split}$$ It is useful to use $s$ in plots as the independent variable. With this definition, $s$ runs from 0 to $\sigma_{\mathrm{MC}}({\mathrm{S}})$ and $s = 0$ corresponds to $\chi = \infty$. We can then examine the ratio of signal to background cross sections, $s/b$, considered as a function of $s$. ![ Plot of $s/b$ versus $s$, where $s$ and $b$ are defined in Eq. (\[eq:sandbdef\]). We use samples of signal and background events generated by <span style="font-variant:small-caps;">Pythia</span> as in Fig. \[fig:SandBvschi\]. []{data-label="fig:Rvss"}](RvssAllTags.pdf){width="8.0cm"} In Fig. \[fig:Rvss\], we display the information in Fig. \[fig:SandBvschi\] as a plot of $s/b$ versus $s$. We have used here the $\chi(\{p,t\}_N)$ from our simplified shower algorithm. If we could somehow use $\chi_{\mathrm{MC}}(\{p,t\}_N)$, based on the same Monte Carlo event generator that we used to generate events, then we would obtain a curve for $s/b$ versus $s$ that is everywhere higher. No algorithm could produce a curve above this limiting curve, but we have no way of determining the limiting curve. We see in Fig. \[fig:Rvss\] that $s/b$ is small for large $s$ but that there is a region of $s$ in which $s/b$ is not too small. This is what one hopes to accomplish with shower deconstruction. We will return in Sec. \[sec:results\] to a discussion of numerical results. In the following sections, we describe how shower deconstruction works. Conceptually, it is very simple. 
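From weighted event samples, $s(\chi)$ and $b(\chi)$ are just cumulative sums above the cut. A minimal sketch (our own helper names, operating on per-event $(\chi, w)$ pairs rather than on binned histograms):

```python
def integrated_above_cut(events, chi_cut):
    # events: iterable of (chi, weight) pairs; Monte Carlo estimate of the
    # integrated cross section s(chi_cut) or b(chi_cut) above the cut.
    return sum(w for c, w in events if c >= chi_cut)

def s_over_b_curve(signal_events, background_events, cuts):
    # For each cut, return (s, s/b); raising the cut lowers s and, for a
    # good discriminant, raises s/b. Cuts with no background events left
    # are skipped to avoid dividing by zero.
    curve = []
    for cut in cuts:
        s = integrated_above_cut(signal_events, cut)
        b = integrated_above_cut(background_events, cut)
        if b > 0:
            curve.append((s, s / b))
    return curve
```

Sweeping the cut from low to high traces out exactly the kind of $s/b$ versus $s$ curve shown in Fig. \[fig:Rvss\].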
However, there are quite a few ingredients. That is because we seek to approximate the probability that a parton shower will give a certain set of microjets and there are quite a few ingredients in a parton shower. The simplified parton shower that we describe in the following sections is modeled on the general parton shower algorithm described in Ref. [@NSI] and, in particular, on its leading color, spin-averaged version [@NSII]. It is basically a virtuality ordered shower, although we modify the evolution variable in Refs. [@NSI; @NSII] to be virtuality/energy instead of just virtuality. This shower is a partitioned dipole shower, and we choose a dipole partitioning function from Ref. [@NSIII]. A shower algorithm in which one can calculate the probability to produce a given parton configuration has been proposed in Ref. [@Bauer:2008qj]. The aims of this algorithm are rather different from ours in that the algorithm of Ref. [@Bauer:2008qj] is designed to be practical as an event generator. Accordingly, the methods used are rather different from ours. Organization of shower deconstruction ===================================== In this section, we explain the overall organization of shower deconstruction, beginning with the concept of a shower history. Shower histories {#sec:histories} ---------------- In general, a shower history $h$ is a tree Feynman diagram showing how $N$ final state partons (the microjets) could have evolved starting with a hard scattering process for signal or background events. In our application, we simplify quite a lot. First, we look not at the whole event, but only at the microjets that make up the fat jet. For background events, we assume that the microjets came from a parton shower induced by a high $p_T$ parton plus parton showers starting from initial state radiation (including radiation from the underlying event), as illustrated in Fig. \[fig:HistoryBackground\]. 
For signal events, we assume that the microjets came from the decay products of a Higgs boson (through $H \to b + \bar b$) plus parton showers starting from initial state radiation, as illustrated in Fig. \[fig:HistorySignal\]. ![A shower history for a background event. The “star” vertex represents the production of a high $p_T$ parton from the hard interaction. The “diamond” vertices represent production of partons by initial state radiation. Each parton can split into two daughter partons at a shower vertex, represented by a small circle. In this background event, one of the gluons splits into a light $q$-$\bar q$ pair.[]{data-label="fig:HistoryBackground"}](HistoryBackgroundNew.pdf){width="14.0cm"} ![A shower history for a signal event. The dashed line is the Higgs boson, produced in the hard interaction. It decays into a $b$-quark and a $\bar b$-quark, which carry arrows representing the flow of $b$-flavor. The QCD shower splitting of a $b$-quark is to a $b$-quark plus a gluon. In this event, one of the gluons splits into further gluons.[]{data-label="fig:HistorySignal"}](HistorySignalNew.pdf){width="14.0cm"} Each parton in the shower history carries a flavor label $f_i$. We make some simplifications in the flavor structure of the simplified shower. 1. For shower histories corresponding to signal events, we have a Higgs boson intermediate state. That is, we have a parton with flavor $f_i = H$. 2. The Higgs boson decays into a $b$-quark and a $\bar b$-quark, so we need flavors $f_i = b$ and $f_i = \bar b$. 3. A $b$- or $\bar b$-quark can emit a gluon, so we have partons in our shower histories with flavor $f_i = g$. 4. A gluon can split to a $b$-quark and a $\bar b$-quark. 5. A gluon can also split to a light quark and a light antiquark, so we have partons in our shower with flavors $f_i = q$ and $f_i = \bar q$. 
We do not distinguish whether the light quark pairs are $({\mathrm u}, \bar {\mathrm u})$, $({\mathrm d}, \bar {\mathrm d})$, $({\mathrm s}, \bar {\mathrm s})$, or $({\mathrm c}, \bar {\mathrm c})$. Instead, we simply multiply the emission probability for one flavor of light quark by $n_{\mathrm{f}}- 1 = 4$, where $n_{\mathrm{f}}= 5$ is the number of quark flavors including the $b$ quark. 6. As an approximation, we treat the initial hard parton in a background event as being a gluon. Similarly, we treat partons radiated from the incoming initial state partons as being gluons. A shower history in which a gluon splits into a $b$-$\bar b$ pair is illustrated in Fig. \[fig:HistoryBackgroundgbb\]. ![A shower history for a background event in which a high $p_T$ gluon splits to a $b + \bar b$ pair. The QCD shower splitting of a $b$-quark is to a $b$-quark plus a gluon. The $b$ and $\bar b$ quarks radiate gluons and one of the gluons splits into two gluons.[]{data-label="fig:HistoryBackgroundgbb"}](HistoryBackgroundgbbNew.pdf){width="14.0cm"} The probabilities $P(\{p,t\}_N|{\mathrm{B}})$ and $P(\{p,t\}_N|{\mathrm{S}})$ in our shower model will consist of a sum of partial probabilities corresponding to different shower histories. In the following sections, we assume that we have picked a shower history $h$ and we seek to construct the probability $P(\{p,t\}_N|{\mathrm{B}},h)$ or $P(\{p,t\}_N|{\mathrm{S}},h)$ corresponding to that shower history. We will return in Sec. \[ConstructingHistories\] to the question of how to construct the shower histories in a reasonably efficient fashion. First, though, we need to define the factors corresponding to the vertices and propagators in our shower history diagrams. We begin with a description of the color flow. Color connections ----------------- We work in the standard leading color approximation and will need to keep track of color connections. 
Consider a final state splitting in which a gluon labeled $J$ splits into two daughter gluons. Let the label of the daughter that carries the $\overline {\bm 3}$ color of the mother parton $J$ be $A$. We draw this daughter parton on the left in our diagrams. Let the label of the daughter parton that carries the ${\bm 3}$ color of parton $J$ be $B$. We draw this daughter parton on the right in our diagrams. We track the angle variables of two color connected partner partons to parton $J$. Parton $k{(J)}_{{\mathrm{L}}}$ carries the $\bf 3$ color that is connected to the $\overline {\bm 3}$ color line of parton $J$. Parton $k{(J)}_{{\mathrm{R}}}$ carries the $\overline {\bm 3}$ color that is connected to the $\bf 3$ color line of parton $J$. The labels $k{(J)}_{{\mathrm{L}}}$ and $k{(J)}_{{\mathrm{R}}}$ specify lines in the shower history diagram, not necessarily final microjets. Given the labels of the color connected partners to the mother parton $J$, we assign the color connected partners of the daughter partons. The two daughter partons are color connected partners of each other and each inherits one of the color connected partners of the mother. That is $$k{(A)}_{\mathrm{L}}= k{(J)}_{\mathrm{L}}, \qquad k{(A)}_{\mathrm{R}}= B \;\;, $$ and $$k{(B)}_{\mathrm{L}}= A, \qquad k{(B)}_{\mathrm{R}}= k{(J)}_{\mathrm{R}}\;\;. $$ If parton $J$ is a quark, then it has a color connected partner $k(J)_{\mathrm{R}}$ that carries the $\overline {\bm 3}$ color connected to the quark’s $\bm 3$ color. There is no $k(J)_{\mathrm{L}}$ partner. The quark can split into daughter quark $A$ and a daughter gluon $B$, which we draw on the right because it carries the $\bm 3$ color of the mother quark. The color connected partners of the daughter partons are then $$k{(A)}_{\mathrm{R}}= B \;\;, $$ and $$k{(B)}_{\mathrm{L}}= A, \qquad k{(B)}_{\mathrm{R}}= k{(J)}_{\mathrm{R}}\;\;. 
$$ Similarly, if parton $J$ is an antiquark, then it has a color connected partner $k(J)_{\mathrm{L}}$ that carries the ${\bm 3}$ color connected to the antiquark’s $\overline {\bm 3}$ color. There is no $k(J)_{\mathrm{R}}$ partner. The antiquark can split into daughter antiquark $B$ and a daughter gluon $A$, which we draw on the left because it carries the $\overline {\bm 3}$ color of the mother antiquark. The color connected partners of the daughter partons are then $$k{(A)}_{\mathrm{L}}= k{(J)}_{\mathrm{L}}, \qquad k{(A)}_{\mathrm{R}}= B \;\;, $$ and $$k{(B)}_{\mathrm{L}}= A \;\;. $$ Consider a final state splitting in which a gluon with label $J$ splits into $q + \bar q$ ( or $b + \bar b$). Let the label of the daughter antiquark be A; we draw it to the left because it carries the $\overline {\bm 3}$ color of the mother parton $J$. Let the label of the daughter quark be B; we draw it to the right because it carries the ${\bm 3}$ color of the mother parton. The color connected partners of the daughter partons are $$k{(A)}_{\mathrm{L}}= k{(J)}_{\mathrm{L}}, \qquad k{(B)}_{\mathrm{R}}= k{(J)}_{\mathrm{R}}\;\;. $$ Finally, consider the decay of a Higgs boson, labelled $J$, into $b + \bar b$. Since the Higgs boson is a color singlet, the $b$ and $\bar b$ quarks are each other’s color connected partners. We draw the $b$-quark on the left and call its label $A$, while we draw the $\bar b$-quark on the right and call its label $B$. The color connected partners of the daughter partons are $$k{(A)}_{\mathrm{R}}= B , \qquad k{(B)}_{\mathrm{L}}= A \;\;. $$ These procedures define color connections recursively. To start the recursion the initial hard parton in a background event has undefined color connected partners: $k{(J)}_{\mathrm{L}}= k{(J)}_{\mathrm{R}}= {\tt undefined}$. 
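The inheritance rules above are mechanical bookkeeping, which a short sketch makes concrete (an illustrative data structure of ours, shown for the $g \to g + g$ splitting and the $H \to b + \bar b$ decay; `None` plays the role of an undefined partner):

```python
class Parton:
    def __init__(self, flavor, kL=None, kR=None):
        self.flavor = flavor
        self.kL = kL  # partner carrying the 3 connected to this parton's 3bar
        self.kR = kR  # partner carrying the 3bar connected to this parton's 3

def split_g_to_gg(J):
    # Daughter A (drawn on the left) carries J's 3bar color, daughter B
    # (drawn on the right) carries J's 3 color; each inherits one partner.
    A = Parton('g')
    B = Parton('g')
    A.kL, A.kR = J.kL, B
    B.kL, B.kR = A, J.kR
    return A, B

def decay_H_to_bb(J):
    # Color-singlet decay: the b and bbar are each other's only partners.
    A = Parton('b')
    B = Parton('bbar')
    A.kR = B
    B.kL = A
    return A, B
```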
If we knew the complete Feynman diagram representing a shower history, then all color connected partners would be defined, but we only know about the partons that are part of the fat jet, so we have an incomplete shower history. The true color partners of the initial hard parton could be partons that are not in the fat jet, or they could be partons from initial state radiation. Because we do not know the true color connections, we leave them undefined. Similarly, partons created as initial state radiation have [undefined]{} color connections in our approximation. As the shower progresses, the [undefined]{} color connections are inherited, but most partons later in the shower have defined color connections.[^3] Kinematics ---------- We need to describe the kinematics of a splitting of a parton $J$ into two partons, call them $A$ and $B$. There is a big advantage to making the simplest choice for the relation among the corresponding momenta: $$\label{eq:momentumconservation} p_J = p_{{\mathrm{A}}} + p_{{\mathrm{B}}} \;\;. $$ This means that $p_J^2 > 0$ even if $p_A^2 = 0$ and $p_B^2 = 0$. In shower generation (as distinguished from shower deconstruction) one does not do this. One wants $p^2 = 0$ for all intermediate partons since one does not know the virtualities of daughter partons at the time that the splitting is generated. When all partons have $p^2 = 0$, one has to take some momentum from somewhere in order to balance momentum. If we did that for shower deconstruction, the required treatment would be difficult. For shower deconstruction, we simply use Eq. (\[eq:momentumconservation\]) and allow all partons to have $p^2 > 0$. Each parton (or jet) is then described by four variables: its virtuality $\mu^2 \equiv p^2$, its rapidity $y$, its azimuthal angle $\phi$, and the absolute value $k$ of its transverse momentum.
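These four variables fix the momentum completely (the explicit components are given next). As a cross-check, a short sketch verifies that the lightcone parametrization reproduces $p^2 = \mu^2$, using $p^2 = 2p^+p^- - p_x^2 - p_y^2$:

```python
import math

def momentum(mu2, y, phi, k):
    # (+, -, 1, 2) lightcone components of a parton with virtuality mu2,
    # rapidity y, azimuthal angle phi, and transverse momentum k.
    e = math.sqrt(k * k + mu2)
    return (e * math.exp(y) / math.sqrt(2.0),
            e * math.exp(-y) / math.sqrt(2.0),
            k * math.cos(phi),
            k * math.sin(phi))

def mass_squared(p):
    # p^2 = 2 p+ p- - px^2 - py^2 in lightcone coordinates
    return 2.0 * p[0] * p[1] - p[2] ** 2 - p[3] ** 2
```

Since $2p^+p^- = k^2 + \mu^2$ and $p_x^2 + p_y^2 = k^2$, the invariant mass squared is $\mu^2$ for any $y$ and $\phi$.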
The $(+,-,1,2)$ components of the momentum of the parton are then[^4] $$\label{eq:momentumdecomposition} p = \left( \frac{1}{\sqrt 2}\,\sqrt{k^2 + \mu^2}\, e^y, \frac{1}{\sqrt 2}\,\sqrt{k^2 + \mu^2}\, e^{-y}, k \cos\phi, k\sin\phi \right) \;\;. $$ We are now ready to turn to the vertices of our shower history diagrams. The hard interaction vertex {#sec:hard vertex} =========================== We first need a factor to represent the hard scattering process that creates the starting high $p_T$ parton that forms the fat jet, or, more exactly, forms the part of the fat jet that is not from initial state emissions. This factor is represented by the “star” vertex, as in Fig. \[fig:HardInteraction\]. We consider first the hard vertex for background events. ![Probability to create the initial parton in the hard interaction. The left hand vertex is for the background process, the right hand vertex is for the signal process.[]{data-label="fig:HardInteraction"}](HardIntNew.pdf){width="7.0cm"} Background {#sec:Hhard_background} ---------- First, we impose a requirement that the scattering process that creates the starting high $p_T$ parton is indeed the dominant hard scattering process in the event. We define $Q^2$ to be the square of the transverse momentum of the fat jet plus the square of its mass, $$\label{eq:Qsqdef} Q^2 = \left(\sum_{i\in {\rm fat\ jet}} \vec p_{T,i}\right)^{\!\!2} +\left(\sum_{i\in {\rm fat\ jet}} p_i\right)^{\!\!2} \;\;. $$ We then define $\vec k_{T,{\mathrm{I}}}$ to be the transverse momentum of all microjets that are part of the fat jet but are not in the decay products of the initial hard parton. That is, $\vec k_{T,{\mathrm{I}}}$ is the transverse momentum of all microjets associated with initial state and underlying event radiation. We demand that $$\label{eq:IScut} k_{T,{\mathrm{I}}}^2 < Q^2/4 \;\;. 
$$ For the probability density associated with the creation of the initial hard parton, we use a factor $$\label{eq:Hstart} H_g = N_{\rm pdf}^g \left(\frac{p_{T,{\rm min}}^2}{k_0^2}\right)^{\!\! N_{\rm pdf}^g} \frac{1}{k_0^2}\ \Theta(k_{T,{\mathrm{I}}}^2 < Q^2/4) \;\;. $$ Here $k_0$ is the transverse momentum of the initial hard parton. The factor $1/k_0^2$ is an approximation to the $k_0^2$ dependence of the square of the hard matrix element. The hard scattering cross section is also proportional to a product of parton distribution functions. We approximate the dependence on the parton distribution functions by including a factor $1/(k_0^2)^{N_{\rm pdf}^g}$, where our default value for the exponent is $N_{\rm pdf}^g = 2$. This value yields an approximation to the one jet inclusive cross section at the Large Hadron Collider, as illustrated in Fig. 11 of Ref. [@JetErrors]. The parameter $p_{T,{\rm min}}$ is the smallest allowed transverse momentum of the $Z$-boson against which the initial hard parton recoils, $p_{T,{\rm min}} = 200\ {\rm GeV}$; see Eq. (\[eq:pTcut\]). The normalization factor $N_{\rm pdf}^g (p_{T,{\rm min}}^2)^{N_{\rm pdf}^g}$ is chosen so that the integral $\int dk_0^2\,H_g$ from $p_{T,{\rm min}}^2$ to infinity is 1. There is an additional normalization factor that we omit because it cancels between the hard scattering cross sections for background and for signal. Signal {#sec:Hhard_signal} ------ We also need a factor to represent the hard scattering process that creates the Higgs boson. For this purpose, we use a factor $$\label{eq:signalstart} H_H = N_{\rm pdf}^H \left(\frac{p_{T,{\rm min}}^2 + m_H^2} {k_H^2 + m_H^2}\right)^{\!\! N_{\rm pdf}^H} \frac{1}{k_H^2 + m_H^2}\ \Theta(k_{T,{\mathrm{I}}}^2 < Q^2/4) \;\;, $$ as in Eq. (\[eq:Hstart\]). Here $k_H$ is the transverse momentum of the Higgs boson, $m_H$ is the Higgs boson mass, $k_{T,{\mathrm{I}}}$ is the total transverse momentum of all partons emitted in the initial state, and $Q^2$ is defined in Eq.
(\[eq:Qsqdef\]). The remaining factors provide an approximation to the dependence on the parton distribution functions. The default values of the parameters are $N_{\rm pdf}^H = 2$ and $p_{T,{\rm min}} = 200\ {\rm GeV}$ as in Eq. (\[eq:Hstart\]). Initial state and underlying event radiation {#sec:ISradiation} ============================================ We have seen how to model the hard interaction that creates either a high $p_T$ QCD parton or a Higgs boson. Now we need to model initial state and underlying event radiation, defining an emission probability $H_{\rm IS}$ as illustrated in Fig. \[fig:ISemission\]. Consider the probability for the emission of a gluon with positive rapidity from an initial state parton that participates in the hard interaction. Since the gluon has positive rapidity, this emission is predominantly from the active parton “${\mathrm{a}}$” from hadron A. We use “${\mathrm{b}}$” as the label for the other active incoming quark, from hadron $B$. We take $p_{\mathrm{a}}$ to be in the $+$ direction and $p_{\mathrm{b}}$ to be in the $-$ direction. We suppose that the emitting parton “a” has a color connected partner with label $k$. For the processes that we examine, the initial state partons are likely to be quarks, so there is only one color connected partner. The emitted parton carries the label $J$. As a simple approximation, we assume that it is a gluon. We start with the dipole formula for the squared matrix element for the emission, $$\label{eq:ISdipolesplitting0} H_{\rm dipole} \approx \frac{C_{\mathrm{A}}}{2}\, (4\pi {\alpha_{\mathrm{s}}}) \frac{2\, p_{\mathrm{a}}\cdot p_k} {p_J \cdot p_{\mathrm{a}}\ p_J \cdot p_k} \;\;. $$ Writing $p_J \cdot p_k$ in components, this is $$\label{eq:ISdipolesplitting1} H \approx \frac{4\pi {\alpha_{\mathrm{s}}}\,C_{\mathrm{A}}\, p_{\mathrm{a}}^+ p_k^-} {p_{\mathrm{a}}^+ p_J^- \,(p_J^+ p_k^- + p_J^- p_k^+ - \vec k_{\perp,J}\cdot \vec k_{\perp,k})} \;\;. 
$$ In order to simplify this, we assume that $p_J^+ p_k^- \gg p_J^- p_k^+$ and $p_J^+ p_k^- \gg |\vec k_{\perp,J}\cdot \vec k_{\perp,k}|$. With this approximation, $$\label{eq:ISdipolesplitting2} H \approx \frac{8\pi {\alpha_{\mathrm{s}}}\,C_{\mathrm{A}}} {2\, p_J^- p_J^+ } \;\;. $$ This is exactly $$\label{eq:ISdipolesplitting3} H \approx \frac{8\pi {\alpha_{\mathrm{s}}}\,C_{\mathrm{A}}}{k_J^2 + \mu_J^2} \;\;. $$ This emission probability applies for emitted gluons with positive rapidity, emitted from the active parton in hadron A. It also applies for emitted gluons with negative rapidity, emitted from the active parton in hadron B. To cover all gluons emitted in the central region, we simply use Eq. (\[eq:ISdipolesplitting3\]) for both positive and negative rapidity. (We note that $H$ is independent of rapidity with the approximations that we have used.) ![Probability to create a parton by initial state radiation, including both perturbative and nonperturbative radiation.[]{data-label="fig:ISemission"}](SplittingISNew.pdf){width="3.0cm"} In Eq. (\[eq:ISdipolesplitting3\]), we choose the squared transverse momentum $k_J^2$ as the argument of ${\alpha_{\mathrm{s}}}$ and we neglect $\mu_J^2$ compared to $k_J^2$: $$\label{eq:ISdipolesplitting4} H \approx \frac{8\pi {\alpha_{\mathrm{s}}}(k_J^2)\,C_{\mathrm{A}}}{k_J^2} \;\;. $$ This expression should then be a fairly good approximation for the emission probability as long as $k_J^2$ is large enough for the emission to be purely perturbative and small enough for the parton momentum fraction carried away by the emitted gluon to be negligible. 
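To evaluate this kernel numerically one needs a running-coupling prescription; the sketch below assumes a one-loop $\alpha_s$ with $n_f = 5$ and $\Lambda = 0.2\ {\rm GeV}$, a choice made here for illustration only:

```python
import math

def alpha_s(q2, lam2=0.04):
    # One-loop running coupling with nf = 5 flavors; lam2 = Lambda^2 in
    # GeV^2 (an assumption of this sketch, not fixed by the text).
    b0 = (33.0 - 2.0 * 5.0) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(q2 / lam2))

def H_pert(kJ2, CA=3.0):
    # Perturbative initial-state emission kernel,
    # H ~ 8 pi alpha_s(kJ^2) C_A / kJ^2, with kJ2 in GeV^2.
    return 8.0 * math.pi * alpha_s(kJ2) * CA / kJ2
```

The kernel falls steeply with $k_J^2$, both through the explicit $1/k_J^2$ and through the running of $\alpha_s$.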
If the parton momentum fraction carried away by the emitted gluon is not negligible, there should be an additional factor $$R = \frac{(1-z)\,f(x/(1-z),k_J^2)}{f(x,k_J^2)} \;\;, $$ where $x$ is the momentum fraction of the parton after emitting the gluon, $z x/(1-z)$ is the momentum fraction of the emitted gluon, $x/(1-z)$ is the momentum fraction of the parton before emitting the gluon and the functions $f$ are parton distribution functions. (See Eq. (8.26) of Ref. [@NSI]). When $k_J^2 \ll Q^2$ we have $z \ll 1$ and $R \approx 1$. However, the approximation $R\approx 1$ breaks down for values of $k_J^2 / Q^2$ at which initial state radiation is still significant. We do not want our simplified shower model to depend on parton distribution functions, so we make a rather crude approximation, $$R = \frac{1}{(1 + c_R\, k_J/Q)^{n_R}} \;\;, $$ where our default values for the parameters are $c_R = 2$ and $n_R = 1$. With this factor $R$ included, we should have a fairly good approximation for the emission probability as long as $k_J^2$ is large enough for the emission to be purely perturbative. To give ourselves some flexibility at small $k_J^2$, we replace $k_J^2$ by $k_J^2 + \kappa_{\mathrm{p}}^2$ in the argument of ${\alpha_{\mathrm{s}}}$ and the factor $1/k_J^2$. Our default value for the parameter here is $\kappa_{\mathrm{p}}^2 = 4\ {\rm GeV}^2$. Then the perturbative $H$ is frozen when $k_J$ gets to be much smaller than $\kappa_{\mathrm{p}}$. We then add back a simple non-perturbative function that gives us a chance to adjust the amount of radiation for smaller values of $k_J$. ![The distribution of initial state jets as a function of their transverse momentum $k_J$ as produced in [Pythia]{} compared to the distribution produced by $H_{\rm IS}$ and its perturbative and non-perturbative parts. The distributions are integrated over all azimuthal angles and over the rapidity range $-2<y<2$. For our model, we use $H_{\rm IS}$ from Eq. 
(\[eq:ISemission\]), calling the first term $H_{\rm IS}^{\rm pert}$ and the second term $H_{\rm IS}^{\rm n.p.}$. The distribution from $H_{\rm IS}$ is shown as a heavy line, while the steeper line below it is from $H_{\rm IS}^{\rm n.p.}$ and the shallower line is from $H_{\rm IS}^{\rm pert}$.[]{data-label="fig:UEModelFit"}](UE_Model_Fit.pdf){width="8.0cm"} This gives the complete initial state emission probability $$\label{eq:ISemission} H_{\rm IS} = 8\pi\,C_{\mathrm{A}}\, \frac{{\alpha_{\mathrm{s}}}(k_J^2 + \kappa_{\mathrm{p}}^2)}{k_J^2 + \kappa_{\mathrm{p}}^2}\ \frac{1}{(1 + c_R\, k_J/Q)^{n_R}} + \frac{16\pi\,c_{\rm np}(\kappa_{\rm np}^2)^{n_{\rm np} - 1}} {[k_J^2 + \kappa_{\rm np}^2]^{n_{\rm np}}} \;\;. $$ Our default values for the non-perturbative parameters are $c_{\rm np} = 1$, $\kappa_{\rm np}^2 = 4\ {\rm GeV}^2$, and $n_{\rm np} = 3/2$. It is intended that, with adjustment of parameters, we can include perturbative radiation from the active initial state partons together with radiation at central rapidities and small transverse momenta that is associated with the underlying event and with event pileup. Our choice for the parameters is based on comparisons with results from [Pythia]{}, including the representation in [Pythia]{} of the effects of the underlying event. We used [Pythia]{} to produce events for $p + p \to H + Z + X$ where both the Higgs boson and the $Z$-boson decay to muons. For this process, all hadrons are produced by initial state radiation. Although we did not impose a $p_T$ cut on the $Z$-boson, the hard scattering scale here is similar to that for our signal and background processes. We looked for jets that were produced by the initial state radiation, selecting jets using the $k_T$ algorithm with $R = 0.2$ and counting all jets with rapidities in the range $-2 < y < 2$. The resulting distribution as a function of the jet transverse momentum $k_J$ is shown in Fig. \[fig:UEModelFit\].
This distribution is to be compared with $$\frac{dN_{\rm IS}}{dk_J}= \int \frac{d^4 p}{(2\pi)^{4}}\ 2\pi\delta(p^2)\ \delta(|\vec p_T| - k_J)\ \Theta(|y_p| < 2)\ H_{\rm IS} \;\;. $$ This curve, with our choice of parameters, is shown in Fig. \[fig:UEModelFit\] along with two more curves corresponding to the two terms in $H_{\rm IS}$. The jets described by $H_{\rm IS}$ are primary jets that can split to produce the jets modeled by [Pythia]{}, so we have made the primary jet spectrum somewhat harder than the [Pythia]{} jet spectrum. In Sec. \[sec:results\], we comment on whether the choice of these and other parameters affects the numerical results from shower deconstruction. Final state QCD shower splittings {#sec:finalshower} ================================= In this section, we define the main part of the simplified shower, QCD shower splittings. Splitting probability for $g \to g + g$ {#sec:gggsplitting} --------------------------------------- The splitting vertex for a QCD splitting $g \to g + g$ is represented by a function $H_{ggg}$ as illustrated in Fig. \[fig:Splittingggg\]. We call these the conditional splitting probabilities. Here the condition is that the mother parton has not split already at a higher virtuality. Let us examine what we should choose for $H_{ggg}$ for a $g \to g + g$ splitting. We take the mother parton to carry the label $J$ and we suppose that the daughter partons are labelled $A$ and $B$ as shown in the figure. The form of the splitting probability depends on which of the two daughter partons is the softer. We let $h$ be the label of the harder daughter parton and $s$ be the label of the softer daughter parton: $k_s < k_h$. ![Splitting function for final state $g \to g + g$ splittings.[]{data-label="fig:Splittingggg"}](SplittinggggNew.pdf){width="4.0cm"} By definition, $k_s < k_h$. We first look at the splitting in the limit $k_s \ll k_h$. 
The splitting probability is then dominated by graphs in which parton $s$ is emitted from a dipole consisting of parton $J$ and some other parton, call it parton $k$. If $s = A$, then the emitting dipole is formed from parton $h=B$ and parton $k = k(J)_{\mathrm{L}}$, while if $s = B$, then the emitting dipole is formed from parton $h=A$ and parton $k = k(J)_{\mathrm{R}}$. The choice of $k$ depends on which of the two daughter partons is parton $s$, so where needed we will use the notation $k(s)$ instead of simply $k$. For $H$, we start with the dipole approximation for the squared matrix element (with $\mu_s^2 = \mu_h^2 = 0$), $$\label{eq:dipolesplitting} H_{\rm dipole} \approx \frac{C_{\mathrm{A}}}{2}\, (4\pi {\alpha_{\mathrm{s}}}) \frac{2\, p_h\cdot p_k} {p_s \cdot p_h\ p_s \cdot p_k} \;\;. $$ We use $$\begin{split} 2\, p_s \cdot p_h ={}& 2k_s k_h[\cosh(y_s - y_h) - \cos(\phi_s - \phi_h)] \\ \approx{}& k_s k_h[(y_s - y_h)^2 + (\phi_s - \phi_h)^2] \\ ={}& k_s k_h\,\theta_{sh}^2 \;\;, \\ 2\, p_s \cdot p_k \approx{}& k_s k_k\,\theta_{sk}^2 \;\;, \\ 2\, p_h \cdot p_k \approx{}& k_h k_k\,\theta_{hk}^2 \;\;, \end{split}$$ where $$\begin{split} \label{eq:thetasqdef} \theta_{sh}^2 ={}& (y_s - y_h)^2 + (\phi_s - \phi_h)^2 \;\;, \\ \theta_{sk}^2 ={}& (y_s - y_k)^2 + (\phi_s - \phi_k)^2 \;\;, \\ \theta_{hk}^2 ={}& (y_h - y_k)^2 + (\phi_h - \phi_k)^2 \;\;. \end{split}$$ Thus $$\label{eq:dipolesplitting2} H_{\rm dipole} \approx \frac{8\pi{\alpha_{\mathrm{s}}}\,C_{\mathrm{A}}}{k_s^2}\ \frac{\theta_{hk}^2} {\theta_{sh}^2\, \theta_{sk}^2} \;\;. $$ This function is singular when parton $s$ is soft, since it is proportional to $1/k_s^2$. It is singular when parton $s$ is parallel to parton $h$. It is also singular when parton $s$ is parallel to parton $k$. We can partition $H_{\rm dipole}$ into two parts, one, $H_{sh}$, associated with emission from parton $h$ and one, $H_{sk}$, associated with emission from parton $k$. 
(Here we treat parton $s$ as very soft and regard parton $h$ after the emission and parton $J$ before the emission as the same.) We write $$\begin{split} H_{sh} ={}& H_{\rm dipole}\times A'_{hk} \;\;, \\ H_{sk} ={}& H_{\rm dipole}\times A'_{kh} \;\;, \end{split}$$ where $$\begin{split} A'_{hk} ={}& \frac{\theta_{sk}^2} {\theta_{sh}^2 + \theta_{sk}^2} \;\;, \\ A'_{kh} ={}& \frac{\theta_{sh}^2} {\theta_{sh}^2 + \theta_{sk}^2} \;\;, \end{split}$$ so that $$A'_{hk} + A'_{kh} = 1 \;\;. $$ This dipole partitioning function is that of Ref. [@NSIII], Eq. (7.12), adapted to the small angle approximations used here. For a Catani-Seymour dipole shower, one uses a different dipole partitioning function. With this choice, we have $$\label{eq:dipolesplitting3} H_{sh} = \frac{8\pi{\alpha_{\mathrm{s}}}\,C_{\mathrm{A}}}{k_s^2}\ \frac{\theta_{hk}^2} {\theta_{sh}^2[\theta_{sh}^2 + \theta_{sk}^2]} \;\;. $$ We can improve this a little so that it works better when parton $s$ is not extremely soft. We recall that, for parton $s$ soft, $\mu_J^2 \approx k_s k_h \theta_{sh}^2$ and that $k_h \approx k_J$ and the angles of parton $J$ are close to those of parton $h$. Thus we take $$\label{eq:dipolesplitting4} H_{sh} \approx \frac{8\pi{\alpha_{\mathrm{s}}}\,C_{\mathrm{A}}}{\mu_J^2}\, \frac{k_J^2}{k_s k_h}\, \frac{\theta_{hk}^2} {\theta_{sh}^2 + \theta_{sk}^2} \;\;. $$ The angular factor $$\label{eq:anglefactor} g(y_s,\phi_s) = \frac{\theta_{hk}^2} {\theta_{sh}^2 + \theta_{sk}^2} $$ is of some interest. We plot it in Fig. \[fig:angularg\]. It enhances radiation into the region between parton $h$ and parton $k$ and disfavors radiation at angles much greater than the angle between parton $h$ and parton $k$. The variable “pull” [@pull] is designed to separate signal and background events based on this factor. Here, the same effect appears as a natural part of a parton shower based on color dipoles. So far, we have an approximation that is good in the limit of emission of a soft gluon. 
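The partitioning can be checked numerically: for any placement of partons $s$, $h$ and $k$ in the $(y,\phi)$ plane, the two weights sum to one, so that $H_{sh} + H_{sk} = H_{\rm dipole}$ exactly. A short sketch (our own helper names):

```python
def theta2(ya, phia, yb, phib):
    # Small-angle separation squared, theta^2 = (dy)^2 + (dphi)^2
    return (ya - yb) ** 2 + (phia - phib) ** 2

def partition_weights(s, h, k):
    # s, h, k are (y, phi) pairs; returns (A'_hk, A'_kh), the fractions of
    # H_dipole assigned to emission from parton h and from parton k.
    tsh = theta2(*s, *h)
    tsk = theta2(*s, *k)
    return tsk / (tsh + tsk), tsh / (tsh + tsk)
```

When parton $s$ is close to parton $h$, nearly all of the dipole probability is assigned to emission from $h$, as one would want.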
This approximation is also good when the gluon labeled $s$ is collinear with the mother parton direction as long as $k_s \ll k_J$. When the two daughter partons are nearly collinear, we have $$\begin{split} \frac{k_h}{k_J} \approx{}& z \;\;, \\ \frac{k_s}{k_J} \approx{}& 1-z \;\;, \end{split}$$ where $z$ is the momentum fraction carried by gluon $h$. Our splitting function is proportional to $$\frac{k_J^2}{k_s k_h} \approx \frac{1}{z(1-z)} \;\;. $$ This is right for $(1-z) \ll 1$ but it has corrections when $1-z$ is not small. The complete DGLAP splitting kernel for collinear splittings is $$P_{gg}(z) = 2C_A\ \frac{[1-z(1-z)]^2}{z(1-z)} \;\;. $$ Thus we should replace $$\frac{k_J^2}{k_s k_h} \to \frac{k_J^2}{k_s k_h} \left[1 - \frac{k_s k_h}{k_J^2}\right]^2 \;\;. $$ Thus we take $$\label{eq:dipolesplitting5} H_{sh} \approx \frac{8\pi{\alpha_{\mathrm{s}}}\,C_{\mathrm{A}}}{\mu_J^2}\, \frac{k_J^2}{k_s k_h}\, \left[1 - \frac{k_s k_h}{k_J^2}\right]^2 \frac{\theta_{hk}^2} {\theta_{sh}^2 + \theta_{sk}^2} \;\;. $$ ![ The angular enhancement factor $g(y_s,\phi_s)$ of Eq. (\[eq:anglefactor\]). The coordinates are $(y_s - y_h, \phi_s - \phi_h)$. The color connected parton $k$ is at coordinates $(0.1, 0)$. This figure is adapted from Ref. [@NSII]. []{data-label="fig:angularg"}](angleplot.pdf){width="10.0cm"} We need to add another ingredient: $\mu_J^2$ cannot be too large. Suppose that the mother of parton $J$ is parton $K$ and the sister is parton $J'$. We need to be able to neglect $\mu_J^2$ and $\mu_{J'}^2$ in the calculation of $(p_J + p_{J'})^2 \equiv \mu_K^2$. With a little kinematic analysis, we see that neglecting $\mu_J^2$ and $\mu_{J'}^2$ is a good approximation if $$\begin{split} \frac{\mu_J^2}{k_J}\, \ll{}& \frac{\mu_K^2}{k_K} \;\;, \\ \frac{\mu_{J'}^2}{k_{J'}}\, \ll{}& \frac{\mu_K^2}{k_K} \;\;. 
\end{split}$$ We can enforce this condition in an approximate way by requiring $$\begin{split} \label{eq:hardnesscut} 2\,\frac{\mu_J^2}{k_J}\, <{}& \frac{\mu_K^2}{k_K} \;\;, \\ 2\,\frac{\mu_{J'}^2}{k_{J'}}\, <{}& \frac{\mu_K^2}{k_K} \;\;. \end{split}$$ For this reason, we include in $H$ a factor $\Theta(2\mu_J^2/k_J < \mu_K^2/k_K)$. We know $\mu_K^2$ from the shower history. If there is no mother parton because parton $J$ was produced in the hard interaction or by initial state bremsstrahlung, we take $\mu_K^2/k_K = 2 k_J$, so that the virtuality ordering condition becomes simply $\mu_J^2 < k_J^2$. This same condition, iterated, restricts the daughter virtualities: $$\begin{split} 2\,\frac{\mu_h^2}{k_h} <{}& \frac{\mu_J^2}{k_J} \;\;, \\ 2\,\frac{\mu_s^2}{k_s} <{}& \frac{\mu_J^2}{k_J} \;\;. \end{split}$$ This gives a splitting probability $H$: $$\label{eq:splittingH} H_{ggg} = 8\pi C_A\,\frac{{\alpha_{\mathrm{s}}}(\mu_J^2)}{\mu_J^2}\, \frac{k_J^2}{k_s k_h} \left[1 - \frac{k_s k_h}{k_J^2}\right]^2 \frac{\theta_{hk}^2} {\theta_{sh}^2 + \theta_{sk}^2}\ \Theta\!\left(2\,\frac{\mu_J^2}{k_J} < \frac{\mu_K^2}{k_K}\right) \;\;. $$ Here we evaluate ${\alpha_{\mathrm{s}}}$ at the virtuality scale of the splitting. When there is no color connected parton visible, we are forced to simplify this to $$\label{eq:splittingHnopartner} H_{\text{no-}k} = 8\pi C_A\,\frac{{\alpha_{\mathrm{s}}}(\mu_J^2)}{\mu_J^2}\, \frac{k_J^2}{k_s k_h} \left[1 - \frac{k_s k_h}{k_J^2}\right]^2 \Theta\!\left(2\,\frac{\mu_J^2}{k_J} < \frac{\mu_K^2}{k_K}\right) \;\;. $$ Here there is no restriction on the angles $y_s,\phi_s$ of the emitted soft parton. This is potentially a very bad approximation, but in our case the approximation is tolerable because the emitted soft parton is necessarily within the fat jet. 
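As an illustration, the splitting probability of Eq. (\[eq:splittingH\]) can be sketched in code; dropping the angular factor gives the no-partner form $H_{\text{no-}k}$ of Eq. (\[eq:splittingHnopartner\]). This is our own minimal sketch, with assumed argument conventions, not an actual shower implementation:

```python
import math

C_A = 3.0  # gluon color factor

def H_ggg(alpha_s, mu_J2, k_J, k_s, k_h,
          th2_sh=None, th2_sk=None, th2_hk=None, muK2_over_kK=None):
    """Sketch of the g -> g + g splitting probability of Eq. (eq:splittingH).

    If the angles are omitted, the angular factor is dropped, which gives
    the no-partner form of Eq. (eq:splittingHnopartner).
    """
    # virtuality-ordering condition: 2 mu_J^2 / k_J < mu_K^2 / k_K
    if muK2_over_kK is not None and not (2.0 * mu_J2 / k_J < muK2_over_kK):
        return 0.0
    soft = k_J ** 2 / (k_s * k_h)              # soft-gluon enhancement
    dglap = (1.0 - k_s * k_h / k_J ** 2) ** 2  # collinear (DGLAP) correction
    angular = 1.0
    if th2_sh is not None:
        angular = th2_hk / (th2_sh + th2_sk)   # dipole angular factor
    return 8.0 * math.pi * C_A * alpha_s / mu_J2 * soft * dglap * angular
```

The virtuality-ordering theta function appears as an early return of zero, and omitting the partner angles reproduces $H_{\text{no-}k}$ of Eq. (\[eq:splittingHnopartner\]).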
When, in addition, there is no mother parton $K$, this becomes $$\label{eq:splittingHnomother} H_{\text{no-}K} = 8\pi C_A\,\frac{{\alpha_{\mathrm{s}}}(\mu_J^2)}{\mu_J^2}\, \frac{k_J^2}{k_s k_h} \left[1 - \frac{k_s k_h}{k_J^2}\right]^2 \Theta\!\left(\mu_J^2 < k_J^2\right) \;\;. $$ Splitting probability for $q \to q + g$ and $\bar q \to \bar q + g$ {#sec:bbgsplitting} ------------------------------------------------------------------- ![Splitting functions for final state QCD splittings of a quark or antiquark, including a $b$ or $\bar b$ quark.[]{data-label="fig:Splittingqgq"}](Splittingqgq.pdf){width="7.0cm"} Quarks and antiquarks can radiate gluons. These splittings are represented by the splitting probabilities $H_{qqg}$ and $H_{\bar q g \bar q}$ that are illustrated in Fig. \[fig:Splittingqgq\]. We treat the splitting of a bottom quark as identical to the splitting of a light quark, neglecting the bottom quark mass. We take the splitting probability to be $$\label{eq:splittingHqqg} H_{qqg} = H_{\bar q g \bar q} = 8\pi C_{\mathrm{F}}\, \frac{{\alpha_{\mathrm{s}}}(\mu_J^2)}{\mu_J^2}\ \frac{k_J}{k_g} \left[ 1 + \left(\frac{k_q}{k_J}\right)^2 \right] \frac{\theta_{qk}^2} {\theta_{gq}^2 + \theta_{gk}^2}\, \Theta\!\left(2\,\frac{\mu_J^2}{k_J} < \frac{\mu^2_K}{k_K}\right) \;\;. $$ The derivation parallels the one that led to Eq. (\[eq:splittingH\]). Here $k_g$ is the transverse momentum of the gluon, $k_q$ is the transverse momentum of the quark or antiquark, and $k_J$ is the transverse momentum of the mother quark. Then using $k_q/k_J \approx z$ and $k_g/k_J \approx (1-z)$, the factor containing these ratios reduces to the splitting function $$\label{eq:Pqq} P_{qq} = C_F\, \frac{1 + z^2}{1-z} $$ in the collinear limit. There is an angle factor in which $q$ labels the daughter quark or antiquark, $g$ labels the emitted gluon, and $k$ labels the color connected partner of the quark or antiquark.
If there is no color connected partner in the fat jet, this angle factor is to be omitted. There is a theta function that restricts the mass $\mu_J^2$ of the daughter pair to be less than $\mu_K^2 k_J/(2 k_K)$, where $K$ labels the mother of parton $J$. With our approximations for shower histories, a quark or antiquark always has a mother parton. Splitting probability for $g \to q + \bar q$ {#sec:gqqsplitting} -------------------------------------------- We need one more QCD splitting probability, for $g \to q + \bar q$, including $g \to b + \bar b$ as illustrated in Fig. \[fig:Splittinggqq\]. Note that this splitting is important because $g \to b + \bar b$ is the main background for the $H \to b + \bar b$ signal, so we need to keep track of $g \to b + \bar b$ splittings even if they have a small probability. ![Splitting function for final state QCD splittings that produce a $q\bar q$ pair.[]{data-label="fig:Splittinggqq"}](Splittinggqq.pdf){width="4.0cm"} To construct the splitting function that we need, we can start with the $q \to q + g$ splitting function in Eq. (\[eq:splittingHqqg\]). We can take the collinear limit, setting the angle factor to 1. Then we replace $P_{qq}$, Eq. (\[eq:Pqq\]), in which $z \approx k_q/k_J$ and $(1-z) \approx k_{g}/k_J$, by $$P_{qg} = T_{\mathrm{R}}\, [z^2 + (1-z)^2] $$ now with $z \approx k_q/k_J$ and $(1-z) \approx k_{\bar q}/k_J$. This gives $$\label{eq:splittinggtob} H_{g\bar q q} = 8\pi T_{\mathrm{R}}\, \frac{{\alpha_{\mathrm{s}}}(\mu_J^2)}{\mu_J^2} \frac{k_q^2 + k_{\bar q}^2}{k_J^2}\ \Theta\!\left(2\,\frac{\mu_J^2}{k_J} < \frac{\mu_K^2}{k_K}\right) \;\;. $$ Note that this function is big for small $\mu_J^2$ in the limit in which the quark pair is collinear, but that there is no additional singularity when the quark or antiquark is soft. For a gluon splitting to $b + \bar b$ we use $H_{g\bar b b} = H_{g\bar q q}$ as given above.
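The collinear-limit correspondence between these momentum factors and the DGLAP kernels can be checked directly. In this quick numerical sketch, $k_J = k_q + k_g$ is the small-angle approximation used in the text:

```python
def qqg_factor(k_q, k_g):
    """Momentum factor (k_J/k_g)[1 + (k_q/k_J)^2] of Eq. (eq:splittingHqqg)."""
    k_J = k_q + k_g  # small-angle approximation: transverse momenta add
    return (k_J / k_g) * (1.0 + (k_q / k_J) ** 2)

def gqq_factor(k_q, k_qbar):
    """Momentum factor (k_q^2 + k_qbar^2)/k_J^2 of Eq. (eq:splittinggtob)."""
    k_J = k_q + k_qbar
    return (k_q ** 2 + k_qbar ** 2) / k_J ** 2

z, k_J = 0.7, 100.0
# P_qq(z)/C_F = (1 + z^2)/(1 - z), with z = k_q/k_J and 1 - z = k_g/k_J
assert abs(qqg_factor(z * k_J, (1 - z) * k_J) - (1 + z ** 2) / (1 - z)) < 1e-9
# P_qg(z)/T_R = z^2 + (1 - z)^2, with z = k_q/k_J and 1 - z = k_qbar/k_J
assert abs(gqq_factor(z * k_J, (1 - z) * k_J) - (z ** 2 + (1 - z) ** 2)) < 1e-9
print("collinear-limit identities hold")
```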
For a gluon splitting to $(u,\bar u)$, $(d,\bar d)$, $(s,\bar s)$, and $(c,\bar c)$, we include all four cases at once by using $(n_{\mathrm{f}}- 1) H_{g\bar q q}$, where $(n_{\mathrm{f}}- 1) = 4$. There is a theta function that restricts the mass $\mu_J^2$ of the daughter pair to be less than $\mu_K^2 k_J/(2 k_K)$, where $K$ labels the mother of parton $J$. If there is no mother parton $K$, this theta function becomes $\Theta\!\left(\mu_J^2 < k_J^2\right)$. The Sudakov factor in the final state shower {#sec:Sudakov} ============================================ We have given definitions for splitting probabilities in the simplified shower. An important part of a parton shower event generator is the probability that a parton that was created at a virtuality scale $\mu_K^2$ has not split before it finally does split at a scale $\mu_J^2$. This is the Sudakov factor and has the form $\exp(-S)$, where $S$ is the integral of the splitting probability down to the scale $\mu_J^2$. In this section, we explore how to approximate $S$. Variables for parton splitting {#sec:splittingvariables} ------------------------------ To evaluate the Sudakov exponent, we need to understand in some detail the integrations for combining two partons. We use $$\label{eq:d4ptomukyphi} \int\!\frac{d^4 p}{(2\pi)^4}\ \cdots = \int_0^\infty\!\frac{d\mu^2}{2\pi}\ \frac{1}{4(2\pi)^3} \int_0^\infty\!dk^2 \int_{-\infty}^{\infty}\!dy \int_0^{2\pi}\!d\phi\ \cdots \;\;. $$ We consider integrations over the momenta of partons A and B that we would like to combine to make parton $J$: $$\label{eq:Idef} I = \int\!\frac{d\mu_A^2}{2\pi}\ \frac{1}{4(2\pi)^3} \int\!dk_A^2 \int\!dy_A \int\!d\phi_A\ \int\!\frac{d\mu_B^2}{2\pi}\ \frac{1}{4(2\pi)^3} \int\!dk_B^2 \int\!dy_B \int\!d\phi_B\ \cdots \;\;. $$ Now we insert $$1 = \int \!\frac{d^4 p_J}{(2\pi)^4}\ (2\pi)^4\delta^4(p_A + p_B - p_J) $$ and use Eq. (\[eq:d4ptomukyphi\]) for $\int d^4 p_J$. 
This gives $$\begin{split} I ={}& \frac{1}{4(2\pi)^3} \int\!dk_J^2 \int\!dy_J \int\!d\phi_J \int\!\frac{d\mu_J^2}{2\pi}\ \int\!\frac{d\mu_A^2}{2\pi}\int\!\frac{d\mu_B^2}{2\pi} \\ &\times \frac{1}{(2\pi)^2}\,\frac{1}{16} \int\!dk_A^2 \int\!dy_A \int\!d\phi_A \int\!dk_B^2 \int\!dy_B \int\!d\phi_B \\ & \times \delta^4(p_A + p_B - p_J) \cdots \;\;. \end{split}$$ In the second line, we have six variables, $k_A$, $y_A$, $\phi_A$, $k_B$, $y_B$, and $\phi_B$, restricted by four delta functions. This leaves an integration over two variables. We choose one of the variables to be the momentum fraction $$\begin{split} \label{eq:zdef0} z ={}& \frac{k_A}{k_A + k_B} \;\;, \\ 1-z ={}& \frac{k_B}{k_A + k_B} \;\;. \end{split}$$ For the other integration variable describing the splitting, we use $\varphi$ defined by $$\label{eq:tanvarphi0} \tan\varphi = \frac{\sinh(\Delta y/2)\cos(\Delta\phi/2)}{\cosh(\Delta y/2)\sin(\Delta\phi/2)} \;\;, $$ where $$\begin{split} \Delta y ={}& y_A - y_B \;\;, \\ \Delta \phi ={}& \phi_A - \phi_B \;\;. \end{split}$$ Thus $\varphi$ is approximately the angle about the origin in the $(\Delta \phi,\Delta y)$ plane. Then $$\begin{split} \label{eq:Iresult} I ={}& \int\!\frac{d\mu_A^2}{2\pi}\int\!\frac{d\mu_B^2}{2\pi}\ \frac{1}{4(2\pi)^3}\int\!dk_J^2 \int\!dy_J \int\!d\phi_J \\&\times \frac{1}{4(2\pi)^2} \int\!\frac{d\mu_J^2}{2\pi} \int\!dz \int\! d\varphi\ J \cdots \;\;, \end{split}$$ where $J$ is a jacobian to be discussed presently. We think about this as follows. We combine two subjets $A$ and $B$ of mass $\mu_A$ and $\mu_B$. We display integrations over $\mu_A$ and $\mu_B$, but these integrations remain unaltered between the original integral (\[eq:Idef\]) and the result Eq. (\[eq:Iresult\]). In the original integral, we integrate over $k^2$, $y$ and $\phi$ for the two constituent jets, with the standard factor[^5] $1/[4 (2\pi)^3]$ for each. The subjets are combined to make a jet $J$ described by $k_J^2$, $y_J$ and $\phi_J$. 
We integrate over these variables with the standard factor $1/[4 (2\pi)^3]$. This leaves variables $\mu_J$, $z$ and $\varphi$ that describe the splitting. Integration over these variables comes with a factor $1/[4(2\pi)^3]$ and a jacobian $J$. In Eq. (\[eq:Iresult\]), a “strong ordering” approximation applies for jet masses, $\mu_A \ll \mu_J$ and $\mu_B \ll \mu_J$. In turn, $\mu_J$ is small compared to $k_A$, $k_B$ and $k_J$. For this reason, it is a sufficient approximation to set $\mu_A = \mu_B = 0$ in $J$. In the appendix of this paper, we calculate $J$ with $\mu_A = \mu_B = 0$. We find a quite simple result, $$J = \frac{\sinh^2(\Delta y/2) + (1 + \mu_J^2/k_J^2)\sin^2(\Delta\phi/2)} {\sinh^2(\Delta y/2)\cosh^2(\Delta y/2) + (1 + \mu_J^2/k_J^2)\sin^2(\Delta\phi/2)\cos^2(\Delta\phi/2)} \;\;. $$ This result is even simpler when $\Delta y$ and $\Delta\phi$ are small. Since $\cosh(\Delta y/2) \approx 1$ and $\cos(\Delta\phi/2) \approx 1$ for small angles, we have $$J \approx 1 $$ for small angles. Splitting probability and the Sudakov exponent ---------------------------------------------- We will insert a splitting probability into each integration over the splitting variables, so that the splitting probability differential in the splitting variables $\mu_J^2, z ,\varphi$ is $$\begin{split} \label{eq:DifferentialSplittingProbability} d{\cal P} = {}& \frac{1}{4(2\pi)^{3}} d\mu_J^2\ dz\ d\varphi\ H e^{-S} \end{split}$$ Here we have approximated the jacobian $J$ by its small angle form, $J\approx 1$. We also use small angle approximations in $H$, as in our expressions in Sec. \[sec:finalshower\]. For instance, we take $k_A/k_J \approx z$ and $k_B/k_J \approx (1-z)$. The corresponding total splitting probability is $$\begin{split} \label{eq:splitprobability} \int d{\cal P} = {}& \frac{1}{4(2\pi)^3} \int\!d\mu_J^2 \int\!dz \int\!d\varphi\ H e^{-S} \;\;. 
\end{split}$$ Here $H$ is the conditional splitting probability for a mother parton to split if it has not split at a higher virtuality than $\mu_J^2$ and $e^{-S}$ is the probability, derived from $H$, that the mother parton has not split at a higher virtuality. Given the physical meaning of the Sudakov factor, one would like $$\label{eq:SudakovExponent0} S \approx \frac{1}{4(2\pi)^3} \int\!d\bar\mu_J^2 \,\Theta(\mu_J^2 < \bar\mu_J^2) \int\!d\bar z \int\!d\bar\varphi\ H(\bar p_A,\bar p_B)\, \Theta(\{\bar p_A,\bar p_B\} \in {\rm fat\ jet}) \;\;. $$ Here $\bar p_A$ and $\bar p_B$ denote the momenta of the daughter partons in a possible splitting and $\bar\mu_J^2$, $\Delta \bar y$, and $\Delta \bar\phi$ denote parameters of the possible splitting. The theta function $\Theta(\{\bar p_A,\bar p_B\} \in {\rm fat\ jet})$ is present for the following reason. Parton $J$ has, in each interval of virtuality $d\bar\mu_J^2$, a probability to emit a soft, wide angle gluon that is not seen because it is outside the boundary of the fat jet. The probability for emission of such a ghost gluon is most substantial when the color connected partner for the emission is itself outside the fat jet. Fortunately, the momentum of the emitted ghost gluon is small, since it must be a soft, wide angle gluon. Thus it is a sensible approximation to ignore this momentum loss. Since we cannot see the ghost emissions, we ignore them completely. This means that we ignore them in the Sudakov exponent $S$ by integrating only over splittings in which both daughter partons are in the fat jet. Sudakov exponent for gluon splitting {#sec:sudakov} ------------------------------------ As stated in the previous subsection, the Sudakov factor is the probability that the mother parton $J$ did not split at a virtuality above $\mu_J^2$. 
Thus the Sudakov factor is $\exp(-S)$, where $S$ is the probability for the mother parton to have split at a value of $\mu_J$ that is greater than the value at which the splitting did, in fact, occur. The corresponding Sudakov factors are associated with the propagators in our shower history diagrams. For instance, for a gluon, the factor $\exp(-S_g)$ is indicated in Fig. \[fig:Sudakovggg\]. There are three contributions to $S_g$, corresponding to $g \to g + g$, $g \to q + \bar q$, and $g \to b + \bar b$. Note that the total $S_g$ appears in $\exp(-S_g)$ independently of whether the gluon ultimately decays to $g + g$, $q + \bar q$, or $b + \bar b$. In this section, we work out the contribution from $g \to g + g$. ![Sudakov factor between final state splittings for a gluon.[]{data-label="fig:Sudakovggg"}](SudakovgNew.pdf){width="4.0cm"} We start with Eq. (\[eq:splittingH\]) for $H_{ggg}$, with $k_s/k_J$ replaced by $z$ and $k_h/k_J$ replaced by $(1-z)$ in the case that the label $s$ of the softer daughter parton is $s=A$, or with $k_h/k_J$ replaced by $z$ and $k_s/k_J$ replaced by $(1-z)$ in the case that $s=B$. Since $H$ is symmetric under $k_s \leftrightarrow k_h$, the choice of $s$ does not affect the form of the result. However, now $s = A$ corresponds to $z < 1/2$ and $s = B$ corresponds to $z > 1/2$. This gives $$\label{eq:H0} H_{ggg} \approx 8\pi C_{\mathrm{A}}\,\frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2)}{\bar\mu_J^2}\, \frac{[1 - z(1-z)]^2}{z (1-z)} \frac{\theta_{hk}^2} {\theta_{sh}^2 + \theta_{sk}^2}\ \Theta\!\left(2\,\frac{\bar\mu_J^2}{k_J} < \frac{\mu_K^2}{k_K}\right) \;\;. $$ In the angular factor ${\theta_{hk}^2}/ {[\theta_{sh}^2 + \theta_{sk}^2]}$, we use the notation from Eq. (\[eq:thetasqdef\]) that $\theta_{\alpha\beta}^2 = (y_\alpha - y_\beta)^2 + (\phi_\alpha - \phi_\beta)^2$. The angular factor is one for small angles $\theta_{sh}$ and is small when $\theta_{sh} \gg \theta_{hk}$.
Thus it is approximately a theta function that requires $\theta_{sh} < \theta_{hk}$. Here $\theta_{hk}$ is approximately the angle $\theta_{k(s)}$ between the mother parton and the parton $k(s)$ that carries the color line of the mother parton that is carried by the emitted soft parton. Thus we replace $$\label{eq:anglereplacement} \frac{\theta_{hk}^2} {\theta_{sh}^2 + \theta_{sk}^2} \to \Theta(\theta < \theta_{k(s)}) \;\;. $$ This is the angle-ordering approximation to the true dipole matrix element [@MarchesiniWebber]. It is a rather crude approximation locally in angle space, but is a pretty good approximation after integrating from large $\theta$ to small $\theta$. With this approximation, we have $$\label{eq:H1} H_{ggg} \approx 8\pi C_{\mathrm{A}}\,\frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2)}{\bar\mu_J^2}\, \frac{[1 - z(1-z)]^2}{z (1-z)} \Theta\big(\theta_{sh}^2 < \theta_{k(\bar s)}^2\big)\, \Theta\!\left(2\,\frac{\bar \mu_J^2}{k_J} < \frac{\mu_K^2}{k_K}\right) \;\;. $$ We can translate the restrictions on $\theta_{sh}$ to restrictions on $z$. From Eq. (\[eq:z1mz\]) of the appendix, we have, in the limit of small angles, $$\label{eq:mutotheta} \frac{\mu_J^2}{k_J^2} \approx z(1-z)\, \theta^2_{sh} \;\;. $$ Thus for $z < 1/2$ the relation $\theta_{sh}^2 < \theta_{k(\bar s)}^2$ becomes $$z(1-z) > \frac{1}{\theta_{k(A)}^2}\, \frac{\bar \mu_J^2}{k_J^2} \;\;. $$ Presuming that the right hand side of this inequality is much smaller than 1, we can simplify this approximately to $$z > \frac{1}{\theta_{k(A)}^2}\, \frac{\bar \mu_J^2}{k_J^2} \;\;. $$ Similarly, we have a restriction on how small $(1-z)$ can be, $$(1-z) > \frac{1}{\theta_{k(B)}^2}\, \frac{\bar \mu_J^2}{k_J^2} \;\;. $$ These inequalities can be combined as $$\frac{1}{\theta_{k(A)}^2}\, \frac{\bar \mu_J^2}{k_J^2} < z < 1 - \frac{1}{\theta_{k(B)}^2}\, \frac{\bar \mu_J^2}{k_J^2} \;\;. 
$$ Thus $$\begin{split} \label{eq:H2} H_{ggg} \approx {}& 8\pi C_{\mathrm{A}}\,\frac{ {\alpha_{\mathrm{s}}}(\bar\mu_J^2)}{\bar\mu_J^2}\, \frac{[1 - z(1-z)]^2}{z (1-z)} \\ & \times \Theta\!\left(\frac{1}{\theta_{k(A)}^2}\, \frac{\bar \mu_J^2}{k_J^2} < z < 1 - \frac{1}{\theta_{k(B)}^2}\, \frac{\bar \mu_J^2}{k_J^2}\right)\, \Theta\!\left(2\,\frac{\bar\mu_J^2}{k_J} < \frac{\mu_K^2}{k_K}\right) \;\;. \end{split}$$ For the theta function $\Theta(\{\bar p_A,\bar p_B\} \in {\rm fat\ jet})$ in Eq. (\[eq:SudakovExponent0\]), we note that if $\theta_{k(s)}$ is much smaller than the fat jet radius $R_{\mathrm{F}}$, the theta function that imposes angular ordering, $\Theta(\theta^2 < \theta_{k(s)}^2)$, will almost always enforce that $\bar p_A$ and $\bar p_B$ are in the fat jet, so that $\Theta(\{\bar p_A,\bar p_B\} \in {\rm fat\ jet}) = 1$. On the other hand, sometimes there is no color connected parton with label $k(\bar s)$ in the fat jet. Then we use Eq. (\[eq:splittingHnopartner\]), which effectively defines $\theta_{k(\bar s)} = \infty$. In this case, the theta function $\Theta(\{\bar p_A,\bar p_B\} \in {\rm fat\ jet})$ limits $\theta$ to a maximum value on the order of the fat jet radius $R_{\mathrm{F}}$. We take a simple approximation and replace $\Theta(\{\bar p_A,\bar p_B\} \in {\rm fat\ jet})$ by $\Theta(\theta^2 < R_0^2)$, where $R_0$ is an adjustable parameter with default value $R_0 = R_{\mathrm{F}}$. Thus we understand that we should make the replacement $$\label{eq:nokreplacement} \theta_{k(\bar s)} \to R_0 $$ when there is no color connected parton $k(s)$. In the case that parton $J$ is the parton that has no mother parton $K$ because it originates a jet, we use Eq. (\[eq:splittingHnomother\]) for $H$. This amounts to making the replacement $$\label{eq:noKreplacement} \frac{\mu_K^2}{2k_K} \to k_J $$ when there is no mother parton $K$. With these approximations for $H$, we can insert $H_{ggg}$ into Eq.
(\[eq:SudakovExponent0\]) to obtain $$\begin{split} \label{eq:SudakovExponent4} S_{ggg} \approx{}& \int\! \frac{d\bar\mu_J^2}{\bar\mu_J^2}\ \Theta\!\left(\mu_J^2 < \bar\mu_J^2 < \frac{k_J}{2k_K}\,\mu_K^2\right)\, \frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2)}{2\pi} \\&\times \int\!dz\ \Theta\!\left(\frac{1}{\theta_{k(A)}^2}\, \frac{\bar \mu_J^2}{k_J^2} < z < 1 - \frac{1}{\theta_{k(B)}^2}\, \frac{\bar \mu_J^2}{k_J^2}\right)\, C_{\mathrm{A}}\, \frac{[1 - z(1-z)]^2}{z (1-z)} \;\;, \end{split}$$ where we understand that we are to make the replacement (\[eq:nokreplacement\]) in the case that there is no color connected parton $k(s)$ and the replacement (\[eq:noKreplacement\]) in the case that there is no mother parton $K$. Here we have performed the integration over $\varphi$ since, with our approximations, the integrand does not depend on $\varphi$. Note the structure of this. We integrate half the DGLAP kernel over $\bar\mu_J^2$ and $z$, with limits on the $z$ integral from the angular ordering approximation to the quantum coherence of soft gluon emission from color dipoles. We have half of the DGLAP kernel for $g \to g + g$ because we are integrating over the phase space for two identical particles and need a statistical factor 1/2. We can perform the integration over $z$, giving $$\begin{split} \label{eq:SudakovExponent5} S_{ggg} \approx{}& 2 C_{\mathrm{A}}\int\! \frac{d\bar\mu_J^2}{\bar\mu_J^2}\ \Theta\!\left(\mu_J^2 < \bar\mu_J^2 < \frac{k_J}{2k_K}\,\mu_K^2\right)\, \frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2)}{2\pi} \left[ \log\left(\theta_{k(A)}\theta_{k(B)}k_J^2/\bar \mu_J^2\right) - \frac{11}{12} \right] \;\;. \end{split}$$ Here we have omitted terms that are suppressed by a power of $\bar \mu_J^2/[k_J^2 \theta_{k(A)}^2]$ or $\bar \mu_J^2/[k_J^2 \theta_{k(B)}^2]$. 
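The $z$ integration that produces the $-11/12$ constant can be verified numerically. In the following sketch, the cutoff values are arbitrary small numbers standing in for $(\bar\mu_J^2/k_J^2)/\theta_{k(A)}^2$ and $(\bar\mu_J^2/k_J^2)/\theta_{k(B)}^2$:

```python
import math

def integrand(z):
    """Half of the g -> g + g DGLAP kernel, without the color factor."""
    return (1.0 - z * (1.0 - z)) ** 2 / (z * (1.0 - z))

def midpoint(f, a, b, n=400000):
    """Midpoint-rule integration of f on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# z limits from the angular-ordering theta function
a, b = 1e-4, 2e-4
numeric = midpoint(integrand, a, 1.0 - b)

# Integrand of Eq. (eq:SudakovExponent5): 2 [log(theta_A theta_B k_J^2/mu^2) - 11/12],
# where theta_A theta_B k_J^2 / mu^2 = 1/sqrt(a*b)
analytic = 2.0 * (math.log(1.0 / math.sqrt(a * b)) - 11.0 / 12.0)
print(numeric, analytic)  # agree up to terms suppressed by powers of a and b
```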
We can perform the integration over $\bar \mu$ by changing variables to ${\alpha_{\mathrm{s}}}$ using $$\label{eq:asevolution} \frac{d{\alpha_{\mathrm{s}}}(\mu^2)}{d\log(\mu^2)} = - b_0\, {\alpha_{\mathrm{s}}}(\mu^2)^2 \;\;, $$ where $b_0 = ({33 - 2 n_{\mathrm{f}}})/({12\pi})$. We take the number of flavors to be $n_{\mathrm{f}}= 5$. We write $$\label{eq:asresult} \log\left( \frac{\theta_{k(A)}\theta_{k(B)}k_J^2}{\bar\mu_J^2}\right) = \frac{1}{b_0} \left[ \frac{1}{{\alpha_{\mathrm{s}}}\!\left(\theta_{k(A)}\theta_{k(B)} k_J^2 \right)} - \frac{1}{{\alpha_{\mathrm{s}}}\!\left(\bar\mu_J^2\right)} \right] \;\;. $$ This gives $$\begin{split} \label{eq:SudakovExponent6} S_{ggg} \approx{}& \frac{C_{\mathrm{A}}}{\pi b_0^2} \biggl\{ \log\!\left(\frac{{\alpha_{\mathrm{s}}}\!\left(\mu_J^2\right)} {{\alpha_{\mathrm{s}}}\!\left(k_J\mu_K^2/(2 k_K)\right)}\right) \left[\frac{1}{{\alpha_{\mathrm{s}}}(\theta_{k(A)}\theta_{k(B)}k_J^2)} - \frac{11 b_0}{12} \right] \\&\quad + \frac{1}{{\alpha_{\mathrm{s}}}\!\left(\mu_J^2\right)} - \frac{1}{{\alpha_{\mathrm{s}}}\!\left( k_J\mu_K^2/(2 k_K)\right)} \biggr\} \;\;. \end{split}$$ Since $\mu_J^2 < k_J\mu_K^2/(2 k_K)$, this quantity is positive as long as the partner angles $\theta_{k(A)}$ and $\theta_{k(B)}$ are not too small. However, since $S < 0$ is unphysical, we replace $S_{ggg} \to S_{ggg} \Theta(S_{ggg} > 0)$ just to be sure that we are never enhancing an unphysical region by having $e^{-S} > 1$. We also evaluate the Sudakov exponent for a $g \to q + \bar q$ splitting. Here we use $H_{g\bar q q}$ from Eq. (\[eq:splittinggtob\]). This gives $$\label{eq:SudakovExponentgqq1} S_{g\bar q q} \approx \int\! \frac{d\bar\mu_J^2}{\bar\mu_J^2}\, \Theta\!\left(\mu_J^2 < \bar\mu_J^2 < \frac{k_J}{2k_K}\,\mu_K^2\right) \frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2) }{2\pi} \int\!dz\ \,T_{\mathrm{R}}\, [{z^2 + (1-z)^2}] \;\;. $$ We can perform the $z$-integration to give $$\label{eq:SudakovExponentgqq2} S_{g\bar q q} \approx \frac{2T_{\mathrm{R}}}{3} \int\!
d\bar\mu_J^2\, \Theta\!\left(\mu_J^2 < \bar\mu_J^2 < \frac{k_J}{2k_K}\,\mu_K^2\right) \frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2) }{2\pi}\, \frac{1}{\bar\mu_J^2} \;\;. $$ Then, we can perform the $\bar \mu^2$ integration using Eq. (\[eq:asevolution\]) to give $$\label{eq:SudakovExponentgqq3} S_{g\bar q q} \approx \frac{T_{\mathrm{R}}}{3 \pi b_0}\, \log\!\left(\frac{{\alpha_{\mathrm{s}}}(\mu_J^2)}{{\alpha_{\mathrm{s}}}(k_J\mu_K^2/(2 k_K))}\right) \;\;. $$ Adding $S_{ggg}$ and one copy of $S_{g\bar q q}$ for each quark flavor, including the $b$-quark, we obtain the complete Sudakov exponent for gluon splitting $$\label{eq:totalSg} S_g = S_{ggg}\, \Theta(S_{ggg} > 0) + n_{\mathrm{f}}S_{g\bar q q} \;\;. $$ ![Sudakov factor between final state emission of a gluon from a quark or antiquark. The quark or antiquark flavor can be $b$ or $u$, $d$, $s$ or $c$. The previous splitting can be either a gluon emission, a $g \to q + \bar q$ or $g \to b + \bar b$ splitting or a Higgs boson decay to $b + \bar b$.[]{data-label="fig:Sudakovq"}](Sudakovq.pdf){width="7.0cm"} Sudakov exponent for quark splitting {#sec:sudakovb} ------------------------------------ The Sudakov factor for a quark splitting is illustrated in Fig. \[fig:Sudakovq\]. The corresponding Sudakov exponent is given by Eq. (\[eq:SudakovExponent0\]) using $H_{qqg}$ from Eq. (\[eq:splittingHqqg\]). In $H_{qqg}$ we replace the angular factor ${\theta_{qk}^2}/ [{\theta_{gq}^2 + \theta_{gk}^2}]$ by $\Theta(\theta < \theta_{k})$ as in Eq. (\[eq:anglereplacement\]). In turn, the restriction on $\theta$ amounts to a restriction on $z$, $$(1-z) > \frac{1}{\theta_{k}^2}\, \frac{\bar \mu_J^2}{k_J^2} \;\;. $$ This gives $$\begin{split} \label{eq:SudakovExponentqqg1} S_{qqg} \approx {}& \int\!
\frac{d\bar\mu_J^2}{\bar\mu_J^2}\, \Theta\!\left(\mu_J^2 < \bar\mu_J^2 < \frac{k_J}{2k_K}\,\mu_K^2\right) \frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2)}{2\pi} \int\!dz\ \Theta\left((1-z) > \frac{1}{\theta_{k}^2}\, \frac{\bar \mu_J^2}{k_J^2}\right) \\&\times C_{\mathrm{F}}\, \frac{1+z^2}{1-z} \;\;. \end{split}$$ We can perform the $z$-integration to obtain $$\begin{split} \label{eq:SudakovExponentqqg2} S_{qqg} \approx {}& 2C_{\mathrm{F}}\int\! \frac{d\bar\mu_J^2}{\bar\mu_J^2}\, \Theta\!\left(\mu_J^2 < \bar\mu_J^2 < \frac{k_J}{2k_K}\,\mu_K^2\right) \frac{{\alpha_{\mathrm{s}}}(\bar\mu_J^2)}{2\pi}\ \left[\log\left(\theta_k^2\,{k_J^2}/{\bar \mu_J^2}\right) - \frac{3}{4} \right] \;\;. \end{split}$$ Here we have neglected terms suppressed by a power of ${\bar \mu_J^2}/({k_J^2}\theta_k^2)$. We can now use Eqs. (\[eq:asevolution\]) and (\[eq:asresult\]) to perform the $\bar \mu_J^2$ integration, giving $$\begin{split} \label{eq:SudakovExponentqqg3} S_{qqg} \approx{}& \frac{C_{\mathrm{F}}}{\pi b_0^2} \biggl\{ \log\!\left(\frac{{\alpha_{\mathrm{s}}}\!\left(\mu_J^2\right)} {{\alpha_{\mathrm{s}}}\!\left(k_J\mu_K^2/(2 k_K)\right)}\right) \left[\frac{1}{{\alpha_{\mathrm{s}}}(\theta_{k}^2 k_J^2)} - \frac{3 b_0}{4} \right] \\&\quad + \frac{1}{{\alpha_{\mathrm{s}}}\!\left(\mu_J^2\right)} - \frac{1}{{\alpha_{\mathrm{s}}}\!\left( k_J\mu_K^2/(2 k_K)\right)} \biggr\} \;\;, \end{split}$$ As in the case of gluon splitting, it is possible that, after our approximations, $S_{qqg}$ is negative. Since $S < 0$ is unphysical, we define the complete Sudakov exponent for a quark to be $$S_q = S_{qqg}\,\Theta(S_{qqg} > 0) $$ just to be sure that we are never enhancing an unphysical region by having $e^{-S} > 1$. Sometimes there is no color connected parton with label $k$ in the fat jet. Then, as in Eq. (\[eq:nokreplacement\]) for $S_g$, we make the replacement $\theta_{k} \to R_0$. 
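Both closed forms above follow from the one-loop running of ${\alpha_{\mathrm{s}}}$. Here is a numerical cross-check of the quark result, Eq. (\[eq:SudakovExponentqqg3\]), using a hypothetical one-loop coupling and arbitrary illustrative scales (all values are assumptions of this sketch):

```python
import math

n_f = 5
b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
C_F = 4.0 / 3.0
LAMBDA2 = 0.04  # hypothetical Lambda_QCD^2 in GeV^2

def alpha_s(mu2):
    """One-loop running coupling (an assumption of this sketch)."""
    return 1.0 / (b0 * math.log(mu2 / LAMBDA2))

# illustrative scales in GeV^2: final virtuality, upper limit k_J mu_K^2/(2 k_K),
# and the partner-angle scale theta_k^2 k_J^2
muJ2, M2, T2 = 25.0, 400.0, 900.0

def S_numeric(n=200000):
    """Midpoint integration of Eq. (eq:SudakovExponentqqg2) in log(mu^2)."""
    lo, hi = math.log(muJ2), math.log(M2)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        mu2 = math.exp(lo + (i + 0.5) * h)
        total += 2.0 * C_F * alpha_s(mu2) / (2.0 * math.pi) \
                 * (math.log(T2 / mu2) - 0.75)
    return total * h

def S_closed():
    """Closed form of Eq. (eq:SudakovExponentqqg3)."""
    aJ, aM, aT = alpha_s(muJ2), alpha_s(M2), alpha_s(T2)
    return (C_F / (math.pi * b0 ** 2)) * (
        math.log(aJ / aM) * (1.0 / aT - 0.75 * b0) + 1.0 / aJ - 1.0 / aM)

print(S_numeric(), S_closed())  # the two should agree closely
```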
After the last splitting ------------------------ If, in the shower history $h$, parton $J$ does not split, then we look at its virtuality $\mu_J^2$ and include a factor $e^{-S_g}$ or $e^{-S_q}$, as illustrated in Fig. \[fig:SudakovEnd\], that represents the probability for parton $J$ not to have split at a virtuality above the final virtuality $\mu_J^2$. In principle, we should also include a factor $\int\! dH$ representing the probability that parton $J$ did finally split at virtuality $\mu_J^2$. We do not know the splitting angle $\theta$ for this splitting. We do know that $\theta$ was less than $R_{\rm microjet}$, the radius parameter for the $k_T$-jet algorithm that we used to define the microjets: if $\theta$ were larger than $R_{\rm microjet}$, the jet algorithm would not have merged the daughter partons to form the microjet. Thus we would calculate $\int\! dH$ by integrating the differential splitting function over the region $\theta < R_{\rm microjet}$. We do not, in fact, include a splitting factor $\int\! dH$ because this factor is independent of the shower history $h$ and independent of whether we are looking at signal histories or background histories. Thus it cancels from $\chi$. Since we do not need this factor, we do not calculate it. ![Sudakov factors for partons with no further splittings.[]{data-label="fig:SudakovEnd"}](SudakovEndNew.pdf){width="10.0cm"} Sudakov factor for initial state emissions {#ISsudakov} ------------------------------------------ What are the Sudakov factors for the initial state emissions? The initial state emissions can conveniently be ordered according to the value of $k_J^2$. The Sudakov exponent to go from a previous emission scale $k_K^2$ to the new scale $k_J^2$ without a visible initial state emission is, using Eq. (\[eq:ISemission\]), $$\begin{split} S ={}& \frac{2}{(2\pi)^2} \int_{k_J^2}^{k_K^2}\! 
d \bar k^2 \left[\frac{C_{\mathrm{A}}}{2}\ \frac{{\alpha_{\mathrm{s}}}(\bar k^2 + \kappa_{\mathrm{p}}^2)}{\bar k^2 + \kappa_{\mathrm{p}}^2}\ \frac{1}{(1 + c_R\, \bar k/Q)^{n_R}} + \frac{c_{\rm np}(\kappa_{\rm np}^2)^{n_{\rm np} - 1}} {[\bar k^2 + \kappa_{\rm np}^2]^{n_{\rm np}}} \right] \\ & \times \int\! d\bar y \int\! d\bar \phi\ \Theta(\bar p \in {\rm fat\ jet}) \;\;. \end{split}$$ Here we only count emissions into the region in which the decay products of the emitted parton will be seen as part of the fat jet. Approximately, we can take $$\int\! d\bar y \int\! d\bar \phi\ \Theta(\bar p \in {\rm fat\ jet}) = \pi R_{{\mathrm{F}}}^2 \;\;, $$ where $R_{\mathrm{F}}$ is the radius parameter that defines the fat jet. Then $$\begin{split} S ={}& \frac{R_{\mathrm{F}}^2}{2\pi} \int_{k_J^2}^{k_K^2}\! d \bar k^2 \left[\frac{C_{\mathrm{A}}}{2}\ \frac{{\alpha_{\mathrm{s}}}(\bar k^2 + \kappa_{\mathrm{p}}^2)}{\bar k^2 + \kappa_{\mathrm{p}}^2}\ \frac{1}{(1 + c_R\, \bar k/Q)^{n_R}} + \frac{c_{\rm np}(\kappa_{\rm np}^2)^{n_{\rm np} - 1}} {[\bar k^2 + \kappa_{\rm np}^2]^{n_{\rm np}}} \right] \;\;. \end{split}$$ The initial state shower starts at the transverse momentum scale $Q^2/4$, where $Q^2$ is defined in Eq. (\[eq:Qsqdef\]) and represents the scale of the hard interaction. It ends at a scale $k_{\rm cut}^2$, where $k_{\rm cut}$ is the smallest transverse momentum of a microjet that can register in the detector, for instance $k_{\rm cut} = 0.5\ {\rm GeV}$. In general, there are multiple initial state emissions. We get a Sudakov factor for each one, times a factor for not having an emission between the last one and $k_{\rm cut}^2$. The product of these is $\exp(-S_{\rm IS})$ where $$\begin{split} S_{\rm IS} ={}& \frac{R_{\mathrm{F}}^2}{2\pi} \int^{Q^2/4}_{k_{\rm cut}^2}\!
d \bar k^2 \left[\frac{C_{\mathrm{A}}}{2}\ \frac{{\alpha_{\mathrm{s}}}(\bar k^2 + \kappa_{\mathrm{p}}^2)}{\bar k^2 + \kappa_{\mathrm{p}}^2}\ \frac{1}{(1 + c_R\, \bar k/Q)^{n_R}} + \frac{c_{\rm np}(\kappa_{\rm np}^2)^{n_{\rm np} - 1}} {[\bar k^2 + \kappa_{\rm np}^2]^{n_{\rm np}}} \right] \;\;. \end{split}$$ The factor $\exp(-S_{\rm IS})$ is independent of the splitting values $k^2_{J_A}$, $k^2_{J_B}$, …, $k^2_{J_n}$. It does depend on the hard scattering scale $Q^2$, which varies from event to event. However, note that $Q^2$ is independent of the shower history and is the same for shower histories that represent background and signal processes. Thus the factor $\exp(-S_{\rm IS})$ will cancel exactly between signal and background factors in our observable $\chi$, so we can simply replace $$\exp(-S_{\rm IS}) \to 1 \;\;. $$ Higgs decay probability {#sec:Higgsdecay} ======================= A light Higgs boson decays most often into $b + \bar{b}$. Since we consider only the $b + \bar{b}$ decay mode, it suffices to treat the Higgs boson as if it always decayed to $b + \bar{b}$. In the sections on splittings in a parton shower, we have specified a conditional splitting probability $H$, the probability for a splitting at a given virtuality $\mu_J^2$ if the parton has not split at a higher $\mu_J^2$. The total splitting probability is then $H e^{-S}$, where $e^{-S}$ is the probability that the parton has not split at a higher $\mu_J^2$. In this section, for the Higgs decay, we specify the total decay probability $H e^{-S}$, depicted in Fig. \[fig:SplittingHbb\]. The light Higgs boson is a very narrow object. In the narrow width approximation, the differential decay probability is $$H e^{-S} = 16\pi^2\,\delta(m_{b\bar b}^2 - m_H^2) \;\;. $$ The normalization is arranged so that the total probability that the Higgs decays, using the integration measure in Eq. (\[eq:splitprobability\]), is 1: $$\frac{1}{4(2\pi)^3}\int dm_{b\bar b}^2\int dz\int d\varphi\ H e^{-S} = 1 \;\;.
$$ Although a low mass Higgs boson is a very narrow object, the precision of its mass reconstruction is limited by detector resolution effects and by the loss of momentum resolution caused by grouping final state particles into microjets. To take these issues into account, we treat the Higgs boson decay as if the invariant mass of its decay products can be anything within a $\pm \Delta m_H$ window around the physical Higgs mass, $m_H$. Thus we artificially modify the differential decay probability to $$\label{eq:Higgsdecay} H e^{-S} = 16\pi^2\, \frac{\Theta(|m_{b\bar b} - m_H| < \Delta m_H)} {4 m_H \,\Delta m_H} \;\;. $$ Our default value for $\Delta m_H$ is 10 GeV. $b$-tags {#sec:btags} ======== We have described in Sec. \[sec:FinalStateVariables\] how we assign $b$-tags T, F, or [none]{} to microjets produced by [Pythia]{} or [Herwig]{} in a way that mimics imperfect $b$-tagging in an experiment. Tags T or F are assigned only to microjets that are among the three highest $p_T$ microjets in the event and, additionally, have $p_T > p_T^{\rm tag}$, where we take $p_T^{\rm tag} = 15\ {\rm GeV}$. In this section, we examine how to assign probabilities that a given $b$-tag value will be generated in the simplified shower. We seek to simulate the probabilities with which the algorithm specified above generates $t_j$ values T, F, or [none]{} when operating on events generated by the full [Pythia]{} or [Herwig]{}. We suppose that we are given a microjet state, with momenta $p_j$ for each microjet and with a T or F $b$-tag for each microjet that has large enough transverse momentum. We need to estimate the probability $P_j({\mathrm{T}})$ that microjet $j$ receives a tag $t_{j} = {\mathrm{T}}$ and the probability $P_j({\mathrm{F}})$ that microjet $j$ receives a tag $t_{j} = {\mathrm{F}}$. 
Then if, in fact, $t_{j} = {\mathrm{T}}$, we include in $P(\{p,t\}_N|{\mathrm{S}},h)$ (for a signal history $h$) or $P(\{p,t\}_N|{\mathrm{B}},h)$ (for a background history $h$) a factor $P_j({\mathrm{T}})$. If $t_{j} = {\mathrm{F}}$, we include a factor $P_j({\mathrm{F}})$. How should we calculate $P_j({\mathrm{T}})$ and $P_j({\mathrm{F}})$? We note that the situation is simpler than for a real [Pythia]{} or [Herwig]{} shower because each microjet consists of precisely one parton and each parton $i$ has a definite flavor $f_i$, which can be $b$ or $\bar b$ or a flavor that is not $b$ or $\bar b$, namely $q$, $\bar q$, or $g$. We make the definition as follows, using the probabilities $P({\mathrm{T}}|b)$ and $P({\mathrm{T}}|{{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}}b)$ defined in Sec. \[sec:FinalStateVariables\]: > $\bullet$ If a microjet $j$ is a $b$ or $\bar b$ quark, then we say that $t_{j} = {\mathrm{T}}$ with a probability $P_j({\mathrm{T}}) = P({\mathrm{T}}|b)$ and $t_{j} = {\mathrm{F}}$ with a probability $P_j({\mathrm{F}}) = 1-P({\mathrm{T}}|b)$. > > $\bullet$ If microjet $j$ is not a $b$ or $\bar b$ quark, then we say that $t_{j} = {\mathrm{T}}$ with a probability $P_j({\mathrm{T}}) = P({\mathrm{T}}|{{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}}b)$ and $t_{j} = {\mathrm{F}}$ with a probability $P_j({\mathrm{F}}) = 1-P({\mathrm{T}}|{{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}}b)$. Constructing shower histories {#ConstructingHistories} ============================= We have now described how to calculate a probability $P(\{p,t\}_N|{\mathrm{S}},h)$ for each signal history $h$ and a probability $P(\{p,t\}_N|{\mathrm{B}},h)$ for each background history $h$. We simply look at the diagram that describes the shower history and associate a factor with each element of the diagram. Now we need to generate shower histories. 
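The per-microjet tagging factors defined in the previous section, which multiply each history's probability, are simple enough to sketch directly. The following is an illustrative sketch, not our actual code; the numerical efficiencies are placeholder assumptions, since the real values of $P({\mathrm{T}}|b)$ and $P({\mathrm{T}}|{\sim}b)$ come from Sec. \[sec:FinalStateVariables\].

```python
# Minimal sketch of the per-microjet b-tag factors.  In the simplified
# shower each microjet is exactly one parton with a definite flavor, so
# P_j(T) depends only on whether that flavor is b or bbar.  The two
# efficiencies below are placeholder values, not numbers from the text.
P_T_GIVEN_B = 0.60      # assumed P(T|b)
P_T_GIVEN_NOTB = 0.02   # assumed P(T|~b)

def tag_probability(flavor, tag):
    """P_j(T) or P_j(F) for a microjet whose parton has the given flavor."""
    p_t = P_T_GIVEN_B if flavor in ("b", "bbar") else P_T_GIVEN_NOTB
    return p_t if tag == "T" else 1.0 - p_t

def history_tag_factor(flavors, tags):
    """Product of per-microjet factors entering P({p,t}_N | S or B, h)."""
    factor = 1.0
    for flavor, tag in zip(flavors, tags):
        factor *= tag_probability(flavor, tag)
    return factor
```

For example, a history that assigns flavors $(b,\bar b,g)$ to three microjets tagged $({\mathrm{T}},{\mathrm{T}},{\mathrm{F}})$ contributes a factor $P({\mathrm{T}}|b)^2\,[1-P({\mathrm{T}}|{\sim}b)]$.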
Because our method for combining daughter jets to form a mother jet is so simple, we can construct a set of possible shower histories in a fairly simple fashion. We begin with a list of the starting microjets. We divide these into two sets in all possible ways. One set consists of decay products of partons emitted as initial state or underlying event radiation; the second consists of the decay products of the parton (a gluon for background or a Higgs boson for signal) that is produced in the hard interaction and creates the bulk of the fat jet. We divide the set of the microjets associated with initial state emissions into any number of non-empty subsets. Each of these subsets is associated with one parton emitted in the initial state. Now consider the set of microjets associated with the hard parton. In a shower history, the hard parton splits into two partons. The first of these eventually splits to make a subset of the final partons. Call this the set $L$. The second of these eventually splits to make the complementary subset of the final partons. Call this the set $R$. Thus to generate the first splitting of the hard parton, we choose the set $L$ and the set $R$. For each possible first splitting, we proceed to the second splittings. We can start with the set $L$. We divide this into subsets $LL$ and $LR$. Each of these choices represents a possible splitting. We can simply continue this way until we reach a parton that consists of exactly one microjet. Each parton emitted in the initial state, as constructed above, consists of a subset of the microjets. If this subset contains more than one microjet, we can divide it into left and right subsets, which describes a splitting of this parton. Again, this process can be continued until we reach a parton that consists of exactly one microjet. Note that each parton in the developing shower history consists of a subset of the microjets. 
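The recursive division into left and right subsets described above can be sketched in a few lines. This is an illustrative reimplementation, not our program: it enumerates every ordered choice of the sets $L$ and $R$ at each splitting and represents a history as a nested tuple.

```python
def splitting_histories(microjets):
    """Enumerate all shower histories of one parton made of `microjets`.

    A history is either a single microjet (a leaf) or a pair
    (L_tree, R_tree), where L and R run over all ordered divisions of
    the microjets into two non-empty complementary subsets, recursively.
    """
    jets = tuple(microjets)
    if len(jets) == 1:
        return [jets[0]]
    histories = []
    n = len(jets)
    for mask in range(1, 2**n - 1):      # every non-empty proper subset as L
        left = tuple(jets[i] for i in range(n) if mask >> i & 1)
        right = tuple(jets[i] for i in range(n) if not mask >> i & 1)
        for l_tree in splitting_histories(left):
            for r_tree in splitting_histories(right):
                histories.append((l_tree, r_tree))
    return histories
```

The number of histories grows very quickly with the number of microjets (2 for two microjets, 12 for three, 120 for four), which is why discarding splittings whose probability vanishes matters in practice.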
Thus we know that the momentum of this parton is $\sum p_i$, summed over this subset. We do not need to know anything about the later shower history of this parton to calculate its momentum. Thus as soon as we have generated a parton splitting, we have the information to calculate the probability for this splitting. The splitting probabilities contain various theta functions that can make the splitting probability equal to zero. When this happens, we can abandon the splitting and try another. Evidently, the shower histories and the corresponding probabilities can be calculated recursively with a simple computer program. That is what we have done. Numerical results {#sec:results} ================= ![ Plot of $s^2/b$ versus $s$, where $s$ and $b$ are defined in Eq. (\[eq:sandbdef\]). We use samples of signal and background events generated by <span style="font-variant:small-caps;">Pythia</span> as in Fig. \[fig:SandBvschi\]. This is the same plot as in Fig. \[fig:Rvss\] except that we plot $s^2/b$ instead of $s/b$. The total signal cross section with the cuts used is $\sigma_{\rm MC}({\mathrm{S}}) = 1.57\ {\rm fb}$. We also show a point corresponding to a signal cross section $\sigma_{\rm BDRS}({\mathrm{S}}) = 0.22\ {\rm fb}$ and background cross section $\sigma_{\rm BDRS}({\mathrm{B}}) = 0.44\ {\rm fb}$ that we obtained using the method of Ref. [@Butterworth:2008iy]. []{data-label="fig:sRvss"}](sRvssAllTags.pdf){width="8.0cm"} We have now seen what shower deconstruction is. In this section, we explore how effective it is for separating signal from background for $p+p \to H + Z + X \to H + \ell^+ + \ell^- + X$. We apply the shower deconstruction method to events generated by [Pythia]{}, with some comparisons using [Herwig]{} also. The event selection was described in Sec. \[sec:EventSelection\]. Suppose that we base our analysis on counting events above a cut $\chi$, using the integrated cross sections $s(\chi)$ and $b(\chi)$ defined in Eq. 
(\[eq:sandbdef\]).[^6] What value of $\chi$ should one choose? If integrated luminosity $\int\! dL$ is available, the expected statistical significance of counting events with $\chi(\{p,t\}_N) > \chi$ is $$\frac{N({\mathrm{S}})}{\sqrt{N({\mathrm{B}})}} = \left[ ({\textstyle\int\! dL})\ \frac{s(\chi)^2}{b(\chi)} \right]^{1/2} \;\;. $$ Thus one would choose the value of $\chi$ that maximizes $s^2/b$. In Fig. \[fig:SandBvschi\], we displayed the $\chi$ distribution for signal and background. We used this information to display $s/b$ as a function of $s$ in Fig. \[fig:Rvss\]. In order to understand the statistical significance of a counting experiment with a simple cut on $\chi$, we have seen above that one wants to look at the maximum of $s^2/b$. For that reason, in Fig. \[fig:sRvss\], we display the information from Fig. \[fig:Rvss\] as a plot of $s^2/b$ versus $s$. We have used here the function $\chi(\{p,t\}_N)$ from our simplified shower algorithm. If we could somehow use $\chi_{\mathrm{MC}}(\{p,t\}_N)$, using the same Monte Carlo that we use to generate events, we would obtain a curve for $s^2/b$ versus $s$ that is everywhere higher. No algorithm could produce a curve above this limiting curve, but we have no way of determining the limiting curve. We see in Fig. \[fig:sRvss\] that one can achieve a fairly good statistical significance with, say, an integrated luminosity of $\int\! dL = 30\ {\rm fb}^{-1}$. With $s^2/b \approx 0.26$ and this luminosity we have $N({\mathrm{S}})/\sqrt{N({\mathrm{B}})} \approx 2.8$. We can compare to the method of Ref. [@Butterworth:2008iy] (BDRS). Applying this method with our data sample, we find a signal cross section $\sigma_{\rm BDRS}({\mathrm{S}}) = 0.22\ {\rm fb}$ and background cross section $\sigma_{\rm BDRS}({\mathrm{B}}) = 0.44\ {\rm fb}$. We have plotted this point in Fig. \[fig:sRvss\]. The corresponding statistical significance with $\int\! dL = 30\ {\rm fb}^{-1}$ is 1.8. 
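The two significance values quoted above follow directly from this formula. As a small check (ours, not part of the analysis), using only the cross sections quoted in the text:

```python
import math

def counting_significance(s_fb, b_fb, lumi_fb):
    """Expected N_S/sqrt(N_B) = sqrt(lumi * s**2 / b) for a counting
    experiment with signal and background cross sections in fb and
    integrated luminosity in fb^-1."""
    return math.sqrt(lumi_fb * s_fb**2 / b_fb)

lumi = 30.0  # fb^-1

# Shower deconstruction working point from the text: s^2/b ~ 0.26 fb.
sig_sd = math.sqrt(lumi * 0.26)

# BDRS point from the text: s = 0.22 fb, b = 0.44 fb.
sig_bdrs = counting_significance(0.22, 0.44, lumi)

print(round(sig_sd, 1), round(sig_bdrs, 1))  # 2.8 1.8
```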
Of course, this analysis ignores all systematic uncertainties. In the analysis presented above, we include events with zero, one, and two $b$-tags. Then shower deconstruction has to overcome a signal to background ratio of about 1/1700 in the complete event sample in order to extract a few events with a signal to background ratio of order 1. One suspects that, in fact, the events with zero or one $b$-tags do not contribute much to the discriminating power of the method. Accordingly, we now explore what happens when we give shower deconstruction an easier job by restricting the event sample to just events in which there are two $b$-tagged microjets among the three microjets with the highest transverse momenta that have, additionally, $p_{T} > 15~\rm{GeV}$. With these cuts, the signal cross section is 0.39 fb and the background cross section is 11 fb. We lose a lot of signal events, but now the signal to background ratio in the event sample is only about 1/30, so the job remaining for shower deconstruction is easier. ![ $d\sigma_{\mathrm{MC}}({\mathrm{B}})/ d\log\chi$ for background events (upper curve) and $d\sigma_{\mathrm{MC}}({\mathrm{S}})/ d\log\chi$ for signal events (lower curve) for samples of signal and background events generated by <span style="font-variant:small-caps;">Pythia</span>. We use the cuts described in Sec. \[sec:EventSelection\] and, in addition, require that at least two of the three highest $p_T$ microjets with $p_T > 15\ {\rm GeV}$ have positive $b$-tags. []{data-label="fig:TwobSandBvschi"}](SBvsChiTwoTags.pdf){width="8.0cm"} In Fig. \[fig:TwobSandBvschi\] we display the functions $d\sigma_{\mathrm{MC}}({\mathrm{S}})/ d\log\chi$ and $d\sigma_{\mathrm{MC}}({\mathrm{B}})/ d\log\chi$ for the two $b$-tag sample. We again find a region with $s>b$. In Fig. \[fig:TwobsRvss\], we display the information from Fig. \[fig:TwobSandBvschi\] as a plot of $s^2/b$ versus $s$. We also show the $s^2/b$ versus $s$ curve from Fig. 
\[fig:sRvss\] for all events with no restriction on $b$-tags and the point that we obtained using the method of Ref. [@Butterworth:2008iy].[^7] We see that for $s \gtrsim 2.5\ {\rm fb}$, $s^2/b$ with the restricted event sample is smaller than it is with the unrestricted event sample. However for $s \lesssim 2.0\ {\rm fb}$, $s^2/b$ with the restricted event sample is about the same as with the unrestricted event sample. ![ Plot of $s^2/b$ versus $s$ for events with at least two $b$-tags among the three highest $p_T$ microjets that have $p_{T} > 15~\rm{GeV}$ in addition. We use samples of signal and background events generated by <span style="font-variant:small-caps;">Pythia</span> as in Fig. \[fig:TwobSandBvschi\]. We also show the curve from Fig. \[fig:sRvss\] for all events with no restriction on $b$-tags (dashed curve) and the point that we obtained using the method of Ref. [@Butterworth:2008iy]. []{data-label="fig:TwobsRvss"}](sRvssTwoTags.pdf){width="8.0cm"} The formulas that define the simplified shower used to construct Fig. \[fig:TwobsRvss\] contain a number of parameters that reflect nonperturbative physics. Among them are $c_{\rm np}$, $\kappa_{\rm np}^2$, $n_{\rm np}$, $c_R$, $n_R$, and $\kappa_{\mathrm{p}}^2$ in Eq. (\[eq:ISemission\]), $N_{\rm pdf}^g$ in Eq. (\[eq:Hstart\]), and $N_{\rm pdf}^H$ in Eq. (\[eq:signalstart\]). There are other parameters like the factor 2 for the hardness cut on splittings in Eq. (\[eq:hardnesscut\]) that could have been set differently. We have not systematically tested whether the performance of shower deconstruction as reflected in Fig. \[fig:TwobsRvss\] is sensitive to the parameter choices, but we have tried some variations. Typically we found that $d\sigma_{\mathrm{MC}}({\mathrm{B}})/ d\log\chi$ for background events and $d\sigma_{\mathrm{MC}}({\mathrm{S}})/ d\log\chi$ for signal events change in the same direction. Thus we find that the curve in Fig. 
\[fig:TwobsRvss\] is not very sensitive to the parameter variations that we tested.[^8] We have used [Pythia]{} [@Pythia] for our comparisons. What would happen if we used [Herwig]{} [@Herwig] instead? We show in Fig. \[fig:HPvschi\] the cross sections $d\sigma_{\mathrm{MC}}({\mathrm{B}})/ d\log\chi$ and $d\sigma_{\mathrm{MC}}({\mathrm{S}})/ d\log\chi$ for two $b$-tag samples of signal and background events generated by <span style="font-variant:small-caps;">Pythia</span> and by <span style="font-variant:small-caps;">Herwig</span>. We have normalized the cross sections within our cuts to be the same for both <span style="font-variant:small-caps;">Pythia</span> and <span style="font-variant:small-caps;">Herwig</span>, so that we are looking at differences in shape rather than normalization. We see that the behaviors obtained with the two event generators are quite similar but that with <span style="font-variant:small-caps;">Herwig</span> a somewhat larger fraction of the background events have large $\chi$. That there are differences is not a surprise since both event generators work at leading order in perturbation theory for their splitting kernels and make approximations with respect to color and spin of partons. One lesson from this is that in experimental applications of shower deconstruction or of other jet substructure measures one will want to test the Monte Carlo cross sections against experiment. In Fig. \[fig:PythiaHerwig\] we compare results from the two $b$-tag sample using <span style="font-variant:small-caps;">Pythia</span> and <span style="font-variant:small-caps;">Herwig</span> for $s^2/b$ as a function of $s$. We also show results using <span style="font-variant:small-caps;">Pythia</span> and <span style="font-variant:small-caps;">Herwig</span> for $s^2/b$ using the BDRS method. For <span style="font-variant:small-caps;">Pythia</span>, these are the results that were exhibited in Fig. \[fig:TwobsRvss\]. 
We see that there is about a 30% difference between <span style="font-variant:small-caps;">Pythia</span> and <span style="font-variant:small-caps;">Herwig</span> results. Again, this level of difference using leading order event generators is not a surprise. ![ $d\sigma_{\mathrm{MC}}({\mathrm{B}})/ d\log\chi$ for background events and $d\sigma_{\mathrm{MC}}({\mathrm{S}})/ d\log\chi$ for signal events for samples of signal and background events generated by <span style="font-variant:small-caps;">Pythia</span> and by <span style="font-variant:small-caps;">Herwig</span>. We use the cuts described in Sec. \[sec:EventSelection\] and, in addition, require that at least two of the three highest $p_T$ microjets with $p_T > 15\ {\rm GeV}$ have positive $b$-tags. The solid (blue) lines are for [Pythia]{} while the dashed (red) lines are for [Herwig]{}. At small $\chi$, the background curves are on the top and the signal curves are on the bottom. []{data-label="fig:HPvschi"}](HerPyVsChi.pdf){width="8.0cm"} ![ Plot of $s^2/b$ versus $s$ for events with two positive $b$-tags. We compare the distribution of $s^2/b$ for events generated with [Pythia]{} as in Fig. \[fig:TwobsRvss\], to the same distribution using events generated with [Herwig]{}. We normalize the total signal and background cross sections with these cuts to be $\sigma_{\rm MC}({\mathrm{S}}) = 0.39\ {\rm fb}$, $\sigma_{\rm MC}({\mathrm{B}}) = 11\ {\rm fb}$. We also show points that we obtained using the method of Ref. [@Butterworth:2008iy]. Using [Pythia]{} we found $\sigma_{\rm BDRS}({\mathrm{S}}) = 0.22\ {\rm fb}$ and $\sigma_{\rm BDRS}({\mathrm{B}}) = 0.44\ {\rm fb}$, as in Fig. \[fig:TwobsRvss\], while using [Herwig]{} we found $\sigma_{\rm BDRS}({\mathrm{S}}) = 0.20\ {\rm fb}$ and $\sigma_{\rm BDRS}({\mathrm{B}}) = 0.49\ {\rm fb}$. 
[]{data-label="fig:PythiaHerwig"}](HerwigPythia.pdf){width="8.0cm"} Conclusions {#sec:conclusions} =========== We have proposed a method, shower deconstruction, for separating signal and background events when we have a definite theory in mind for the signal as well as for the standard model background with the signal process omitted. We have explained the method using a simple signal process, $p+p \to H + Z + X \to H + \ell^+ + \ell^- + X$. Here the event selection is chosen so that the Higgs boson that we hope to find is boosted to a substantial transverse momentum. The shower deconstruction method itself is quite general and could be applied to signal processes with more structure or perhaps to signal processes in which the sought massive objects are not highly boosted. The idea of shower deconstruction can be described in very few words. With data at hand, one begins by clustering final state particles in a region of the detector (the “fat jet” in our example) into much smaller jets, the microjets, using the $k_T$-jet algorithm. Alternatively one could use some other jet algorithm or one could use topological clusters defined directly using the calorimetry of the experiment. This gives a fairly fine grained description of the event, with the momenta $p_i$ and possibly flavor tags $t_i$ for each microjet. In order to keep within reasonable bounds for computer resources, one can limit the number $N$ of microjets by discarding the lowest transverse momentum microjets as necessary. One wants to be fine grained enough to see not only the direct decay products of a sought heavy particle but also gluon radiation that reflects the color structure of the signal or background final state. Then one computes approximately the probability $P(\{p,t\}_N|{\mathrm{S}})$ to obtain the observed microjet state $\{p,t\}_N$ from the signal process and the probability $P(\{p,t\}_N|{\mathrm{B}})$ to obtain the microjet state from a background process. 
We construct the observable $\chi(\{p,t\}_N) = {P(\{p,t\}_N|{\mathrm{S}})}/{P(\{p,t\}_N|{\mathrm{B}})}$ as the ratio of these and use $\chi$ to distinguish signal from background. The value of $\chi$ is calculated using a simplified shower algorithm that tries to mimic what a partitioned dipole shower with initial state radiation and underlying event contributions would give. The microjets are treated as intermediate state partons in the shower. We want the calculation to be as accurate as possible, but it needs to be an analytic calculation that can be executed with a not-too-large amount of computer time for each event. There is a tension between these goals. We expect that other workers will be able to improve on the compromise algorithm that we have described in this paper. This method is similar in spirit to the matrix element method [@Kondo:1988yd; @Kondo:1991dw; @Fiedler:2010sg; @Alwall:2010cq]. There, if one started from the microjet configuration $\{p,t\}_N$, one would compute $\chi(\{p,t\}_N)$ from the squared matrix element for the signal or background process convolved with the parton distribution functions, integrated over the momenta of unobserved partons. If one were to use a number of partons $N$ that is greater than the minimum possible number for the desired signal and background and if one were to calculate $\chi(\{p,t\}_N)$ analytically, one would have something close to the shower deconstruction method. In one sense, one would then have a better approximation to nature than the simplified shower algorithm of this paper because one would be using the exact squared matrix element rather than a soft-collinear approximation to it. However, one would be missing the Sudakov factors. Without Sudakov factors, the probability for a parton splitting becomes infinite as the virtuality of the splitting tends to zero. With Sudakov factors, the probability for a parton splitting approaches zero as the virtuality of the splitting tends to zero. 
For this reason, one needs the Sudakov factors. We have found that in our simple example shower deconstruction can achieve a signal/background discrimination superior to that of Ref. [@Butterworth:2008iy]. Furthermore, shower deconstruction has some features that suggest that it may prove useful as a practical tool. First, it is quite general, although further development is needed to apply the general method to other signal processes. Second, it is modular, with modules corresponding to QCD parton splitting, initial state radiation, underlying event contributions, Sudakov factors, and heavy particle decay. The modules can be improved independently and inserted into the general scheme. Third, the method has at least the potential to work for quite complicated signal processes. The jacobian {#sec:jacobiancalc} ============ In this appendix, we analyze the integral $$\begin{split} \label{eq:IJdefagain} I_J \equiv {}& \frac{1}{16} \int\!dk_A^2 \int\!dy_A \int\!d\phi_A \int\!dk_B^2 \int\!dy_B \int\!d\phi_B\ \delta^4(p_A + p_B - p_J) \times f \;\;. \end{split}$$ Here $p_A$ and $p_B$ are the momenta of two jets that together form the jet with momentum $p_J$. In our application, the two constituent jets have non-zero masses, $\mu_A$ and $\mu_B$. However, the masses $\mu_A$ and $\mu_B$ are small compared to the jet transverse momenta $k_A$ and $k_B$ and compared to the combined jet mass, $\mu_J$. Thus it is a good approximation to neglect the constituent jet masses; furthermore, doing so leads to a substantially simpler result. We therefore set $\mu_A = \mu_B = 0$. 
With this choice, the $(+,-,1,2)$ components of the momenta of the jets are (with $p^\pm = (p^0 \pm p^3)/\sqrt 2$) $$\begin{split} \label{eq:momentumdecompositions} p_A ={}& \left( \frac{1}{\sqrt 2}\,k_A\, e^{y_A}, \frac{1}{\sqrt 2}\,k_A\, e^{-y_A}, k_A \cos\phi_A, k_A \sin\phi_A \right) \;\;, \\ p_B ={}& \left( \frac{1}{\sqrt 2}\,k_B\, e^{y_B}, \frac{1}{\sqrt 2}\,k_B\, e^{-y_B}, k_B \cos\phi_B, k_B \sin\phi_B \right) \;\;, \\ p_J ={}& \left( \frac{1}{\sqrt 2}\,\sqrt{k_J^2 + \mu_J^2}\, e^{y_J}, \frac{1}{\sqrt 2}\,\sqrt{k_J^2 + \mu_J^2}\, e^{-y_J}, k_J \cos\phi_J, k_J \sin\phi_J \right) \;\;. \end{split}$$ We wish to write $I_J$ in the form $$\label{eq:Jdef} I_J = \int\!dz \int\!d\varphi\ J \times f \;\;. $$ Here $z$ is a momentum fraction defined by $$\label{eq:zdef} z = \frac{k_A}{k_A + k_B} \;\;. $$ Then $$1-z = \frac{k_B}{k_A + k_B} $$ and $$z(1-z) = \frac{k_A k_B}{(k_A + k_B)^2} \;\;. $$ We define the variable $\varphi$ by $$\label{eq:tanvarphi} \tan\varphi = \frac{\sinh(\Delta y/2)\cos(\Delta\phi/2)}{\cosh(\Delta y/2)\sin(\Delta\phi/2)} \;\;, $$ where $$\begin{split} \Delta y ={}& y_A - y_B \;\;, \\ \Delta \phi ={}& \phi_A - \phi_B \;\;. \end{split}$$ Thus $\varphi$ is approximately the angle about the origin in the $(\Delta \phi,\Delta y)$ plane. We need to calculate the jacobian $J$. To proceed, we define unit vectors $$\begin{split} n_0 ={}& \left(\frac{1}{\sqrt 2}\,e^{y_J}, \frac{1}{\sqrt 2}\,e^{-y_J},0,0\right) \;\;, \\ n_3 ={}& \left(-\frac{1}{\sqrt 2}\,e^{y_J}, \frac{1}{\sqrt 2}\,e^{-y_J},0,0\right) \;\;, \\ n_1 ={}& \left(0, 0, \cos\phi_J, \sin\phi_J\right) \;\;, \\ n_2 ={}& \left(0, 0, -\sin\phi_J, \cos\phi_J\right) \;\;. \end{split}$$ These are orthogonal to each other and normalized as unit vectors along the coordinate axes in a convenient reference frame: $n_\mu \cdot n_\nu = g_{\mu\nu}$. 
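As a quick numeric sanity check (ours, not part of the derivation), one can verify $n_\mu \cdot n_\nu = g_{\mu\nu}$ directly in the $(+,-,1,2)$ components, where the Minkowski product is $a\cdot b = a^+b^- + a^-b^+ - a^1b^1 - a^2b^2$:

```python
import math

def dot(a, b):
    """Minkowski product in (+,-,1,2) light-cone components."""
    return a[0]*b[1] + a[1]*b[0] - a[2]*b[2] - a[3]*b[3]

y_J, phi_J = 0.7, 1.3      # arbitrary test values
r2 = 1.0 / math.sqrt(2.0)

n0 = ( r2*math.exp(y_J), r2*math.exp(-y_J), 0.0, 0.0)
n3 = (-r2*math.exp(y_J), r2*math.exp(-y_J), 0.0, 0.0)
n1 = (0.0, 0.0,  math.cos(phi_J), math.sin(phi_J))
n2 = (0.0, 0.0, -math.sin(phi_J), math.cos(phi_J))

# n_mu . n_nu should reproduce g_{mu nu} = diag(1, -1, -1, -1).
basis = [n0, n1, n2, n3]
for mu in range(4):
    for nu in range(4):
        g = (1.0 if mu == 0 else -1.0) if mu == nu else 0.0
        assert math.isclose(dot(basis[mu], basis[nu]), g, abs_tol=1e-12)
```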
We thus have $$\begin{split} \label{eq:IJdef1} I_J = {}& \frac{1}{16} \int\!dy_A \int\!dy_B \int\!dk_A^2 \int\!dk_B^2 \int\!d\phi_A \int\!d\phi_B \\ &\times \delta((p_A + p_B - p_J)\cdot n_1)\, \delta((p_A + p_B - p_J)\cdot n_2) \\ &\times \delta((p_A + p_B - p_J)\cdot n_0)\, \delta((p_A + p_B - p_J)\cdot n_3) \times f \;\;. \end{split}$$ Let us examine the effect of $$\delta((p_A + p_B - p_J)\cdot n_2) = \delta((\bm k_A + \bm k_B)\cdot \bm n_2) = \delta(k_A\sin(\phi_A - \phi_J) + k_B\sin(\phi_B - \phi_J)) \;\;. $$ Here we use boldface symbols to represent transverse vectors. We can use the delta function to perform the integration over $\phi_B$: $$\int\!d\phi_A \int\!d\phi_B\ \delta((p_A + p_B - p_J)\cdot n_2) \cdots = \int\!d\phi_A\ \frac{1}{k_B\,|\cos(\phi_B - \phi_J)|} \cdots \;\;. $$ Then $$\label{eq:getphiB} \sin(\phi_B - \phi_J) = - \frac{k_A}{k_B}\sin(\phi_A - \phi_J) \;\;. $$ We want to change the integration variable to $\Delta \phi = \phi_A - \phi_B$. From Eq. (\[eq:getphiB\]) we have $$k_A\sin(\phi_A-\phi_J) = - k_B[ \cos(\phi_A - \phi_B)\sin(\phi_A - \phi_J) -\sin(\phi_A - \phi_B)\cos(\phi_A - \phi_J) ] \;\;. $$ That is $$\tan(\phi_A-\phi_J) = \frac{k_B \sin \Delta \phi}{k_A + k_B \cos \Delta \phi} \;\;. $$ Thus $$d(\phi_A-\phi_J) = k_B \cos^2(\phi_A-\phi_J)\, \frac{k_B + k_A \cos \Delta \phi}{(k_A + k_B \cos \Delta \phi)^2}\ d\Delta\phi \;\;. $$ We also derive $$\cos^2(\phi_A - \phi_J) = \frac{(k_A + k_B \cos\Delta \phi)^2}{k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta \phi} \;\;, $$ so $$d(\phi_A-\phi_J) = k_B \frac{k_B + k_A \cos \Delta \phi} {k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta \phi}\ d\Delta \phi \;\;. $$ Since also $$\cos^2(\phi_B - \phi_J) = \frac{(k_B + k_A \cos\Delta \phi)^2}{k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta \phi} \;\;, $$ we have $$d(\phi_A-\phi_J) = \frac{k_B\cos(\phi_B - \phi_J)} {[k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta \phi]^{1/2}}\ d\Delta \phi \;\;. $$ Additionally, we note that $$[k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta \phi]^{1/2} = k_J \;\;. 
$$ Thus $$\int\!d\phi_A\int\!d\phi_B\ \delta((p_A + p_B - p_J)\cdot n_2) \cdots = \int\!\frac{d\Delta \phi} {k_J} \cdots \;\;. $$ With this result, we have $$\begin{split} \label{IJ1} I_J = {}& \frac{1}{16} \int\!dy_A \int\!dy_B \int\!dk_A^2 \int\!dk_B^2 \int\!\frac{d\Delta \phi} {k_J} \\ &\times \delta((p_A + p_B - p_J)\cdot n_1)\, \delta((p_A + p_B - p_J)\cdot n_0)\, \delta((p_A + p_B - p_J)\cdot n_3) \times f \;\;. \end{split}$$ We next turn to the elimination of the delta function with $n_3$. We note that $$\delta((p_A + p_B - p_J)\cdot n_3) = \delta(k_A\sinh(y_A-y_J) + k_B\sinh(y_B-y_J)) \;\;. $$ We can use this delta function to eliminate the integration over $y_B$: $$\int\!dy_A\int\!dy_B\ \delta((p_A + p_B - p_J)\cdot n_3) \cdots = \int\!dy_A\ \frac{1}{k_B\,\cosh(y_B-y_J)} \cdots \;\;. $$ We want to change the integration variable to $\Delta y = y_A - y_B$. We have $$k_A\sinh(y_A-y_J) = - k_B\sinh(y_B-y_J) \;\;. $$ Thus $$k_A\sinh(y_A-y_J) = - k_B[ \cosh(y_A - y_B)\sinh(y_A - y_J) -\sinh(y_A - y_B)\cosh(y_A - y_J) ] \;\;. $$ That is $$\tanh(y_A-y_J) = \frac{k_B \sinh \Delta y}{k_A + k_B \cosh \Delta y} \;\;. $$ Thus $$d(y_A-y_J) = k_B \cosh^2(y_A-y_J)\, \frac{k_B + k_A \cosh \Delta y}{(k_A + k_B \cosh \Delta y)^2}\ d\Delta y \;\;. $$ We also derive $$\cosh^2(y_A - y_J) = \frac{(k_A + k_B \cosh\Delta y)^2}{k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y} \;\;, $$ so $$d(y_A-y_J) = k_B \frac{k_B + k_A \cosh \Delta y} {k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y}\ d\Delta y \;\;. $$ Since also $$\cosh^2(y_B - y_J) = \frac{(k_B + k_A \cosh\Delta y)^2}{k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y} \;\;, $$ we have $$d(y_A-y_J) = \frac{k_B\cosh(y_B - y_J)} {[k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y]^{1/2}}\ d\Delta y \;\;. $$ We also note that $$\begin{split} k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y ={}& k_A^2 + k_B^2 + 2 k_A k_B\cos\Delta\phi \\& + 2 k_A k_B (\cosh\Delta y - \cos\Delta\phi) \\ ={}& k_J^2 + 2 p_A\cdot p_B \\ ={}& k_J^2 + \mu_J^2 \;\;. 
\end{split}$$ Thus $$\int\!dy_A\int\!dy_B\ \delta((p_A + p_B - p_J)\cdot n_3) \cdots = \int\!\frac{d\Delta y} {\sqrt{k_J^2 + \mu_J^2}} \cdots \;\;. $$ With this result, we have $$\begin{split} \label{eq:IJ2} I_J = {}& \frac{1}{16} \int\!dk_A^2 \int\!dk_B^2 \int\!\frac{d\Delta \phi} {k_J} \int\!\frac{d\Delta y} {\sqrt{k_J^2 + \mu_J^2}} \\ &\times \delta((p_A + p_B - p_J)\cdot n_1)\, \delta((p_A + p_B - p_J)\cdot n_0) \times f \;\;. \end{split}$$ Now we would like to use the remaining delta functions to eliminate the integrations over $k_A^2$ and $k_B^2$. For the delta function involving $n_0$, we have $$\delta((p_A + p_B - p_J)\cdot n_0) = \delta\!\left(k_A\cosh(y_A-y_J) + k_B\cosh(y_B-y_J) - a_J \right) \;\;, $$ where we abbreviate $$a_J = \sqrt{k_J^2 + \mu_J^2} \;\;. $$ Using our results expressing $\cosh(y_A-y_J)$ and $\cosh(y_B-y_J)$ in terms of $\Delta y$, this is $$\delta((p_A + p_B - p_J)\cdot n_0) = \delta\!\left( \frac{k_A(k_A + k_B \cosh\Delta y) + k_B(k_B + k_A \cosh\Delta y)} {[k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y]^{1/2}} - a_J \right) \;\;. $$ That is $$\delta((p_A + p_B - p_J)\cdot n_0) = \delta\left( [k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y]^{1/2} - a_J \right) \;\;. $$ We can write $$[k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y]^{1/2} - a_J =\frac{k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y - a_J^2} {[k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y]^{1/2} + a_J} \;\;. $$ The denominator is not singular, so we can factor it out and evaluate it at the point at which the numerator vanishes: $$\label{eq:deltan0} \delta((p_A + p_B - p_J)\cdot n_0) = 2 a_J\, \delta\left( k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y - a_J^2 \right) \;\;. $$ We will use this result below at Eq. (\[eq:IJ3\]). For the delta function involving $n_1$, we have $$\delta((p_A + p_B - p_J)\cdot n_1) = \delta( k_A\cos(\phi_A-\phi_J) + k_B\cos(\phi_B-\phi_J) - k_J) \;\;. 
$$ Using our results expressing $\cos(\phi_A-\phi_J)$ and $\cos(\phi_B-\phi_J)$ in terms of $\Delta \phi$, this is $$\delta((p_A + p_B - p_J)\cdot n_1) = \delta\!\left( \frac{k_A(k_A + k_B \cos\Delta \phi) + k_B(k_B + k_A \cos\Delta \phi)} {[k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta \phi]^{1/2}} - k_J \right) \;\;. $$ That is $$\delta((p_A + p_B - p_J)\cdot n_1) = \delta\left( [k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta \phi]^{1/2} - k_J \right) \;\;. $$ We can write $$[k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta\phi]^{1/2} - k_J =\frac{k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta\phi - k_J^2} {[k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta\phi]^{1/2} + k_J} \;\;. $$ The denominator is not singular, so we can factor it out and evaluate it at the point at which the numerator vanishes: $$\delta((p_A + p_B - p_J)\cdot n_1) = 2 k_J\, \delta\!\left( k_A^2 + k_B^2 + 2 k_A k_B \cos\Delta\phi - k_J^2 \right) \;\;. $$ It will prove convenient to write this as $$\label{eq:n1result} \delta((p_A + p_B - p_J)\cdot n_1) = 2 k_J\, \delta\!\left( (k_A + k_B)^2 - 2 k_A k_B (1 - \cos\Delta\phi) - k_J^2 \right) \;\;. $$ We will use this result below at Eq. (\[eq:dkAdkB\]). Now let us change integration variables to $(k_A + k_B)^2$ and $2 k_A k_B$, with $$dk_A^2\,dk_B^2 = \frac{k_A k_B}{|k_A^2 - k_B^2|}\ d(k_A + k_B)^2\, d(2 k_A k_B) \;\;. $$ When we make this change of variables, we ought to introduce also a sum over the discrete variable that distinguishes between $k_A$ and $k_B$, since $(k_A + k_B)$ and $(2 k_A k_B)$ are invariant under interchange of $k_A$ and $k_B$. However, we omit a special notation for this because we will soon change back to a variable $z$ that does distinguish between $k_A$ and $k_B$. We can eliminate the integration over $(k_A + k_B)^2$ at fixed $2 k_A k_B$ using the $n_1$ delta function from Eq. (\[eq:n1result\]): $$\label{eq:dkAdkB} dk_A^2\,dk_B^2\ \delta((p_A + p_B - p_J)\cdot n_1) = d(2 k_A k_B)\ \frac{2 k_J k_A k_B}{|k_A^2 - k_B^2|} \;\;. 
$$ Here $$\label{eq:kApluskB} (k_A + k_B)^2 = k_J^2 + 2 k_A k_B (1 - \cos\Delta\phi) \;\;. $$ This gives $$\begin{split} \label{eq:IJ3} I_J = {}& \frac{1}{8} \int\!d\Delta \phi \int\!d\Delta y \int\!dt\ \frac{t}{|k_A^2 - k_B^2|}\, \delta(A(t)) \times f \;\;, \end{split}$$ where we have defined $$t = 2 k_A k_B $$ and where $A$ is the argument of the delta function in Eq. (\[eq:deltan0\]), $$A = k_A^2 + k_B^2 + 2 k_A k_B \cosh\Delta y - k_J^2 - \mu_J^2 \;\;. $$ From Eq. (\[eq:kApluskB\]), we have $$k_A^2 + k_B^2 = k_J^2 - t \cos\Delta\phi \;\;. $$ Thus $$A(t) = t\, (\cosh\Delta y - \cos\Delta\phi) - \mu_J^2 \;\;, $$ so that $$\begin{split} \label{eq:IJ4} I_J = {}& \frac{1}{4} \int\!d\Delta \phi \int\!d\Delta y\ \frac{k_A k_B}{|k_A^2 - k_B^2|}\, \frac{1}{\cosh\Delta y - \cos\Delta\phi} \times f \;\;, \end{split}$$ where $$\label{eq:tresult} 2k_A k_B\, (\cosh\Delta y - \cos\Delta\phi) = \mu_J^2 \;\;. $$ This nearly completes the task set at the beginning of this appendix. Now, let us change to some more useful integration variables. Let us define a momentum fraction $z$ according to Eq. (\[eq:zdef\]). We need to express $z(1-z)$ as a function of $\Delta y$ and $\Delta \phi$. Using Eqs. (\[eq:tresult\]) and (\[eq:kApluskB\]), we have $$\begin{split} k_A k_B ={}& \frac{\mu_J^2/2}{\cosh\Delta y - \cos\Delta\phi} \;\;, \\ (k_A + k_B)^2 ={}& \frac{k_J^2(\cosh\Delta y - \cos\Delta\phi) + \mu_J^2(1 - \cos\Delta\phi)} {\cosh\Delta y - \cos\Delta\phi} \;\;. \end{split}$$ Thus $$\label{eq:z1mz} z(1-z) = \frac{\mu_J^2/2} {k_J^2(\cosh\Delta y - \cos\Delta\phi) + \mu_J^2(1 - \cos\Delta\phi)} \;\;. $$ From this, we calculate $$\begin{split} \frac{\partial z(1-z)}{\partial \Delta\phi} ={}& - \frac{2z^2(1-z)^2k_J^2}{\mu_J^2}\,(1+R) \sin \Delta\phi \;\;, \\ \frac{\partial z(1-z)}{\partial \Delta y} ={}& - \frac{2z^2(1-z)^2 k_J^2 }{\mu_J^2}\,\sinh \Delta y \;\;. \end{split}$$ where $$R = \frac{\mu_J^2}{k_J^2} \;\;. $$ We need another variable, $\varphi$, which we define according to Eq. 
(\[eq:tanvarphi\]). The gradient of $\tan\varphi$ is $$\begin{split} \frac{\partial \tan\varphi}{\partial \Delta\phi} ={}& - \frac{\tan\varphi}{\sin \Delta\phi} \;\;, \\ \frac{\partial \tan\varphi}{\partial \Delta y} ={}& \frac{\tan\varphi }{\sinh \Delta y} \;\;. \end{split}$$ We can use the partial derivatives to calculate the jacobian, giving $$d\Delta\phi\ d\Delta y = \frac{\mu_J^2}{2 z^2(1-z)^2 k_J^2}\ \frac{\sinh\Delta y\, \sin\Delta\phi} {\sinh^2\Delta y + (1+R)\sin^2\Delta\phi}\ d(z(1-z))\ \frac{d\tan\varphi}{\tan\varphi} \;\;. $$ That is, $$\label{eq:phiytozvarphi} d\Delta\phi\ d\Delta y = \frac{\mu_J^2}{2 z^2(1-z)^2 k_J^2 }\ \frac{|1-2z|} {\sinh^2\Delta y + (1+R)\sin^2\Delta\phi}\ \frac{\sinh\Delta y\, \sin\Delta\phi}{\sin\varphi \cos\varphi}\ dz\ d\varphi \;\;. $$ With a little algebra, we find $$\frac{1}{\sin\varphi \cos\varphi} = 2\frac{\cosh \Delta y - \cos \Delta\phi}{\sinh\Delta y\,\sin\Delta\phi} \;\;. $$ Thus $$\label{eq:phiytozvarphi2} d\Delta\phi\ d\Delta y = \frac{\mu_J^2}{z^2(1-z)^2 k_J^2 }\ \frac{|1-2z| [\cosh \Delta y - \cos \Delta\phi]} {\sinh^2\Delta y + (1+R)\sin^2\Delta\phi}\ dz\ d\varphi \;\;. $$ Now we insert this result into Eq. (\[eq:IJ4\]). There is a factor $$\frac{k_A k_B}{|k_A^2 - k_B^2|} = \frac{z(1-z)}{|1-2z|} \;\;, $$ which cancels the $|1-2z|$ in the numerator of Eq. (\[eq:phiytozvarphi2\]). Then $$\begin{split} \label{eq:IJ5} I_J = {}& \frac{1}{4} \int\!dz \int\!d\varphi\ \frac{\mu_J^2}{z(1-z) k_J^2 }\ \frac{1} {\sinh^2\Delta y + (1+R)\sin^2\Delta\phi}\ \times f \;\;. \end{split}$$ We can use Eq. (\[eq:z1mz\]) to express $\mu_J^2$ in terms of $z(1-z)$ and the angles $(\Delta y,\Delta\phi)$: $$\begin{split} \label{eq:IJ6} I_J = {}& \frac{1}{2} \int\!dz \int\!d\varphi\ \frac{(\cosh\Delta y - \cos\Delta\phi) + R(1 - \cos\Delta\phi)} {\sinh^2\Delta y + (1+R)\sin^2\Delta\phi}\ \times f \;\;. 
\end{split}$$ We can rewrite this as $$\begin{split} \label{eq:IJ7} I_J = {}& \frac{1}{4} \int\!dz \int\!d\varphi\ \frac{\sinh^2(\Delta y/2) + (1+R)\sin^2(\Delta\phi/2)} {\sinh^2(\Delta y/2)\cosh^2(\Delta y/2) + (1+R)\sin^2(\Delta\phi/2)\cos^2(\Delta\phi/2)}\ \times f \;\;. \end{split}$$ This is the result that we sought. We note that since $\cosh(\Delta y/2) \approx 1$ and $\cos(\Delta\phi/2) \approx 1$ for small angles $(\Delta y,\Delta \phi)$, we have approximately $$\begin{split} \label{eq:IJ8} I_J \approx {}& \frac{1}{4} \int\!dz \int\!d\varphi\ f \;\;. \end{split}$$ when the integration is dominated by the small angle region. [99]{} D. Krohn, J. Thaler and L. T. Wang, [*Jet Trimming*]{}, JHEP [**1002**]{}, 084 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,1002,084). S. D. Ellis, C. K. Vermilion and J. R. Walsh, [*Techniques for improved heavy particle searches with jet substructure*]{}, Phys. Rev.  D [**80**]{}, 051501 (2009) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D80,051501). S. D. Ellis, C. K. Vermilion, J. R. Walsh, [*Recombination Algorithms and Jet Substructure: Pruning as a Tool for Heavy Particle Searches*]{}, Phys. Rev.  [**D81**]{}, 094023 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?bb=ARXIV:0912.0033). J. M. Butterworth, A. R. Davison, M. Rubin and G. P. Salam, [*Jet substructure as a new Higgs search channel at the LHC*]{}, Phys. Rev. Lett.  [**100**]{}, 242001 (2008) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PRLTA,100,242001). K. Kondo, [*Dynamical Likelihood Method For Reconstruction Of Events With Missing Momentum. 1: Method And Toy Models*]{}, J. Phys. Soc. Jap.  [**57**]{}, 4126-4140 (1988) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JUPSA,57,4126). K. Kondo, [*Dynamical likelihood method for reconstruction of events with missing momentum. 2: Mass spectra for 2 $\to$ 2 processes*]{}, J. Phys. Soc. Jap.  
[**60**]{}, 836-844 (1991) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JUPSA,60,836). F. Fiedler, A. Grohsjean, P. Haefner [*et al.*]{}, [*The Matrix Element Method and its Application in Measurements of the Top Quark Mass*]{}, Nucl. Instrum. Meth.  [**A624**]{}, 203-218 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=NUIMA,A624,203). J. Alwall, A. Freitas, O. Mattelaer, [*The Matrix Element Method and QCD Radiation*]{}, \[arXiv:1010.2263 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1010.2263). L. G. Almeida, S. J. Lee, G. Perez, G. Sterman and I. Sung, [*Template Overlap Method for Massive Jets*]{}, Phys. Rev.  D [**82**]{}, 054034 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D82,054034). D. E. Soper, M. Spannowsky, [*Combining subjet algorithms to enhance ZH detection at the LHC*]{}, JHEP [**1008**]{}, 029 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,1008,029); K. Black, J. Gallicchio, J. Huth [*et al.*]{}, [*Comprehensive multivariate discrimination and the Higgs + W/Z search*]{}, \[arXiv:1010.3698 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1010.3698); Y. Cui, Z. Han, M. D. Schwartz, [*W-jet Tagging: Optimizing the Identification of Boosted Hadronically-Decaying W Bosons*]{}, \[arXiv:1012.2077 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1012.2077). J. M. Butterworth, B. E. Cox, J. R. Forshaw, [*WW scattering at the CERN LHC*]{}, Phys. Rev.  [**D65**]{}, 096014 (2002) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D65,096014). J. M. Butterworth, J. R. Ellis, A. R. Raklev [*et al.*]{}, [*Discovering baryon-number violating neutralino decays at the LHC*]{}, Phys. Rev. Lett.  [**103**]{}, 241803 (2009) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PRLTA,103,241803). M. H. 
Seymour, [*Searches for new particles using cone and cluster jet algorithms: A Comparative study*]{}, Z. Phys.  [**C62**]{}, 127-138 (1994) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=ZEPYA,C62,127). J. Thaler, L. -T. Wang, [*Strategies to Identify Boosted Tops*]{}, JHEP [**0807**]{}, 092 (2008). [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,0807,092). D. E. Kaplan, K. Rehermann, M. D. Schwartz [*et al.*]{}, [*Top Tagging: A Method for Identifying Boosted Hadronically Decaying Top Quarks*]{}, Phys. Rev. Lett.  [**101**]{}, 142001 (2008) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PRLTA,101,142001). T. Plehn, G. P. Salam, M. Spannowsky, [*Fat Jets for a Light Higgs*]{}, Phys. Rev. Lett.  [**104**]{}, 111801 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PRLTA,104,111801). C. -R. Chen, M. M. Nojiri, W. Sreethawong, [*Search for the Elusive Higgs Boson Using Jet Structure at LHC*]{}, JHEP [**1011**]{}, 012 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?eprint=arXiv:1006.1151). A. Falkowski, D. Krohn, L. -T. Wang [*et al.*]{}, [*Unburied Higgs*]{}, [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1006.1650). G. D. Kribs, A. Martin, T. S. Roy and M. Spannowsky, [*Discovering the Higgs Boson in New Physics Events using Jet Substructure*]{}, Phys. Rev.  D [**81**]{}, 111501 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D81,111501). G. D. Kribs, A. Martin, T. S. Roy and M. Spannowsky, [*Discovering Higgs Bosons of the MSSM using Jet Substructure*]{}, Phys. Rev.  D [**82**]{}, 095012 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D82,095012). T. Plehn, M. Spannowsky, M. Takeuchi and D. Zerwas, [*Stop Reconstruction with Tagged Tops*]{}, JHEP [**1010**]{}, 078 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,1010,078). B. Bhattacherjee, M. Guchait, S. 
Raychaudhuri and K. Sridhar, [*Boosted Top Quark Signals for Heavy Vector Boson Excitations in a Universal Extra Dimension Model*]{}, Phys. Rev.  D [**82**]{}, 055006 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D82,055006). C. Hackstein, M. Spannowsky, [*Boosting Higgs discovery: The Forgotten channel*]{}, Phys. Rev.  [**D82**]{}, 113012 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D82,113012). C. Englert, C. Hackstein, M. Spannowsky, [*Measuring spin and CP from semi-hadronic ZZ decays using jet substructure*]{}, Phys. Rev.  [**D82**]{}, 114024 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D82,114024). A. Katz, M. Son, B. Tweedie, [*Jet Substructure and the Search for Neutral Spin-One Resonances in Electroweak Boson Channels*]{}, \[arXiv:1010.5253 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1010.5253). L. G. Almeida, S. J. Lee, G. Perez [*et al.*]{}, [*Substructure of high-$p_T$ Jets at the LHC*]{}, Phys. Rev.  [**D79**]{}, 074017 (2009) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D79,074017). J. Thaler, K. Van Tilburg, [*Identifying Boosted Objects with N-subjettiness*]{}, \[arXiv:1011.2268 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1011.2268). J. -H. Kim, [*Rest Frame Subjet Algorithm With SISCone Jet For Fully Hadronic Decaying Higgs Search*]{}, \[arXiv:1011.1493 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?eprint=arXiv:1011.1493). G. D. Kribs, A. Martin, T. S. Roy, [*Higgs Discovery through Top-Partners using Jet Substructure*]{}, \[arXiv:1012.2866 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1012.2866). J. Fan, D. Krohn, P. Mosteiro [*et al.*]{}, [*Heavy Squarks at the LHC*]{}, \[arXiv:1102.0302 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1102.0302). T. 
Plehn, M. Spannowsky, M. Takeuchi, [*Boosted Semileptonic Tops in Stop Decays*]{}, \[arXiv:1102.0557 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1102.0557). A. Abdesselam, E. B. Kuutmann, U. Bitenc [*et al.*]{}, [*Boosted objects: A Probe of beyond the Standard Model physics*]{}, \[arXiv:1012.5412 \[hep-ph\]\] [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?=ARXIV:1012.5412). T. Sjostrand, S. Mrenna and P. Z. Skands, [*PYTHIA 6.4 Physics and Manual*]{}, JHEP [**0605**]{}, 026 (2006) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,0605,026). M. Bahr, S. Gieseke, M. A. Gigg [*et al.*]{}, [*Herwig++ Physics and Manual*]{}, Eur. Phys. J.  [**C58**]{}, 639-707 (2008) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?eprint=arXiv:0803.0883). M. Cacciari, G. P. Salam and G. Soyez, [*The anti-$k_t$ jet clustering algorithm*]{}, JHEP [**0804**]{}, 063 (2008) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?eprint=arXiv:0802.1189). M. Cacciari, G. P. Salam, [*Dispelling the N\*\*3 myth for the k(t) jet-finder*]{}, Phys. Lett.  [**B641**]{}, 57-61 (2006) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHLTA,B641,57); M. Cacciari, G. P. Salam and G. Soyez, http://fastjet.fr. J. M. Campbell, R. K. Ellis, [*MCFM for the Tevatron and the LHC*]{}, Nucl. Phys. Proc. Suppl.  [**205-206**]{}, 10-15 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=NUPHZ,205-206,10); http://mcfm.fnal.gov/. S. D. Ellis, D. E. Soper, [*Successive combination jet algorithm for hadron collisions*]{}, Phys. Rev.  [**D48**]{}, 3160-3166 (1993) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D48,3160); S. Catani, Y. L. Dokshitzer, M. H. Seymour, B. R. Webber, [*Longitudinally invariant K(t) clustering algorithms for hadron hadron collisions*]{}, Nucl. Phys.  
[**B406**]{}, 187-224 (1993) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=NUPHA,B406,187). The ATLAS Collaboration, [*Measurement of jet mass and substructure for inclusive jets in $\sqrt s =\ 7\ {\rm TeV}$ collisions with the ATLAS experiment*]{}, ATLAS-CONF-2011-073. ATLAS Collaboration, [*ATLAS Sensitivity to the Standard Model Higgs in the $HW$ and $HZ$ Channels at High Transverse Momenta*]{}, unpublished note ATL-PHYS-PUB-2009-088 [\[CERN\]](http://cdsweb.cern.ch/record/1201444?ln=en); G. Piacquadio, [*Identification of b-jets and investigation of the discovery potential of a Higgs boson in the $WH\to l \nu b \bar{b}$ channel with the ATLAS experiment*]{}, unpublished thesis CERN-THESIS-2010-027 [\[SPIRES\]](http://inspirebeta.net/record/887066?ln=en); C. Weiser, [*A combined secondary vertex based B-tagging algorithm in CMS*]{}, unpublished note CMS-NOTE-2006-014 [\[CERN\]](http://cdsweb.cern.ch/record/927399?ln=en). Z. Nagy and D. E. Soper, [*Parton showers with quantum interference*]{}, JHEP [**0709**]{}, 114 (2007) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=NUPHA,B406,187). Z. Nagy and D. E. Soper, [*Parton showers with quantum interference: leading color, spin averaged*]{}, JHEP [**0803**]{}, 030 (2008) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,0807,025). Z. Nagy, D. E. Soper, [*Parton showers with quantum interference: Leading color, with spin*]{}, JHEP [**0807**]{}, 025 (2008) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,0807,025). C. W. Bauer, F. J. Tackmann, J. Thaler, [*GenEvA. II. A Phase space generator from a reweighted parton shower*]{}, JHEP [**0812**]{}, 011 (2008) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=JHEPA,0812,011). F. I. Olness, D. E. Soper, [*Correlated theoretical uncertainties for the one-jet inclusive cross section*]{}, Phys. Rev.  [**D81**]{}, 035018 (2010) \[arXiv:0907.5052 \[hep-ph\]\]. 
[\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D81,035018). J. Gallicchio, M. D. Schwartz, [*Seeing in Color: Jet Superstructure*]{}, Phys. Rev. Lett.  [**105**]{}, 022001 (2010) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=PRLTA,105,022001). G. Marchesini, B. R. Webber, [*Simulation of QCD Jets Including Soft Gluon Interference*]{}, Nucl. Phys.  [**B238**]{}, 1 (1984) [\[SPIRES\]](http://www.slac.stanford.edu/spires/find/hep/www?j=NUPHA,B238,1). [^1]: We generate events for $Z+\it{jet}\rightarrow l^+l^-+\it{jet}$ and $HZ\rightarrow b \bar{b}~l^+l^-$ using [Pythia]{} in a configuration with large transverse momentum and normalize the cross section to the one obtained from [MCFM]{} [@MCFM] with the same cuts. Then we calculate the cross section after selection cuts based on the number of events that pass the selection cuts. [^2]: Here the differential $dp_j$ for each microjet $j$ can just mean $d^4p_j$. [^3]: As we will see, partons with [undefined]{} color connections are allowed to radiate soft partons into an unrestricted angular region. Since all of our partons are contained in the angular region of the fat jet, this does not cause much of a problem. However, if we wanted to increase the angular region considered in shower deconstruction, we would need to specify color connected partners for all partons. [^4]: We use momentum components $p^\pm = (p^0 \pm p^3)/\sqrt 2$. [^5]: The Feynman rules that we use for calculating squared matrix elements assume that momentum integrations are $(2\pi)^{-4}\int\! d^4p\ (2\pi)\,\delta(p^2 - \mu^2)$, which gives this factor to accompany integrations over $k^2$, $y$, and $\phi$ as in Eq. (\[eq:d4ptomukyphi\]). [^6]: It would be better to use a likelihood ratio based on the full distribution of $ds(\chi)/d\chi$ and $db(\chi)/d\chi$, but the use of a simple cut is easier to describe. [^7]: The method of Ref. [@Butterworth:2008iy] uses only events with two $b$-tags. 
[^8]: We did find that $s^2/b$ could be increased by making the Sudakov exponent for gluon splitting a bit larger, but we have not explored this further.
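The delta-function manipulations in the appendix above repeatedly used the factoring of a square-root argument, $\delta(\sqrt{u} - a) = 2a\,\delta(u - a^2)$, as in Eqs. (\[eq:deltan0\]) and (\[eq:n1result\]). This step can be sanity-checked numerically by smearing the delta into a narrow Gaussian; the sketch below is not part of the paper, and the test function, the Gaussian width, and the value standing in for $a_J$ are arbitrary choices.

```python
import numpy as np

# Numerical check of delta(sqrt(u) - a) = 2*a*delta(u - a^2), the step used
# to factor the n_0 and n_1 delta functions.  We smear the delta into a
# narrow Gaussian and integrate both sides against a smooth test function.
def delta(x, eps=1e-2):
    return np.exp(-x**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

a = 1.3                                   # stands in for a_J (arbitrary)
u, du = np.linspace(0.0, 10.0, 400001, retstep=True)
f = u * np.exp(-u)                        # arbitrary smooth test function

lhs = np.sum(f * delta(np.sqrt(u) - a)) * du        # int f(u) delta(sqrt(u)-a) du
rhs = np.sum(f * 2.0 * a * delta(u - a**2)) * du    # int f(u) 2a delta(u-a^2) du

# Both sides converge to 2*a*f(a^2) as the smearing width goes to zero.
exact = 2.0 * a * a**2 * np.exp(-a**2)
print(lhs, rhs, exact)
```

Both integrals agree with $2 a_J f(a_J^2)$ up to corrections of order the squared Gaussian width, consistent with factoring out the non-singular denominator.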
--- abstract: | We prove that in every separable Banach space $X$ with a Schauder basis and a $C^k$-smooth norm it is possible to approximate, uniformly on bounded sets, every equivalent norm with a $C^k$-smooth one in a way that the approximation is improving as fast as we wish on the elements depending only on the tail of the Schauder basis. Our result solves a problem from the recent monograph of Guirao, Montesinos and Zizler. address: - | Petr Hájek: Mathematical Institute\ Czech Academy of Science\ Žitná 25\ 115 67 Praha 1\ Czech Republic and Department of Mathematics\ Faculty of Electrical Engineering\ Czech Technical University in Prague\ Zikova 4, 160 00, Prague - | Tommaso Russo: Dipartimento di matematica\ Università degli Studi di Milano\ via Saldini 50, 20133 Milano, Italy author: - Petr Hájek - Tommaso Russo title: Some remarks on smooth renormings of Banach spaces --- [^1] Introduction ============ The problem of smooth approximation of continuous mappings is one of the classical themes in analysis. An important special case of this problem is the existence of $C^k$-smooth approximations of norms on an infinite-dimensional real Banach space. More precisely, assume that the real Banach space $X$ admits a $C^k$-smooth norm. Let $\|\cdot\|$ be an equivalent norm on $X$, $\varepsilon>0$. Does there exist a $C^k$-smooth renorming $\||\cdot\||$ of $X$ such that $1\le\frac{\||x\||}{\|x\|}\le1+\varepsilon$ holds for all $0\ne x\in X$? In its full generality, this problem is still open, even in the case $k=1$ (no counterexample is known). For $k=1$, the problem can be solved easily by using Smulyan’s criterion, once a dual LUR norm is present on $X^*$. This covers a wide range of Banach spaces, in particular all WCG (hence all separable, and all reflexive) spaces [@dgz]. In the absence of a dual LUR renorming, the problem appears to be completely open. For $k\ge2$ the problem seems to be more difficult, and no dual approach is available. 
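The flavour of the question can be seen in the simplest possible setting (a finite-dimensional toy computation, not part of the results discussed here): in $\mathbb{R}^2$ the non-smooth maximum norm is approximated within a factor $1+\varepsilon$ by the $\ell^p$-norms with $p$ even, which are $C^\infty$-smooth away from the origin, since $\|x\|_\infty\le\|x\|_p\le 2^{1/p}\|x\|_\infty$.

```python
import numpy as np

# In R^2 one has ||x||_inf <= ||x||_p <= 2**(1/p) * ||x||_inf, so the
# (away from 0) smooth l^p norm with even p approximates the max norm
# within a factor 1 + eps as soon as 2**(1/p) <= 1 + eps.
eps = 0.01
p = 2 * int(np.ceil(np.log(2.0) / (2.0 * np.log(1.0 + eps))))  # smallest even p that works
assert 2.0 ** (1.0 / p) <= 1.0 + eps

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
ratios = np.linalg.norm(X, ord=p, axis=1) / np.linalg.norm(X, ord=np.inf, axis=1)
print(p, ratios.min(), ratios.max())   # ratios stay in [1, 1 + eps]
```

The real difficulty in the infinite-dimensional setting is, of course, that no such explicit formula is available, and convexity must be preserved throughout the construction.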
To begin with, Deville [@deville-very-smooth] proved that the existence of a $C^2$-smooth norm has profound structural consequences for the space. In some sense, such spaces are either superreflexive, or close to $c_0$. To get an idea of the difficulty of constructing smooth norms, we refer to e.g. [@maalev-troyanski], [@haydon-smooth], [@haydon-trees], [@haha], [@bi]. Broadly speaking, the construction of the smooth norm is carried out by techniques that locally use only finitely many ingredients. Of course, this idea is already present in the concept of partitions of unity, but in the setting of norms it is harder to implement, as we need to preserve the convexity of the functions involved. Probably the first explicit use of this technique to construct smooth norms is found in the work of Pechanec, Whitfield and Zizler [@pewhiziz]. The authors construct a particular LUR and $C^1$-smooth norm on $c_0(\Gamma)$ which admits $C^\infty$-approximations. This result was later generalized to arbitrary WCG spaces [@hp]. Recently, Bible and Smith [@bis] succeeded in solving the smooth approximation problem for norms on $c_0(\Gamma)$ with $k=\infty$. This is essentially the only known nonseparable space where the problem has been solved. In the separable setting, the problem has been completely solved for every separable Banach space and every $k$, in a series of papers [@hajek-locally], [@defoha-separ], [@defoha-polyhedral], with the final solution in [@HaTa]. We refer to the monographs [@dgz] and [@HJ; @book] for a more complete discussion and references, too numerous to be included in our note. The main result of the present note delves deeper into the fine behaviour of $C^k$-smooth approximations of norms in the separable setting. 
It is in some sense analogous to condition (ii) in Theorem VIII.3.2 in [@dgz], which asserts that in a Banach space with $C^k$-smooth partitions of unity, $C^k$-smooth approximations to continuous functions exist with a prescribed precision around each point. Our result solves Problem 170 (stated somewhat imprecisely) in [@gmz]. We also hope that the result may be of some use in the context of metric fixed point theory, where several notions of properties that improve asymptotically with growing codimension are present. For example, let us mention the notion of [*asymptotically non-expansive function*]{} or the ones of [*asymptotically isometric copy*]{} of $\ell_1$ or $c_0$. Let us now state our main result. \[thm:Ck norm improving\]Let $(X,\|\cdot\|)$ be a separable real Banach space with a Schauder basis $\left\{ e_{i}\right\} _{i\geq1}$ that admits a $C^{k}$-smooth renorming. Then for every sequence $\left\{ \varepsilon_{N}\right\} _{N\geq0}$ of positive numbers, there is a $C^{k}$-smooth renorming $\left|\left|\left|\cdot\right|\right|\right|$ of $X$ such that for every $N\geq0$ $$\Bigl|\,\left|\left|\left|x\right|\right|\right|-\left\Vert x\right\Vert \,\Bigr|\leq\varepsilon_{N}\left\Vert x\right\Vert \qquad\text{for }x\in X^{N},$$ where $X^{N}:=\overline{\text{span}}\left\{ e_{i}\right\} _{i\geq N+1}$. In other words, we can approximate the original norm with a $C^{k}$-smooth one in a way that on the “tail vectors” the approximation improves as fast as we wish. The proof of Theorem \[thm:Ck norm improving\] will be presented in the next section. The rough idea is the following. By the result in [@HaTa], for every $N$ one can find a $C^{k}$-smooth norm $\left\Vert \cdot\right\Vert _{N}$ such that $\Bigl|\,\left\Vert \cdot\right\Vert _{N}-\left\Vert \cdot\right\Vert \,\Bigr|\leq\varepsilon_{N}\left\Vert \cdot\right\Vert $. One is tempted to glue these norms together in the standard $C^{k}$-smooth way and hope that the resulting norm is as desired. 
Unfortunately, in this way there is no possibility to ensure that on $X^{N}$ only the norms $\left\Vert \cdot\right\Vert _{n}$ with $n\geq N$ will enter into the gluing procedure. To achieve this feature it is necessary that the norms $\left\Vert \cdot\right\Vert _{N}$ be quantitatively different on $X^{N}$ and $X_{N}=\text{span}\left\{ e_{i}\right\} _{i=1}^{N}$. The first part of the argument, consisting of the geometric Lemma \[lem:main lemma\] and some easy deductions, is aimed exactly at finding new norms which are quantitatively different on “tail vectors” and “head vectors”. The second step consists in iterating this renorming for every $n$ and rescaling the norms. Finally, we suitably approximate these norms with $C^{k}$-smooth ones and glue everything together using the standard technique. \[sec:Proof of Th\] Proof of the main result ============================================ In this section we shall prove Theorem \[thm:Ck norm improving\]. Let $X$ be a separable (real) Banach space with norm $\left\Vert \cdot\right\Vert $ and a Schauder basis $\left\{ e_{i}\right\} _{i\geq1}$. We denote by $K:=\text{b.c.}\left\{ e_{i}\right\} _{i\geq1}$ the basis constant of the Schauder basis (of course $K$ depends on the particular norm we are using). We also let $P_{k}$ be the usual projection $P_{k}(\sum_{j\geq1}\alpha^{j}e_{j})=\sum_{j=1}^{k}\alpha^{j}e_{j}$ and $P^{k}:=I_{X}-P_{k}$, i.e. $P^{k}(\sum_{j\geq1}\alpha^{j}e_{j})=\sum_{j\geq k+1}\alpha^{j}e_{j}$. It is clear that $\left\Vert P_{k}\right\Vert \leq K$ and $\left\Vert P^{k}\right\Vert \leq K+1$. Finally, we denote $X_{k}:=\text{span}\left\{ e_{i}\right\} _{i=1}^{k}$ and $X^{k}:=\overline{\text{span}}\left\{ e_{i}\right\} _{i\geq k+1}$, i.e. the ranges of the two projections, respectively. We will make extensive use of convex sets: let us recall that a convex set $C$ in a Banach space $X$ is a *convex body* if it has nonempty interior. 
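The head and tail projections $P_{k}$ and $P^{k}$ just introduced can be illustrated in coordinates (a toy sketch with the unit vector basis of $\ell_2$, for which the basis constant is $K=1$; this is not part of the argument):

```python
import numpy as np

# Coordinates of a finitely supported vector with respect to the basis {e_i}.
x = np.array([3.0, -1.0, 4.0, 1.0, -5.0, 9.0])

def head(k, v):
    """P_k: keep the first k coordinates (range = X_k)."""
    out = np.zeros_like(v)
    out[:k] = v[:k]
    return out

def tail(k, v):
    """P^k = I - P_k: keep coordinates k+1, k+2, ... (range = X^k)."""
    return v - head(k, v)

k = 2
assert np.allclose(head(k, x) + tail(k, x), x)          # P_k + P^k = I
# For the unit vector basis of l^2 the basis constant is K = 1:
assert np.linalg.norm(head(k, x)) <= np.linalg.norm(x)  # ||P_k x|| <= K ||x||
print(np.linalg.norm(head(k, x)), np.linalg.norm(tail(k, x)))
```

For a general Schauder basis the projections are merely uniformly bounded, $\left\Vert P_{k}\right\Vert \leq K$, rather than norm-one as in this orthonormal toy case.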
Obviously a symmetric convex body is in particular a neighborhood of the origin and the unit ball $B_{X}$ of $X$ is a bounded, symmetric convex body (we shorthand this fact by saying that it is a BCSB). Any other BCSB $B$ in $X$ induces an equivalent norm on $X$ via its Minkowski functional $$\mu_{B}(x):=\inf\left\{ t>0:x\in tB\right\} .$$ We will also denote by $\left\Vert \cdot\right\Vert _{B}$ the norm induced by $B$, i.e. $\left\Vert x\right\Vert _{B}:=\mu_{B}(x)$; obviously $\left\Vert \cdot\right\Vert _{B_{X}}$ is the original norm of the space. Moreover we clearly have $$B\subseteq C\implies\mu_{B}\geq\mu_{C},$$ $$\mu_{\lambda B}=\frac{1}{\lambda}\mu_{B}$$ and passing to the induced norms we see that $$B\subseteq C\subseteq(1+\delta)B\implies\frac{1}{1+\delta}\left\Vert \cdot\right\Vert _{B}\leq\left\Vert \cdot\right\Vert _{C}\leq\left\Vert \cdot\right\Vert _{B}.$$ We now start with the first part of the argument. \[lem:main lemma\]Let $(X,\left\Vert \cdot\right\Vert )$ be a Banach space with a Schauder basis $\left\{ e_{i}\right\} _{i\geq1}$ with basis constant $K$. Denote the unit ball of $X$ by $B$, fix $k\in\mathbb{N}$, two parameters $\lambda>0$ and $0<R<1$, and consider the sets $$D:=\left\{ x\in X:\left\Vert P^{k}x\right\Vert \leq R\right\} \cap(1+\lambda)\cdot B,\qquad C:=\overline{\text{conv}}\left\{ D,B\right\} .$$ Then $C$ is a BCSB and $$C\cap X^{k}\subseteq\left(1+\lambda\frac{K}{K+1-R}\right)\cdot B.$$ Heuristically, if we modify the unit ball in the direction of $X_{k}$, this results in a perturbation of the ball also in the remaining directions, but the perturbation there is significantly smaller. The fact that $C$ is a BCSB is obvious. Let $x\in C\cap X^{k}$ and notice that $0\in\text{Int}C$ (as $B\subseteq C$); by the cone argument we deduce that $tx\in\text{Int}C$ for $t\in[0,1)$. 
Moreover $\text{conv}\left\{ D,B\right\} $ has non-empty interior, so it is easily seen that its interior equals the interior of its closure, hence $tx\in\text{Int}C=\text{Int}\left(\text{conv}\left\{ D,B\right\} \right)\subseteq\text{conv}\left\{ D,B\right\} $. If we can show that $tx\in\left(1+\lambda\frac{K}{K+1-R}\right)\cdot B$ we then let $t\rightarrow1$ and conclude the proof. In other words we can assume without loss of generality that $x\in X^{k}\cap\text{conv}\left\{ D,B\right\} .$ Hence we can write $x=ty+(1-t)z$ with $t\in[0,1]$, $y\in D$ and $z\in B$, in particular $\left\Vert P^{k}y\right\Vert \leq R$ and $\left\Vert z\right\Vert \leq1$. Moreover $x\in X^{k}$ implies $$\left\Vert x\right\Vert =\left\Vert P^{k}x\right\Vert \leq t\left\Vert P^{k}y\right\Vert +(1-t)\left\Vert P^{k}z\right\Vert \leq tR+(1-t)(K+1);$$ if $\left\Vert x\right\Vert \leq1$ the conclusion of the lemma is clearly true, so we can assume $\left\Vert x\right\Vert \geq1$. Thus we have $1\leq K+1-t(K+1-R)$ and this yields $t\leq\frac{K}{K+1-R}$. Next, we move the points $y,z$ slightly, in such a way that $x$ is still a convex combination of them: fix two parameters $\tau,\eta>0$ to be chosen later and consider $u:=(1-\tau)y$ and $v:=(1+\eta)z$. Obviously $x=\frac{t}{1-\tau}u+\frac{1-t}{1+\eta}v$ and we require this to be a convex combination: $$1=\frac{t}{1-\tau}+\frac{1-t}{1+\eta}\qquad\implies\qquad\tau=\frac{(1-t)\eta}{t+\eta}\leq1$$ (of course this choice implies $1-\tau\geq0$). 
We have $\left\Vert v\right\Vert \leq1+\eta$ and, since $y\in D$, also $\left\Vert u\right\Vert \leq(1-\tau)\left\Vert y\right\Vert \leq(1-\tau)(1+\lambda)$; we want both of these norms to be small, so we require (here we use the previous choice of $\tau$) $$1+\eta=(1-\tau)(1+\lambda)\qquad\implies\qquad\eta=\lambda t.$$ With this choice of $\tau$ and $\eta$ we have $\left\Vert u\right\Vert, \left\Vert v\right\Vert \leq1+\eta=1+\lambda t\leq1+\lambda\cdot\frac{K}{K+1-R}$; by convexity the same holds true for $x$ and the proof is complete. We now modify again the obtained BCSB in such a way that on $X^{k}$ the body is an exact multiple of the original ball; this modification does not destroy the properties achieved before. It will be useful to denote $S:=\left\{ x\in X:\left\Vert P^{k}x\right\Vert \leq R\right\} $; with this notation we have $D=S\cap(1+\lambda)\cdot B$. In the above setting, let $\gamma:=\frac{K}{K+1-R}$ and $\tilde{B}:=\overline{\text{conv}}\left\{ C,X^{k}\cap(1+\lambda\gamma)\cdot B\right\} $. Then $\tilde{B}$ is a BCSB and $$B\subseteq\tilde{B}\subseteq(1+\lambda)\cdot B,$$ $$S\cap\tilde{B}=S\cap(1+\lambda)\cdot B,$$ $$X^{k}\cap\tilde{B}=X^{k}\cap(1+\lambda\gamma)\cdot B.$$ It is obvious that $\tilde{B}$ is a BCSB. Of course $B\subseteq C$, so $B\subseteq\tilde{B}$ too; also $D\subseteq(1+\lambda)\cdot B$ implies $C\subseteq(1+\lambda)\cdot B$. Since $\gamma\leq1$ we deduce that $\tilde{B}\subseteq(1+\lambda)\cdot B$. The $\subseteq$ in the second assertion follows from what we have just proved; for the converse inclusion, just observe that $S\cap(1+\lambda)\cdot B=D\subseteq\tilde{B}$. For the last equality, obviously $X^{k}\cap(1+\lambda\gamma)\cdot B\subseteq\tilde{B}$, so the $\supseteq$ inclusion follows. For the converse inclusion, let $p\in X^{k}\cap\tilde{B}$; exactly the same argument as in the first part of the previous proof (with $C$ replaced by $\tilde{B}$) shows that we can assume $p\in\text{conv}\left\{ C,X^{k}\cap(1+\lambda\gamma)\cdot B\right\} \cap X^{k}$. 
So we can write $p=ty+(1-t)z$ with $y\in C$ and $z\in X^{k}\cap(1+\lambda\gamma)\cdot B$. If $t=0$, then $p=z\in X^{k}\cap(1+\lambda\gamma)\cdot B$ and we are done. On the other hand, if $t>0$, from $p\in X^{k}$ we deduce that $y\in X^{k}$ too; hence in fact $y\in C\cap X^{k}\subseteq(1+\lambda\gamma)\cdot B$, by the previous lemma. By convexity $p\in(1+\lambda\gamma)\cdot B$ and the proof is complete. The next proposition is essentially a restatement of the above corollary in terms of norms rather than convex bodies; we write it explicitly since in what follows we will work with norms. The general setting is the one above: we have a separable Banach space $X$ with a Schauder basis $\left\{ e_{i}\right\} _{i\geq1}$ and we denote $X_{k}:=\text{span}\left\{ e_{i}\right\} _{i=1}^{k}$ and $X^{k}:=\overline{\text{span}}\left\{ e_{i}\right\} _{i\geq k+1}$. Let $B$ be a BCSB in $X$ and let $\left\Vert \cdot\right\Vert _{B}$ be the induced norm; also let $K$ be the basis constant of $\left\{ e_{i}\right\} _{i\geq1}$ relative to $\left\Vert \cdot\right\Vert _{B}$. Fix $k\in\mathbb{N}$ and two parameters $\lambda>0$ and $0<R<1$. Then there is a BCSB $\tilde{B}$ in $X$ such that the induced norm $\left\Vert \cdot\right\Vert _{\tilde{B}}$ satisfies the following properties: (a) $$\left\Vert \cdot\right\Vert _{\tilde{B}}\leq\left\Vert \cdot\right\Vert _{B}\leq(1+\lambda)\left\Vert \cdot\right\Vert _{\tilde{B}},$$ (b) $$\left\Vert \cdot\right\Vert _{B}=(1+\lambda\gamma)\left\Vert \cdot\right\Vert _{\tilde{B}}\qquad\text{on }X^{k},$$ (c) $$\left\Vert x\right\Vert _{B}=(1+\lambda)\left\Vert x\right\Vert _{\tilde{B}}\qquad\text{whenever }\left\Vert P^{k}x\right\Vert \leq\frac{R}{1+\lambda}\left\Vert x\right\Vert ,$$ where $\gamma:=\frac{K}{K+1-R}$. We let $\tilde{B}$ be the convex body defined in the corollary. 
Then (a) follows immediately from the corollary and (b) is immediate too: for $x\in X^{k}$ $$\left\Vert x\right\Vert _{\tilde{B}}=\inf\left\{ t>0:x\in t\cdot\tilde{B}\right\} =\inf\left\{ t>0:x\in t\cdot\left(\tilde{B}\cap X^{k}\right)\right\}=$$ $$\inf\left\{ t>0:x\in t\cdot\left(X^{k}\cap\left(1+\lambda\gamma\right)\cdot B\right)\right\} =\inf\left\{ t>0:x\in t\left(1+\lambda\gamma\right)\cdot B\right\}$$ $$=\frac{1}{1+\lambda\gamma}\inf\left\{ t>0:x\in t\cdot B\right\} =\frac{1}{1+\lambda\gamma}\left\Vert x\right\Vert _{B}.$$ The last property, (c), is not completely trivial since $S$ is not a cone, so we first modify it and define $$S_{1}:=\left\{ x\in X:\left\Vert P^{k}x\right\Vert \leq\frac{R}{1+\lambda}\left\Vert x\right\Vert \right\} .$$ We observe that replacing $S$ with $S_{1}$ does not modify the construction: if we set $D_{1}:=S_{1}\cap(1+\lambda)\cdot B$, then we have $C_{1}:=\overline{\text{conv}}\left\{ D_{1},B\right\} =C$. In fact $S_{1}\cap(1+\lambda)\cdot B\subseteq S$ implies $C_{1}\subseteq C$ and the converse inclusion follows from $D\subseteq\text{conv}\left\{ D_{1},B\right\} $. In order to prove this, fix $x\in D$; then $\left\Vert P^{k}x\right\Vert \leq R<1$ and in particular $P^{k}x\in B$. Now set $x_{t}:=P^{k}x+t(x-P^{k}x)$ and choose $t\geq1$ such that $\left\Vert x_{t}\right\Vert =1+\lambda$; with this choice of $t$ we get $\left\Vert P^{k}x_{t}\right\Vert =\left\Vert P^{k}x\right\Vert \leq R=\frac{R}{1+\lambda}\left\Vert x_{t}\right\Vert $, so $x_{t}\in D_{1}$. Since $x$ is a convex combination of $x_{t}$ and $P^{k}x$ we deduce $D\subseteq\text{conv}\left\{ D_{1},B\right\} $. Next, we claim that $$S_{1}\cap\tilde{B}=S_{1}\cap(1+\lambda)\cdot B.$$ In fact $\supseteq$ follows from the analogous relation with $S$, proved in the corollary, and $S_{1}\cap(1+\lambda)\cdot B\subseteq S$. The converse inclusion follows from the usual $\tilde{B}\subseteq(1+\lambda)\cdot B$.
Finally we prove (c): pick $x\in S_{1}$ and notice that $$\left\{ t>0:x\in t\tilde{B}\right\} =\left\{ t>0:x\in t\tilde{B}\cap S_{1}\right\} =\left\{ t>0:x\in t\left(\tilde{B}\cap S_{1}\right)\right\}$$ $$=\left\{ t>0:x\in t\left(S_{1}\cap\left(1+\lambda\right)\cdot B\right)\right\} =\left\{ t>0:x\in t\left(1+\lambda\right)\cdot B\right\} ;$$ hence $$\inf\left\{ t>0:x\in t\tilde{B}\right\} =\frac{1}{1+\lambda}\inf\left\{ t>0:x\in t\cdot B\right\} ,$$ which is exactly (c). We now iterate the above renorming procedure: we start with the Banach space $X$ with unit ball $B$ and corresponding norm $\left\Vert \cdot\right\Vert :=\left\Vert \cdot\right\Vert _{B}$ and we apply the proposition with $k=1$, a certain $\lambda_{1}>0$ and $R=1/2$. We let $B_{1}:=\widetilde{B}$ be the obtained body and $\left\Vert \cdot\right\Vert _{1}:=\left\Vert \cdot\right\Vert _{B_{1}}$ be the corresponding norm. Then we have $$\left\Vert \cdot\right\Vert _{1}\leq\left\Vert \cdot\right\Vert \leq(1+\lambda_{1})\left\Vert \cdot\right\Vert _{1},$$ $$\left\Vert \cdot\right\Vert =(1+\lambda_{1}\gamma_{1})\left\Vert \cdot\right\Vert _{1}\qquad\text{on }X^{1},$$ $$\left\Vert x\right\Vert =(1+\lambda_{1})\left\Vert x\right\Vert _{1}\qquad\text{whenever }\left\Vert P^{1}x\right\Vert \leq\frac{1/2}{1+\lambda_{1}}\left\Vert x\right\Vert ,$$ where $\gamma_{1}:=\frac{K}{K+1/2}$. We proceed inductively in the obvious way: fix a sequence $\left\{ \lambda_{n}\right\} _{n\geq1}\subseteq(0,\infty)$ such that $\prod_{i=1}^{\infty}(1+\lambda_{i})<\infty$ and, in order to have a more concise notation, denote by $\left\Vert \cdot\right\Vert _{0}:=\left\Vert \cdot\right\Vert $ the original norm of $X$ and by $K_{0}:=K$. Apply inductively the previous proposition: at the step $n$ we use the proposition with $\lambda=\lambda_{n}$, $R=1/2$, $k=n$ and $B=B_{n-1}$ and we set $B_{n}:=\widetilde{B_{n-1}}$ and $\left\Vert \cdot\right\Vert _{n}:=\left\Vert \cdot\right\Vert _{B_{n}}$. 
This gives a sequence of norms $\left\{ \left\Vert \cdot\right\Vert _{n}\right\} _{n\geq0}$ on $X$ such that for every $n\geq1$ we have: $$\left\Vert \cdot\right\Vert _{n}\leq\left\Vert \cdot\right\Vert _{n-1}\leq(1+\lambda_{n})\left\Vert \cdot\right\Vert _{n},\label{eq: equiv norm}$$ $$\left\Vert \cdot\right\Vert _{n-1}=(1+\lambda_{n}\gamma_{n})\left\Vert \cdot\right\Vert _{n}\qquad\text{on }X^{n},\label{eq: rel on tails}$$ $$\left\Vert x\right\Vert _{n-1}=(1+\lambda_{n})\left\Vert x\right\Vert _{n}\qquad\text{whenever }\left\Vert P^{n}x\right\Vert _{n-1}\leq\frac{1/2}{1+\lambda_{n}}\left\Vert x\right\Vert _{n-1},\label{eq: rel with projection}$$ where $K_{n}$ denotes the basis constant of $\left\{ e_{i}\right\} _{i\geq1}$ relative to $\left\Vert \cdot\right\Vert _{n}$ and $\gamma_{n}:=\frac{K_{n-1}}{K_{n-1}+1/2}\in(0,1)$. The condition $\left\Vert P^{n}x\right\Vert _{n-1}\leq\frac{1/2}{1+\lambda_{n}}\left\Vert x\right\Vert _{n-1}$ appearing in (\[eq: rel with projection\]) is somewhat unpleasing since the involved norms change with $n$; we thus replace it with the following more uniform, but weaker, condition. 
$$\left\Vert x\right\Vert _{n-1}=(1+\lambda_{n})\left\Vert x\right\Vert _{n}\qquad\text{whenever }\left\Vert P^{n}x\right\Vert _{0}\leq\frac{1}{2}\prod_{i=1}^{\infty}(1+\lambda_{i})^{-1}\cdot\left\Vert x\right\Vert _{0}.\label{eq: uniform rel with projection}$$ The validity of (\[eq: uniform rel with projection\]) is immediately deduced from the validity of (\[eq: equiv norm\]) and (\[eq: rel with projection\]): in fact if $x$ satisfies $\left\Vert P^{n}x\right\Vert _{0}\leq\frac{1}{2}\prod_{i=1}^{\infty}(1+\lambda_{i})^{-1}\cdot\left\Vert x\right\Vert _{0}$, then by (\[eq: equiv norm\]) $$\left\Vert P^{n}x\right\Vert _{n-1}\leq\left\Vert P^{n}x\right\Vert _{0}\leq\frac{1}{2}\prod_{i=1}^{\infty}(1+\lambda_{i})^{-1}\cdot\left\Vert x\right\Vert _{0}\leq\frac{1}{2}\prod_{i=1}^{\infty}(1+\lambda_{i})^{-1}\cdot\prod_{i=1}^{n-1}(1+\lambda_{i})\cdot\left\Vert x\right\Vert _{n-1}$$ $$=\frac{1/2}{1+\lambda_{n}}\prod_{i=n+1}^{\infty}(1+\lambda_{i})^{-1}\cdot\left\Vert x\right\Vert _{n-1}\leq\frac{1/2}{1+\lambda_{n}}\left\Vert x\right\Vert _{n-1};$$ hence (\[eq: rel with projection\]) implies that $\left\Vert x\right\Vert _{n-1}=(1+\lambda_{n})\left\Vert x\right\Vert _{n}$. In order to motivate the next step, let us notice that for a fixed $x\in X$ the sequence $\left\{ \left\Vert x\right\Vert _{n}\right\} _{n\geq0}$ has the same qualitative behavior since it is a decreasing sequence; on the other hand the quantitative rate of decrease changes with $n$. In fact it is clear that for a fixed $x\in X$, the condition $\left\Vert P^{n}x\right\Vert _{0}\leq\frac{1}{2}\prod_{i=1}^{\infty}(1+\lambda_{i})^{-1}\cdot\left\Vert x\right\Vert _{0}$ is eventually satisfied, so the sequence $\left\{ \left\Vert x\right\Vert _{n}\right\} _{n\geq0}$ eventually decreases with rate $(1+\lambda_{n})^{-1}$. On the other hand if $x\in X^{N}$, then for the terms $n=1,\dots,N$ the rate of decrease is $(1+\lambda_{n}\gamma_{n})^{-1}$. 
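The displayed chain of inequalities can be sanity-checked numerically. The sketch below is only an illustration: the concrete choice $\lambda_i=2^{-i}$ is an assumption (any summable positive sequence works), and the infinite products are truncated.

```python
from math import prod

# Assumed summable choice of parameters; prod(1 + l_i) then converges.
LAMBDA = [2.0 ** -i for i in range(1, 41)]

def thresholds(n):
    """Uniform threshold of (eq: uniform rel with projection), rescaled to the
    (n-1)-th norm via ||x||_0 <= prod_{i<n}(1+l_i) * ||x||_{n-1}, versus the
    stepwise threshold (1/2)/(1+l_n) of (eq: rel with projection)."""
    full_inv = 1.0 / prod(1 + l for l in LAMBDA)          # ~ prod_{i>=1}(1+l_i)^{-1}
    uniform = 0.5 * full_inv * prod(1 + l for l in LAMBDA[: n - 1])
    stepwise = 0.5 / (1 + LAMBDA[n - 1])
    return uniform, stepwise
```

For every $n$ the uniform threshold, once rescaled to the $(n-1)$-th norm as in the text, stays below the stepwise one, which is why (\[eq: uniform rel with projection\]) implies (\[eq: rel with projection\]).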
This makes it possible to rescale the norms $\left\{ \left\Vert \cdot\right\Vert _{n}\right\} _{n\geq0}$, obtaining norms $\left\{ \left|\left|\left|\cdot\right|\right|\right|_{n} \right\} _{n\geq0}$, in such a way that they have a qualitatively different behavior: increasing for $n=1,\dots,N$ and eventually decreasing. This property is crucial since it allows us to ensure that, for $x\in X^N$, the norms $\left|\left|\left|x\right|\right|\right|_{n}$ for $n=0,\dots ,N-1$ are quantitatively smaller than $\left|\left|\left|x\right|\right|\right|_{N}$ and thus do not enter the gluing procedure. As we hinted at the end of the previous section, and as will be apparent in the proof of Lemma \[lem:||| is equiv norm\], this is exactly what we need in order for the approximation on $X^N$ to improve with $N$. Let $$C:=\prod_{i=1}^{\infty}\frac{1+\lambda_{i}\gamma_{i}}{1+\lambda_{i}\frac{1+\gamma_{i}}{2}},$$ $$\left|\left|\left|\cdot\right|\right|\right|_{n}:=C\cdot\prod_{i=1}^{n}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)\cdot\left\Vert \cdot\right\Vert _{n}.$$ For later convenience, let us also set $$\left|\left|\left|\cdot\right|\right|\right|_{\infty}=\sup_{n\geq0}\left|\left|\left|\cdot\right|\right|\right|_{n}.$$ The qualitative behavior of $\left\{ \left|\left|\left|x\right|\right|\right|_{n}\right\} _{n\geq0}$ is expressed in the following obvious, though crucial, properties of the norms $\left|\left|\left|\cdot\right|\right|\right|_{n}$. In particular, (a) will be used to show that the gluing locally takes into account only finitely many terms; this will allow us to preserve the smoothness in Lemma \[lem:smooth norm\]. (b) expresses the fact that on $X^N$ the norms $\left\{ \left|\left|\left|\cdot\right|\right|\right|_{n} \right\} _{n=0}^{N-1}$ are smaller than $\left|\left|\left|\cdot\right|\right|\right|_{N}$ and will be used in Lemma \[lem:||| is equiv norm\] to obtain the improvement of the approximation.
\[Fact propr of |||.|||\] (a) For every $x\in X$ there is $n_{0}\in\mathbb{N}$ such that for every $n\geq n_{0}$ $$\left|\left|\left|x\right|\right|\right|_{n}=\frac{1+\lambda_{n}\frac{1+\gamma_{n}}{2}}{1+\lambda_{n}}\left|\left|\left|x\right|\right|\right|_{n-1}.$$ In particular, it suffices to take any $n_{0}$ such that $\left\Vert P^{n}x\right\Vert _{0}\leq\frac{1}{2}\prod_{i=1}^{\infty}(1+\lambda_{i})^{-1}\cdot\left\Vert x\right\Vert _{0}$ for every $n\geq n_{0}$. (b) If $x\in X^{N}$, then for $n=1,\dots,N$ we have $$\left|\left|\left|x\right|\right|\right|_{n}=\frac{1+\lambda_{n}\frac{1+\gamma_{n}}{2}}{1+\lambda_{n}\gamma_{n}}\left|\left|\left|x\right|\right|\right|_{n-1}.$$ (a) Since $P^{n}x\rightarrow0$ as $n\rightarrow\infty$, condition (\[eq: uniform rel with projection\]) implies that there is $n_{0}$ such that for every $n\geq n_{0}$ we have $\left\Vert x\right\Vert _{n}=(1+\lambda_{n})^{-1}\left\Vert x\right\Vert _{n-1}$. Then it suffices to translate this to the $\left|\left|\left|\cdot\right|\right|\right|_{n}$ norms: $$\left|\left|\left|x\right|\right|\right|_{n}=\left(1+\lambda_{n}\frac{1+\gamma_{n}}{2}\right)\cdot C\cdot\prod_{i=1}^{n-1}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)\cdot\left\Vert x\right\Vert _{n}=$$ $$\frac{1+\lambda_{n}\frac{1+\gamma_{n}}{2}}{1+\lambda_{n}}\cdot C\cdot\prod_{i=1}^{n-1}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)\cdot\left\Vert x\right\Vert _{n-1}=\frac{1+\lambda_{n}\frac{1+\gamma_{n}}{2}}{1+\lambda_{n}}\left|\left|\left|x\right|\right|\right|_{n-1}.$$ (b) If $x\in X^{N}$ and $n=1,\dots,N$, then $x\in X^{n}$ too; thus by (\[eq: rel on tails\]) we have $\left\Vert x\right\Vert _{n}=(1+\lambda_{n}\gamma_{n})^{-1}\left\Vert x\right\Vert _{n-1}$. Now exactly the same calculation as in the other case gives the result. We can now conclude the renorming procedure: first we smooth the norms $\left|\left|\left|\cdot\right|\right|\right|_{n}$ and then we glue all of these smooth norms together.
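Both one-step factors in the Fact, and the solvability of the condition $(\dagger)$ imposed in the next step, can be verified directly. The helper functions below are only a numerical illustration; the sample values of $\lambda$ and $\gamma$ are arbitrary assumptions.

```python
def step_ratios(lam, gamma):
    """One-step factors for the rescaled norms: the factor of part (b),
    (1+lam*(1+gamma)/2)/(1+lam*gamma), exceeds 1 (increase for n <= N), while
    the factor of part (a), (1+lam*(1+gamma)/2)/(1+lam), is below 1 (eventual
    decrease)."""
    mid = 1 + lam * (1 + gamma) / 2
    return mid / (1 + lam * gamma), mid / (1 + lam)

def max_delta(lam, gamma):
    """Largest delta with (1+delta)*(1+lam*gamma) <= (1-delta)*(1+lam*(1+gamma)/2),
    the constraint behind condition (dagger) below: solving the linear inequality
    gives delta = (A-B)/(A+B), strictly positive exactly because gamma < 1."""
    A, B = 1 + lam * (1 + gamma) / 2, 1 + lam * gamma
    return (A - B) / (A + B)
```

Since $\gamma<\frac{1+\gamma}{2}<1$, the part (b) factor exceeds $1$ while the part (a) factor is below $1$; the gap between $1+\lambda\gamma$ and $1+\lambda\frac{1+\gamma}{2}$ is what leaves room for a strictly positive $\delta$ in $(\dagger)$.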
Fix a decreasing sequence $\delta_{n}\searrow0$ such that for every $n\geq0$ $$(\dagger)\qquad(1+\delta_{n})\frac{1+\lambda_{n+1}\gamma_{n+1}}{1+\lambda_{n+1}\frac{1+\gamma_{n+1}}{2}}\leq1-\delta_{n}$$ (of course this is possible since $\gamma_{n+1}<1$). Then we apply the main result in [@HaTa] (Theorem 2.10 in their paper) to find $C^{k}$-smooth norms $\left\{ \left|\left|\left|\cdot\right|\right|\right|_{(s),n}\right\} _{n\geq0}$ such that for every $n$ $$\left|\left|\left|\cdot\right|\right|\right|_{n}\leq\left|\left|\left|\cdot\right|\right|\right|_{(s),n}\leq(1+\delta_{n})\left|\left|\left|\cdot\right|\right|\right|_{n}.$$ Next, let $\varphi_{n}:[0,\infty)\rightarrow[0,\infty)$ be $C^{\infty}$-smooth, convex and such that $\varphi_{n}\equiv0$ on $[0,1-\delta_{n}]$ and $\varphi_{n}(1)=1$; note that of course the $\varphi_{n}$’s are strictly monotonically increasing on $[1-\delta_{n},\infty)$. Finally define $\Phi:X\rightarrow[0,\infty]$ by $$\Phi(x):=\sum_{n\geq0}\varphi_{n}\left(\left|\left|\left|x\right|\right|\right|_{(s),n}\right)$$ and let $\left|\left|\left|\cdot\right|\right|\right|$ be the Minkowski functional of the set $\left\{ \Phi\leq1\right\} $. The fact that $\left|\left|\left|\cdot\right|\right|\right|$ is the desired norm is now an obvious consequence of the next two lemmas. In the first one we show that $\left|\left|\left|\cdot\right|\right|\right|$ is indeed a norm and that the approximation on $X^{N}$ improves with $N$. \[lem:||| is equiv norm\]$\left|\left|\left|\cdot\right|\right|\right|$ is a norm, equivalent to the original norm $\left\Vert \cdot\right\Vert $ of $X$. 
Moreover for every $N\geq0$ we have $$\prod_{i=N+1}^{\infty}\left(1+\lambda_{i}\right)^{-1}\cdot\left\Vert \cdot\right\Vert \leq\left|\left|\left|\cdot\right|\right|\right|\leq \frac{1+\delta_{N}}{1-\delta_{N}}\cdot\prod_{i=N+1}^{\infty}\left(1+\lambda_{i}\right)\cdot\left\Vert \cdot\right\Vert \qquad\text{on }X^{N}.$$ We start by observing that for every $N\geq0$ $$\left\{ x\in X^{N}:\left|\left|\left|x\right|\right|\right|_{\infty}\leq\frac{1-\delta_{N}}{1+\delta_{N}}\right\} \subseteq\left\{ x\in X^{N}:\Phi(x)\leq1\right\}\subseteq \left\{ x\in X^{N}:\left|\left|\left|x\right|\right|\right|_{\infty}\leq1\right\}.$$ In fact, pick $x\in X^{N}$ such that $\Phi(x)\leq1$, so in particular $\varphi_{n}\left(\left|\left|\left|x\right|\right|\right|_{(s),n}\right)\leq1$ for every $n$. The inequality $\left|\left|\left|\cdot\right|\right|\right|_{n}\leq\left|\left|\left|\cdot\right|\right|\right|_{(s),n}$ and the properties of $\varphi_{n}$ then imply $\left|\left|\left|x\right|\right|\right|_{n}\leq1$ for every $n$. This proves the right inclusion. For the first inclusion, we actually show that if $x\in X^N$ satisfies $\left|\left|\left|x\right|\right|\right|_{\infty} \leq\frac{1-\delta_{N}}{1+\delta_{N}}$, then $\Phi(x)=0$. To see this, fix any $n\geq N$; since the function $t\mapsto\frac{1-t}{1+t}$ is decreasing on $[0,1]$ and the sequence $\delta_{n}$ is decreasing too, we deduce $$\left|\left|\left|x\right|\right|\right|_{n}\leq\left|\left|\left|x\right|\right|\right|_{\infty} \leq\frac{1-\delta_{N}}{1+\delta_{N}}\leq\frac{1-\delta_{n}}{1+\delta_{n}}.$$ Hence $\left|\left|\left|x\right|\right|\right|_{(s),n}\leq1-\delta_{n}$ and $\varphi_{n}\left(\left|\left|\left|x\right|\right|\right|_{(s),n}\right)=0$ for every $n\geq N$. 
For the remaining values $n=0,\dots,N-1$ we use (b) in Fact \[Fact propr of |||.|||\] and condition $(\dagger)$: $$\left|\left|\left|x\right|\right|\right|_{(s),n}\leq(1+\delta_{n})\left|\left|\left|x\right|\right|\right|_{n}= (1+\delta_{n})\frac{1+\lambda_{n+1}\gamma_{n+1}}{1+\lambda_{n+1}\frac{1+\gamma_{n+1}}{2}} \cdot\left|\left|\left|x\right|\right|\right|_{n+1}$$ $$\leq(1-\delta_{n})\left|\left|\left|x\right|\right|\right|_{n+1}\leq1-\delta_{n};$$ hence $\varphi_{n}\left(\left|\left|\left|x\right|\right|\right|_{(s),n}\right)=0$ for $n=0,\dots,N-1$ too. This implies $\Phi(x)=0$ and proves the first inclusion. Taking in particular $N=0$, we see that $\left\{ \Phi\leq1\right\}$ is a bounded neighborhood of the origin in $(X,\left|\left|\left|\cdot\right|\right|\right|_{\infty})$. Since it is clearly convex and symmetric, we deduce that $\left\{ \Phi\leq1\right\} $ is a BCSB relative to $\left|\left|\left|\cdot\right|\right|\right|_{\infty}$. Hence $\left|\left|\left|\cdot\right|\right|\right|$ is a norm on $X$, equivalent to $\left|\left|\left|\cdot\right|\right|\right|_{\infty}$. The fact that $\left|\left|\left|\cdot\right|\right|\right|$ is equivalent to the original norm $\left\Vert \cdot\right\Vert$ follows immediately from the case $N=0$ in the second assertion, which we now prove. Fix $N\geq 0$; in order to estimate the distortion between $\left|\left|\left|\cdot\right|\right|\right|$ and $\left\Vert \cdot\right\Vert $ on $X^N$, we show that, on $X^N$, $\left|\left|\left|\cdot\right|\right|\right|$ is close to $\left|\left|\left|\cdot\right|\right|\right|_{\infty}$, that $\left|\left|\left|\cdot\right|\right|\right|_{\infty}$ is close to $\left|\left|\left|\cdot\right|\right|\right|_{N}$ and finally that $\left|\left|\left|\cdot\right|\right|\right|_{N}$ is close to $\left\Vert \cdot\right\Vert $. 
First, passing to the associated Minkowski functionals, the above inclusions yield $$(*)\qquad\left|\left|\left|\cdot\right|\right|\right|_{\infty}\leq \left|\left|\left|\cdot\right|\right|\right|\leq \frac{1+\delta_{N}}{1-\delta_{N}}\left|\left|\left|\cdot\right|\right|\right|_{\infty} \qquad\text{on }X^{N}.$$ Next, we compare $\left|\left|\left|\cdot\right|\right|\right|_{\infty}$ with $\left|\left|\left|\cdot\right|\right|\right|_{N}$. Of course $\left|\left|\left|\cdot\right|\right|\right|_{N}\leq\left|\left|\left|\cdot\right|\right|\right|_{\infty}$ and by property (b) in Fact \[Fact propr of |||.|||\] already used above we also have $\left|\left|\left|\cdot\right|\right|\right|_{n}\leq\left|\left|\left|\cdot\right|\right|\right|_{N}$ for $n\leq N$. We thus fix $n>N$ and observe $$\left|\left|\left|\cdot\right|\right|\right|_{n}:=C\prod_{i=1}^{n}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)\cdot\left\Vert \cdot\right\Vert _{n}\leq$$ $$\prod_{i=N+1}^{n}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)\cdot C\cdot\prod_{i=1}^{N}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)\cdot\left\Vert \cdot\right\Vert _{N}=$$ $$\prod_{i=N+1}^{n}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)\cdot\left|\left|\left|\cdot\right|\right|\right|_{N}\leq\prod_{i=N+1}^{\infty}\left(1+\lambda_{i}\right)\cdot\left|\left|\left|\cdot\right|\right|\right|_{N}.$$ This yields $$(*)\qquad\left|\left|\left|\cdot\right|\right|\right|_{N}\leq\left|\left|\left|\cdot\right|\right|\right|_{\infty}\leq\prod_{i=N+1}^{\infty}\left(1+\lambda_{i}\right)\cdot\left|\left|\left|\cdot\right|\right|\right|_{N}\qquad\text{on }X^{N}.$$ Finally, we compare $\left|\left|\left|\cdot\right|\right|\right|_{N}$ with $\left\Vert \cdot\right\Vert _{0}$. 
The subspaces $X^{N}$ are decreasing, so (\[eq: rel on tails\]) implies $\left\Vert \cdot\right\Vert =\prod_{i=1}^{N}\left(1+\lambda_{i}\gamma_{i}\right)\cdot\left\Vert \cdot\right\Vert _{N}$ on $X^{N}$; hence $$\left\Vert \cdot\right\Vert =\prod_{i=1}^{N}\left(1+\lambda_{i}\gamma_{i}\right)\cdot\prod_{i=1}^{\infty}\frac{1+\lambda_{i}\frac{1+\gamma_{i}}{2}}{1+\lambda_{i}\gamma_{i}}\cdot\prod_{i=1}^{N}\left(1+\lambda_{i}\frac{1+\gamma_{i}}{2}\right)^{-1}\cdot\left|\left|\left|\cdot\right|\right|\right|_{N}$$ $$=\prod_{i=N+1}^{\infty}\frac{1+\lambda_{i}\frac{1+\gamma_{i}}{2}}{1+\lambda_{i}\gamma_{i}}\cdot\left|\left|\left|\cdot\right|\right|\right|_{N}.$$ This implies in particular $$(*)\qquad\left|\left|\left|\cdot\right|\right|\right|_{N}\leq\left\Vert \cdot\right\Vert \leq\prod_{i=N+1}^{\infty}\left(1+\lambda_{i}\right)\cdot\left|\left|\left|\cdot\right|\right|\right|_{N}\qquad\text{on }X^{N};$$ combining the $(*)$ inequalities concludes the proof of the lemma. The estimate of the distortion in the particular case $N=0$ is in fact shorter than the general case given above. In fact, property (\[eq: equiv norm\]) obviously implies $\left\Vert \cdot\right\Vert _{n}\leq\left\Vert \cdot\right\Vert \leq\prod_{i=1}^{n}(1+\lambda_{i})\cdot\left\Vert \cdot\right\Vert _{n}$. It easily follows that for every $n$ $$\prod_{i=1}^{\infty}\left(1+\lambda_{i}\right)^{-1}\cdot\left\Vert \cdot\right\Vert \leq \left|\left|\left|\cdot\right|\right|\right|_{n}\leq \prod_{i=1}^{\infty}\left(1+\lambda_{i}\right)\cdot\left\Vert \cdot\right\Vert;$$ it is then sufficient to combine this with the first of the $(*)$ inequalities. We finally check the regularity of $\left|\left|\left|\cdot\right|\right|\right|$. \[lem:smooth norm\] The norm $\left|\left|\left|\cdot\right|\right|\right|$ is $C^{k}$-smooth. We first show that for every $x$ in the set $\left\{ \Phi<2\right\} $ there is a neighborhood $\mathcal{U}$ of $x$ (in $X$) where the function $\Phi$ is expressed by a finite sum. 
We have already seen in the proof of Lemma \[lem:||| is equiv norm\] that $\Phi=0$ in a neighborhood of $0$, so the assertion is true for $x=0$; hence we can fix $x\neq0$ such that $\Phi(x)<2$. Observe that clearly the properties of $\varphi_{n}$ imply $\varphi_{n}(1+\delta_{n})\geq2$; thus $x$ satisfies $\left|\left|\left|x\right|\right|\right|_{n}\leq\left|\left|\left|x\right|\right|\right|_{(s),n}\leq1+\delta_{n}$ for every $n$. Denote by $c:=\frac{1}{2}\prod_{i=1}^{\infty}(1+\lambda_{i})^{-1}$ and choose $n_{0}$ such that $\left\Vert P^{n}x\right\Vert \leq\frac{c}{2}\cdot\left\Vert x\right\Vert $ for every $n\geq n_{0}$ (this is possible since $P^{n}x\rightarrow0$). Next, fix $\varepsilon>0$ small so that $\frac{c}{2}+K\varepsilon\leq(1-\varepsilon)c$ and $(1+\varepsilon)(1-\delta_{n_{0}})\leq1$, and let $\mathcal{U}$ be the following neighborhood of $x$: $$\mathcal{U}:=\left\{ y\in X:\left\Vert y-x\right\Vert <\varepsilon\left\Vert x\right\Vert \text{ and }\left|\left|\left|y\right|\right|\right|_{n_{0}}<(1+\varepsilon)\left|\left|\left|x\right|\right|\right|_{n_{0}}\right\} .$$ Clearly for $y\in\mathcal{U}$ we have $\left\Vert x\right\Vert \leq\frac{1}{1-\varepsilon}\left\Vert y\right\Vert $; thus for $y\in\mathcal{U}$ and $n\geq n_{0}$ we have $$\left\Vert P^{n}y\right\Vert \leq\left\Vert P^{n}y-P^{n}x\right\Vert +\left\Vert P^{n}x\right\Vert \leq K\varepsilon\left\Vert x\right\Vert +\frac{c}{2}\cdot\left\Vert x\right\Vert \leq(1-\varepsilon)c\left\Vert x\right\Vert \leq c\left\Vert y\right\Vert .$$ Hence (a) of Fact \[Fact propr of |||.|||\] implies that $\left|\left|\left|y\right|\right|\right|_{n}=\frac{1+\lambda_{n}\frac{1+\gamma_{n}}{2}}{1+\lambda_{n}}\left|\left|\left|y\right|\right|\right|_{n-1}$ for every $n\geq n_{0}$ and $y\in\mathcal{U}$ (let us explicitly stress the crucial fact that $n_{0}$ does not depend on $y\in\mathcal{U}$). 
We have $\left|\left|\left|y\right|\right|\right|_{n_{0}}<(1+\varepsilon)\left|\left|\left|x\right|\right|\right|_{n_{0}}\leq(1+\varepsilon)(1+\delta_{n_{0}})$; using this bound and the previous choices of the parameters (in particular we use twice $(\dagger)$ and twice the fact that $\delta_{n}$ is decreasing), for every $n\geq n_{0}+2$ and $y\in\mathcal{U}$ we estimate $$\left|\left|\left|y\right|\right|\right|_{(s),n}\leq(1+\delta_{n})\left|\left|\left|y\right|\right|\right|_{n}=(1+\delta_{n})\prod_{i=n_{0}+1}^{n}\frac{1+\lambda_{i}\frac{1+\gamma_{i}}{2}}{1+\lambda_{i}}\cdot\left|\left|\left|y\right|\right|\right|_{n_{0}}$$ $$\leq(1+\delta_{n})\prod_{i=n_{0}+1}^{n}\frac{1+\lambda_{i}\frac{1+\gamma_{i}}{2}}{1+\lambda_{i}}\cdot(1+\varepsilon)(1+\delta_{n_{0}})\overset{(\dagger)}{\leq}(1+\delta_{n})\prod_{i=n_{0}+2}^{n}\frac{1+\lambda_{i}\frac{1+\gamma_{i}}{2}}{1+\lambda_{i}}\cdot(1+\varepsilon)(1-\delta_{n_{0}})$$ $$\leq(1+\delta_{n})\prod_{i=n_{0}+2}^{n}\frac{1+\lambda_{i}\frac{1+\gamma_{i}}{2}}{1+\lambda_{i}}\leq(1+\delta_{n-1})\frac{1+\lambda_{n}\frac{1+\gamma_{n}}{2}}{1+\lambda_{n}}\cdot\prod_{i=n_{0}+2}^{n-1}\frac{1+\lambda_{i}\frac{1+\gamma_{i}}{2}}{1+\lambda_{i}}$$ $$\leq(1+\delta_{n-1})\frac{1+\lambda_{n}\frac{1+\gamma_{n}}{2}}{1+\lambda_{n}}\overset{(\dagger)}{\leq}1-\delta_{n-1}\leq1-\delta_{n}.$$ It follows that $\varphi_{n}\left(\left|\left|\left|y\right|\right|\right|_{(s),n}\right)=0$ for $n\geq n_{0}+2$ and $y\in\mathcal{U}$, hence $$\Phi=\sum_{n=0}^{n_{0}+2}\varphi_{n}\left(\left|\left|\left|\cdot\right|\right|\right|_{(s),n}\right)\qquad\text{on }\mathcal{U}.$$ This obviously implies that $\Phi$ is $C^{k}$-smooth on the set $\left\{ \Phi<2\right\} $ and in particular $\left\{ \Phi<2\right\} $ is an open set. Concerning the regularity of $\Phi$, we also observe here that $\Phi$ is lower semi-continuous on $X$ (this follows immediately from the fact that $\Phi$ is the sum of a series of positive continuous functions). 
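Before the implicit-function step, here is a toy illustration of the gluing mechanism; everything concrete in it (a single piecewise-linear $\varphi$, the Euclidean setting, the bisection) is an assumption for illustration, not the construction above. The point is that the Minkowski functional $\inf\left\{ t>0:\Phi(t^{-1}x)\leq1\right\}$ of $\left\{ \Phi\leq1\right\}$ is computable by bisection whenever $\Phi(x/t)$ is nonincreasing in $t$.

```python
def phi(v):
    """Toy building block: convex, vanishes for Euclidean norm <= 1/2,
    equals 1 at norm 1 (playing the role of phi_n composed with a norm)."""
    s = sum(c * c for c in v) ** 0.5
    return max(0.0, 2.0 * (s - 0.5))

def minkowski(Phi, x, lo=1e-9, hi=1e9, iters=100):
    """Minkowski functional inf{t > 0 : Phi(x/t) <= 1}, located by bisection;
    assumes Phi(x/t) is nonincreasing in t, as for the Phi in the text."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if Phi([c / mid for c in x]) <= 1:
            hi = mid
        else:
            lo = mid
    return hi
```

With this single-term $\Phi$ the set $\left\{ \Phi\leq1\right\}$ is exactly the Euclidean unit ball, so the functional recovers the Euclidean norm (it returns $5$ on the vector $(3,4)$).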
The last step consists in applying the Implicit Function theorem (see e.g. [@HJ; @book], Theorem 1.87) to deduce the $C^{k}$-smoothness of $\left|\left|\left|\cdot\right|\right|\right|$ from that of $\Phi$; this argument is quite well known, but also short, so we present it. The set $$V:=\left\{ (x,\rho)\in\left(X\backslash\left\{ 0\right\} \right)\times(0,\infty):\rho^{-1}\cdot x\in\left\{ \Phi<2\right\} \right\}$$ is open in $X\times(0,\infty)$ and the function $\Psi:V\rightarrow\mathbb{R}$ defined by $\Psi(x,\rho):=\Phi(\rho^{-1}\cdot x)$ is $C^{k}$-smooth on $V$. We notice that for every $h\in X\backslash\left\{ 0\right\} $ there is a unique $\rho>0$ such that $(h,\rho)\in V$ and $\Psi(h,\rho)=1$; moreover, $\rho=\left|\left|\left|h\right|\right|\right|$. In fact the functions $\varphi_{n}$ are strictly increasing on the set where they are positive, so $t\mapsto\Phi(th)$ is strictly increasing where it is positive; hence there is at most one $\rho$ as above. Also, $\left|\left|\left|h\right|\right|\right|=\inf\left\{ t>0:\Phi(t^{-1}h)\leq1\right\} $, so for every $\varepsilon>0$ we have $\Phi\left(\frac{1}{\left|\left|\left|h\right|\right|\right|+\varepsilon}h\right)\leq1$; as $\Phi$ is lower semi-continuous, we deduce $\Phi\left(\left|\left|\left|h\right|\right|\right|^{-1}h\right)\leq1$. If we had $\Phi\left(\left|\left|\left|h\right|\right|\right|^{-1}h\right)<1$, then from the continuity of $\Phi$ on $\left\{ \Phi<2\right\} $ we would deduce $\Phi\left(\frac{1}{\left|\left|\left|h\right|\right|\right|-\varepsilon}h\right)\leq1$ for $\varepsilon>0$ small; however this contradicts $\left|\left|\left|h\right|\right|\right|$ being the infimum. Hence $\Phi\left(\left|\left|\left|h\right|\right|\right|^{-1}h\right)=1$ and in particular the unique $\rho$ as above is $\rho=\left|\left|\left|h\right|\right|\right|$.
In other words, the equation $\Psi=1$ on $V$ globally defines a unique implicit function on $X\backslash\left\{ 0\right\} $, which is given by $\rho(h)=\left|\left|\left|h\right|\right|\right|$. Since $$D_{2}\Psi(h,\rho)=\frac{-1}{\rho^{2}}\Phi'(\rho^{-1}h)h=\frac{-1}{\rho^{2}}\sum_{n\geq0}\varphi_{n}'\left(\left|\left|\left|\rho^{-1}h\right|\right|\right|_{(s),n}\right)\left|\left|\left|h\right|\right|\right|_{(s),n}$$ (where $D_{2}\Psi$ denotes the partial derivative of $\Psi$ in its second variable), we have $$D_{2}\Psi(h,\left|\left|\left|h\right|\right|\right|)=\frac{-1}{\left|\left|\left|h\right|\right|\right|^{2}}\sum_{n\geq0}\varphi_{n}'\left(\frac{1}{\left|\left|\left|h\right|\right|\right|}\left|\left|\left|h\right|\right|\right|_{(s),n}\right)\left|\left|\left|h\right|\right|\right|_{(s),n}.$$ The condition $\Phi\left(\left|\left|\left|h\right|\right|\right|^{-1}h\right)=1$ implies $\varphi_{n}\left(\frac{1}{\left|\left|\left|h\right|\right|\right|}\left|\left|\left|h\right|\right|\right|_{(s),n}\right)>0$ for some $n$, hence $\varphi_{n}'\left(\frac{1}{\left|\left|\left|h\right|\right|\right|}\left|\left|\left|h\right|\right|\right|_{(s),n}\right)>0$ too and $D_{2}\Psi(h,\left|\left|\left|h\right|\right|\right|)\neq0$ on $X\backslash\left\{ 0\right\} $. Thus the Implicit Function theorem yields that the implicitly defined function shares the same regularity as $\Psi$, i.e. $\left|\left|\left|\cdot\right|\right|\right|$ is $C^{k}$-smooth on $X\backslash\left\{ 0\right\} $. \[Proof of Theorem \[thm:Ck norm improving\]\] Fix a separable Banach space as in the statement and a sequence $\left\{ \varepsilon_{N}\right\} _{N\geq0}$ of positive numbers. 
We find a sequence $\left\{ \lambda_{i}\right\} _{i\geq1}\subseteq(0,\infty)$ such that $$\prod_{i=N+1}^{\infty}(1+\lambda_{i})<1+\varepsilon_{N}$$ for every $N\geq0$; next, we find a decreasing sequence $\left\{ \delta_{N}\right\} _{N\geq0}$, $\delta_{N}\searrow0$, that satisfies $(\dagger)$ and such that $$\frac{1+\delta_{N}}{1-\delta_{N}}\cdot\prod_{i=N+1}^{\infty}(1+\lambda_{i})\leq1+\varepsilon_{N}$$ for every $N\geq0$. We then apply the renorming procedure described in this section with these parameters $\left\{ \lambda_{i}\right\} _{i\geq1}$ and $\left\{ \delta_{N}\right\} _{N\geq0}$ and we obtain a $C^{k}$-smooth norm $\left|\left|\left|\cdot\right|\right|\right|$ on $X$ that satisfies $$(1-\varepsilon_{N})\cdot\left\Vert \cdot\right\Vert \leq\prod_{i=N+1}^{\infty}\left(1+\lambda_{i}\right)^{-1}\cdot\left\Vert \cdot\right\Vert \leq\left|\left|\left|\cdot\right|\right|\right|\leq\frac{1+\delta_{N}}{1-\delta_{N}}\cdot\prod_{i=N+1}^{\infty}\left(1+\lambda_{i}\right)\cdot\left\Vert \cdot\right\Vert \leq(1+\varepsilon_{N})\cdot\left\Vert \cdot\right\Vert$$ on $X^{N}$ for every $N\geq0$; since these inequalities are obviously equivalent to $$\Bigl|\,\left|\left|\left|x\right|\right|\right|-\left\Vert x\right\Vert \,\Bigr|\leq\varepsilon_{N}\left\Vert x\right\Vert \qquad\text{for }x\in X^{N},$$ the proof is complete. Final remarks ============= In this short section we present some improvements of our main result in the particular case of polyhedral Banach spaces. Recall that a finite-dimensional Banach space $X$ is said to be *polyhedral* if its unit ball is a polyhedron, i.e. finite intersection of closed half-spaces; an infinite-dimensional Banach space $X$ is *polyhedral* if its finite-dimensional subspaces are polyhedral. 
It is proved in [@defoha-polyhedral] that if $X$ is a separable polyhedral Banach space, then every equivalent norm on $X$ can be approximated (uniformly on bounded sets) by a polyhedral norm (see Theorem 1.1 in [@defoha-polyhedral], where the approximation is stated in terms of closed, convex and bounded bodies). In analogy with our main result, it is natural to ask if this result can be improved in the sense that the approximation can be chosen to improve on the tail vectors. It is not difficult to see that if we replace the $C^{k}$-smooth norms $\left|\left|\left|\cdot\right|\right|\right|_{(s),n}$ with polyhedral norms $\left|\left|\left|\cdot\right|\right|\right|_{(p),n}$ (thus using Theorem 1.1 in [@defoha-polyhedral]) and we replace the $C^{\infty}$-smooth functions $\varphi_{n}$ with piecewise linear ones, the resulting norm $\left|\left|\left|\cdot\right|\right|\right|$ is still polyhedral. We thus have: Let $X$ be a polyhedral Banach space with a Schauder basis $\left\{ e_{i}\right\} _{i\geq1}$ and let $\left\Vert \cdot\right\Vert $ be any renorming of $X$. Then for every sequence $\left\{ \varepsilon_{N}\right\} _{N\geq0}$ of positive numbers, there is a polyhedral renorming $\left|\left|\left|\cdot\right|\right|\right|$ of $X$ such that for every $N\geq0$ $$\Bigl|\,\left|\left|\left|x\right|\right|\right|-\left\Vert x\right\Vert \,\Bigr|\leq\varepsilon_{N}\left\Vert x\right\Vert \qquad\text{for }x\in X^{N}.$$ We say that $\|\cdot\|$ *depends locally on finitely many coordinates* if for each $x\in S_{X}$ there exists an open neighbourhood $O$ of $x$, a finite set $\{x_{1}^{*},\dots,x_{k}^{*}\}\subset X^{*}$ and a function $f:\mathbb{R}^{k}\rightarrow\mathbb{R}$ such that $\|y\|=f(x_{1}^{*}(y),\dots,x_{k}^{*}(y))$ for $y\in O$. It was also shown in [@defoha-polyhedral] that if $X$ is a separable polyhedral space, then every equivalent norm on $X$ can be approximated by a $C^{\infty}$-smooth norm that depends locally on finitely many coordinates.
By inspection of our argument it follows that if we use such approximations in our proof, the resulting $C^{\infty}$-smooth norm $\left|\left|\left|\cdot\right|\right|\right|$ will also depend locally on finitely many coordinates. Explicitly, we obtain: Let $X$ be a polyhedral Banach space with a Schauder basis $\left\{ e_{i}\right\} _{i\geq1}$ and let $\left\Vert \cdot\right\Vert $ be any renorming of $X$. Then for every sequence $\left\{ \varepsilon_{N}\right\} _{N\geq0}$ of positive numbers, there is a $C^{\infty}$-smooth renorming $\left|\left|\left|\cdot\right|\right|\right|$ of $X$ that locally depends on finitely many coordinates and such that for every $N\geq0$ $$\Bigl|\,\left|\left|\left|x\right|\right|\right|-\left\Vert x\right\Vert \,\Bigr|\leq\varepsilon_{N}\left\Vert x\right\Vert \qquad\text{for }x\in X^{N}.$$ To conclude this note, we mention that we do not know whether our main result can be generalized by replacing the Schauder basis with a Markushevich basis. The argument presented here is not directly applicable, since, for example, we have made use of the canonical projections associated with the basis and their uniform boundedness. **Acknowledgments.** The authors wish to thank the referee for a careful reading of our manuscript and for pointing out to us the above question. V. Bible, [*Using boundaries to find smooth norms*]{}, Studia Math. [**224**]{} (2014), 169–181. V. Bible and R.J. Smith, [*Smooth and polyhedral approximations in Banach spaces*]{}, J. Math. Anal. Appl. [**435**]{} (2016), 1262–1272. R. Deville, [*Geometrical implications of the existence of very smooth bump functions in Banach spaces*]{}, Israel J. Math. [**6**]{} (1989), 1–22. R. Deville, V.P. Fonf and P. Hájek, [*Analytic and $C^k$ approximations of norms in separable Banach spaces*]{}, Studia Math. [**120**]{} (1996), 61–74. R. Deville, V.P. Fonf and P. Hájek, [*Analytic and polyhedral approximation of convex bodies in separable polyhedral Banach spaces*]{}, Israel J. Math. [**105**]{} (1998), 139–154. R. Deville, G.
Godefroy and V. Zizler, [*Smoothness and renormings in Banach spaces*]{}, Pitman Monographs and Surveys in Pure and Applied Mathematics, 64, 1993. A. J. Guirao, V. Montesinos and V. Zizler, [*Open Problems in the Geometry and Analysis of Banach Spaces*]{}, Springer 2016. P. Hájek, [*Smooth norms that depend locally on finitely many coordinates*]{}, Proc. Amer. Math. Soc. [**123**]{} (1995), 3817–3821. P. Hájek and R. Haydon, [*Smooth norms and approximation in Banach spaces of the type $C(K)$*]{}, Q. J. Math. [**58**]{} (2007), 221–228. P. Hájek and J. Talponen, [*Smooth approximations of norms in separable Banach spaces*]{}, Q. J. Math. [**65**]{} (2014), 957–969. P. Hájek and M. Johanis, [*Smooth Analysis in Banach Spaces*]{}, De Gruyter, Berlin 2014. P. Hájek and A. Procházka, [*$C^k$-smooth approximations of LUR norms*]{}, Trans. Amer. Math. Soc. [**366**]{} (2014), 1973–1992. R. Haydon, [*Smooth functions and partitions of unity on certain Banach spaces*]{}, Quart. J. Math. [**47**]{} (1996), 455–468. R. Haydon, [*Trees in renorming theory*]{}, Proc. London Math. Soc. [**78**]{} (1999), 541–584. R.P. Maleev and S. Troyanski, [*Smooth norms in Orlicz spaces*]{}, Canad. J. Math. [**34**]{} (1991), 74–82. J. Pechanec, J.H.M. Whitfield and V. Zizler, [*Norms locally dependent on finitely many coordinates*]{}, An. Acad. Brasil Ci. [**53**]{} (1981), 415–417. [^1]: Research of the first author was supported in part by GAČR 16-07378S, RVO: 67985840. Research of the second author was supported in part by the Università degli Studi di Milano (Italy) and in part by the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM) of Italy.
--- abstract: 'Person re-identification (reID) aims to match person images to retrieve the ones with the same identity. This is a challenging task, as the images to be matched are generally semantically misaligned due to the diversity of human poses and capture viewpoints, incompleteness of the visible bodies (due to occlusion), [*etc*]{}. In this paper, we propose a framework that drives the reID network to learn semantics-aligned feature representation through delicate supervision designs. Specifically, we build a Semantics Aligning Network (SAN) which consists of a base network as encoder (SA-Enc) for re-ID, and a decoder (SA-Dec) for *reconstructing/regressing the densely semantics aligned full texture image*. We jointly train the SAN under the supervisions of person re-identification and aligned texture generation. Moreover, at the decoder, besides the reconstruction loss, we add Triplet ReID constraints over the feature maps as the perceptual losses. The decoder is discarded in inference and thus our scheme is computationally efficient. Ablation studies demonstrate the effectiveness of our design. We achieve state-of-the-art performance on the benchmark datasets CUHK03, Market1501, MSMT17, and the partial person reID dataset Partial REID. Code for our proposed method is available at: <https://github.com/microsoft/Semantics-Aligned-Representation-Learning-for-Person-Re-identification>.' author: - | Xin Jin$^{1}$[^1] Cuiling Lan$^{2}$[^2] Wenjun Zeng$^{2}$ Guoqiang Wei$^{1}$ Zhibo Chen$^{1\dag}$\ University of Science and Technology of China$^{1}$ Microsoft Research Asia$^{2}$\ [{jinxustc,wgq7441}@mail.ustc.edu.cn, {culan,wezeng}@microsoft.com, chenzhibo@ustc.edu.cn]{} bibliography: - 'reference.bib' title: 'Semantics-Aligned Representation Learning for Person Re-identification' --- Introduction ============ Person re-identification (reID) aims to identify/match persons in different places, times, or camera views.
There are large variations in terms of human poses, capture viewpoints, and incompleteness of the visible bodies (due to occlusion). These result in *semantics misalignment* across 2D images, which makes reID challenging [@shen2015person; @varior2016siamese; @subramaniam2016deep; @su2017pose; @zheng2017pose; @zhang2017alignedreid; @yao2017deep; @li2017learning; @zhao2017spindle; @wei2017glad; @zheng2018pedestrian; @ge2018fd; @suh2018part; @qian2018pose; @zhang2019DSA]. *Semantics misalignment* can be interpreted from two aspects. (1) Spatial semantics misalignment: the same spatial position across images may correspond to different semantics of the human body or even different objects. As the example in Figure \[fig:motivation\] (a) shows, the spatial position $A$, which corresponds to a person's leg in the first image, corresponds to the abdomen in the second image. (2) Inconsistency of visible body regions/semantics: since a person is captured through a 2D projection, only a portion of the 3D surface of a person is visible/projected in an image. The visible body regions/semantics across images are not consistent. As shown in Figure \[fig:motivation\](b), the front side of a person is visible in one image but invisible in another. ![Challenges in person reID: (a) Spatial misalignment; (b) Inconsistency of the visible body regions/semantics.](fig1.pdf){width="1.0\linewidth"} \[fig:motivation\] **Alignment:** Deep learning methods can deal with such diversities and misalignment to some extent, but not sufficiently. In recent years, many approaches explicitly exploit human pose/landmark information to achieve coarse alignment, and they have demonstrated superior performance for person reID [@su2017pose; @zheng2017pose; @yao2017deep; @li2017learning; @zhao2017spindle; @wei2017glad; @suh2018part]. During inference, these part detection sub-networks are usually required, which increases the computational complexity.
Besides, the body-part alignment is coarse and there is still spatial misalignment within the parts [@zhang2019DSA]. To achieve fine-granularity spatial alignment, based on estimated dense semantics [@guler2018densepose], Zhang [*et al*. ]{}warp the input person image to a canonical UV coordinate system to have densely semantics aligned images as inputs for reID [@zhang2019DSA]. However, the invisible body regions result in many holes in the warped images and thus the inconsistency of visible body regions across images. How to better solve the dense semantics misalignment is still an open problem. ![Illustration of the proposed Semantics Aligning Network (SAN), which consists of a base network as encoder (SA-Enc) and a decoder sub-network (SA-Dec). The reID feature vector $\rm \textbf{f}$ is obtained by average pooling the feature map $f_{e4}$ of the SA-Enc, followed by the reID losses of $\mathcal{L}_{ID}$ and $\mathcal{L}_{Triplet}$. To encourage the encoder to learn semantically aligned features, the SA-Dec follows, which regresses the densely semantically *aligned full* texture image with the pseudo groundtruth supervision $\mathcal{L}_{Rec.}$. The pseudo groundtruth generation is described in Sec. \[3.1\] and is not shown here. At the decoder, the Triplet ReID constraints $\mathcal{L}_{TR}$ are added as the high-level perceptual metric. We use ResNet-50 with four residual blocks as our SA-Enc. In inference, the SA-Dec is discarded.](fig2.pdf){width="1.0\linewidth"} \[fig:flowchart\] **Our work:** We intend to fully address the semantics misalignment problems in both aspects. We achieve this by proposing a simple yet powerful Semantics Aligning Network (SAN). Figure \[fig:flowchart\] shows the overall framework of the SAN, which introduces an aligned texture generation sub-task, with densely semantics aligned texture images (see examples in Figure \[fig:texture\]) as supervision. Specifically, SAN consists of a base network as encoder (SA-Enc), and a decoder sub-network (SA-Dec).
The SA-Enc can be any baseline network used for person reID ([*e*.*g*. ]{}ResNet-50 [@he2016deep]), which outputs a feature map $f_{e4}$ of size $h\times w \times c$. The reID feature vector $\rm \textbf{f} \in \mathbb{R}^c$ is then obtained by average pooling the feature map $f_{e4}$, followed by the reID losses. To encourage the SA-Enc to learn semantically aligned features, the SA-Dec is introduced and used to regress/generate the densely semantically aligned full texture image (also referred to as texture image for short) with pseudo groundtruth supervision. We exploit a synthesized dataset for learning pseudo groundtruth texture image generation. This framework enjoys the benefit of dense semantics alignment without increasing the complexity of inference, since the decoder SA-Dec is discarded in inference. Our main contributions are summarized as follows. - We propose a simple yet powerful framework for solving the misalignment challenge in person reID without increasing computational cost in inference. - A semantics alignment constraint is delicately introduced by empowering the encoded feature map with *aligned full* texture generation capability. - At the SA-Dec, besides the reconstruction loss, we propose Triplet ReID constraints over the feature maps as the perceptual metric. - There is no groundtruth aligned texture image for the person reID datasets. We address this by generating pseudo groundtruth texture images by leveraging synthesized data with person image and aligned texture image pairs (see Figure \[fig:texture\]). Our method achieves the state-of-the-art performance on the benchmark datasets CUHK03 [@li2014deepreid], Market-1501 [@zheng2015scalable], MSMT17 [@wei2018person], and Partial REID [@zheng2015partial]. Related Work {#sec2} ============ Person reID based on deep neural networks has made great progress in recent years.
Due to the variations in poses, viewpoints, incompleteness of the visible bodies (due to occlusion), [*etc*.]{}, across the images, semantics misalignment is still one of the key challenges. **Alignment with Pose/Part Cues for ReID:** To address the spatial semantics misalignment, most of the previous approaches make use of external cues such as pose/part [@li2017learning; @yao2017deep; @zhao2017spindle; @kalayeh2018human; @zheng2017pose; @su2017pose; @suh2018part]. Human landmark (pose) information can help align body regions across images. Zhao [*et al*. ]{}[@zhao2017spindle] propose a human body region guided Spindle Net, where a body region proposal sub-network (trained with the human pose dataset) is used to extract the body regions, [*e*.*g*.]{}, head-shoulder, arm region. The semantic features from different body regions are separately captured, so the body part features can be aligned across images. Kalayeh [*et al*. ]{}[@kalayeh2018human] integrate a human semantic parsing branch in their network for generating probability maps associated to different semantic regions of human body, [*e*.*g*.]{}, head, upper-body. Based on the probability maps, the features from different semantic regions of human body are aggregated separately to have part aligned features. Qian [*et al*. ]{}[@qian2018pose] propose to make use of a GAN model to synthesize realistic person images of eight canonical poses for matching. However, these approaches usually require pose/part detection or image generation sub-networks, and extra computational cost in inference. Moreover, the alignment based on pose is coarse without considering the finer grained alignment within a part across images. [Zhang [*et al*. ]{}[@zhang2019DSA] exploit the dense semantics from DensePose [@alp2018densepose] rather than the coarse pose for reID.
Their network consists of two streams in training: a main stream takes the original image as input while the other stream learns features from the warped images for regularizing the feature learning of the main stream. However, the invisible body regions result in many holes in the warped images and inconsistency of visible body regions across images, which could hurt the learning efficiency. Moreover, there is a lack of more direct constraints to enforce the alignment. The design of efficient frameworks for dense semantics alignment is still under-explored. In this paper, we propose an elegant framework which adds direct constraints to encourage dense semantics alignment in feature learning.]{} **Semantics Aligned Human Texture:** A human body could be represented by a 3D mesh ([*e*.*g*. ]{}Skinned Multi-Person Linear Model, SMPL [@loper2015smpl]) and a texture image [@varol2017learning; @hormann2007mesh] as illustrated in Figure \[fig:syn\]. Each position on the 3D body surface has a semantic identity (identified by a 2D coordinate (u,v) in the canonical UV space) and a texture representation ([*e*.*g*. ]{}RGB pixel value) [@guler2018densepose; @guler2017densereg]. A texture image on the UV coordinate system ([*i*.*e*.]{}, surface-based coordinate system) represents the *aligned full* texture of the 3D surface of the person. Note that the texture images across different persons are densely semantically aligned (see Figure \[fig:texture\]). In [@guler2018densepose], a dataset with labeled dense semantics ([*i*.*e*. ]{}DensePose) is established and a CNN-based system is designed to estimate DensePose from person images. Neverova [*et al*. ]{}[@neverova2018dense] and Wang [*et al*. ]{}[@wang2019re] leverage the aligned texture image to synthesize person image of another pose or view. Yao [*et al*. 
]{}[@yao2019densebody] propose to regress the 3D human body ((x,y,z) coordinates in 3D space) in the semantics aligned UV space, with the RGB person image as the input to the CNN. Different from all these works, we leverage the densely semantically aligned full texture image to address the misalignment problem in person reID. We use them as direct supervisions to drive the reID network to learn semantics aligned features. ![Examples of texture images (first row) and the corresponding synthesized person images with different poses, viewpoints, and backgrounds (second row). A texture image represents the full texture of the 3D human surface in a surface-based canonical coordinate system (UV space). Each position (u,v) corresponds to a unique semantic identity. For person images of different persons/poses/viewpoints (in the second row), their corresponding texture images are densely semantically aligned.](fig3.pdf){width="1.0\linewidth"} \[fig:texture\] The Semantics Aligning Network (SAN) {#sec3} ==================================== To address the cross image misalignment challenge caused by human pose, capturing viewpoint variations, and the incompleteness of the body surface (due to the occlusion when projecting 3D person to 2D person image), we propose a Semantics Aligning Network (SAN) for robust person reID, in which densely semantically aligned full texture images are taken as supervision to drive the learning of semantics aligned features. The proposed framework is shown in Figure \[fig:flowchart\]. It consists of a base network as encoder (SA-Enc) for reID, and a decoder sub-network (SA-Dec) (see Sec. \[3.2\]) for generating densely semantically aligned full texture image with supervision. This encourages the reID network to learn semantics aligned feature representation. 
Since there is no groundtruth texture image of 3D human surface for the reID datasets, we use our synthesized data based on [@varol2017learning] to train SAN (with reID supervisions removed), which is then used to generate pseudo groundtruth texture images for the reID datasets (see Sec. \[3.1\]). The reID feature vector $\rm \textbf{f}$ is obtained by average pooling the last layer feature map $f_{e4}$ of the SA-Enc, followed by the reID losses. The SA-Dec is added after the last layer of the SA-Enc to regress the densely semantically aligned texture image, with the (pseudo) groundtruth texture supervision. In the SA-Dec, Triplet ReID constraints are further incorporated at different layers/blocks as the high-level perceptual metric to encourage identity-preserving reconstruction. During inference, the SA-Dec is discarded. ![Illustration of the generation of a synthesized person image to form a (*person image*, *texture image*) pair. Given a texture image, a 3D mesh, a background image, and rendering parameters, we can obtain a 2D person image through rendering.](fig4.pdf){width="1.0\linewidth"} \[fig:syn\] Densely Semantically Aligned Texture Image {#3.1} ------------------------------------------ **Background:** The person texture image in the surface-based coordinate system (UV space) is widely used in the graphics field [@hormann2007mesh]. Texture images for different persons/viewpoints/poses are densely semantically aligned, as illustrated in Figure \[fig:texture\]. Each position (u,v) corresponds to a unique semantic identity on the texture image, [*e*.*g*.]{}, the pixel on the right bottom of the texture image corresponds to some semantics of a hand. Besides, a texture image contains all the texture of the full 3D surface of a person. In contrast, only a part of the surface texture is visible/projected on a 2D person image. **Motivation:** We intend to leverage such aligned texture images to drive the reID network to learn semantics aligned features.
For different input person images, the corresponding texture images are well semantics aligned. First, for the same spatial positions on different texture images, the semantics are the same. Second, for person images with different visible semantics/regions, their texture images are semantics consistent/aligned since each one contains the full texture/information of the 3D person surface. **Pseudo groundtruth Texture Images Generation:** For the images in the reID datasets, however, there are no groundtruth aligned full texture images. We propose to train the SAN using our synthesized data to enable the generation of a pseudo groundtruth texture image for each image in the reID datasets. We can leverage a CNN-based network to generate pseudo groundtruth texture images. In this work, we reuse the proposed SAN (with the reID supervisions removed) as the network (see Figure \[fig:flowchart\]), which we refer to as SAN-PG (Semantics Aligning Network for Pseudo Groundtruth Generation) for differentiation. Given an input person image, the SAN-PG outputs predicted texture image as the pseudo groundtruth. To train the SAN-PG, we synthesize a Paired-Image-Texture dataset (PIT dataset), based on SURREAL dataset [@varol2017learning], for the purpose of providing the image pairs, [*i*.*e*.]{}, the *person image* and *its texture image*. The texture image stores the RGB texture of the *full* person 3D surface. As illustrated in Figure \[fig:syn\], given a texture image, a 3D mesh/shape, and a background image, a 2D projection of a 3D person can be obtained by rendering [@varol2017learning]. We can control the pose and body form of the person, and projection viewpoint, through changing the parameters of 3D mesh/shape model ([*i*.*e*. ]{}SMPL [@loper2015smpl]) and the rendering parameters. Note that we do not include identity information in the PIT dataset. 
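As a toy illustration of why a 2D image covers only part of the surface texture (the source of the holes discussed above for warping-based approaches): given DensePose-style pixel-to-(u,v) correspondences, scattering the visible pixels into the canonical UV grid leaves the unobserved texels empty. This is a plain-Python sketch with hypothetical inputs and function names, not the actual rendering or DensePose pipeline:

```python
def warp_to_uv(image, uv_map, tex_size=4, fill=None):
    """Scatter visible image pixels into a UV texture grid.
    uv_map[(i, j)] = (u, v) in [0, 1) gives the surface coordinate of
    pixel (i, j); unmapped texels stay as `fill` (holes: invisible body)."""
    tex = [[fill] * tex_size for _ in range(tex_size)]
    for (i, j), (u, v) in uv_map.items():
        tex[int(v * tex_size)][int(u * tex_size)] = image[i][j]
    return tex

# 2x2 "image" and correspondences for two visible pixels; the other
# 14 texels of the 4x4 texture grid remain holes.
img = [["a", "b"], ["c", "d"]]
uv = {(0, 0): (0.1, 0.1), (1, 1): (0.9, 0.9)}
tex = warp_to_uv(img, uv)
print(tex[0][0], tex[3][3])  # a d
```

A full texture image, by contrast, has every texel defined, which is what the SAN uses as supervision.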
To generate the PIT dataset with paired person images and texture images, in particular, we use 929 (451 for female and 478 for male) raster-scanned texture maps provided by the SURREAL dataset [@varol2017learning] to generate the *person image* and *texture image* pairs. These texture images are aligned with the SMPL default two-dimensional UV coordinate space (UV space). The same uv coordinate value corresponds to the same semantics. We generate 9,290 different meshes of diverse poses/shapes/viewpoints, by using SMPL body model [@loper2015smpl] parameters inferred by HMR [@kanazawa2018end] from the person images of the COCO dataset [@lin2014microsoft]. For each texture map, we assign 10 different meshes and render these 3D meshes with the texture image by Neural Render [@kato2018neural]. Then we obtain in total 9,290 different synthesized (*person image*, *texture image*) pairs. To simulate real-world scenes, the background images for rendering are randomly sampled from COCO dataset [@lin2014microsoft]. Each synthetic person image is centered on a person with resolution 256$\times$128. The resolution of the texture images is 256$\times$256. **Discussion:** The texture images which we use for supervisions have three major advantages. 1) They are spatially aligned in terms of the dense semantics of a person surface and thus can guide the reID network to learn semantics aligned representation. 2) A texture image containing the *full* 3D surface of a person can guide the reID network to learn more comprehensive representation of a person. 3) They represent the textures of the human body surface and thus naturally eliminate the interference of diverse background scenes. There are also some limitations of the current pseudo groundtruth texture image generation process. 1) There is a domain gap between synthetic 2D images (in the PIT dataset) and real-world captured images where the synthetic person is not very realistic. 
2) The number of texture images provided by SURREAL [@varol2017learning] is not large ([*i*.*e*. ]{}929 in total), which may constrain the diversity of the data in our synthesized dataset. 3) On SURREAL, all faces in the texture image are replaced by an average face of either a man or a woman [@lin2014microsoft]. We leave it as future work to address these limitations. Even with such limitations, our scheme achieves significant performance improvement over the baseline on person reID. SAN and Optimization {#3.2} -------------------- As illustrated in Figure \[fig:flowchart\], the SAN consists of an encoder SA-Enc for person reID, and a decoder SA-Dec which enforces constraints over the encoder by requiring the encoded features to be able to predict/regress the semantically aligned full texture images. **SA-Enc:** We can use any baseline network used in person reID ([*e*.*g*. ]{}ResNet-50 [@sun2017beyond; @zhang2017alignedreid; @zhang2019DSA]) as the SA-Enc. In this work, we similarly use ResNet-50 and it consists of four residual blocks. The output feature map of the fourth block $f_{e4} \in \mathbb{R}^{ h \times w \times c}$ is spatially average pooled to get the feature vector (${\rm \textbf{f}} \in \mathbb{R}^c$), which is the reID feature for matching. For the purpose of reID, on the feature vector ${\rm \textbf{f}}$, we add the widely-used identification loss ($ID$ Loss) $\mathcal{L}_{ID}$, [*i*.*e*.]{}, the cross entropy loss for identification classification, and the ranking loss of triplet loss with batch hard mining [@hermans2017defense] ($Triplet$ Loss) $\mathcal{L}_{Triplet}$ as the loss functions in training. **SA-Dec:** To encourage the encoder to learn semantics aligned features, we add a decoder SA-Dec after the fourth block ($f_{e4}$) of the encoder to regress the densely semantically aligned texture images, supervised by the (pseudo) groundtruth texture images.
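As a minimal sketch (plain Python on a toy nested-list feature map; the actual $f_{e4}$ is a GPU tensor produced by ResNet-50), the spatial average pooling that turns $f_{e4}$ into the reID feature vector $\rm \textbf{f}$ can be written as:

```python
def global_avg_pool(feature_map):
    """Spatially average-pool an h x w x c feature map (nested lists)
    into a c-dimensional reID feature vector."""
    h = len(feature_map)
    w = len(feature_map[0])
    c = len(feature_map[0][0])
    pooled = [0.0] * c
    for i in range(h):
        for j in range(w):
            for k in range(c):
                pooled[k] += feature_map[i][j][k]
    return [v / (h * w) for v in pooled]

# Toy 2x2 feature map with c = 2 channels.
fmap = [[[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0], [7.0, 8.0]]]
print(global_avg_pool(fmap))  # [4.0, 5.0]
```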
A reconstruction loss $\mathcal{L}_{Rec.}$ is introduced to minimize the $L$1 differences between the generated texture image and its corresponding (pseudo) groundtruth texture image. **[Triplet ReID constraints at SA-Dec:]{}** Besides the capability of reconstructing the texture images optimized/measured by the $L$1 distance, we also expect the features in the decoder to inherit the capability of distinguishing different identities. Wang [*et al*. ]{}[@wang2019re] use a reID network as the perceptual supervision to generate person images, which judges whether the generated person image and the real image have the same identity. Different from [@wang2019re], considering that the features at each layer of the decoder are spatially semantically aligned across images, we measure the feature distance at each spatial position rather than on the final globally pooled feature. We introduce $Triplet$ $ReID$ constraints to minimize the $L$2 differences between the features of the same identity and maximize those of different identities. Specifically, for a sample $a$ in a batch, we can randomly select a positive sample $p$ (with the same identity) and a negative sample $n$. The Triplet ReID constraint/loss over the output feature map of the $l^{th}$ block of the SA-Dec is defined as $$\begin{aligned} \mathcal{L}_{TR}^l = \max\Big(\frac{1}{h_l\times w_l}||f_{dl}(x_{l}^{a})-f_{dl}(x_{l}^{p})||_{2}^{2} - \\ \frac{1}{h_l\times w_l}||f_{dl}(x_{l}^{a})-f_{dl}(x_{l}^{n})||_{2}^{2}+ m, 0\Big), \label{eq:2} \end{aligned}$$ where $h_l \times w_l$ is the resolution of the feature map with $c_l$ channels, and $f_{dl}(x_{l}^{a}) \in \mathbb{R}^{h_l \times w_l \times c_l}$ denotes the feature map of sample $a$. Here $||f_{dl}(x_{l}^{a})-f_{dl}(x_{l}^{p})||_{2}^{2} = \sum_{i=1}^{h_l} \sum_{j=1}^{w_l} || f_{dl}(x_{l}^{a})(i,j,:) - f_{dl}(x_{l}^{p})(i,j,:) ||_{2}^{2}$, where $f_{dl}(x_{l}^{a})(i,j,:)$ denotes the feature vector of $c_l$ channels at spatial position $(i,j)$.
The margin parameter $m$ is set to 0.3 experimentally. **Training Scheme:** There are two steps for training our proposed SAN framework for reID: *Step-1*, we train a network for the purpose of generating pseudo groundtruth texture images for any given input person image. For simplicity, we reuse a simplified SAN ([*i*.*e*.]{}, SAN-PG) which consists of the SA-Enc and SA-Dec, but with only the reconstruction loss $\mathcal{L}_{Rec.}$. We train the SAN-PG with our synthesized PIT dataset. The SAN-PG model is then used to generate pseudo groundtruth texture images for reID datasets (such as CUHK03 [@li2014deepreid]). *Step-2*, we train the SAN for both reID and aligned texture generation. The pre-trained weights of the SAN-PG are used to initialize the SAN. One alternative is to use only the reID dataset for training the SAN, where the pseudo groundtruth texture images are used for supervision and all the losses are added. The other strategy is to iteratively use the reID dataset and the synthesized PIT dataset during training. We find the latter solution gives superior results, because the groundtruth texture images of the synthesized PIT dataset have higher quality than those of the reID dataset. The overall loss $\mathcal{L}$ consists of the $ID$ Loss $\mathcal{L}_{ID}$, the $Triplet$ Loss $\mathcal{L}_{Triplet}$, the reconstruction loss $\mathcal{L}_{Rec.}$, and the $Triplet$ $ReID$ constraint $\mathcal{L}_{TR}$, [*i*.*e*.]{}, $\mathcal{L}$ = $\lambda_1\mathcal{L}_{ID}$ + $\lambda_2\mathcal{L}_{Triplet}$ + $\lambda_3\mathcal{L}_{Rec.}$ + $\lambda_4\mathcal{L}_{TR}$. For a batch of reID data, we experimentally set $\lambda_1$ to $\lambda_4$ as 0.5, 1.5, 1, 1. For a batch of synthesized data, $\lambda_1$ to $\lambda_4$ are set to 0, 0, 1, 0, where the reID losses and Triplet ReID constraints are not used.
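Under the definition above, the Triplet ReID constraint can be sketched in plain Python on toy nested-list feature maps (in practice this runs on GPU tensors; the function name is ours):

```python
def spatial_triplet_loss(fa, fp, fn, margin=0.3):
    """Triplet ReID constraint over an h_l x w_l x c_l feature map:
    hinge on the difference of spatially averaged squared L2 distances
    between anchor/positive and anchor/negative features."""
    h, w = len(fa), len(fa[0])

    def avg_sq_dist(x, y):
        total = 0.0
        for i in range(h):
            for j in range(w):
                total += sum((a - b) ** 2 for a, b in zip(x[i][j], y[i][j]))
        return total / (h * w)

    return max(avg_sq_dist(fa, fp) - avg_sq_dist(fa, fn) + margin, 0.0)

# Toy 1x1 maps with 2 channels: the positive is much closer to the
# anchor than the negative, so the hinge is inactive.
fa = [[[0.0, 0.0]]]
fp = [[[0.1, 0.0]]]
fn = [[[1.0, 0.0]]]
print(spatial_triplet_loss(fa, fp, fn))  # 0.0 (margin satisfied)
```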
Experiment ========== Datasets and Evaluation Metrics ------------------------------- We conduct experiments on six benchmark person reID datasets, including CUHK03 [@li2014deepreid], Market1501 [@zheng2015scalable], DukeMTMC-reID [@zheng2017unlabeled], the large-scale MSMT17 [@wei2018person], and two challenging partial person reID datasets, Partial REID [@zheng2015partial] and Partial-iLIDS [@he2018deep]. We follow the common practices and use the cumulative matching characteristics (CMC) at Rank-k, $k$ = 1, 5, 10, and mean average precision (mAP) to evaluate the performance. Implementation Details ---------------------- We use ResNet-50 [@he2016deep] (which is widely used in re-ID systems [@sun2017beyond; @zhang2019DSA]) to build our SA-Enc. We also take it as our baseline (Baseline) with both ID loss and triplet loss. Similar to [@sun2017beyond; @zhang2019DSA], the last spatial down-sample operation in the last *Conv* layer is removed. We build a light-weight decoder SA-Dec by simply stacking 4 residual up-sampling blocks with about 1/3 of the parameters of the SA-Enc. This facilitates our model training using only a single GPU. Ablation Study -------------- We perform comprehensive ablation studies to demonstrate the effectiveness of the designs in the SAN framework, on the datasets of CUHK03 (labeled bounding box setting) and Market-1501 (single query setting). **Effectiveness of Dense Semantics Alignment.** In Table \[tab:compare-baseline\], ***SAN-basic*** denotes our basic semantics aligning model which is trained with the supervision of the pseudo groundtruth texture images with the loss $\mathcal{L}_{Rec.}$ and the reID losses $\mathcal{L}_{ID}$ and $\mathcal{L}_{Triplet}$. ***SAN w/$\mathcal{L}_{TR}$*** denotes that the Triplet ReID constraint at the SA-Dec is added on top of the *SAN-basic*. ***SAN w/syn. data*** denotes that the (*person image*, *texture image*) pairs of our PIT dataset are also used in training the SAN on top of the *SAN-basic* network.
***SAN*** denotes our final scheme with both the Triplet ReID constraints and the groundtruth texture image supervision from the PIT dataset on top of the *SAN-basic* network. ![image](fig5.pdf){width="1.0\linewidth"} \[fig:exp1\] --------------------------- ---------- ---------- ---------- ---------- Model Rank-1 mAP Rank-1 mAP Baseline (ResNet-50) 73.7 69.8 94.1 83.2 SAN-basic 77.9 73.7 95.1 85.8 SAN w/ $\mathcal{L}_{TR}$ 78.9 74.9 95.4 86.9 SAN w/ syn. data 78.8 75.8 95.7 86.8 SAN **80.1** **76.4** **96.1** **88.0** --------------------------- ---------- ---------- ---------- ---------- : Comparisons (%) of our SAN and the baseline on CUHK03 (left Rank-1/mAP columns) and Market1501 (right Rank-1/mAP columns). \[tab:compare-baseline\] We have the following observations/conclusions. **1)** Thanks to the drive to learn semantics aligned features, our *SAN-basic* significantly outperforms the baseline scheme by about 4% in both Rank-1 and mAP accuracy on CUHK03. **2)** The introduction of the high-level Triplet ReID constraints ($\mathcal{L}_{TR}$) as the perceptual loss regularizes the feature learning and brings additional 1.0% and 1.2% improvements in Rank-1 and mAP accuracy on CUHK03. Note that we add them after each of the first three blocks in the SA-Dec. **3)** The use of the synthesized PIT dataset (syn. data) with the input image and groundtruth texture image pairs for training the SAN remedies the imperfection of the generated pseudo groundtruth texture images (with errors/noise/blurring). It improves the performance over *SAN-basic* by 0.9% and 2.1% in Rank-1 and mAP accuracy. **4)** Our final scheme *SAN* significantly outperforms the baseline, [*i*.*e*.]{}, by **6.4%** and **6.6%** in Rank-1 and mAP accuracy on CUHK03, with the same inference complexity. On Market1501, even though the baseline performance is already very high, our *SAN* achieves 2.0% and 4.8% improvement in Rank-1 and mAP. **Different Reconstruction Guidance.** We study the effect of using different reconstruction guidance and show results in Table \[tab:supervision\].
We design another two schemes for comparison. For the same input image, the three schemes use the same encoder-decoder networks (the same network as SAN-basic) but reconstruct (a) the input person image, (b) a pose aligned person image, and (c) the proposed texture image (see Figure \[fig:exp1\]). To have pose aligned person images as supervision, when synthesizing the PIT dataset, for each projected person image we also synthesized a person image of a given fixed pose (the frontal pose here). Thus, the pose aligned person images are also semantically aligned. In this case, only partial texture (frontal body regions) of the full 3D surface texture is retained, with information loss. [In addition, corresponding to (b), we also use the pose aligned person images generated by PN-GAN [@qian2018pose] as the reconstruction guidance and get *Enc-Dec rec. PN-GAN pose*.]{} ---------------------------------- ---------- ---------- ---------- ---------- Model Rank-1 mAP Rank-1 mAP Baseline (ResNet-50) 73.7 69.8 94.1 83.2 Enc-Dec rec. input 74.4 70.8 94.3 84.0 Enc-Dec rec. pose 75.8 72.0 94.4 84.5 Enc-Dec rec. PN-GAN pose 76.1 72.6 94.3 84.7 Enc-Dec rec. texture (SAN-basic) **77.9** **73.7** **95.1** **85.8** ---------------------------------- ---------- ---------- ---------- ---------- : Performance (%) comparisons of the same encoder-decoder networks with different reconstruction objectives: reconstructing the input image, a pose aligned person image, or the texture image (left Rank-1/mAP: CUHK03; right Rank-1/mAP: Market1501). \[tab:supervision\] From Table \[tab:supervision\], we have the following observations/conclusions. 1) The addition of a reconstruction sub-task, which encourages the encoded features to preserve more of the original information, helps improve the reID performance. *Enc-Dec rec. input* improves the performance of the baseline by 0.7% and 1.0% in Rank-1 and mAP accuracy. However, the input images (and their reconstructions) are not semantically aligned across images. 2) *Enc-Dec rec. 
pose* and *Enc-Dec rec. PN-GAN pose* both enforce the supervision to be *pose aligned person images*. This yields superior performance to *Enc-Dec rec. input*, demonstrating the effectiveness of **alignment**. But they are sub-optimal and may lose information. For example, for an input back-facing person image, such fixed (frontal) pose supervision may mistakenly guide the features to drop the back-facing body information. 3) In contrast, our fully aligned texture image as supervision provides comprehensive and densely semantics-aligned information, which results in the best performance. **Why not *Directly* use Generated Texture Image for ReID?** What is the performance when the generated texture images are used as the input for reID? Results show that our scheme significantly outperforms this alternative. The inferior performance is caused by the low quality of the generated texture images (with the texture smoothed/blurred). **How does the Quality of Textures affect reID Performance?** We use different backbone networks, [*e*.*g*.]{}, ResNet-101, DenseNet-121, [*etc*.]{}, to train the pseudo texture generators, and then the generated pseudo textures are used to train our SAN-basic network for reID. We find that using deeper and more complex generators can improve the texture quality, which in turn further boosts the reID performance. ![Two sets of example pairs. Each pair consists of the original input image and the generated texture image.](fig6.pdf){width="1.0\linewidth"} \[fig:vis\] \[tab:sto\] Comparison with State-of-the-Arts --------------------------------- Table \[tab:sto\] shows the performance comparisons of our proposed SAN with the state-of-the-art methods. Our scheme SAN achieves the best performance on CUHK03, Market1501, and MSMT17. It consistently outperforms the approach *DSA-reID* [@zhang2019DSA], which also considers dense alignment.
On the DukeMTMC-reID dataset, *MGN* [@wang2018learning] achieves better performance; however, it ensembles local features of multiple granularities together with global features. Visualization of Generated Texture Image ---------------------------------------- For different images with varied poses, viewpoints, or scales, we find the generated texture images from our SAN are well semantically aligned (see Figure \[fig:vis\]). Partial Person ReID ------------------- Partial person reID is more challenging as the misalignment problem is more severe: two partial person images are generally not spatially semantically aligned and usually share less overlapping semantics. We also demonstrate the effectiveness of our scheme on the challenging partial person reID datasets Partial REID [@zheng2015partial] and Partial-iLIDS [@he2018deep]. Benefiting from the *aligned full* texture generation capability, our SAN exhibits outstanding performance. Figure \[fig:partial\] shows that our regressed texture images from the SA-Dec are semantically aligned across images even though the input images have severe misalignment.

------------------------ --------------------- ---------- ---------- ---------------------- ---------- ----------
Model                    Partial REID Rank-1   Rank-5     Rank-10    Partial-iLIDS Rank-1   Rank-5     Rank-10
AMC+SWM                  36.0                  -          -          49.6                   -          -
DSR (single-scale)\*     39.3                  -          -          51.1                   -          -
DSR (multi-scale)\*      43.0                  -          -          **54.6**               -          -
Baseline (ResNet-50)     37.8                  65.0       74.5       42.0                   65.5       73.2
SAN                      39.7                  67.5       80.5       46.9                   71.2       78.2
Baseline (ResNet-50)\*   38.9                  67.7       78.2       46.1                   69.6       76.1
SAN\*                    **44.7**              **72.4**   **86.0**   53.7                   **77.4**   **81.9**
------------------------ --------------------- ---------- ---------- ---------------------- ---------- ----------

: Partial person reID performance on the Partial REID and Partial-iLIDS datasets (partial images are used as the probe set and holistic images are used as the gallery set). “\*” means that the network is fine-tuned with holistic and partial person images from Market1501.
\[tab:partial\] Table \[tab:partial\] shows the experimental results. Note that we train SAN on the Market1501 dataset [@zheng2015scalable] and test on the partial datasets. We directly take the models trained on Market1501, [*i*.*e*.]{}, Baseline (ResNet-50) and SAN, for testing. In this case, the networks have seldom seen partial person data. Similar to [@he2018deep], we also fine-tune with the holistic and partial person images cropped from Market1501 (marked by \*). SAN\* outperforms *Baseline\** and AMC+SWM [@zheng2015partial], and is comparable with the state-of-the-art partial reID method DSR [@he2018deep]. SAN\* outperforms Baseline (ResNet-50)\* by 5.8%, 4.7%, 7.8% on Rank-1, Rank-5, and Rank-10 respectively on the Partial REID dataset, and by 7.6%, 7.8%, 5.8% on Rank-1, Rank-5, and Rank-10 respectively on the other Partial-iLIDS dataset. Even without fine-tuning, our SAN also significantly outperforms the baseline. ![Three example pairs of (input image, regressed texture images by our SAN) from the Partial REID dataset.](fig7.pdf){width="1.0\linewidth"} \[fig:partial\] Conclusion ========== In this paper, we proposed a simple yet powerful Semantics Aligning Network (SAN) for learning semantics-aligned feature representations for efficient person reID, under the joint supervision of person reID and semantics aligned texture generation. At the decoder, we add $\mathcal{L}_{TR}$ over the feature maps as the perceptual loss to regularize the learning. We have synthesized a Paired-Image-Texture (PIT) dataset to train a SAN-PG model, whose purpose is to generate pseudo groundtruth texture images for the reID datasets, which are then used to train the SAN. Our SAN achieves state-of-the-art performance on CUHK03, Market1501, MSMT17, and Partial REID, without increasing computational cost in inference. Acknowledgments =============== This work was supported in part by NSFC under Grant 61571413, 61632001. [^1]: This work was done when Xin Jin was an intern at MSRA. [^2]: Corresponding Author.
--- author: - '[Fazle Karim$^{1}$, Somshubra Majumdar$^{2}$, Houshang Darabi$^{1}$, and Samuel Harford$^{1}$]{}[^1] [^2]' bibliography: - 'biblio.bib' title: 'Multivariate LSTM-FCNs for Time Series Classification' --- Convolutional neural network, long short term memory, recurrent neural network, multivariate time series classification Introduction ============ Background Works {#Background Works} ================ Multivariate LSTM Fully Convolutional Network {#LSTMFCN} ============================================= Network Architecture -------------------- Network Input ------------- Experiments {#Experiments} =========== Evaluation Metrics ------------------ Datasets -------- Results ------- Conclusion & Future Work {#conclusion} ======================== [^1]: $^{1}$Mechanical and Industrial Engineering, University of Illinois at Chicago, Chicago, IL [^2]: $^{2}$Computer Science, University of Illinois at Chicago, Chicago, IL
--- abstract: 'The Sirius AB binary system has masses that are well determined from many decades of astrometric measurements. Because of the well-measured radius and luminosity of Sirius A, we employed the TYCHO stellar evolution code to determine the age of the Sirius A,B binary system accurately, at 225–250 Myr. Note that this fit requires the assumption of solar abundance, and the use of the new Asplund et al. primordial solar metallicity. No fit to Sirius A’s position is possible using the old Grevesse & Sauval scale. Because the Sirius B white dwarf parameters have also been determined accurately from space observations, the cooling age could be determined from recent calculations by Fontaine et al. or Wood to be 124$\pm$10 Myr. The difference of the two ages yields the nuclear lifetime and mass of the original primary star, 5.056$_{-0.276}^{+0.374}$ M$_{\odot}$. This result yields in principle the most accurate data point at relatively high masses for the initial-final mass relation. However, the analysis relies on the assumption that the primordial abundance of the Sirius stars was solar, based on membership in the Sirius supercluster. A recent study suggests that its membership in the group is by no means certain.' author: - 'James Liebert, Patrick A. Young, David Arnett, J. B. Holberg, and Kurtis A. Williams' title: The Age and Progenitor Mass of Sirius B --- Introduction ============ The initial-final mass relation (IFMR) for progenitors and white dwarfs is fundamental in understanding a stellar population, interpreting the white dwarf mass and luminosity distributions, and determining the star formation history. Of particular interest is the upper mass limit of a star forming a white dwarf. It is well known that the mass loss in the red giant phases (RGB and AGB) of stars of similar mass and chemical composition shows dispersion. 
The existence of “horizontal branches” in metal-poor and metal-rich clusters shows that varying amounts of the hydrogen envelope are lost in the RGB or helium-ignition events. For intermediate-mass stars undergoing AGB evolution, the timing of thermal pulses of the helium shell may produce a dispersion in the resulting white dwarf distribution (e.g. Iben & Renzini 1983). It may therefore be anticipated that the IFMR will show dispersion. In order to determine the IFMR empirically, it is necessary to study samples of white dwarfs where it is possible to estimate the initial mass of the progenitor. Since there is generally no way of establishing the total lifetimes of white dwarfs in the field population, white dwarfs found in well-studied star clusters have been used. The cluster age is known within some uncertainty, generally from fits to the main sequence turnoff. In a few clusters, the mass limit below which lithium is not depleted in completely-convective low mass stars and/or brown dwarfs may also be employed. Several Galactic disk clusters ranging in age from the order of 10$^8$ to 10$^9$ years, and with main sequence turnoff masses of $\sim$2 to $>$5M$_{\odot}$, have been studied in recent years. These include the Hyades and Praesepe (Claver et al. 2001 is the most recent study), the Pleiades (the single white dwarf has been studied by many authors), NGC 2516 (Koester & Reimers 1996), NGC 3532 (Koester & Reimers 1993), NGC 2168 (Williams, Bolte & Koester 2004), and NGC 2099 (Kalirai et al. 2005). Since it is necessary to measure the masses of the white dwarfs accurately – by means of stellar atmosphere fits to Balmer lines or by measuring the gravitational redshift – the faint white dwarf sequences recently found in the nearest globular clusters (cf. M 4, Hansen et al. 2004) and in older Galactic clusters (NGC 6791, Bedin et al. 2005) are not yet as useful.
The upper end of the IFMR is yielding progenitors with short nuclear-burning phases, and these lifetimes are very dependent on the mass. For several reasons, there must be significant uncertainty in the estimates of the ages of such young clusters. First, these clusters do not necessarily have well-populated upper main sequences. Shifts due to rotation, unresolved binaries, and the possible presence of blue stragglers with masses larger than the turnoff mass represent a separate category of problems. Finally, uncertainty in the main sequence lifetime of stars with convective cores due to the probable overshooting beyond the simple Schwarzschild boundary is a theoretical problem. Thus the uncertainty in the cluster age causes a large uncertainty in the possible progenitor masses of white dwarfs, especially for the youngest clusters. These problems result in considerable uncertainty in establishing the upper mass limit of a star that can form a white dwarf, or even whether such a uniform upper limit exists. It would thus be extremely valuable to have a method that reduces the spread in the progenitor’s mass. Binaries consisting of a nuclear-burning star and a white dwarf offer another potential source of candidates for the IFMR. If the parameters of the nuclear-burning star can be established with sufficient accuracy that its age can be obtained by fitting its position on the HR Diagram, then the total age of the white dwarf – which is the sum of the nuclear-burning lifetime and cooling age – can also be established. As for the clusters, the nuclear-burning lifetime is used to establish the progenitor mass of the white dwarf. The Sirius system is the fifth or sixth nearest stellar system to the Sun[^1], and certainly is one of the best studied binary systems including a white dwarf component. One does not want to employ a white dwarf in a binary close enough that interactions might have affected the mass loss phases in the late stages of the progenitor’s evolution.
However, the orbit is eccentric with a period of about 50 years. At periastron, the components are just within 7 AU of each other (van de Kamp 1971). This is probably well enough separated so that any interaction during the asymptotic giant branch phase of the original primary (now B) would have been minimal. Significant interaction would probably have circularized the orbit. We therefore assume that Sirius B evolved in a manner like that of a single star. An excellent trigonometric parallax, accurate to better than 1%, is available from the [*HIPPARCOS*]{} satellite mission as well as from several good ground-based studies. Using the bolometric energy distribution and effective temperature determination, the luminosity of Sirius A is known to about 4% accuracy. Using the ESO [*Very Large Telescope Interferometer*]{}, Kervella et al. (2003) have obtained a superb measurement of the diameter of Sirius A, accurate to about 0.75%. One may then avoid very direct use of the T$_{eff}$ estimates of Sirius A. Robust parameter (T$_{eff}$, log g) determinations for Sirius B are also available from space observations. Finally, a recent reexamination of the plethora of astrometric data on the binary orbit (Holberg 2005) results in mass estimates of both the “A” and “B” components accurate to 1-2%. Improvements in opacities, the equation-of-state, and other treatments in modern stellar evolution codes now make possible fits to nuclear-burning stars on the HR Diagram, especially those on or near the main sequence (Young & Arnett 2005). Robust, consistent ages for (nondegenerate) binary stars with well-determined luminosities, masses, radii and temperature have been determined (Young et al. 2001). In Section 2 of this paper, we employ this code (TYCHO; Young & Arnett 2005) to fit the position of Sirius A, and estimate the age with an error estimate. This gives the systemic age from which to subtract the cooling age of “B” in Section 3 to obtain the estimate of its progenitor mass, and its uncertainty.
We then summarize and discuss an important caveat in Section 4. The Fitting of Sirius A in an HR-like Diagram ============================================= The mass adopted for Sirius A is 2.02$\pm$0.03M$_{\odot}$ (Holberg 2005). Note that this new astrometric study yields a mass about 5% smaller than some earlier determinations of Sirius A. (The smaller orbit also yields a smaller mass for Sirius B.) The adopted radius of Sirius A is 1.711$\pm$0.013 R$_{\odot}$ (Kervella et al. 2003), and the resulting luminosity is L/L$_{\odot}$ = 25.4$\pm$1.3. Since the luminosity and radius are more directly measured than the T$_{eff}$, these will be the quantities fitted in an $L$–$R$ equivalent of the HR diagram. One issue with Sirius A is that it is a chemically-peculiar A1V dwarf, so that a direct determination of the interior chemical composition is not possible. However, it has been believed for a long time that Sirius is a member of a large moving group near the Sun, called the “Sirius supercluster” (Hertzsprung 1909; Eggen 1983). (We reassess this issue in the last section of the paper, however.) The “core” of this association is the Ursa Major “dipper” stars. Metallicity estimates for the group members (that are not chemically-peculiar) are generally consistent with solar (Palous & Hauck 1986). Eggen (1992) compared the abundances of the Sirius and Hyades superclusters and concluded that the former is deficient by about -0.18 dex in \[Fe/H\] compared with the Hyades group. The latter is believed to exceed solar by about this amount. There is no guarantee that appreciable dispersion in metallicity does not exist among the group. We nonetheless can do no better than assume solar X,Z values below. However, a new solar abundance scale determined from 3D non-LTE calculations of the solar atmosphere (Asplund et al. 2004, and references therein) has significantly lower abundances of oxygen, carbon and nitrogen. 
The overall heavy-element abundance parameter for the Sun decreases to a primordial value of Z=0.014. The new scale jeopardizes the excellent agreement of the “standard solar model” with the helioseismology observations (Bahcall et al. 2005), though a new study attempts to remedy this, retaining the Asplund et al. values (Young & Arnett 2005). We have adopted the new scale for the calculations described below. In Figure 1 are shown the results of running the TYCHO code (Young & Arnett 2005) to fit the position of Sirius A. The model included wave-driven mixing and diffusion. The former is a way of accounting for the “overshooting” of the convective core based on physics, rather than a prescription involving the ratio of the mixing length to scale height. The microphysics for opacities, reaction rates and the EOS appear numerically similar to those of Ventura et al. (1998). The inclusion of these effects produces only a small change at the best-fit age of 237.5$\pm$12.5 Myr. Radiative levitation was not included. A perfect fit would require a lower metallicity or an increase in mass of a few hundredths M$_{\odot}$. The evolution was begun well up the Hayashi track and followed throughout the pre-main sequence. The best-fit age includes the pre-MS evolution. No pre-MS accretion was included. The best-fit age is constrained primarily by the radius determination. The precision with which the Sirius-A radius and luminosity are known is such that the use of the new solar abundance scale matters greatly. In particular, [*no reasonable fit can be achieved*]{} if the old scale (Grevesse & Sauval 1998) were adopted. When the 2.02M$_{\odot}$ track for the old solar scale crosses the correct radius on its main sequence track, the luminosity is log L/L$_{\odot}$ = 1.26, several sigma below the observed value. The new solar abundances result in lower interior opacities, so the star has a smaller radius at a given luminosity, and vice versa.
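Although the fit deliberately avoids direct use of T$_{eff}$, the adopted luminosity and radius imply an effective temperature through $L = 4\pi R^2 \sigma T_{eff}^4$. The sketch below is a quick consistency check (not a calculation from the paper), assuming a solar effective temperature of 5772 K; the result lands close to published values for Sirius A.

```python
# Effective temperature implied by the adopted L and R of Sirius A,
# via L = 4*pi*R^2*sigma*T^4 written in solar units.
L = 25.4        # L / L_sun (quoted, +/- 1.3)
R = 1.711       # R / R_sun (Kervella et al. 2003)
T_SUN = 5772.0  # K, assumed solar effective temperature
T_eff = T_SUN * L**0.25 / R**0.5   # about 9.9e3 K
```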
Another estimate of the radius of Sirius A is reported by Decin et al. (2003) using the Short Wavelength Spectrometer on the Infrared Space Observatory. This measurement is far less precise, and the Kervella et al. (2003) angular diameter is within its 5% errors. The cooling age and progenitor mass of Sirius B =============================================== An accurate T$_{eff}$ determination is also essential for measuring the cooling age of Sirius B. Holberg et al. (1998) employed space ultraviolet [*Extreme Ultraviolet Explorer*]{} and [*International Ultraviolet Explorer*]{} spectrophotometry to estimate T$_{eff}$ = 24,790$\pm$100 K. Barstow et al.’s (2005) estimate, using an optical spectrum of extraordinarily high signal-to-noise ratio, obtained with the Space Telescope Imaging Spectrograph ([*STIS*]{}) on the [*Hubble Space Telescope*]{}, is 25,193$\pm$37 K (an internal error only). We adopt T$_{eff}$ = 25,000$\pm$200 K. The astrometric reanalysis of Holberg (2005) results in a measurement of 1.00$\pm$0.01M$_{\odot}$ for the mass of Sirius B. Barstow et al. (2005) obtained T$_{eff}$ = 25,193 K (as stated above) and log g = 8.528$\pm$0.05 from fits to the Balmer line profiles. In the next paragraphs we shall apply cooling sequences to obtain the cooling age of Sirius B. These also yield a second relationship (besides the surface gravity) between the radius and mass – R = R(M,T$_{eff}$). Solving for the mass one obtains 0.978$\pm$0.005M$_{\odot}$, if the cooling sequences of Wood (1995) are used, or 0.003M$_{\odot}$ less if those of Fontaine et al. (2001) are employed. Using their gravitational redshift measurement of 80.42$\pm$4.83 km s$^{-1}$ – yielding the ratio M/R – with the Wood sequences, Barstow et al. (2005) obtain 1.02$\pm$0.02M$_{\odot}$. Note that these mass determinations are generally below estimates in the earlier literature. For this study, we shall adopt 1.00$\pm$0.02M$_{\odot}$ for Sirius B.
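The gravitational redshift quoted above fixes the ratio M/R. As an illustration (not a calculation from the paper), inverting the first-order relation $v_{\rm grav} = GM/(Rc)$ with the Barstow et al. (2005) mass gives a radius typical of a $\sim$1 M$_{\odot}$ white dwarf:

```python
# Radius of Sirius B implied by the gravitational redshift measurement,
# using the first-order weak-field relation v_grav = G*M/(R*c).
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m s^-1
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m
v_grav = 80.42e3   # m s^-1 (Barstow et al. 2005)
M = 1.02 * M_SUN   # their redshift-based mass
R = G * M / (v_grav * c)   # about 0.008 R_sun
```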
For the cooling ages of white dwarfs, most cluster white dwarf studies in the last ten years have used the evolutionary models of Wood (1992, 1995). For reasons stated below, we will use primarily a new sequence by Fontaine, Brassard, & Bergeron (2001). Both calculations used 50% carbon – 50% oxygen cores, and the relatively thick outer layers of helium and hydrogen predicted by most evolutionary calculations of the asymptotic giant branch phase. The Fontaine et al. calculations incorporate several new treatments of physics (Fontaine 2005, private communication). First, the equation of state of Saumon, Chabrier, & Van Horn (1995) for H and He in the partially-ionized envelope, and a new treatment of carbon in the same regime, was employed. Second, the conductive opacity tables of Hubbard & Lampe (1969) and Itoh & Kohyama (1993, and references therein) were “fused together,” rather than treated as completely separate. Third, no discontinuities in the chemical composition profile are allowed. Diffusion at the interfaces is calculated. Fourth, a full stellar structure is evolved from the center to the top of the atmosphere, not a simple “Henyey” core which excludes the envelope. Fifth, a robust and accurate finite element technique (developed by P. Brassard) is used, as opposed to finite differences as in most other codes. We thus have adopted these sequences to estimate the cooling age of Sirius B. Having said this, it is interesting and reassuring that the 1.00 M$_{\odot}$ C,O sequence of Wood (2005, private communication) reaches a cooling age of 123.3 Myr at 25,000 K, while the Fontaine et al. (2001) sequence with identical parameters reaches 123.6$\pm$10 Myr. Error bars in the “final” mass of Sirius B are as stated above. The uncertainty in the mass and systemic age (§ 2) leads to a nuclear lifetime for the progenitor in the range of 101–126 Myr. 
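The quoted lifetime range follows from differencing the systemic and cooling ages. As an illustration only, a simple symmetric quadrature propagation of the stated uncertainties gives roughly the same bracket; the paper's 101–126 Myr range comes instead from bracketing the fits and is slightly asymmetric.

```python
import math

# Nuclear lifetime of the Sirius B progenitor as (systemic age) - (cooling age),
# with symmetric quadrature propagation of the quoted errors.
t_sys, dt_sys = 237.5, 12.5     # Myr, TYCHO fit to Sirius A
t_cool, dt_cool = 123.6, 10.0   # Myr, Fontaine et al. (2001) sequence
t_nuc = t_sys - t_cool                       # 113.9 Myr
dt_nuc = math.sqrt(dt_sys**2 + dt_cool**2)   # about 16 Myr
```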
For determining the initial mass, we consider errors due to (1) uncertainty in the final mass, and (2) uncertainty in the systemic age (§  2). We shall also consider a possible additional uncertainty due to the different results obtained from the TYCHO and “Padova” codes, and an uncertainty due to the unknown carbon-oxygen abundance profiles throughout the core. For most recent papers on white dwarf sequences, such as Ferrario et al. (2005, hereafter F05), the “Padova” stellar evolution models of Girardi et al. (2002) were used to get the nuclear lifetime and the initial progenitor mass. For this paper, for self-consistency with § 2, we present first the calculations of these using the TYCHO code. While TYCHO incorporates physical treatments available in 2005 (rather than several years earlier for the “Padova” literature calculations), the principal difference between the two results may be the treatment of mixing beyond the traditional Schwarzschild core boundary. Girardi et al. (2002) use an “overshoot” prescription based on a single parameter. As alluded to in § 2, TYCHO includes the effects of hydrodynamics in the convective boundary and radiative regions in a predictive, physically-motivated fashion. This treatment may give more accurate predictions of core sizes, and thus of luminosities, radii, and nuclear lifetimes. From the TYCHO calculations, the masses that bracket the above range in the progenitor’s nuclear lifetime are 5.43 and 4.78 M$_{\odot}$, with a mean of 5.056M$_{\odot}$. The uncertainty in the cooling age (due to that of the white dwarf mass) contributes errors of only -0.171 and +0.262M$_{\odot}$. The uncertainty in the age of the binary system contributes -0.213 and +0.273M$_{\odot}$. When added in quadrature, these values yield the mass range of the previous paragraph. If we employ instead the Girardi et al.
(2002) tables, again for nuclear lifetimes of 101 Myr and 126 Myr, the mean value of initial mass is only slightly larger at 5.132$_{-0.23}^{+0.28}$M$_{\odot}$. Thus we may also conclude that the physical treatment of the convective boundary region by TYCHO, and the “overshoot” prescription of the Padova group, agree pretty well near 5M$_{\odot}$. In any case the uncertainties in the initial mass from this analysis are smaller than for any massive cluster white dwarf included in F05. An additional source of uncertainty in the cooling age not considered here is that due to the carbon–oxygen abundance distribution in the core. As stated previously, the white dwarf cooling calculations of Fontaine et al. (2001) simply employ a 50%–50% mixture. The usual practice in the published analyses of the cluster white dwarfs is, as stated previously, to use Wood’s calculations with the same mixture. Other available calculations from Wood are for pure carbon and for pure oxygen compositions. The 5 M$_{\odot}$ sequence calculated from TYCHO produces very nearly a 1M$_{\odot}$ core at the end of the AGB evolution, which suggests that the prescription for mass loss in the red giant phases is fairly accurate. The resulting white dwarf has a carbon abundance of 15% throughout the inner 0.45 $M_{\odot}$, comprising the former convective He-burning core. The carbon abundance increases from 35% to 40% moving outward through the region processed by He shell burning, with a narrow ($\sim 0.01 M_{\odot}$) spike of 60% carbon where He burning is incomplete. At low He abundances, the $Y_{\rm ^4He}^3$ dependence of the triple-$\alpha$ reaction favors the ${\rm ^{12}C(\alpha,\gamma)^{16}O}$ reaction over triple $\alpha$. More massive cores with higher entropy favor the production of $^{16}O$ over $^{12}C$. Convective He burning cores also tend to grow at low $Y_{\rm ^4He}$ due to increased opacity.
This process, as well as any non-convective mixing process that mixes He into the core at low abundance, will increase the destruction of $^{12}C$, resulting in an oxygen dominated core (Arnett 1996). Pure carbon white dwarf cooling models are thus excluded for this mass range, favoring a shorter cooling time. Using the simple mixture is therefore the best choice of available cooling sequences, but the additional error in the cooling time due to not matching the true abundance profile is not well constrained. Conclusions and a Caveat ======================== The result of this analysis is a spread in the possible progenitor mass of Sirius B generally smaller than those in the young clusters cited above. In a parallel F05 paper, Sirius B provides a valuable “anchor point” in the high mass part of the IFMR, which is plotted therein (their Fig. 1). The conclusions as to the Sirius B progenitor mass obviously depend on the fit to the Sirius A luminosity and radius being a valid measure of the systemic age. The “Achilles heel” of the analysis may be the assumption of solar composition for the primordial abundance of the two stars. Since there is no way to measure directly the primordial abundance of either the chemically-peculiar A star or the white dwarf, we have relied on the assumption that it is a member of the Sirius supercluster, and that these stars have generally been shown to have abundances indistinguishable from solar. It must be acknowledged that, in the recent study of King et al. (2003), it appears by no means certain that Sirius – so distant from the Ursa Major core – is a member of the Sirius supercluster. These authors estimate an age of 500$\pm$100 Myr for the supercluster from main sequence turnoff fits. Using Strömgren photometry, Asiain et al. (1999) estimate 520$\pm$160 Myr. These values are substantially larger than previous literature estimates – e.g. 240 Myr (Eggen 1983).
They are appreciably larger than the systemic age of 225–250 Myr determined in this paper for Sirius. There is no way, if this analysis is based on sound assumptions, that the Sirius age can be 400 Myr. King et al. (2003) do emphasize that they have not determined that the supercluster is coeval, nor that the stellar abundances are the same. If we were to consider the possibility that the interior abundances of Sirius A are not determined, we consult the comprehensive study by Nordström et al. (2004) of 7566 nearby, single F and G dwarfs to see what the abundance distribution is. For the subsample within 40 pc of the Sun, and at estimated ages near 1 Gyr or less, the \[Fe/H\] determinations (see their Fig. 28) range from about -0.3 to +0.2, with a mean value near -0.02 to -0.06 (the former if the sample is restricted to the subsample with age estimates considered to be accurate). As mentioned in § 2, the fit would actually improve (though it is already within one sigma), if the assumed metallicity (Z) were decreased. At the extreme, with Z = 1/2 solar, a good fit can be achieved at an age of 375$\pm$19 Myr, over 50% larger. (The metal-poor star begins with a smaller radius and higher T$_{eff}$, and must evolve farther through its main sequence phase to reach the observed radius and luminosity.) This systemic age would correspond to a much lower progenitor mass of 3.61$\pm$0.125$M_{\odot}$. This unlikely outcome would make Sirius a seriously-discrepant data point in the IFMR (F05). On the other hand, we may note that most stars in the solar neighborhood have close to solar abundances. The analysis in this paper is self-consistent within the stated assumptions. As a point in the overall IFMR for disk stars, it can be seen in the F05 paper that the solar abundance Sirius B point has a somewhat higher white dwarf mass or a somewhat lower initial mass than most of the similar white dwarfs in NGC 2516, M35 and the Pleiades, but overlaps the error bars of most of these.
Marigo (2001) gives a careful treatment of AGB mass loss, and predicts that a solar metallicity 5.06$M_{\odot}$ star should produce almost exactly a 1$M_{\odot}$ white dwarf. (We remarked earlier that the TYCHO code does also.) In summary, this result appears to provide a strong confirmation of stellar theory. This work was supported by the National Science Foundation through grant AST-0307321 (JL and KAW). We thank Gilles Fontaine, Pierre Bergeron, Martin Barstow, and Matt Wood for valuable communications, Eric Mamajek for a tutorial on moving groups, and the anonymous referee for several helpful suggestions. Asiain, R., Figueras, F., Torra, J., & Cheu, B. 1999, , 341, 427 Asplund, M., Grevesse, N., Sauval, A.J., Allende Prieto, C., & Kiselman, D. 2004, , 417, 751 Bahcall, J.N., Basu, S., Pinsonneault, M., & Serenelli, A.M. 2005, , 618, 1049 Barstow, M.A., Bond, H.E., Holberg, J.B., Burleigh, M.R., Hubeny, I., & Koester, D. 2005 , in press Bedin, L.R., Salaris, M., Piotto, G., King, I.R., Anderson, J., Cassisi, S., & Momany, Y. 2005, , 624, 45 Claver, C.F., Liebert, J., Bergeron, P., & Koester, D. 2001, , 563, 987 Decin, L., Vandenbussche, B., Waelkens, K., Eriksson, C., Gustafsson, B., Plez, B., & Sauval, A.J. 2003, , 400, 695 Eggen, O.J. 1983, , 88, 642 Eggen, O.J. 1992, , 104, 1493 Eggen, O.J. 1998, , 116, 782 Ferrario, L., Wickramasinghe, D.T., Liebert, J., & Williams, K.A. 2005, , in press (astro-ph/0506317) (F05) Fontaine, G., Brassard, P., & Bergeron, P. 2001, ., 113, 409 Girardi, L., Bertelli, G., Bressan, A., Chiosi, C., Groenewegen, M.A.T., Marigo, P., Salasnich, B., & Weiss, A. 2002, , 391, 195 Grevesse, N., & Sauval, A.J. 1998, [*Space Science Reviews*]{}, v. 85, p. 161 Hansen, B.M.S. et al. 2004, , 155, 551 Hertzsprung, E. 1909, , 30, 135 Holberg, J.B. 2005, in preparation Holberg, J.B., Barstow, M.A., Fruhweiler, F.C., Cruise, A.M., & Penny, A.J. 1998, , 497, 935 Hubbard, W.B., & Lampe, M. 1969, , 18, 297 Iben, I., Jr., & Renzini, A. 
1983, , 21, 271 Itoh, N. & Kohyama, Y. 1993, , 404, 268 Kalirai, J.S., Richer, H.B., Reitzel, D., Hansen, B.M.S., Rich, R.M., Fahlman, G.G., Gibson, B.K., and von Hippel, T. 2005, , 618, L123 Kervella, P., Thévenin, F., Morel, P., Bordé, P., & Di Folco, E. 2003, , 408, 681 King, J.R., Villarreal, A.R., Soderblom, D.R., Gulliver, A.F., & Adelman, S.J. 2003, , 125, 1980 Koester, D., & Reimers, D. 1993, , 275, 479 Koester, D., & Reimers, D. 1996, , 313, 810 Marigo, P. 2001, , 370, 194 Nordström, B., Mayor, M., Andersen, J., Holmberg, J., Pont, F., J[ø]{}rgensen, B.R., Olsen, E.H., Udry, S., & Mowlavi, N. 2004, , 418, 989 Palous, J. & Hauck, B. 1986, , 162, 54 Saumon, D., Chabrier, G., & Van Horn, H.M. 1995, , 99, 713 Teegarden, B.J., Pravdo, S.H., Hicks, M., Lawrence, K., Shaklan, S.B., Covey, K., Fraser, O., Hawley, S.L., McGlynn, T., & Reid, I.N. 2003, , 589, L51 Van de Bos, W.H. 1960 [*J. Obs.*]{}, 43, 145 Van de Kamp, P. 1971, , 9, 103 Ventura, P., Zeppieri, A., Mazzitelli, I., & D’Antona, F. 1998 , 334, 953 Williams, K.A., Bolte, M., & Koester, D. 2004, , 615, L49 Wood, M.A. 1992, , 386, 539 Wood, M.A. 1995, in [*White Dwarfs*]{}, eds. D. Koester & K. Werner (Berlin: Springer), 41 Young, P.A., & Arnett, D. 2005, , 618, 908 Young, P.A., Mamajek, E., Arnett, D., & Liebert, J. 2001, , 556, 230 [^1]: depending on the actual distance to the recently-discovered, very high proper motion star SO025300.5+165258, estimated to be within 2.4 to 3.6 pc (Teegarden et al. 2003)
--- abstract: 'Type Ia supernovae (SNIe) are generally accepted to act as standardisable candles, and their use in cosmology led to the first confirmation of the as yet unexplained accelerated cosmic expansion. Many of the theoretical models to explain the cosmic acceleration assume modifications to Einsteinian General Relativity which accelerate the expansion, but the question of whether such modifications also affect the ability of SNIe to be standardisable candles has rarely been addressed. This paper is an attempt to answer this question. For this we adopt a semi-analytical model to calculate SNIe light curves in non-standard gravity. We use this model to show that the average rescaled intrinsic peak luminosity – a quantity that is assumed to be constant with redshift in standard analyses of Type Ia supernova (SNIa) cosmology data – depends on the strength of gravity in the supernova’s local environment because the latter determines the Chandrasekhar mass – the mass of the SNIa’s white dwarf progenitor right before the explosion. This means that SNIe are no longer standardisable candles in scenarios where the strength of gravity evolves over time, and therefore the cosmology implied by the existing SNIa data will be different when analysed in the context of such models. As an example, we show that the observational SNIa cosmology data can be fitted with both a model where $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.62, 0.38)$ and Newton’s constant $G$ varies as $G(z)=G_0(1+z)^{-1/4}$ and the standard model where $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.3, 0.7)$ and $G$ is constant, when the Universe is assumed to be flat.' author: - 'Bill S. 
Wright$^{1,2}$' - Baojiu Li$^1$ bibliography: - 'ReferencesCut.bib' title: 'Type Ia supernovae, standardisable candles, and gravity' --- Introduction {#sec:intro} ============ Cosmology observations from Type Ia supernovae (SNIe) gave the first evidence for a late-time acceleration in the expansion of the Universe [@Perlmutter1999; @Riess1998]. A Type Ia supernova (SNIa) is the cataclysmic explosion of a white dwarf star that occurs when the white dwarf accretes enough mass from a binary partner for the material in its core to undergo runaway thermonuclear fusion. SNIe are thought to act as standardisable candles because of an observed relationship between a SNIa’s peak brightness and how rapidly this peak brightness is achieved and subsequently left behind [@Phillips1993] (the so-called width-luminosity relation, or WLR). After standardisation procedures, which are often at least partially based on the WLR, are applied, any remaining difference in the peak brightnesses of two SNIe should be due to a difference in distance to the observer. Thus the relative distances between SNIe can be measured, and along with measurements of their redshifts can be used to infer the details of the expansion of the Universe through the construction of the distance-redshift relation. After the late-time acceleration of the expansion of the Universe was discovered, Einstein’s idea of a small, positive cosmological constant $\Lambda$ was revived and established as the leading candidate for the acceleration’s origin. However, the idea of $\Lambda$ as the cause of cosmic acceleration is not without theoretical difficulties, such as the cosmological constant fine-tuning and coincidence problems [@cosmoconstprob; @coincidenceprob]. 
These problems have motivated a wide array of theories of dynamical dark energy [@Copelandetal2006] or modified gravity [@Joyceetal2015; @Koyama2016] which aim to explain the smallness of $\Lambda$ (or its substitute) using dynamical or more natural mechanisms. The latter class of models has received growing interest in recent years, partly because current and next-generation cosmological surveys (such as e[boss]{} [@eBOSS], [des]{} [@DES], [hsc]{} [@HSC], [desi]{} [@DESI], [lsst]{} [@LSST], [Euclid]{} [@Euclid], [4most]{} [@4MOST], [wfirst]{} [@WFIRST] and [ska]{} [@SKA]) will allow their theoretical predictions to be confronted with precision data. The use of Einstein’s General Relativity (GR) as the foundation of modern cosmology is a vast extrapolation of its validity beyond the length and energy scales at which it has been rigorously tested [@Will2014], and thus testing the validity of GR with unprecedented precision in this new regime of cosmological scales is a vital task. An unintended and less well recognised consequence of introducing theories of modified gravity is their potential impact on the astrophysics of SNIe themselves – in particular, their ability to act as standardisable candles could be affected. A fairly common feature of these theories is that the strength of gravity varies over cosmic time and/or across space, as a result of which some key properties of the white dwarf progenitors of the SNIe, such as their mass, can become redshift dependent. The redshift dependence of these key properties may in turn affect the intrinsic peak luminosities of the SNIe. If this is the case, then the measurement of the acceleration such theories are introduced to explain might need to be reinterpreted as the very result of their introduction. In the extreme case, modified gravity theories may produce an acceleration that is no longer supported by the SNIa data once the data is reinterpreted in the context of the new theory. 
Evidently, this is an interesting question that is not just important for consistency in the study and testing of modified gravity theories, but also relevant to the general cosmological community. It has long been recognised that any evolution of the SNIe intrinsic luminosity $L$ with redshift would affect the distances measured and therefore the values of the cosmological parameters deduced from SNIa cosmology observations [@Drelletal2000]. There have also been some earlier efforts to relate this evolution of intrinsic luminosity to an evolution of the strength of gravity through the value of Newton’s gravitational constant $G$ [@Amendolaetal1999; @Garcia-Berroetal1999; @RiazueloUzan2002]. However, these initial studies assumed a straightforward proportionality between $L$ and $G$, and either did not attempt to produce light curves [@Amendolaetal1999; @Garcia-Berroetal1999], or, if they did produce light curves [@RiazueloUzan2002], did not attempt to reproduce the WLR or quantitatively verify that the standardisation procedures still work when $G \neq G_0$, where $G_0$ is the value of $G$ at the present day. This method has also been utilised to place constraints on the variation of $G$ using the observational dispersion in SNIe absolute magnitudes [@Gaztanagaetal2002; @LorenAguilaretal2003; @MouldUddin2014]. In this paper we further develop these early works and introduce an alternative method to treat SNIe in modified gravity theories that allows us to produce SNIe light curves in order to verify whether the WLR is reproduced in non-standard gravity, and then use this information to tackle the issue of whether the ability of SNIe to act as standardisable candles is affected. Modelling the impact of modified gravity on SNIa astrophysics is a highly nontrivial task. 
The pre- and post-explosion phases of the evolution of SNIe are both typical astrophysical laboratories where all four types of fundamental interactions play a role and the many physical processes going on are not yet fully understood. Even with the current best knowledge of these physical processes, the SNIa evolution cannot be accurately followed without expensive hydrodynamical simulations. This is further complicated by the strong variations displayed in the physical properties of the SNIa population, such as their burning conditions and the $^{56}$Ni mass produced in the thermonuclear reactions, the latter being a critical quantity determining the intrinsic SNIa luminosity. If we add modifications to gravity on top of all these, the situation only becomes worse. Clearly, a simplified approach is needed for initial proof-of-concept studies before embarking on a full numerical investigation. Our approach simplifies the study in three significant ways: firstly the use of a semi-analytical model for SNIe light curves that has been demonstrated to work quite well in explaining the observed behaviours of the light curves; secondly the treatment of modified gravity as a time variation of Newton’s constant $G$, which affects SNIa astrophysics mainly by modifying the mass of the white dwarf progenitor – the Chandrasekhar mass; and thirdly the use of a simplified standardisation procedure (in comparison to those used for observational work [@MLCS; @SALT2]) that involves rescaling light curves so that their shape around the peak matches that of a template. We will show that, using this simplified procedure, the semi-analytical light curve model can successfully reproduce the WLR and the standardisability of SNIe light curves in the case of a redshift-independent value for Newton’s gravitational constant $G(z)=G_0$, which corresponds to the well-known result of the Chandrasekhar mass $M_{\rm Ch}=1.44M_\odot$. 
However, if $M_{\rm Ch}$ takes different values at different redshifts due to $G(z)\neq G_0$, we will show that although the WLR is reproduced for $G \neq G_0$, our procedure for the standardisation of SNIe light curves based on this WLR gives rise to different rescaled intrinsic peak luminosities than predicted by $G=G_0$, suggesting that SNIe are no longer conventional standardisable candles with the same rescaled intrinsic peak luminosities at all redshifts. We will present a simple numerical example that utilises this result and demonstrates that the same SNIa data can be fitted by two cosmological models: one with $(\Omega_{\rm M},\Omega_{\Lambda})=(0.3,0.7)$ and $G=G_0$, and the other with $(\Omega_{\rm M},\Omega_{\Lambda})=(0.62,0.38)$ and $G(z)=G_0(1+z)^{-1/4}$, where $z$ is the redshift, and $\Omega_{\rm M}$ and $\Omega_{\Lambda}$ are respectively the present-day density parameters for non-relativistic matter and the cosmological constant $\Lambda$. This paper is organised as follows. We start in Section \[ssec:LCM\] by describing a model capable of producing the light curves of SNIe for a given set of input parameters, show in Section \[ssec:LCparams\] how varying each of these input parameters affects the light curve, discuss the relationship between two of these input parameters, nickel-56 mass $M_{\rm Ni}$ and ejecta opacity $\kappa$, that is essential to the rescaling of SNIe light curves in Section \[ssec:MNikappa\], and then in Section \[ssec:GdepLCM\] deduce how modifications to gravity would affect the values of the input parameters and therefore the light curves. Once we have this gravity-dependent light curve model, we use it to investigate how the results of the light curve rescaling process change under modified gravity in Section \[ssec:GDepWLR\], before presenting the numerical example mentioned above in Section \[ssec:NumEx\]. Finally, we conclude in Section \[sec:conc\]. 
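The degeneracy described above can be illustrated with a few lines of code. The sketch below is not the paper's actual fitting pipeline — $h=0.7$ and the integration grid are illustrative assumptions — but it computes the distance-modulus offset between the two flat models, which is the signal that a $G(z)$-dependent intrinsic luminosity would have to absorb, together with the Chandrasekhar mass under the standard scaling $M_{\rm Ch}\propto G^{-3/2}$ for the example $G(z)=G_0(1+z)^{-1/4}$:

```python
import numpy as np

def luminosity_distance(z, omega_m, omega_l, h=0.7, n=2000):
    """Luminosity distance (Mpc) in a flat FLRW model, by trapezoidal integration."""
    c_km_s = 299792.458
    h0 = 100.0 * h  # Hubble constant in km/s/Mpc (assumed value)
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(omega_m * (1.0 + zs)**3 + omega_l)
    comoving = (c_km_s / h0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))
    return (1.0 + z) * comoving

def chandrasekhar_mass(z, g_index=-0.25):
    """M_Ch(z)/M_sun for the illustrative G(z) = G0 (1+z)^g_index, via M_Ch ~ G^(-3/2)."""
    return 1.44 * (1.0 + z)**(-1.5 * g_index)

for z in (0.2, 0.5, 1.0):
    dmu = 5.0 * np.log10(luminosity_distance(z, 0.30, 0.70)
                         / luminosity_distance(z, 0.62, 0.38))
    print(f"z={z:.1f}: Delta mu = {dmu:+.3f} mag, M_Ch = {chandrasekhar_mass(z):.2f} Msun")
```

The positive $\Delta\mu$ shows that the matter-dominated model predicts SNIe that are closer (brighter) than standard $\Lambda$CDM; an $M_{\rm Ch}$, and hence intrinsic luminosity, that grows with redshift is what lets the alternative model fit the same data.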
SNIa Astrophysics {#sec:SNIaastro} ================= To construct a model capable of capturing the effect of a time-dependent local strength of gravity on a SNIa light curve, we first identify a model that can reproduce a light curve for a given set of input parameters in Section \[ssec:LCM\], investigate how the light curve depends on each parameter in Section \[ssec:LCparams\], discuss a key component of the model that allows SNIe light curves to be standardisable in Section \[ssec:MNikappa\], and then determine how the values of those parameters depend on the value of $G$ in Section \[ssec:GdepLCM\]. Light curve model {#ssec:LCM} ----------------- ### Physics of the light curve {#sssec:equation} A SNIa is triggered when a carbon/oxygen white dwarf accretes enough mass from a binary partner to increase the temperature and density in its core past the level required to restart nuclear fusion. This initial fusion begins a runaway thermonuclear process known as carbon detonation that releases enough energy to destroy the white dwarf. During this process, large quantities of the radioactive isotope nickel-56 ($^{56}\textup{Ni}$) are produced. The $^{56}\textup{Ni}$ undergoes positron decay to cobalt-56 ($^{56}\textup{Co}$) which in turn decays via positron emission to the stable isotope iron-56 ($^{56}\textup{Fe}$) [@Colgate1969]. The decay chain is simplified as follows: $${}^{56}_{28}\textup{Ni} \to {}^{56}_{27}\textup{Co} + {}^{\ 0}_{+1}\mathrm{e}^+ + \gamma \to {}^{56}_{26}\textup{Fe} + 2\ {}^{\ 0}_{+1}\mathrm{e}^+ + \gamma.$$ The radiation produced in these decays is in the form of short-wavelength gamma rays. Throughout this paper we will treat such gamma rays as unobservable and only consider the SNIa’s ultraviolet+optical+infrared (UVOIR) light curve. 
Therefore to contribute to the light curve, the radiation produced in these decays must first increase its wavelength through one of many possible interactions with the supernova ejecta material thrown out in the initial explosion of the white dwarf progenitor. This longer wavelength radiation can then diffuse through the supernova ejecta and be observed. Gamma rays that diffuse through the ejecta without interacting will not contribute to the UVOIR light curve – this is known as gamma ray leakage. A qualitative description of each phase of the diffusion process for the post-interaction, longer wavelength radiation is given below and is accompanied by a sketch of the corresponding UVOIR light curve in the left panel of Fig. \[fig:G0\_Rescaling\]:

1. At early times, the outer layers of the supernova ejecta are still hot and densely packed, with high opacity to radiation of all wavelengths. Thus at this stage the instantaneous luminosity observed from the supernova is only a small fraction of the instantaneous power from radioactive decay in the centre of the ejecta, and this can be seen in the small initial brightness of the supernova’s light curve.

2. Gradually, as the ejecta expands and disperses, its opacity to longer wavelength radiation falls and the amount of UVOIR radiation that can escape increases, until the ejecta becomes essentially translucent and UVOIR radiation can escape unimpeded. This can be seen as the light curve steadily rises to its peak after the initial explosion.

3. Once the ejecta has become essentially fully translucent to UVOIR wavelengths, the trapped UVOIR radiation that had been produced at earlier times before the ejecta became translucent can escape. This results in the instantaneous observed luminosity temporarily rising above the instantaneous power from radioactive decay until the excess trapped UVOIR radiation energy has escaped. This can be seen in the light curve shortly after the time of peak brightness.

4. After this point, the observed UVOIR luminosity falls below the instantaneous power from radioactive decay. This is because the opacity of the ejecta at short wavelengths is now small enough that a significant fraction of the radiation produced leaks out as unobserved gamma rays without interacting to become longer wavelength UVOIR radiation, and so does not contribute to the UVOIR light curve.

![[*Left panel*]{}: Sketch of a typical UVOIR light curve for a SNIa, along with the power produced by the radioactive decay chain of $^{56}\rm Ni$. The Roman numerals correspond to the phases described in the text. [*Right panel*]{}: Rescaling of light curves from a SNIa population with varying $M_{\mathrm{Ni}}$ in standard gravity ($G=G_0$). A template curve, whose shape around the peak the other curves have been rescaled to match with, is also shown.[]{data-label="fig:G0_Rescaling"}](sketch_rescaleG0_2panel3_crop.pdf){width="\textwidth"} In order to calculate the light curve produced by the supernova in this complex situation, the radiative transport equations for the decay radiation propagating through the ejecta must be solved. This problem has been tackled by many groups using large numerical simulations, for example see Refs. [@Blinnikov2006] and [@Kasen2006]. However, such calculations are computationally intensive and are beyond the scope of this research. Instead, a semi-analytical treatment of an approximated version of the full problem is used, a method which has shown success in reproducing the standard observed behaviour of SNIe and the results of more complex numerical simulations [@Jeffery1999; @Pinto2000a]. The method that follows is based on the treatments in Refs. [@Arnett1980], [@Arnett1982], and [@Chatzopoulos2012] (henceforth known as A80, A82, and C12 respectively). A condensed derivation of the equation for the light curve is presented here, but a fully detailed version can be seen in Appendix \[App:LCDerivation\]. 
The following assumptions and approximations are made:

1. Spherically symmetric system;

2. Homologous expansion of ejecta – see Eq. (\[eq:homexp1\]);

3. Ejecta gas that is dominated by radiation pressure;

4. Diffusion approximation – optically thick ejecta (optical depth $>$ 1);

5. Constant effective opacity of ejecta;

6. Radioactive decay as the only source of energy;

7. Concentrated distribution of $^{56}\textup{Ni}$ in the centre of the system.

We start by applying the first law of thermodynamics to the expanding supernova ejecta: $$\dot{E} + P \dot{V} = -\frac{\partial L}{\partial m} + \epsilon~, \label{eq:1stlaw1}$$ where $E=\alpha T^4 V$ is the specific energy, $P=\alpha T^4/3$ is the pressure, $T$ is the temperature of the ejecta, $V=1/\rho$ is the specific volume, $\rho$ is the density of the ejecta, $\alpha=4\sigma /c$ is the radiation constant, $\sigma$ is the Stefan-Boltzmann constant, the $\dot{y}$ notation represents the partial derivative with respect to time $\partial y/\partial t$, $L$ is the luminosity output of the system, $m$ is the mass, and $\epsilon$ is the rate of energy per unit mass added to the system. The first term, $\dot{E}$, is the rate of change in energy density, and $P\dot{V}$ represents the specific work involved in expanding the ejecta, so the equation shows that the sum of the rate of change in energy density and the specific work is equal to the sum of the energy per unit mass added to the system (positive) and the luminosity output of the system per unit mass (negative). 
The source of energy in this system, $\epsilon$, is the radioactive decay of $^{56}\textup{Ni}$ to $^{56}\textup{Co}$ and the subsequent decay of $^{56}\textup{Co}$ to stable $^{56}\textup{Fe}$, and is given by $$\epsilon(r, t) = \xi(r) \left[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \epsilon_{\textup{Co}} \mathrm{e}^{-t/\tau_{\textup{Co}}} \right]~, \label{eq:eps41}$$ where $\xi(r)$ is the radial distribution of $^{56}\textup{Ni}$ in the ejecta, $\epsilon_{\textup{Ni}}=3.9\times 10^{10}~\mathrm{erg/s/g}$ and $\epsilon_{\textup{Co}}=6.8\times 10^{9}~\mathrm{erg/s/g}$ are the energy generation rates from $^{56}\textup{Ni}$ and $^{56}\textup{Co}$ decays respectively, and $\tau_{\textup{Ni}}\mathrm{=8.8~days}$ and $\tau_{\textup{Co}}\mathrm{=111.3~days}$ are the lifetimes of $^{56}\textup{Ni}$ and $^{56}\textup{Co}$ respectively [@Nadyozhin1994]. In the diffusion approximation, the luminosity of a shell of the ejecta at radius $r$ is related to the temperature of that shell by $$L = -4\pi r^2 \frac{\Gamma c \alpha}{3} \frac{\partial T^4}{\partial r}~, \label{eq:luminosity1}$$ where $\Gamma=1/(\rho \kappa)$ is the mean free path in the ejecta, $\kappa$ is the effective opacity of the ejecta, and $c$ is the speed of light. The temperature can be expressed via a separation of variables: $$T(r,t)^4=\psi(r) \phi(t) T^4_{00} {\left[\frac{R_0}{R(t)}\right]}^4~, \label{eq:temp1}$$ where the temperature’s radial dependence is contained in $\psi(r)$, and its time dependence in $\phi(t)$. $T_{00}$ is the initial temperature at zero radius. Shortly after the initial supernova explosion, the expansion of the ejecta should become homologous such that the radial extent of the surface of the ejecta at time $t$, $R(t)$, is given by $$R(t) = R_0 + v_{\mathrm{sc}}t~, \label{eq:homexp1}$$ where $R(t)$ advances at a constant scale velocity $v_{\mathrm{sc}}$ from its initial position at shock breakout, $R_0$. 
For a SNIa, this can be taken as the radius of the white dwarf progenitor [@Piroetal2009]. Solving Eq. (\[eq:1stlaw1\]) using Eqs. (\[eq:luminosity1\])-(\[eq:homexp1\]) (see Appendix \[App:LCDerivation\] for details) gives the surface luminosity as a function of time, which is an equation for the light curve, as $$\begin{gathered} L_{\mathrm{surf}}(t)=\frac{2M_{\textup{Ni}}}{\tau_{\mathrm{m}}} \mathrm{e}^{ -\left(\frac{2 R_0 t}{v_{\mathrm{sc}} \tau^2_m} + \frac{t^2}{\tau^2_m}\right) } \bigg[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_{\mathrm{m}}} + \frac{t^{\prime}}{\tau_{\mathrm{m}}}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Ni}}} {\rm d}t^{\prime} \\ + \epsilon_{\textup{Co}} \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_{\mathrm{m}}} + \frac{t^{\prime}}{\tau_{\mathrm{m}}}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Co}}} {\rm d}t^{\prime} \bigg]~, \label{eq:luminosity31}\end{gathered}$$ where $M_{\textup{Ni}}$ is the initial mass of $^{56}\textup{Ni}$ in the ejecta, $\tau_{\mathrm{m}}=(2\kappa M_{\mathrm{ej}} /v_{\mathrm{sc}}\beta c)^{1/2}$ is the light curve timescale which determines how quickly the brightness rises to a peak and falls away again, $M_{\mathrm{ej}}$ is the total ejecta mass, and $\beta$ is a constant that depends on the ejecta’s density profile. A80 calculates that a good approximation for a variety of different density profiles is $\beta =13.8$, and this value is adopted here as well. Eq. (\[eq:luminosity31\]) will be completed by an overall time-dependent factor to account for gamma ray leakage – see Appendix \[App:LCDerivation\] for more details. 
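For concreteness, Eq. (\[eq:luminosity31\]) can be transcribed numerically. The sketch below is an illustration with assumed fiducial parameter values ($M_{\rm Ni}$, $M_{\rm ej}$, $\kappa$, $v_{\rm sc}$, $R_0$ are guesses, not fitted values), not the code used for this work, and the overall gamma-ray leakage factor mentioned above is omitted; the two bracketed integrals are evaluated with a cumulative trapezoidal rule:

```python
import numpy as np

DAY, MSUN, C_LIGHT = 86400.0, 1.989e33, 2.998e10  # cgs units
EPS_NI, EPS_CO = 3.9e10, 6.8e9   # energy generation rates, erg/s/g (Nadyozhin 1994)
TAU_NI, TAU_CO = 8.8, 111.3      # decay lifetimes in days
BETA = 13.8                      # A80 density-profile constant

def arnett_light_curve(t_days, m_ni=0.6, m_ej=1.44, kappa=0.2, v_sc=1.0e9, r0=1.0e9):
    """Surface luminosity (erg/s) from the A82/C12-style expression above.
    Masses in solar masses, kappa in cm^2/g, v_sc and r0 in cgs (fiducial guesses)."""
    tau_m = np.sqrt(2.0 * kappa * m_ej * MSUN / (BETA * C_LIGHT * v_sc))  # seconds
    t = np.asarray(t_days, dtype=float) * DAY
    x, y = t / tau_m, r0 / (v_sc * tau_m)

    def cumulative_term(tau_decay_days):
        # integrand: (y + t'/tau_m) exp(t'^2/tau_m^2 + 2 y t'/tau_m) exp(-t'/tau_decay)
        f = (y + x) * np.exp(x**2 + 2.0 * y * x - t / (tau_decay_days * DAY))
        # running trapezoidal integral over t'
        return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))

    prefactor = (2.0 * m_ni * MSUN / tau_m) * np.exp(-(x**2 + 2.0 * y * x))
    return prefactor * ((EPS_NI - EPS_CO) * cumulative_term(TAU_NI)
                        + EPS_CO * cumulative_term(TAU_CO))
```

With these fiducial values $\tau_{\rm m}\approx 19$ days and the curve peaks at around $10^{43}$ erg/s, in line with typical SNIa luminosities.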
### Model parameters {#sssec:params} It would appear the variables that need to be known in order to calculate the light curve using Eq. (\[eq:luminosity31\]) are the initial radius of shock breakout $R_0$, scale velocity $v_{\mathrm{sc}}$, initial $^{56}\textup{Ni}$ mass $M_{\textup{Ni}}$, effective opacity $\kappa$, and total ejecta mass $M_{\mathrm{ej}}$. However, $v_{\mathrm{sc}}$ can be calculated from the energetics of the supernova explosion. The kinetic energy $E_{\mathrm{K}}$ is calculated as the difference between the energy produced by nuclear fusion $E_{\mathrm{N}}$ and the gravitational binding energy $E_{\mathrm{G}}$ of the white dwarf progenitor [@Howell2006; @Maeda2009]. The equation for $E_{\mathrm{N}}$ given in Ref. [@Maeda2009] is $$\label{S12 4} E_{\mathrm{N}} =[1.74f_{\textup{\textup{Fe}}}+1.56f_{\textup{Ni}}+1.24f_{\textup{Si}}] \Big( \frac{M_{\mathrm{ej}}}{M_{\odot}} \Big) \times 10^{51} \mathrm{~erg}~,$$ where $f_{\textup{Fe}, \textup{Ni}, \textup{Si}, \textup{C/O}}$ are the initial fractions of the ejecta mass in the form of stable $^{56}\textup{Fe}$; radioactive $^{56}\textup{Ni}$; intermediate mass elements such as $\textup{Si}$, $\textup{Mg}$, and $\textup{S}$; and unburned carbon/oxygen respectively, and the fractions are governed by the relationship $f_{\textup{C/O}} = 1-f_{\textup{Fe}}-f_{\textup{Ni}}-f_{\textup{Si}}$. $f_{\textup{Ni}}$ can be rewritten in terms of the initial mass of $^{56}\textup{Ni}$ in the ejecta using $M_{\textup{Ni}}=f_{\textup{Ni}}M_{\mathrm{ej}}$. An empirical formula for $E_{\mathrm{G}}$ is prescribed in Ref. [@Yoon2005]: $$\begin{aligned} \label{YL05 34} E_{\mathrm{G}}(\rho_{\mathrm{c}}) = - \bigg[ 32.759747 + 6.7179802 \log_{10} \rho_{\mathrm{c}} - 0.28717609(\log_{10} \rho_{\mathrm{c}})^2 \bigg] \times 10^{50} \mathrm{~erg}~,\end{aligned}$$ where $\rho_{\mathrm{c}}$ is the central density of the white dwarf progenitor. 
This allows the scale velocity $v_{\mathrm{sc}}$ to be calculated using: $E_{\mathrm{K}}=\frac{3}{10}M_{\mathrm{ej}}\ v_{\mathrm{sc}}^2=\left| E_{\mathrm{G}}-E_{\mathrm{N}} \right|$ [@Scalzo2012]. Thus the free parameter $v_{\mathrm{sc}}$ has been removed, but only at the expense of adding the mass fractions $f_{\textup{Fe}, \textup{Si}, \textup{C/O}}$ and the central density of the white dwarf progenitor $\rho_{\mathrm{c}}$ as free parameters instead. At this stage the free parameters required are $R_0$, $M_{\mathrm{ej}}$, $\kappa$, $\rho_{\mathrm{c}}$, $M_{\textup{Ni}}$, and two out of three of $f_{\textup{Fe}, \textup{Si}, \textup{C/O}}$ since the fractions must sum to unity. However, this number can be reduced further by investigating the effects of varying the central density of the white dwarf progenitor on the ratio of yields of $^{56}\textup{Ni}$ and $^{56}\textup{Fe}$ produced in the initial supernova explosion, which leads to the following relationship [@Krueger2010]: $$\label{S12 6} \frac{f_{\textup{Ni}}}{f_{\textup{Ni}} + f_{\textup{Fe}}} = 0.95 - 0.05 \times \frac{\rho_{\mathrm{c}}}{10^9 \mathrm{~g/cm^{3}}}~,$$ such that $f_{\textup{Fe}}$ can be calculated provided $f_{\textup{Ni}}=M_{\rm Ni}/M_{\rm ej}$ and $\rho_{\mathrm{c}}$ are known, which removes $f_{\textup{Fe}}$ as a free parameter in our model. Thus only one of the three element fractions now needs to be specified. There are strong limits on $f_{\textup{C/O}}$: it is well known that the amount of unburned carbon and oxygen is very small, between $0$ and $5\%$ of the total white dwarf mass [@Thomas2011]. For this reason, we choose $f_{\textup{C/O}}$ as the free parameter instead of $f_{\rm Si}$. Finally, there is a relationship between the initial mass of $^{56}\textup{Ni}$ in the ejecta, $M_{\textup{Ni}}$, and the mean effective opacity of the ejecta $\kappa$ which allows us to eliminate $\kappa$ as an input parameter in our light curve model. 
Because knowledge of this relationship is vital to understanding the WLR observed in SNIe populations, the specifics of the $M_{\textup{Ni}}$-$\kappa$ relationship are discussed in greater detail in its own dedicated section, Section \[ssec:MNikappa\]. Thus a semi-analytical model to calculate the light curve at all times can be created, based on Eq. (\[eq:luminosity31\]), as long as values for the following five parameters are specified: $M_{\mathrm{ej}}$, $M_{\textup{Ni}}$, $f_{\textup{C/O}}$, $\rho_{\mathrm{c}}$, and $R_0$. Although the total number of input parameters has not decreased from our initial set, the parameters in this new set can either be estimated with much better justification or are already constrained by previous work. Dependence of model on parameters {#ssec:LCparams} --------------------------------- Before moving on, we would like to test the model’s dependence on each of the input parameters in order to gain an intuitive sense of how the model works, and to verify that the model behaves sensibly given our knowledge of the underlying supernova astrophysics. In order to test the effect of model parameters on the light curve, each parameter was varied over an observationally motivated (or allowed) range in turn, while the other parameters were held constant. For now we assume $M_{\textup{Ni}}$ and $\kappa$ are independent input parameters, although we will show in Section \[ssec:MNikappa\] that there is a relationship between the two. Further, we neglect the minor effect of varying white dwarf mass on the binding energy, though this will be included later. The results are shown in Fig. \[fig:param\_dep\]. ![Effect of varying each model parameter on the SNIa light curve while the other parameters remain fixed. 
When not being varied, the parameters are fixed with values $R_0\mathrm{=10^{9}~cm}$, $M_{\mathrm{Ni}}\mathrm{=1.0}M_{\odot}$, $f_{\rm C/O}\mathrm{=0.0}$, $\rho_{\mathrm{c}}\mathrm{=1.0\times 10^9~g/cm^{3}}$, $M_{\mathrm{ej}}\mathrm{=1.44}M_{\odot}$, and $\kappa \mathrm{=0.2\ cm^2/g}$[]{data-label="fig:param_dep"}](param_dep3_crop.pdf){width="\textwidth"} Given that the value of the radius of initial shock breakout can be taken to be the radius of the white dwarf progenitor [@Piroetal2009], the values of $R_0$ would be expected to be in the range $R_0\mathrm{=10^{7}-10^{9}~cm}$. In the upper left panel of Fig. \[fig:param\_dep\] we can see that varying $R_0$ within this range yields essentially identical light curves. This corresponds to the limit $R_0 \rightarrow 0$ in Eq. (\[eq:luminosity31\]) which, as discussed in Ref. [@Arnett1982], means that the light curve model essentially no longer depends on $R_0$. ![Effect of varying $\rho_{c}$ on the nuclear, gravitational, and kinetic energies of a SNIa. The other model parameters are held constant with values of: $M_{\mathrm{Ni}}\mathrm{=1.0}M_{\odot}$, $f_{\rm C/O}\mathrm{=0.0}$, $R_0\mathrm{=10^{9}~cm}$, $M_{\mathrm{ej}}\mathrm{=1.44}M_{\odot}$, and $\kappa \mathrm{=0.27 cm^2/g}$.[]{data-label="fig:EngrhoDep"}](NewEngrho.pdf){width="\textwidth"} There are strong constraints on the central density of the white dwarf progenitor, $\rho_{\mathrm{c}}$. The lower bound is due to there being a minimum density, for a given central temperature, at which the fusion of carbon can occur and trigger the thermonuclear explosion required for a SNIa. For example, Fig. 1 of Ref. [@Sahrling1994] calculates that even a high central temperature of $T_\mathrm{c}\mathrm{=4.0\times 10^8~K}$ requires a minimum central density around $\rho_{\mathrm{c}}\mathrm{=1.0\times 10^9~g/cm^{3}}$. 
The upper limit is provided by the point at which electron capture on nuclei to produce neutrons becomes a dominant process such that the white dwarf will collapse to a neutron star instead of becoming a SNIa, which Refs. [@Yoon2005] and [@Nomoto1991] calculate to occur around $\rho_{\mathrm{c}}\mathrm{\approx 1.0\times 10^{10}~g/cm^{3}}$. Figure \[fig:EngrhoDep\] shows that, over this constrained range, the kinetic energy of the system varies little with changes in $\rho_{\mathrm{c}}$. Therefore, it is not surprising that the light curves are only very weakly affected by changes in $\rho_{\mathrm{c}}$ as shown in the middle left panel of Fig. \[fig:param\_dep\]. The fraction of unburned carbon/oxygen in the ejecta of SNIe is constrained observationally to be very low [@Thomas2011], and therefore fractions above 5$\%$ will not be considered here. The lower left panel of Fig. \[fig:param\_dep\] shows that over this range, variation in $f_{\rm C/O}$ has little effect on the light curve of the SNIa. As varying the values of $f_{\rm C/O}$, $\rho_{\rm c}$, and $R_0$ within the physically motivated ranges mentioned above does not significantly affect the light curves, we will fix their values from here onwards at $f_{\rm C/O}=0.0$, $\rho_{\rm c}=1.0\times 10^9 \mathrm{g/cm^3}$, and $R_0=1.0\times 10^9 \mathrm{cm}$. The upper right panel of Fig. \[fig:param\_dep\] shows that, in general, increasing the total ejecta mass $M_{\mathrm{ej}}$ (while the mass of $^{56}\textup{Ni}$ is kept constant) decreases the brightness of the supernova and increases the light curve width. Specifically, when $M_{\mathrm{ej}}$ is doubled from $M_{\mathrm{ej}}=1.5M_{\odot}$ to $3.0M_{\odot}$ the peak luminosity of the SNIa drops to about $\sim70\%$ of its former value, and the width of its light curve increases by $\sim60\%$. 
This behaviour is logical as a more massive ejecta would be more difficult for the radiation emitted in the decays of $^{56}\textup{Ni}$ and $^{56}\textup{Co}$ to pass through, so the same amount of energy escapes over a longer timescale, mathematically represented in the equation for the light curve timescale: $\tau_{\mathrm{m}} = \sqrt{2\kappa M_{\mathrm{ej}}/\beta c v_{\mathrm{sc}}}$. This results in a fainter, wider light curve. Note that the above expression treats the effect of $M_{ej}$ and $\kappa$ on $\tau_{\mathrm{m}}$ distinctly, so this physical interpretation is valid even though we held the effective opacity constant through the fixed value of $\kappa$. Variations in $M_{\mathrm{ej}}$ also affect the energetics of the supernova explosion, but the resulting effect on the light curve is negligible. The middle right panel of Fig. \[fig:param\_dep\] shows that decreasing the mass of $^{56}\textup{Ni}$ from $M_{\mathrm{Ni}}\mathrm{=1.0}M_{\odot}$ to $M_{\mathrm{Ni}}\mathrm{=0.5}M_{\odot}$ in an $M_{{\rm ej}}=1.4M_{\odot}$ supernova causes the peak luminosity of the SNIa to decrease to around half of its former value, while the timescale over which the SNIa brightens and fades remains essentially unaffected. This is simply because the reduced amount of unstable $^{56}\textup{Ni}$ decreases the instantaneous power output from radioactive decay, making the SNIa fainter at each point on the light curve. Variations in $M_{\mathrm{Ni}}$ also affect the energetics of the supernova explosion which determine the scale velocity $v_{\mathrm{sc}}$; however the resulting variation of $v_{\mathrm{sc}}$ is very small and has a negligible effect on the SNIa light curve. The lower right panel of Fig. \[fig:param\_dep\] shows that reducing the effective opacity of the SNIa from $\kappa=0.30\ \mathrm{cm^2/g}$ to $\kappa=0.10\ \mathrm{cm^2/g}$ causes the width of the light curve to halve, while the peak luminosity nearly doubles. 
This reduction in $\kappa$ allows the radiation produced by the decay of $^{56}\textup{Ni}$ to escape more easily, so the light curve peaks at an earlier time, when a higher fraction of the initial $^{56}\textup{Ni}$ remains undecayed; the instantaneous power from radioactive decay at the peak is therefore higher and the SNIa brighter. However, this also means that there is less trapped radiation remaining at late times, such that the brightness of the SNIa falls sharply soon after the peak.

$M_{\rm Ni}$-$\kappa$ and width-luminosity relationships {#ssec:MNikappa}
--------------------------------------------------------

As mentioned at the end of Section \[ssec:LCM\], there is a relationship between the initial mass of $^{56}\textup{Ni}$ in the ejecta, $M_{\textup{Ni}}$, and the mean effective opacity of the ejecta, $\kappa$, which allows us to eliminate $\kappa$ as an input parameter in our light curve model. To understand why this is the case, it is necessary to consider how the radioactive decay energy released as $^{56}\textup{Ni}$ decays through to $^{56}\textup{Fe}$ escapes the ejecta. The radiation is initially produced as short-wavelength, high-energy gamma rays. The ejecta has a high opacity at these wavelengths, but a much lower opacity at longer wavelengths in the optical and infrared regions. Thus in order to escape, the radiation must somehow increase its wavelength. One process through which this increase in wavelength can happen is fluorescence, whereby an atom absorbs a single high-energy, short-wavelength photon and emits several lower-energy, longer-wavelength photons. This fluorescence process becomes less effective the more ionised a material becomes [@Pinto2000b]. An increased $^{56}\textup{Ni}$ content increases the instantaneous power deposited into the ejecta by the radioactive decay of the $^{56}\textup{Ni}$.
This increased power deposition results in the ejecta being heated more, and the higher temperature means the ejecta material will be more ionised, which in turn reduces the efficacy of the fluorescence process, so a larger fraction of the radiation remains trapped at short wavelengths. In the light curve model, a larger fraction of the radiation remaining trapped can be expressed as an increased effective opacity $\kappa$. Thus there is a positive relationship between $M_{\textup{Ni}}$ and $\kappa$ where an increased $M_{\textup{Ni}}$ corresponds to an increased $\kappa$. As has been shown in Section \[ssec:LCparams\] in the middle right and lower right panels of Fig. \[fig:param\_dep\], increasing $M_{\textup{Ni}}$ in the light curve model increases the peak luminosity (i.e., height) of the light curve, and increasing $\kappa$ increases the timescale over which the SNIa brightens and fades (i.e., the width of the light curve). Thus the above relationship between $M_{\textup{Ni}}$ and $\kappa$ means that an increase of these two parameters simultaneously increases both the height and width of the light curve, as can be seen in Fig. \[fig:MNikappaDep\]. ![Effect of varying both $M_{\mathrm{Ni}}$ and $\kappa$ simultaneously on the shape of the SNIa light curve. The other model parameters are held constant with values of: $f_{\rm C/O}\mathrm{=0.0}$, $R_0\mathrm{=10^{9}~cm}$, $\rho_{\mathrm{c}}\mathrm{=1.0\times 10^9~g/cm^{3}}$, and $M_{\mathrm{ej}}\mathrm{=1.44}M_{\odot}$.[]{data-label="fig:MNikappaDep"}](MNikappacurves.pdf){width="\textwidth"} The above relationship between $M_{\textup{Ni}}$ and $\kappa$ can explain the WLR observed in SNIe [@Pinto2000b]. Since our model must be able to reproduce the WLR in order to be valid, this allows the specific relationship between $M_{\textup{Ni}}$ and $\kappa$ to be quantified. 
The exact $M_{\textup{Ni}}$-$\kappa$ relationship that is required to produce the observed WLR can be found by calculating, for a given increase in $M_{\textup{Ni}}$, how much of an increase in $\kappa$ is required to produce a new curve whose peak luminosity and width are both scaled from the peak luminosity and width of the old curve by the same factor. This can then be repeated, such that the required $\kappa$ values can be calculated for many $M_{\textup{Ni}}$ values, therefore establishing a relationship between $M_{\textup{Ni}}$ and $\kappa$. In practice this is done as follows: consider a light curve A that is produced with parameters ${M_{\textup{Ni}}}_{\mathrm{A}}$ and $\kappa_{\mathrm{A}}$, and a set of light curves $\rm B_i$ produced with fixed ${M_{\textup{Ni}}}_{\mathrm{B}}$ but different $\kappa_{\mathrm{B, i}}$. The accepted $\kappa_{\mathrm{B, i}}$ is the one that produces the light curve $\rm B_i$ whose shape about the peak most closely matches that of light curve A (computed by minimising $\chi^2$) once light curve $\rm B_i$ has been stretched in both height and width by a factor $s_{\rm B,i}$. This process yields the new pair of values (${M_{\textup{Ni}}}_{\mathrm{B}}$, $\kappa_{\mathrm{B, accepted}}$) and can be repeated for other values ${M_{\textup{Ni}}}_{\mathrm{C}}$ and so on in order to build a relationship between $M_{\textup{Ni}}$ and $\kappa$. The procedure does, however, require the specification of a single anchor point in the $M_{\textup{Ni}}$-$\kappa$ parameter space, $({M_{\textup{Ni}}}_{\mathrm{A}}, \kappa_{\mathrm{A}})$, through which the relationship passes. Following Refs. [@Piroetal2009] and [@Childressetal2015], we specify the point $(M_{\mathrm{Ni}}, \kappa)=(0.79\ M_{\odot}, 0.20\ \mathrm{cm^2/g})$. The resulting relationship between $M_{\textup{Ni}}$ and $\kappa$ is displayed in the left panel of Fig. \[fig:MNivkappa\_MChvG\].

![[*Left panel*]{}: Relationship between $M_{\textup{Ni}}$ and $\kappa$ that is required to produce the observed WLR.
[*Right panel*]{}: Relationship between the Chandrasekhar mass $M_{\mathrm{Ch}}$ of a SNIa, expressed in units of the solar mass $M_{\odot}\mathrm{=2.0\times 10^{30}~kg}$, and the local value of Newton’s gravitational constant $G$, expressed in units of the standard value $G_0\mathrm{=6.67\times 10^{-11}~N\,m^2\,kg^{-2}}$. The standard Chandrasekhar mass value is given by $M_{\mathrm{Ch}}(G_0)\mathrm{=1.44}M_{\odot}$.[]{data-label="fig:MNivkappa_MChvG"}](kappaMNi_MChG_2panel_sq3_crop.pdf){width="\textwidth"}

Using this calculated $M_{\textup{Ni}}$-$\kappa$ relationship, we can reproduce a set of light curves for a population of SNIe in a standard gravity environment with varying $M_{\textup{Ni}}$ values. We can then verify that this set of light curves does obey the WLR by rescaling them. The rescaling is done by stretching the light curve’s height and width by a factor $s$ such that the shape of the stretched light curve about the peak most closely matches that of a template light curve (again computed by minimising $\chi^2$). The right panel of Fig. \[fig:G0\_Rescaling\], which shows the unscaled and rescaled light curves for a population of SNIe in standard gravity, displays a tight distribution of rescaled peak luminosities, confirming that the WLR is obeyed for $G=G_0$ when the $M_{\textup{Ni}}$-$\kappa$ relationship calculated as described above is used. This is expected, given the way in which the $M_{\textup{Ni}}$-$\kappa$ relationship was calculated. Nonetheless, checking that the population does obey the WLR confirms that the $M_{\textup{Ni}}$-$\kappa$ relationship has been calculated correctly and gives a clear demonstration of what the $M_{\textup{Ni}}$-$\kappa$ relationship achieves. If the value of $G$ is constant (i.e. $G(z)=G_0$) then the SNIe will have the same average rescaled intrinsic peak luminosity at all redshifts, and therefore act as conventional standardisable candles.
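The $\chi^2$-based stretch matching can be illustrated with a minimal sketch; the analytic light curve shape, time grid, and search bounds below are toy assumptions standing in for the model output, not the paper’s actual curves.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def template(t):
    # Toy light curve shape (rise, peak at t = 20, decline); a stand-in
    # for the semi-analytical model output, not a real SNIa curve.
    return t**2 * np.exp(-t / 10.0)

def chi2(s, lc, t_grid):
    # Stretch the candidate curve lc in height and width by the same
    # factor s, then compare with the template about the peak.
    return np.sum((s * lc(t_grid / s) - template(t_grid))**2)

# Candidate: the template shrunk in height and width by a factor s0 = 0.8,
# mimicking a fainter, narrower member of the population.
s0 = 0.8
candidate = lambda t: template(t * s0) / s0

# Fit the stretch factor over a window about the template peak (t = 20).
t_grid = np.linspace(10.0, 30.0, 200)
fit = minimize_scalar(lambda s: chi2(s, candidate, t_grid),
                      bounds=(0.3, 3.0), method="bounded")
print(fit.x)  # recovers ~0.8
```

The toy fit recovers the known stretch factor exactly because the curves are self-similar by construction; in the paper’s procedure the comparison is restricted to the region about the peak of model light curves.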
Gravitational dependence of light curve {#ssec:GdepLCM}
---------------------------------------

Now that a model for producing the light curve of a SNIa for a given set of input parameters has been established, the next step is to understand how the values of these input parameters depend on the local strength of gravity. As an initial investigation, this paper focuses mainly on the dependence of the total ejecta mass $M_{\mathrm{ej}}$ on the strength of gravity. This is the most obvious and important effect the strength of gravity should have on any of the input parameters. A simple measure of the strength of gravity is the value of Newton’s gravitational constant, $G$, with a smaller value of $G$ corresponding to weaker gravity and vice versa. As mentioned previously, the SNIa occurs when the white dwarf progenitor accretes enough mass from a binary partner to increase the core temperature and density above the level at which carbon fusion can occur. The underlying physics follows from the white dwarf being made of degenerate matter, which obeys an inverse mass-radius relationship: the additional mass decreases the white dwarf’s radius, thereby increasing the density and temperature in its core. The consensus model for SNIe is that the critical mass at which the carbon detonation is triggered, and therefore $M_{\mathrm{ej}}$, is approximately equal to the mass at which the internal electron degeneracy pressure that prevents the white dwarf from collapsing under its own weight can no longer withstand the inwards force of gravity, a mass known as the Chandrasekhar mass, $M_{\mathrm{Ch}}$ [@Hillebrandt2000]. Making the assumption $M_{\mathrm{ej}} \approx M_{\mathrm{Ch}}$, it becomes necessary to understand how $M_{\mathrm{Ch}}$ depends on $G$. $M_{\mathrm{Ch}}$ can be calculated by equating the inwards force of gravity against the outwards force due to the white dwarf’s internal electron degeneracy pressure [@Chandrasekhar1931].
The electron degeneracy pressure arises from the Pauli exclusion principle, which states that no two fermions can occupy the same quantum mechanical state. Therefore, when several electrons are confined to a small volume they must each occupy different energy levels, and adding further electrons to this small volume by compressing the material raises the energy of the highest occupied level. This means that energy is required to compress the electrons, which is the definition of a pressure, in this case known as electron degeneracy pressure. The derivation of the equation for the Chandrasekhar mass is very well known (see, e.g., [@MChDerivation]); a condensed version is presented here, and a fully detailed version can be seen in Appendix \[App:MChDerivation\]. The equation of hydrostatic equilibrium for a spherically symmetrical stellar fluid in Newtonian gravity can be written as $$\frac{1}{r^{2}} \frac{{\rm d}}{{\rm d}r} \left( \frac{r^{2}}{\rho} \frac{{\rm d}P}{{\rm d}r} \right) = - 4\pi{G}\rho, \label{eq:stellarstate1}$$ where $r$ is the radial coordinate, $P$ is the pressure of the fluid, and $\rho$ is the density of the fluid. For the degenerate material under high compression inside a white dwarf, the electrons will have a large energy due to the electron degeneracy pressure, and so will have a velocity approaching the speed of light. Thus, the white dwarf material is best described as a relativistic Fermi gas with an equation of state given by $$P = \frac{\hbar c}{12 \pi^{2}} \left( \frac{3 \pi^{2} \rho}{m_{\mathrm{N}} \mu} \right)^{4/3} \equiv K\rho^{4/3}, \label{eq:EoS1}$$ where $\hbar$ is the reduced Planck constant, $m_{\mathrm{N}}$ is the nucleon mass, and $\mu=\left\langle A/Z \right\rangle$ is the average mass number per nuclear charge, with $\mu \approx2$ for the $^{12}\mathrm{C}$ and $^{16}\mathrm{O}$ that make up the majority of the white dwarf.
This equation of state is of the form of a polytrope $P = K \rho^{\gamma}$ with $\gamma=4/3$. By defining $\rho \equiv \lambda \Theta^n$, $\gamma \equiv \frac{n+1}{n}$, and introducing a radial variable $y\equiv r/\alpha$ where $\alpha \equiv \sqrt{(n+1)K \lambda^{(1-n)/n}/4 \pi G}$, Eq. (\[eq:stellarstate1\]) becomes the Lane-Emden equation for polytropes in hydrostatic equilibrium [@Lane1870]: $$\frac{1}{y^2} \frac{{\rm d}}{{\rm d}y} \left(y^2 \frac{{\rm d}\Theta}{{\rm d}y} \right) = - \Theta^n~. \label{eq:LaneEm1}$$ The white dwarf mass is given by $$M = 4 \pi \lambda \alpha^3 \left[-y^2 \frac{{\rm d}\Theta}{{\rm d}y} \right]_{y_1}, \label{eq:Mint11}$$ where $y_1$ corresponds to the outer radius of the star $R$ where $\rho(R)=\Theta(y_1)=0$, and can be found by numerically solving Eq. (\[eq:LaneEm1\]), which gives $y_1(n=3)=6.89685$ and $-y^2{\rm d}\Theta/{\rm d}y\mid_{y_1}=2.01824$ for a white dwarf. Substituting the definitions for $\lambda$ and $\alpha$ into Eq. (\[eq:Mint11\]) leads to $$M_{\mathrm{Ch}}=\frac{\sqrt{3 \pi}}{2} {\left( \frac{\hbar c}{G} \right)}^{3/2} \frac{1}{( \mu m_{\mathrm{N}} )^2} \left[- y^2 \frac{{\rm d}\Theta}{{\rm d}y} \right]_{y_1}, \label{MCh1}$$ which defines the Chandrasekhar mass $M_{\mathrm{Ch}}$. This gives a clear proportionality for the Chandrasekhar mass, and therefore, in our model, for the white dwarf mass at supernova: $M_{\mathrm{ej}}=M_{\mathrm{Ch}}\propto G^{-3/2}$, as was shown in earlier works [@Amendolaetal1999; @Garcia-Berroetal1999; @RiazueloUzan2002]. This result is displayed graphically in the right panel of Fig. \[fig:MNivkappa\_MChvG\]. The fact that the Chandrasekhar mass $M_{\rm Ch}$ increases with a decreasing $G$ can be understood as follows: when $G$ decreases, gravity per unit mass becomes weaker, and therefore the electron degeneracy pressure can counteract the gravity produced by more mass before the collapse occurs.
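The constants quoted above and the $G^{-3/2}$ scaling of Eq. (\[MCh1\]) can be verified numerically. The sketch below integrates the $n=3$ Lane-Emden equation from a small-$y$ series start (the starting point, tolerances and SI constants are implementation choices, with $\mu=2$ as above):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden_n3():
    """Integrate the n = 3 Lane-Emden equation; return y1 and -y1^2 dTheta/dy."""
    def rhs(y, u):                     # u = [Theta, dTheta/dy]
        return [u[1], -u[0]**3 - 2.0 * u[1] / y]
    def surface(y, u):                 # Theta = 0 marks the stellar surface
        return u[0]
    surface.terminal = True
    y0 = 1e-6                          # series start avoids the y = 0 singularity
    sol = solve_ivp(rhs, (y0, 20.0), [1.0 - y0**2 / 6.0, -y0 / 3.0],
                    events=surface, rtol=1e-10, atol=1e-12)
    y1 = sol.t_events[0][0]
    return y1, -y1**2 * sol.y_events[0][0][1]

def m_chandrasekhar(G, mu=2.0):
    """Chandrasekhar mass in kg from Eq. (MCh1), with standard SI constants."""
    hbar, c, m_N = 1.0546e-34, 2.9979e8, 1.6749e-27
    _, w = lane_emden_n3()
    return np.sqrt(3.0 * np.pi) / 2.0 * (hbar * c / G)**1.5 / (mu * m_N)**2 * w

G0, M_sun = 6.674e-11, 1.989e30
print(lane_emden_n3())                     # ~ (6.89685, 2.01824)
print(m_chandrasekhar(G0) / M_sun)         # ~ 1.43, close to the canonical 1.44
print(m_chandrasekhar(0.8 * G0) / m_chandrasekhar(G0))  # 0.8**(-3/2) ~ 1.40
```

The Lane-Emden constants match the values quoted above, and the last line makes the $M_{\rm Ch}\propto G^{-3/2}$ dependence explicit.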
In addition, we note that the relationship $y \equiv r/\alpha = \mathrm{const}$ where $\alpha \equiv \sqrt{(n+1)K \lambda^{(1-n)/n}/4 \pi G}$ tells us that the radius of the white dwarf $R$ also depends on $G$ as $R \propto G^{-1/2}$. However, since changes to the white dwarf radius, and therefore the model parameter $R_0$, have a negligible effect on the light curve, this effect can be neglected. Moreover, the binding energy $E_G$ used in calculating the energetics of the SNIa explosion is also affected by changes in the strength of gravity. We use a simplified model that assumes constant density of the white dwarf, but a more complicated model is not necessary as the effect is at the 1-2$\%$ level. The binding energy is related to the strength of gravity, the mass of the progenitor, and the radius of the progenitor by $$\begin{aligned} E_G \propto \frac{GM^2}{R}.\end{aligned}$$ The above results give $M \propto G^{-3/2}$ and $R \propto G^{-1/2}$, and they lead to $E_G \propto G^{-3/2}$. This dependence of binding energy on the strength of gravity affects the light curve in non-standard $G$ through the kinematics of the explosion, but as mentioned above this effect is minor in comparison to the dominant effect of the gravitational-dependence of the Chandrasekhar mass. In this work we shall not consider the effect of varying $G$ on the background expansion rate, as there are theories in which the variation of $G$ mainly affects interactions of matter particles, for example in the form of a Yukawa-type fifth force which decays at large distance, such as viable $f(R)$ gravity models [@Braxetal2008; @Wangetal2012; @Ceron-Hurtadoetal2016] that feature a working chameleon screening mechanism [@KhouryandWeltman20041; @KhouryandWeltman20042]. Also, when considering the time variation of $G$, we assume that $G(z)$ only varies noticeably on cosmological time scales – the supernova timescale is much shorter, over which we take $G$ as constant. 
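These combined proportionalities can be checked in a few lines; the normalisations below are arbitrary placeholders rather than physical values:

```python
def scalings(G, G0=1.0, M0=1.0, R0=1.0):
    """White dwarf mass, radius and binding energy versus G, using the
    proportionalities above with arbitrary normalisations."""
    M = M0 * (G / G0)**-1.5    # M_Ch (and hence M_ej) ~ G^(-3/2)
    R = R0 * (G / G0)**-0.5    # white dwarf radius    ~ G^(-1/2)
    E = G * M**2 / R           # binding energy E_G    ~ G M^2 / R
    return M, R, E

# Net exponent for E_G: 1 + 2*(-3/2) - (-1/2) = -3/2.
print(scalings(0.8)[2] / scalings(1.0)[2])  # 0.8**(-3/2) ~ 1.40
```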
Note that in more general modified gravity models the internal structure of the star can be affected as well, see for example [@Babichevetal2016], but this possibility is beyond the scope of this initial study. Previous works [@Garcia-Berroetal1999; @RiazueloUzan2002] made the assumption $M_{\rm Ni} \propto M_{\rm Ch}$ and therefore $M_{\rm Ni} \propto G^{-3/2}$. However, here we do not assume any systematic dependence of $M_{\rm Ni}$ on $M_{\rm Ch}$ or $G$. Hence when we compare populations of SNIe in different strengths of gravity, as in the subsequent section, we do not change the set of $M_{\rm Ni}$ values used to generate the varied members of the population for each new value of $G$. A proper investigation into how the distribution of the amount of $^{56}{\rm Ni}$ produced would differ when $G \neq G_0$ would require a detailed study of star formation and full hydrodynamical modeling of the SNIa explosion in such scenarios, and if the evolution of $G$ over the relevant timescales is non-negligible then the situation is even more complicated – such an investigation is far beyond the scope of this work. As we mainly want to consider the effect of the gravitational dependence of $M_{\rm ej}$ on the WLR, we make the assumption that the $M_{\textup{Ni}}$-$\kappa$ relationship that underlies the observed WLR is a consequence of nuclear and atomic physics, and does not depend on the local strength of gravity around the SNIa. Considering the astrophysics discussed in Section \[ssec:MNikappa\], this assumption is equivalent to assuming that changes in the strength of gravity around the SNIa do not significantly affect the transport of radiation through the ejecta, which we believe is a reasonable assumption to make for this initial investigation. 
Thus, with the dominant Chandrasekhar mass effect and the minor binding energy effect, we have defined a gravity-dependent light curve model that we can use to investigate the effects of gravity on the interpretation of SNIa cosmology data. We will do this in the next section.

Reinterpreting SNIa Cosmology {#sec:SNIacosmo}
=============================

SNIa cosmology relies heavily on the rescaling procedures that allow SNIe to act as standardisable candles. Now that we have a gravity-dependent light curve model, we are able to investigate whether the WLR and our standardisation procedure are affected in non-standard gravity where $G\neq G_0$.

Gravitational effect on WLR and standardisability {#ssec:GDepWLR}
-------------------------------------------------

Rescaling a population of SNIe in a non-standard gravity environment (generated using the same set of $M_{\rm Ni}$ values and the same calculated $M_{\textup{Ni}}$-$\kappa$ relationship as used for the standard gravity SNIe) still produces a set of approximately uniform rescaled curves, as can be seen in Fig. \[fig:G\_Rescaling\] for two different values of $G$: a weaker gravity ($G=0.8G_0$; left panel) and a stronger one ($G=1.1G_0$; right panel). Therefore, the WLR is reproduced even for $G \neq G_0$.

![Rescaling of light curves from a SNIa population with varying $M_{\mathrm{Ni}}$ in non-standard gravity ($G\neq G_0$). A template curve, whose shape around the peak the other curves have been rescaled to match, is also shown. [*Left panel*]{}: $G=0.8G_0$. [*Right panel*]{}: $G=1.1G_0$.[]{data-label="fig:G_Rescaling"}](0811G_rescale.pdf){width="\textwidth"}

Interestingly, Fig. \[fig:G\_Rescaling\] shows that the average rescaled intrinsic peak luminosity of the non-standard gravity SNIa population is different from that of the standard gravity population as seen in the right panel of Fig. \[fig:G0\_Rescaling\]. The black dotted lines in Fig.
\[fig:G\_Rescaling\] are the same template curves as used in the right panel of Fig. \[fig:G0\_Rescaling\], thus we see that a weaker (stronger) gravity leads to intrinsically less (more) luminous SNIe after applying the same shape matching rescaling procedure. Therefore, our gravity-dependent light curve model allows us to obtain a relationship between the average rescaled intrinsic peak luminosity and the strength of gravity, which is displayed in the left panel of Fig. \[fig:LGz\]. This means that in a model where the local strength of gravity varies as a function of redshift (i.e. $G(z) \neq G_0$) the average rescaled intrinsic peak luminosity will be different at different redshifts, and thus the SNIe will no longer be conventional standardisable candles. Following the arguments made in Section \[sec:intro\], interpreting the SNIa cosmology observations in the context of such a model will lead to different SNIa luminosity distances from the values obtained by assuming $G$ is independent of redshift (i.e. $G(z) = G_0$), and therefore a different best-fitting cosmology. This is the key result of this work, and in the next subsection, as an example of its use, we will apply it to a toy model where the value of $G$ changes over time to see what the consequences would be on the distance-redshift relation that is calculated from the existing SNIa data. ![[*Left panel*]{}: The $L(G)$ relationship showing the effect of the local strength of gravity on the average rescaled intrinsic peak luminosity of a SNIa population. [*Centre panel*]{}: The $G(z)$ relationship for our numerical example specified by Eq. (\[eq:Gvz\]) with $n=4$. [*Right panel*]{}: The $L(z)$ relationship that results from using our gravity-dependent light curve model to investigate how the average rescaled intrinsic peak luminosity $L$ varies with redshift in a model of gravity that varies as specified by Eq. 
(\[eq:Gvz\]) with $n=4$.[]{data-label="fig:LGz"}](LGz_3panel_sq3_crop.pdf){width="\textwidth"}

As previously mentioned, the standardisation procedures used in observational SNIa cosmology are more complex than our rescaling method. Typically such studies describe the variability of SNIe as being captured by two parameters: the time-stretching of the light curve and the colour of the SNIa at peak brightness [@MLCS; @SALT2]. Note that we cannot consider SNIa colour in our simplified procedure because the semi-analytical model we use can only produce UVOIR light curves. The dependence of the absolute magnitude on the two parameters can be estimated using an entire sample of SNIe simultaneously, not just low-redshift SNIe whose distances can be computed from other methods. Any remaining dispersion in the absolute magnitudes of a sample after this stretch-colour standardisation could be used in combination with our model to constrain the variation of $G$ more accurately than has been done in previous works.

Numerical example {#ssec:NumEx}
-----------------

For our numerical example we specify a $G(z)$ relationship, as displayed in the centre panel of Fig. \[fig:LGz\], where the value of $G$ changes as $$\begin{aligned} \label{eq:Gvz} G(z) = G_0 (1+z)^{-1/n},\end{aligned}$$ with $n=4$. Note that this is a toy model intended to describe the late-time variation of $G$ and we do not assume that it works for $z>2$-$3$. We then use our gravity-dependent light curve model to compute the corresponding variation in the average rescaled intrinsic peak luminosity $L$ with redshift that occurs as a consequence of our non-constant $G$, and this is shown in the right panel of Fig. \[fig:LGz\].
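The toy $G(z)$ model just defined, together with the flat-universe case of the luminosity-distance integral used below, can be sketched as follows. For a flat universe the bracket in the integrand reduces to $\Omega_{\rm M}(1+z^{\prime})^3 + \Omega_{\Lambda}$, and the matter-only (Einstein-de Sitter) limit has a closed form that serves as a check; $H_0=70~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ is an illustrative assumption.

```python
from scipy.integrate import quad

def G_of_z(z, n=4, G0=1.0):
    """Toy model of Eq. (eq:Gvz): G(z) = G0 (1+z)^(-1/n), weaker in the past."""
    return G0 * (1.0 + z)**(-1.0 / n)

def d_L_flat(z, Om, H0=70.0):
    """Luminosity distance in Mpc for a flat universe (Om + OL = 1), where
    the bracket in the general integrand reduces to Om(1+z')^3 + (1-Om)."""
    c = 299792.458  # speed of light [km/s]
    I, _ = quad(lambda zp: (Om * (1.0 + zp)**3 + (1.0 - Om))**-0.5, 0.0, z)
    return c * (1.0 + z) / H0 * I

# Einstein-de Sitter check: Om = 1 gives d_L = (2c/H0)(1+z)(1 - 1/sqrt(1+z)).
z = 1.0
print(d_L_flat(z, 1.0))                                       # ~5018 Mpc
print(2 * 299792.458 / 70.0 * (1 + z) * (1 - (1 + z)**-0.5))  # same value
print(d_L_flat(z, 0.31))  # standard cosmology: larger distance at z = 1
```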
Once we have this $L(z)$ relationship we can convert the intrinsic luminosity values to values of absolute magnitude $M$ with $$\label{eq:lumabsmag1} M = M_{\odot} - 2.5\log_{10} \left( \frac{L}{L_{\odot}} \right),$$ and use these absolute magnitude values to (re)interpret the existing SNIa apparent magnitude-redshift data $m(z)$ and produce the distance-redshift relation using $$\label{eq:dLobs} d_L = 10^{(m-M-25)/5},$$ which gives $d_L$ in Mpc. The distance-redshift relationship that results from (re)interpreting the existing SNIa data in a model where gravity varies with redshift as $G(z) = G_0 (1+z)^{-1/4}$ is shown in Fig. \[fig:dvz\]. This re-interpreted $d_L(z)$ can then be compared to theoretical distance-redshift relationships produced using $$\label{eq:d_Lz} d_{\mathrm{L}}(z;\Omega_{\mathrm{M}},\Omega_{\mathrm{\Lambda}}, H_0) = \frac{c(1+z)}{H_0 \sqrt{\left|\kappa\right|}} S \left[ \sqrt{\left|\kappa\right|} \int^{z}_{0} \left[ (1+z^{\prime})^2 (1+\Omega_{\mathrm{M}} z^{\prime}) - z^{\prime}(2+z^{\prime})\Omega_{\mathrm{\Lambda}} \right]^{-1/2} {\rm d}z^{\prime} \right],$$ where the function $S(x)$ is given by $$S(x) = \begin{cases} \sin (x) & \text{if } \Omega_{\mathrm{M}} + \Omega_{\mathrm{\Lambda}} > 1, \\ \sinh (x) & \text{if } \Omega_{\mathrm{M}} + \Omega_{\mathrm{\Lambda}} < 1, \\ x & \text{if } \Omega_{\mathrm{M}} + \Omega_{\mathrm{\Lambda}} = 1, \end{cases}$$ and the curvature parameter $\kappa$ (not to be confused with the effective opacity) is $$\kappa = \begin{cases} 1 & \text{if } \Omega_{\mathrm{M}} + \Omega_{\mathrm{\Lambda}} = 1, \\ 1 - \Omega_{\mathrm{M}} - \Omega_{\mathrm{\Lambda}} & \text{otherwise}, \end{cases}$$ where $\Omega_{\mathrm{M}}$ and $\Omega_{\mathrm{\Lambda}}$ are the matter and dark energy density parameters, and $H_0$ is the present-day (i.e. at $z=0$) Hubble parameter. In Fig. \[fig:dvz\] we start by producing a distance-redshift relation for a standard $\Lambda$CDM cosmology with $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.31, 0.69)$ (green line) produced using Eq.
(\[eq:d\_Lz\]) and construct a mock apparent magnitude-redshift relation using Eq. (\[eq:dLobs\]), assuming $G(z)=G_0$ and therefore a redshift-independent $M$. Once we have this mock data we can compute the $d_L(z)$ relation required to match it if we assume that $G(z)=G_0(1+z)^{-1/4}$, which means that the intrinsic peak luminosity $L$, and therefore the absolute magnitude at peak $M$ through Eq. (\[eq:lumabsmag1\]), depends on redshift (blue line). This gravitational effect means that the supernovae at higher redshift are intrinsically fainter, so that less cosmic acceleration is required to explain the dimming of distant SNIe. We then use a curve fitting approach to identify the values of the cosmological parameters $(\Omega_{\rm M}, \Omega_{\Lambda})$ in Eq. (\[eq:d\_Lz\]) that give the best fit to the reinterpreted mock data. We find that the new best-fitting cosmology is $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.62, 0.38)$ when the Universe is assumed to be flat (red line). We also display the distance-redshift relation produced using Eq. (\[eq:d\_Lz\]) with $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.3, 0.7)$ (purple line) to compare the effect of slightly varying $\Omega_{\rm M}$ with that of reinterpreting the data for a redshift-dependent $G$.

![The $d_L(z)$ relationship showing that when the SNIa apparent magnitude-redshift data is (re)interpreted for a universe where $G(z)=G_0(1+z)^{-1/4}$, the new best-fitting cosmology is $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.62, 0.38)$ when the Universe is assumed to be flat.
The line labelled ‘$G=G_0$’ is computed assuming a standard cosmology of $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.31, 0.69)$ to distinguish it from the purple line and to compare the effect of slightly varying $\Omega_{\rm M}$ with that of reinterpreting the data assuming $G(z)=G_0(1+z)^{-1/4}$.[]{data-label="fig:dvz"}](new_cosmo.pdf){width="\textwidth"}

This is just a numerical example to demonstrate the importance of properly considering the effects of modified gravity on supernova cosmology in models with a specified variation of the strength of gravity with redshift $G(z)$. The process could also be reversed to identify what $G(z)$ relationship would be required for the existing SNIa data to infer a given cosmology, for example one with $(\Omega_{\rm M}, \Omega_{\Lambda})=(1.0, 0.0)$, where there would be no sign of an accelerating Universe and the dimming of distant supernovae is purely intrinsic. We will investigate this possibility in a separate work.

Conclusion {#sec:conc}
==========

Modified gravity theories have been studied as interesting alternatives to the standard $\Lambda$CDM paradigm to explain the late-time cosmic acceleration, the first evidence for which resulted from analyses of the dimming of distant Type Ia supernovae (SNIe). In most such studies, the modification to Einsteinian gravity is assumed to provide a mechanism to accelerate the expansion rate of the Universe, without affecting the interpretation of the Type Ia supernova (SNIa) data itself. However, given the diversity of modified gravity theories being studied in the literature, it is not unnatural to envisage that at least some of these models may affect the properties of the progenitors of SNIe – Chandrasekhar-mass white dwarfs, where gravity plays an important role – and therefore the properties of the SNIe themselves. That being the case, the interpretation of SNIa data would be affected, which can have nontrivial consequences in cosmology.
In this paper, we make an updated attempt to understand the effect of a non-standard gravity on SNIe. Because modern supernova cosmology depends on the ability of SNIe to behave as standardisable candles, we are primarily interested in the standardisability of SNIe light curves in both standard and non-standard scenarios of gravity. Previous work [@Amendolaetal1999; @Garcia-Berroetal1999; @RiazueloUzan2002] suggested a straightforward proportionality between the intrinsic luminosity of SNIe and the value of Newton’s constant $G$. We advance this method by investigating how the full SNIe light curves are affected by modified gravity, an approach that allows us to test whether the width-luminosity relation (WLR) is reproduced, and to verify that the standardisation procedure, vital for the use of SNIe as measures of distance, still works when $G \neq G_0$. As a first step towards a more comprehensive study, this work has made three simplifications that ideally should be revisited in more detail in future investigations. First, we use a semi-analytical model to predict the SNIe light curves, although a more detailed analysis may involve running full 3D hydrodynamical simulations of the SNIa explosion. The physics and astrophysics of SNIe are not yet fully understood, but the simple light curve model used here works reasonably well empirically. Second, we model the effect of modified gravity as a time variation of the gravitational constant $G$, which affects properties of the white dwarf progenitors such as their mass, radius and gravitational binding energy. In principle, a proper treatment of a given modified gravity model requires us to solve the equation for hydrostatic equilibrium, which may lead to changes of the internal structure of the white dwarf. In our treatment, however, the main effect is a change of the Chandrasekhar mass as $M_{\rm Ch}\propto G^{-3/2}$.
We note that a time variation of $G$ is a prediction of many modified gravity theories and indeed a key feature of early theoretical models such as the Brans-Dicke theory. Lastly, we use a light curve standardisation procedure that is simplified in comparison to those that are used in observational SNIa cosmology. Our procedure involves rescaling light curves so that the shape about their peaks matches that of a template light curve. This simplification is only used for this proof-of-concept study, and for a more rigorous analysis it is necessary to closely follow the procedure used in real observations. We leave this possibility for future work. The semi-analytical light curve model described in Eq. (\[eq:luminosity31\]) depends on a few physical parameters, including the ejecta mass $M_{\rm ej}$, the nickel-56 mass $M_{\rm Ni}$, the effective opacity of the ejecta $\kappa$, the progenitor radius $R_0$, the central density of the ejecta $\rho_{\rm c}$ and the mass fraction of unburned carbon-oxygen $f_{\rm C/O}$. Fortunately, the last three have little impact on the light curve over their constrained ranges of values and, in the case of $R_0$, over the range of variation of its value caused by varying $G$. In addition, in this model $\kappa$ is both physically expected to correlate strongly with $M_{\rm Ni}$ and required to do so in order to explain the width-luminosity relation of SNIe, which is fundamental to the standardisability of their light curves. We have derived such an $M_{\rm Ni}$-$\kappa$ relation and verified that it allows us to standardise the SNIe light curves (i.e. match both the shape and amplitude of the light curves) for a range of $M_{\rm Ni}$ values when the Chandrasekhar mass is fixed to its standard value of $1.44M_\odot$. However, if $G$ deviates from its present-day value $G_0$, then the Chandrasekhar mass differs from $1.44M_\odot$ as well, so that the supernova ejecta will have a different $M_{\rm ej}$.
To first order, we assume that the $M_{\rm Ni}$-$\kappa$ relation established above (or observationally from nearby SNIe, for which $G\approx G_0$) is still valid because it is defined by physics involving interactions other than gravity. In this case, we find that rescaling the SNIe light curves by matching their shapes about the peak when $G \neq G_0$ still leads to a good match of their heights (with a slightly larger spread than for $G=G_0$), thus verifying that the WLR holds for $G \neq G_0$. However, these rescaled peak luminosities are different from the ‘template’ value (which is obtained by using nearby SNIe and therefore for $G=G_0$). In other words, a time variation of $G$ results in a time dependence of the average rescaled intrinsic peak luminosity of SNIe, so that they are no longer standardisable candles in such a scenario. Because of this, in a theory where $G$ is time-dependent, the observed dimming of distant SNIe will at least partly be due to the variation of the local strength of gravity, and is not entirely the consequence of the cosmic acceleration. We have demonstrated this by using two models, one of standard $\Lambda$CDM with $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.3, 0.7)$ and no time variation of $G$, and a ‘modified’ $\Lambda$CDM with $(\Omega_{\rm M}, \Omega_{\Lambda})=(0.62, 0.38)$ and $G(z)=G_0(1+z)^{-1/4}$. Both scenarios predict the same apparent magnitude-redshift relation for SNIe, and therefore cannot be distinguished using supernova cosmology. This means that for accurate cosmological tests of gravity we must carefully consider gravity’s impact on SNIa astrophysics, and how this feeds back into the interpretation of the cosmological data being used. This may have an impact on potential resolutions of the recent observational tensions in the concordance $\Lambda$CDM model, a possibility that we will study in future work. 
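To make the induced dimming quantitative in a simple way, consider a toy scaling (an illustration only, not the full calculation of this paper): if the standardised peak luminosity scales as $L \propto G^{\gamma}$ with some effective exponent $\gamma>0$, so that weaker gravity gives intrinsically fainter SNIe, then $G(z)=G_0(1+z)^{-1/4}$ induces a redshift-dependent magnitude offset. Both $\gamma$ and its value below are hypothetical, chosen only for illustration.

```python
import math

# Toy model: L ∝ G^gamma, with G(z) = G0 (1+z)^(-1/4).
# gamma is a hypothetical effective exponent, NOT a value derived in the paper.
gamma = 1.5

def delta_m(z):
    """Magnitude offset relative to a constant-G universe (positive = fainter)."""
    G_ratio = (1.0 + z)**(-0.25)     # G(z)/G0
    return -2.5*gamma*math.log10(G_ratio)

for z in (0.1, 0.5, 1.0):
    print(f"z = {z}: extra dimming = {delta_m(z):+.3f} mag")
```

The offset grows monotonically with redshift, mimicking the extra dimming usually attributed to cosmic acceleration.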
This also brings us to another interesting question: can a cosmological model with a time-varying $G$ but no $\Lambda$ fit SNIa data? As computed in this paper, since a weak gravity leads to an intrinsically fainter SNIa population, if $G$ continuously decreases with redshift (at least in the recent past of the Universe) in a not too unnatural way, then the dimming of distant SNIe can be purely due to modified gravity. Of course, such a scenario may be in conflict with various other local experiments and cosmological observations. However, given the complexity of modified gravity models, it is useful to understand in more detail whether those observational constraints can be evaded, or whether they need to be reinterpreted in the context of such models before being applied to them, as we have shown is the case for SNIe. Finally, let us mention once more the diversity of modified gravity models. In such scenarios, the internal structure of white dwarfs can be affected and the full modified Einstein equations need to be solved to understand this accurately. This will possibly cause ‘second-order’ effects on the astrophysics of SNIe, for example through the different density profile of the white dwarf progenitors. This, coupled to the complicated thermonuclear reactions and radiation transfer inside the ejecta, poses a great challenge for modeling the explosion and aftermath of SNIe numerically, let alone analytically. Therefore, much work remains to be done to refine the understanding of the effect of gravity – both standard and non-standard – in supernova cosmology.

We would like to thank Carlton Baugh, Celine Boehm, Suhail Dhawan, and Hans Winther for useful comments. BSW is supported by a U.K. Science and Technology Facilities Council (STFC) research studentship, and thanks the Institute for Computational Cosmology for hosting him when part of the work described in this paper was carried out. 
BL acknowledges support by the European Research Council (ERC-StG-716532-PUNCA), the ICC’s STFC Consolidated Grants (ST/P000541/1, ST/L00075X/1) and Durham University.

Full derivation of semi-analytic light curve model {#App:LCDerivation}
==================================================

In this appendix we give a detailed derivation and explanation of the light curve model used in this paper. This appendix is largely a self-contained summary of the works described in Refs. [@Arnett1980; @Arnett1982; @Chatzopoulos2012].

Main Equation
-------------

We will begin with the main equation for the thermodynamics of the supernova, which will allow us to derive an equation for the supernova’s luminosity over time – its light curve. For this the following assumptions and approximations[^1] are made:

1. The system is spherically symmetric.

2. The expansion of the supernova ejecta is homologous – see Eq. (\[eq:homexp\]) below – and the shells of the expanding ejecta do not cross each other.

3. The supernova ejecta gas is dominated by radiation pressure.

4. The supernova ejecta is optically thick with an optical depth $>$ 1. This is known as the diffusion approximation.

5. The effective opacity of the ejecta is constant.

6. Radioactive decay is the only source of energy in the system.

7. The distribution of $^{56}\textup{Ni}$ is concentrated in the centre of the system. 
Applying the first law of thermodynamics to the system of expanding supernova ejecta yields: $$\dot{E} + P \dot{V} = -\frac{\partial L}{\partial m} + \epsilon, \label{eq:1stlaw}$$ where $E=aT^4 V$ is the specific internal energy, $P=aT^4/3$ is the pressure \[A3\], $T$ is the temperature, $V=1/\rho$ the specific volume and $\rho$ the density of the ejecta, $a=4\sigma /c$ is the radiation constant, $\sigma$ is the Stefan-Boltzmann constant, the $\dot{y}$ notation represents the partial derivative with respect to time $\partial y/\partial t$, $L$ is the luminosity output of the system, $m$ the mass and $\epsilon$ is the rate of energy per unit mass added to the system, which is discussed below. The first term of Eq. (\[eq:1stlaw\]), $\dot{E}$, represents the rate of change in specific internal energy, and the second term $P\dot{V}$ represents the specific work involved in expanding the ejecta, so that the sum of the rate of change in internal energy and the specific work is equal to the sum of the energy per unit mass added to the system (positive) and the luminosity output of the system per unit mass (negative). The source of energy in this system $\epsilon$ is the radioactive decay of $^{56}\textup{Ni}$ to $^{56}\textup{Co}$ and the subsequent decay of $^{56}\textup{Co}$ to stable $^{56}\textup{\textup{Fe}}$ \[A6\]: $${}^{56}_{28}\textup{Ni} \to {}^{56}_{27}\textup{Co} + {}^{\ 0}_{+1}\mathrm{e}^+ + \gamma \to {}^{56}_{26}\textup{Fe} + 2\ {}^{\ 0}_{+1}\mathrm{e}^+ + \gamma .$$ The rate of change in the number of $^{56}\textup{Ni}$ nuclei during decay is given by $$\dot{N}_{\textup{Ni}}(t)= -\lambda_{\textup{Ni}} N^0_{\textup{Ni}} \mathrm{e}^{-t/\tau_{\textup{Ni}}}, \label{eq:NNidot}$$ in which $N^0_{\textup{Ni}}$ is the initial number of $^{56}\textup{Ni}$ nuclei in the system, and $\lambda_{\textup{Ni}}$ and $\tau_{\textup{Ni}}=1/\lambda_{\textup{Ni}}$ are the decay constant and lifetime of $^{56}\textup{Ni}$ respectively. 
The negative sign shows that the number of $^{56}\textup{Ni}$ nuclei is decreasing as the decays occur. Thus the actual rate of $^{56}\textup{Ni}$ to $^{56}\textup{Co}$ decays is $$\dot{N}_{\textup{Ni}, decay}(t) = \lambda_{\textup{Ni}} N^0_{\textup{Ni}} \mathrm{e}^{-t/\tau_{\textup{Ni}}} . \label{eq:Nidec}$$ The rate of change in the number of $^{56}\textup{Co}$ nuclei is given by $$\dot{N}_{\textup{Co}}(t)= \lambda_{\textup{Ni}} N^0_{\textup{Ni}} \mathrm{e}^{-t/\tau_{\textup{Ni}}} - \lambda_{\textup{Co}} N_{\textup{Co}}, \label{eq:NCodot}$$ where the first, positive term on the RHS represents the production of $^{56}\textup{Co}$ nuclei by the decay of $^{56}\textup{Ni}$, and the second, negative term represents the decay of $^{56}\textup{Co}$ nuclei. Solving this equation for $N_{\textup{Co}}$ yields $$N_{\textup{Co}} = \frac{\lambda_{\textup{Ni}}}{\lambda_{\textup{Ni}}-\lambda_{\textup{Co}}} N^0_{\textup{Ni}} \left(\mathrm{e}^{-t/\tau_{\textup{Co}}} - \mathrm{e}^{-t/\tau_{\textup{Ni}}}\right). \label{eq:NCo}$$ The rate of $^{56}\textup{Co}$ to $^{56}\textup{\textup{Fe}}$ decays is given by $$\dot{N}_{\textup{Co}, decay}(t) = \lambda_{\textup{Co}} \frac{\lambda_{\textup{Ni}}}{\lambda_{\textup{Ni}}-\lambda_{\textup{Co}}} N^0_{\textup{Ni}} \left(\mathrm{e}^{-t/\tau_{\textup{Co}}} - \mathrm{e}^{-t/\tau_{\textup{Ni}}}\right), \label{eq:Codec}$$ where $\dot{N}_{\textup{Co}, decay}(t)$ is the number of $^{56}\textup{Co}$ decays per second at time $t$. The total rate of energy produced, $\dot{W}$, is then $$\dot{W}(t) = W_{\textup{Ni}} \dot{N}_{\textup{Ni}, decay}(t) + W_{\textup{Co}} \dot{N}_{\textup{Co}, decay}(t), \label{eq:Wrate}$$ where $W_{\textup{Ni}}$ and $W_{\textup{Co}}$ are the energies released in a single $^{56}\textup{Ni}$ decay and $^{56}\textup{Co}$ decay respectively. 
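As a quick sanity check, the closed-form solution Eq. (\[eq:NCo\]) can be compared with a direct numerical integration of Eq. (\[eq:NCodot\]). The half-lives below are the standard laboratory values, assumed here for illustration rather than taken from this paper.

```python
import math

day = 86400.0
tau_Ni = 6.075*day/math.log(2.0)    # 56Ni lifetime from its half-life, ~8.77 d
tau_Co = 77.24*day/math.log(2.0)    # 56Co lifetime from its half-life, ~111.4 d
lam_Ni, lam_Co = 1.0/tau_Ni, 1.0/tau_Co
N0 = 1.0                             # initial number of 56Ni nuclei (normalised)

def N_Co_exact(t):
    """Closed-form Bateman solution, Eq. (NCo)."""
    return lam_Ni/(lam_Ni - lam_Co)*N0*(math.exp(-t/tau_Co) - math.exp(-t/tau_Ni))

def N_Co_numeric(t, n=200000):
    """Forward-Euler integration of dN_Co/dt = lam_Ni N0 e^{-t/tau_Ni} - lam_Co N_Co."""
    h, N = t/n, 0.0
    for i in range(n):
        ti = i*h
        N += h*(lam_Ni*N0*math.exp(-ti/tau_Ni) - lam_Co*N)
    return N

t = 50.0*day
print(N_Co_exact(t), N_Co_numeric(t))   # both ~0.69
```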
The total rate of energy produced per unit mass of $^{56}\textup{Ni}$, $\epsilon$, is given by $$\epsilon = \frac{\dot{W}(t)}{M_{\textup{Ni}}} = \frac{\dot{W}(t)}{m_{\textup{Ni}}N^0_{\textup{Ni}}}, \label{eq:eps}$$ where $M_{\textup{Ni}}$ is the initial mass of $^{56}\textup{Ni}$ and $m_{\textup{Ni}}$ the mass of a single $^{56}\textup{Ni}$ nucleus. Substituting Eqs. (\[eq:Nidec\]) and (\[eq:Codec\]) into Eq. (\[eq:Wrate\]), Eq. (\[eq:eps\]) becomes $$\epsilon = \frac{W_{\textup{Ni}}\lambda_{\textup{Ni}}}{m_{\textup{Ni}}} \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \frac{W_{\textup{Co}}\lambda_{\textup{Co}}\lambda_{\textup{Ni}}}{m_{\textup{Ni}}(\lambda_{\textup{Ni}}-\lambda_{\textup{Co}})} \left(\mathrm{e}^{-t/\tau_{\textup{Co}}} - \mathrm{e}^{-t/\tau_{\textup{Ni}}}\right). \label{eq:eps2}$$ Defining $$\epsilon_{\textup{Ni}} \equiv \frac{W_{\textup{Ni}}\lambda_{\textup{Ni}}}{m_{\textup{Ni}}}, \label{eq:epsNi}$$ and $$\epsilon_{\textup{Co}} \equiv \frac{W_{\textup{Co}}\lambda_{\textup{Co}}\lambda_{\textup{Ni}}}{m_{\textup{Ni}}(\lambda_{\textup{Ni}}-\lambda_{\textup{Co}})}, \label{eq:epsCi}$$ Eq. (\[eq:eps2\]) becomes $$\epsilon = (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \epsilon_{\textup{Co}} \mathrm{e}^{-t/\tau_{\textup{Co}}}. \label{eq:eps3}$$

Terms in main equation
----------------------

Because we assume spherical symmetry \[A1\], the quantities in the main equation depend only on radial coordinate $r$. In the diffusion approximation \[A4\], the luminosity of a shell of the ejecta at radius $r$ is related to the temperature of that shell by: $$L = -4\pi r^2 \frac{\Gamma c}{3} \frac{\partial aT^4}{\partial r}, \label{eq:luminosity}$$ where $r$ is the radial coordinate, $\Gamma=1/\rho \kappa$ is the mean free path in the ejecta, and $\kappa\equiv\kappa(r)$ is the opacity of the ejecta. 
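Before proceeding, the heating rate Eq. (\[eq:eps3\]) can be evaluated numerically. The lifetimes and specific heating rates $\epsilon_{\textup{Ni}}$, $\epsilon_{\textup{Co}}$ below are commonly quoted representative values, assumed here for illustration rather than inputs taken from this paper, as is the choice of $0.6M_\odot$ of $^{56}$Ni.

```python
import math

day = 86400.0
tau_Ni, tau_Co = 8.8*day, 111.3*day    # decay lifetimes (representative values)
eps_Ni, eps_Co = 3.9e10, 6.78e9        # erg g^-1 s^-1 (commonly quoted values)
M_Ni = 0.6*1.989e33                    # g: an assumed 0.6 Msun of 56Ni

def eps(t):
    """Specific heating rate of Eq. (eps3)."""
    return (eps_Ni - eps_Co)*math.exp(-t/tau_Ni) + eps_Co*math.exp(-t/tau_Co)

# Total radioactive power deposited under full trapping: M_Ni * eps(t)
for t in (0.0, 20*day, 100*day):
    print(f"t = {t/day:5.1f} d: total radioactive power = {M_Ni*eps(t):.2e} erg/s")
```

At early times the power is dominated by the $^{56}$Ni term; after a few lifetimes of $^{56}$Ni the slower $^{56}$Co decay takes over.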
Shortly after the initial supernova explosion, the expansion of the ejecta should become homologous \[A2\] such that the radius depends only on time $t$: $$R(t) = R_0 + v_{\mathrm{sc}}t, \label{eq:homexp}$$ where the radial extent of the surface of the ejecta at time $t$, $R(t)$, advances at a constant scale velocity $v_{\mathrm{sc}}$ from its initial position $R_0$. We assume that the shells of expanding ejecta do not cross each other \[A2\], which we model by writing the velocity of sub-surface layers of the ejecta as $$v(x) = xv_{\mathrm{sc}}, \label{eq:velo}$$ where a change of coordinate to dimensionless radius $x=r(t)/R(t)$ has been carried out such that $x\in[0, 1]$. We separate the time and spatial dependence of the ejecta density profile: $$\rho(x,t)=\rho_{00} \eta(x) {\left[\frac{R(t)}{R_0}\right]}^{-3}, \label{eq:dens}$$ where $\eta(x)$ is the dimensionless, time-independent run of density, and $\rho_{00}$ is the initial central density. Using the expression $V=1/\rho$ we can write $$V(x,t)=\frac{V_{00}}{\eta(x)} {\left[\frac{R(t)}{R_0}\right]}^3, \label{eq:vol}$$ where $V_{00}=1/\rho_{00}$ is the initial specific volume at $r=0$. Given that $V \propto R^3$ we can then write $$\frac{\dot{V}}{V}=\frac{3\dot{R}}{R}=\frac{3v_{\mathrm{sc}}}{R}, \label{eq:VRdiff}$$ where the final equality comes from differentiating Eq. (\[eq:homexp\]). Applying a similar separation of variables into space and time to the ejecta temperature yields $$T(r,t)^4=\psi(x) \phi(t) T^4_{00} {\left[\frac{R_0}{R(t)}\right]}^4, \label{eq:temp}$$ where the spatial dependence is given by $\psi(x)$, and the time dependence is not purely due to expansion effects through $R(t)$ but also involves an additional factor $\phi(t)$, which encapsulates the change of temperature over time due to energy gain/loss. $T_{00}$ is again the initial central temperature. Substituting Eq. (\[eq:luminosity\]) into Eq. 
(\[eq:1stlaw\]) yields $$4T^4 \left(\frac{\dot{T}}{T}+\frac{\dot{V}}{3V}\right)=\frac{1}{r^2} \frac{\partial}{\partial r} \left[\frac{c}{3\kappa\rho}r^2\frac{\partial T^4}{\partial r}\right] + \frac{\epsilon}{aV}. \label{eq:4t4}$$ Substituting the separated forms of $V$, $T$ and $\epsilon$, i.e. Eqs. (\[eq:vol\]), (\[eq:temp\]) and (\[eq:eps4\]) below, we get $$\frac{R_0}{R(t)} \frac{\dot{\phi}(t)}{\phi(t)} - \frac{1}{aT_{00}^4V_{00}}\frac{b(x)}{\phi(t)} \left[ (\epsilon_{\rm Ni} - \epsilon_{\rm Co} ) \mathrm{e}^{-t/\tau_{\rm Ni}} + \epsilon_{\rm Co}\mathrm{e}^{-t/\tau_{\rm Co}} \right] = -\frac{\alpha(x) c}{3R_0^2 \kappa(0)\rho_{00}}, \label{eq:longtrans}$$ where we have assumed $\kappa(x)=\kappa(0)$ \[A5\] and defined $$\alpha(x) \equiv -\frac{1}{x^2\psi} \frac{{\rm d}}{{\rm d}x} \left(\frac{x^2}{\eta(x)}\frac{{\rm d}\psi}{{\rm d}x}\right), \label{eq:alpha}$$ $$b(x) \equiv \frac{\xi(x)\eta(x)}{\psi(x)}. \label{eq:b}$$ Note that, in order to identify in which radial layer the energy is produced, a radial dependence is added to $\epsilon$ based on the distribution of $^{56}\textup{Ni}$, given by $\xi(x)$: $$\epsilon(x, t) = \xi(x) \left[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \epsilon_{\textup{Co}} \mathrm{e}^{-t/\tau_{\textup{Co}}} \right]. \label{eq:eps4}$$ Here, $b$ is approximately constant for any $x$ under the assumption that the $^{56}\textup{Ni}$ is concentrated in the centre of the ejecta \[A7\]. By its definition in Eq. (\[eq:alpha\]), $\alpha$ depends only on $x$; meanwhile, with $b$ constant, the LHS of Eq. (\[eq:longtrans\]) depends only on $t$, so the RHS cannot in fact vary with $x$, and $\alpha$ must therefore be a constant. Because $\alpha$ is a constant, we can define a constant $\tau_0$: $$\tau_0 \equiv \frac{3R^2_0 \rho_{00} \kappa(0)}{\alpha c}, \label{eq:tau0}$$ and then rewrite Eq. 
(\[eq:longtrans\]) as $$-\frac{\phi}{\tau_0} = \frac{R_0}{R(t)} \frac{{\rm d}\phi}{{\rm d}t} - \left[\frac{b}{aT^4_{00}V_{00}}\right] \left[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \epsilon_{\textup{Co}} \mathrm{e}^{-t/\tau_{\textup{Co}}} \right],\nonumber \label{eq:alphatau0}$$ which can be written more neatly as $$\dot{\phi} + \frac{\phi R(t)}{R_0 \tau_0} = \frac{\tilde{\epsilon}R(t)}{R_0}, \label{eq:phidot2}$$ with $$\tilde{\epsilon} \equiv \frac{b}{aT^4_{00}V_{00}} \left[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \epsilon_{\textup{Co}} \mathrm{e}^{-t/\tau_{\textup{Co}}} \right]. \label{eq:epssquig}$$ Eq. (\[eq:phidot2\]) is the key equation we need to solve to compute the light curve of the SNIa. In the next subsection we will do this explicitly.

Light curve solutions
---------------------

Let $\dot{u}=R(t)/(R_0\tau_0)$, and substitute in Eq. (\[eq:homexp\]) to get $$\dot{u}=\frac{1}{\tau_0} + \frac{v_{\mathrm{sc}}t}{R_0\tau_0}, \label{eq:udot}$$ and therefore $u$ is given by $$u = \frac{t}{\tau_0} + \frac{v_{\mathrm{sc}}t^2}{2R_0\tau_0} = \frac{t}{\tau_0} + \frac{t^2}{2\tau_h\tau_0} = \frac{t}{\tau_0} + \frac{t^2}{\tau^2_m}, \label{eq:u}$$ where $\tau_h\equiv R_0/v_{\mathrm{sc}}$ is the expansion timescale and $\tau^2_m\equiv 2\tau_0\tau_h$ is the light curve timescale. Substituting Eq. (\[eq:udot\]) into Eq. (\[eq:phidot2\]) yields $$\frac{\tilde{\epsilon}R(t)}{R_0} = \dot{\phi} +\phi\dot{u} = \mathrm{e}^{-u} \frac{{\rm d}}{{\rm d}t}\left(\phi \mathrm{e}^{u}\right), \label{eq:phidot3}$$ the solution to which, $\phi(t)$, is directly related to the light curve of the SNIa. Eq. (\[eq:phidot3\]) depends on $\tilde{\epsilon}$, which itself depends on constants such as $b$, $V_{00}$ and $T_{00}$. It is useful to express these constants in terms of more physical quantities. 
For this, we first define $$I_M \equiv \int^{1}_{0} \eta(x) x^2 {\rm d}x = \frac{V_{00}M}{4\pi R^3_0},\nonumber \label{eq:IM}$$ where $M$ is the total ejecta mass, and substitute into Eq. (\[eq:tau0\]) to get $$\tau_0 = \frac{3\kappa M}{4\pi\alpha c I_M R_0} = \frac{\kappa M}{\beta c R_0}, \label{eq:tau02}$$ in which $\beta$ is defined as $\beta \equiv 4\pi\alpha I_M/3$. Reference [@Arnett1980] discusses solutions to Eq. (\[eq:alpha\]) and the corresponding boundary conditions at length, and finds $\beta$ to be approximately constant, $\beta \approx 13.7$, for a variety of different density distributions. This value will be used hereafter. On the other hand, the mass of $^{56}\textup{Ni}$ initially present in the ejecta is given by $$M_{\textup{Ni}} = \frac{4\pi R^3_0}{V_{00}} \int^{1}_{0} \xi(x) \eta(x) x^2 {\rm d}x, \label{eq:MNi0}$$ and the total thermal energy content is given by $$E_{Th}(t) = \int aT^4 {\rm d}V = 4\pi R^3_0 a T^4_{00} \phi(t) \frac{R_0}{R(t)}I_{Th}, \label{eq:ETh2}$$ where we have used Eq. (\[eq:temp\]) and defined $$I_{Th} \equiv \int^1_0 \psi(x) x^2 {\rm d}x. \label{eq:ITh}$$ Inserting Eqs. (\[eq:IM\]), (\[eq:MNi0\]) and (\[eq:ITh\]) into Eq. (\[eq:b\]) yields a new expression for $b$: $$b = \frac{M_{\textup{Ni}} I_M}{M I_{Th}}. \label{eq:b2}$$ Similarly, we can use Eq. (\[eq:IM\]) to rewrite $V_{00}$ as $$V_{00} = \frac{4\pi R^3_0 I_M}{M},\nonumber \label{eq:V00}$$ and use Eq. (\[eq:ETh2\]) to rewrite $aT^4_{00}$: $$aT^4_{00} = \frac{R(t) E_{Th}(t)}{4\pi R^4_0 I_{Th} \phi(t)}.\nonumber \label{eq:aT400}$$ Substituting these, and Eq. (\[eq:b2\]), into Eq. 
(\[eq:epssquig\]), we get $$\tilde{\epsilon}(t) = \frac{M_{\textup{Ni}} R_0 \phi(t)}{E_{Th}(t)R(t)} \left[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \epsilon_{\textup{Co}} \mathrm{e}^{-t/\tau_{\textup{Co}}} \right], \label{eq:epssquig2}$$ which at $t=0$ becomes $\tilde{\epsilon}(0)=\frac{M_{\textup{Ni}}}{E_{Th}(0)}\epsilon_{\rm Ni}$, in which we have used $R(0)=R_0$ and $\phi(0)=1$. Therefore, Eq. (\[eq:phidot3\]) can be rewritten in terms of more physically meaningful quantities as $$\frac{{\rm d}}{{\rm d}t}\left(\mathrm{e}^{\frac{t}{\tau_0} + \frac{t^2}{\tau^2_m}} \phi\right) = \frac{M_{\textup{Ni}}\tau_0}{E_{Th}(0)} \left( \frac{1}{\tau_0} + \frac{v_{\mathrm{sc}}t}{R_0\tau_0} \right) \mathrm{e}^{\frac{t}{\tau_0} + \frac{t^2}{\tau^2_m}} \left[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \mathrm{e}^{-t/\tau_{\textup{Ni}}} + \epsilon_{\textup{Co}} \mathrm{e}^{-t/\tau_{\textup{Co}}}\right], \label{eq:phi1}$$ which can be solved as $$\begin{gathered} \phi(t) = \frac{M_{\textup{Ni}}\tau_0}{E_{Th}(0)} \mathrm{e}^{ -\left(\frac{2 R_0 t}{v_{\mathrm{sc}} \tau^2_m} + \frac{t^2}{\tau^2_m}\right) } \bigg[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_m} + \frac{t^{\prime}}{\tau_m}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Ni}}} {\rm d}t^{\prime} \\ + \epsilon_{\textup{Co}} \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_m} + \frac{t^{\prime}}{\tau_m}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Co}}} {\rm d}t^{\prime} \bigg]. \label{eq:phi2}\end{gathered}$$ Finally, $\phi(t)$ can be related to the luminosity output of the SNIa by inserting Eq. (\[eq:temp\]) into Eq. 
(\[eq:luminosity\]), which gives $$L(x,t) = -\frac{16\pi^2 a c T^4_{00} R^4_0 I_M}{3\kappa M} \phi(t) \left( -\frac{x^2}{\eta(x)}\frac{{\rm d}\psi}{{\rm d}x} \right), \label{eq:luminosity2}$$ in which we have used $\Gamma=1/\kappa\rho$, $\rho=1/V$, and Eq. (\[eq:vol\]) to write $$\Gamma = \frac{V_{00}R^3(t)}{\kappa \eta(x) R^3_0} = \frac{4\pi R^3(t) I_M}{\kappa M \eta(x)} \label{eq:Gamma}$$ where in the last step we have used Eq. (\[eq:V00\]). The surface luminosity is given by $x=1$ (or $r=R$), as $$L(1,t) = -\frac{16\pi^2 a c T^4_{00} R^4_0 I_M}{3\kappa M} \phi(t) \left[-\frac{x^2}{\eta(x)}\frac{{\rm d}\psi}{{\rm d}x} \right]_{x=1}. \label{eq:luminosity3}$$ To remove the spatial derivative in the brackets, we rearrange Eq. (\[eq:alpha\]), which leads to $$x^2 \psi(x) \alpha = \frac{{\rm d}}{{\rm d}x} \left(-\frac{x^2}{\eta(x)}\frac{{\rm d}\psi}{{\rm d}x}\right).\nonumber$$ Integrating both sides of this equation between $x=0$ and $x=1$, and comparing to Eq. (\[eq:ITh\]), we find $$\left[-\frac{x^2}{\eta(x)}\frac{{\rm d}\psi}{{\rm d}x} \right]_{x=1} = \alpha \int^1_0 x^2 \psi(x){\rm d}x = \alpha I_{Th}.\nonumber$$ Substituting this into Eq. (\[eq:luminosity3\]) yields $$L(1,t) = -\frac{16\pi^2 a c T^4_{00} R^4_0 I_M \alpha I_{Th}}{3\kappa M} \phi(t). \label{eq:luminosity4}$$ Finally, using Eqs. (\[eq:tau02\]), (\[eq:ETh2\]), (\[eq:phi2\]) and the definition of $\beta$, Eq. 
(\[eq:luminosity4\]) becomes $$\begin{gathered} L(1,t)=\frac{2M_{\textup{Ni}}}{\tau_m} \mathrm{e}^{ -\left(\frac{2 R_0 t}{v_{\mathrm{sc}} \tau^2_m} + \frac{t^2}{\tau^2_m}\right) } \bigg[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_m} + \frac{t^{\prime}}{\tau_m}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Ni}}} {\rm d}t^{\prime}\\ + \epsilon_{\textup{Co}} \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_m} + \frac{t^{\prime}}{\tau_m}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Co}}} {\rm d}t^{\prime}\bigg]. \label{eq:luminosity5}\end{gathered}$$ As we want to produce ultraviolet+optical+infrared (UVOIR) light curves, a factor accounting for the possibility of gamma ray leakage, where gamma ray photons escape directly through the ejecta without interaction, should be included such that $$\begin{gathered} L(1,t)=\frac{2M_{\textup{Ni}}}{\tau_m} \mathrm{e}^{ -\left(\frac{2 R_0 t}{v_{\mathrm{sc}} \tau^2_m} + \frac{t^2}{\tau^2_m}\right) } \bigg[ (\epsilon_{\textup{Ni}}-\epsilon_{\textup{Co}}) \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_m} + \frac{t^{\prime}}{\tau_m}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Ni}}} {\rm d}t^{\prime} \\ + \epsilon_{\textup{Co}} \int^{t}_{0} \left(\frac{R_0}{v_{\mathrm{sc}}\tau_m} + \frac{t^{\prime}}{\tau_m}\right) \mathrm{e}^{\left( \frac{t^{\prime 2}}{\tau^2_m} + \frac{2R_0 t^{\prime}}{v_{\mathrm{sc}}\tau^2_m} \right)} \mathrm{e}^{-t^{\prime}/\tau_{\textup{Co}}} {\rm d}t^{\prime} \bigg] \left(1-\mathrm{e}^{-(t_0/t)^2}\right), \label{eq:luminosity6}\end{gathered}$$ where $t_0 = (9\kappa_{\gamma}/2\pi E_{\mathrm{K}})^{1/2}$ is the timescale for gamma ray 
leakage, $\kappa_{\gamma}$ is the ejecta’s gamma ray opacity, and $E_{\mathrm{K}}$ is the kinetic energy in the supernova explosion. Eq. (\[eq:luminosity6\]) is the equation used to compute UVOIR light curves throughout the paper.

Derivation of the Chandrasekhar mass {#App:MChDerivation}
====================================

The calculation of the Chandrasekhar mass $M_{\rm Ch}$ is now standard textbook material (see for example [@MChDerivation]). Here we include a derivation to make the paper self-contained, and to highlight some physics that is relevant for the discussion of this paper. We start from the equation of hydrostatic equilibrium for a spherically symmetric stellar fluid in Newtonian gravity: $$\frac{{\rm d}P}{{\rm d}r} = - \rho(r)\frac{GM(r)}{r^2}, \label{eq:hydrostat}$$ where $r$ is the radial coordinate, $P$ is the pressure of the fluid, $\rho$ is the density, and $M(r)$ is the mass within a sphere of radius $r$, given by ${\rm d}M=4\pi r^2\rho(r){\rm d}r$ or $$M(r) = \int^{r}_{0} 4\pi r^2\rho(r){\rm d}r. \label{eq:M(r)}$$ Rearranging Eq. (\[eq:hydrostat\]) for $M(r)$ and differentiating with respect to radius $r$ yields $$\frac{{\rm d}M}{{\rm d}r} = - \frac{1}{G} \frac{{\rm d}}{{\rm d}r} \left( \frac{r^2}{\rho} \frac{{\rm d}P}{{\rm d}r}\right).\nonumber \label{eq:dMdr}$$ Comparing this with Eq. (\[eq:M(r)\]), we get the familiar equation $$\frac{1}{r^{2}} \frac{{\rm d}}{{\rm d}r} \left( \frac{r^{2}}{\rho} \frac{{\rm d}P}{{\rm d}r} \right) = -4 \pi G \rho. \label{eq:stellarstate}$$ To solve this equation, an equation of state relating $P$ and $\rho$ is required. For the highly compressed degenerate material inside a white dwarf, the electrons acquire large energies due to electron degeneracy pressure, and so move at velocities approaching the speed of light. 
Therefore, the white dwarf material is best described as a relativistic Fermi gas with an equation of state given by $$P = \frac{\hbar c}{12 \pi^{2}} \left( \frac{3 \pi^{2} \rho}{m_{\mathrm{N}} \mu} \right)^{4/3}, \label{eq:EoS}$$ in which $\hbar$ is the reduced Planck constant, $m_{\mathrm{N}}$ is the nucleon mass, and $\mu=\left\langle A/Z \right\rangle$ is the average mass number per nuclear charge with $\mu \approx2$ for the $^{12}\mathrm{C}$ and $^{16}\mathrm{O}$ that make up the majority of the white dwarf. This equation of state is of the form of a polytrope $$P = K \rho^{\gamma},\nonumber \label{eq:polytrope}$$ with $K\equiv\hbar c / 12 {\pi}^2 \times {\left( 3 {\pi}^2/m_{\mathrm{N}} \mu \right)}^{4/3}$ and $\gamma=4/3$. Using this equation of state, Eq. (\[eq:stellarstate\]) can be rewritten as $$\left[ \frac{n+1}{4 \pi G} K \lambda^{(1-n)/n} \right] \frac{1}{r^2} \frac{{\rm d}}{{\rm d}r}\left( r^2 \frac{{\rm d}\Theta}{{\rm d}r} \right) = -\Theta^n \label{eq:polytropestate3}$$ where we have defined $\rho \equiv \lambda \Theta^n$ and $\gamma \equiv \frac{n+1}{n}$. This equation can be made dimensionless by introducing a radial variable $y \equiv r/\alpha$ where $\alpha \equiv \sqrt{(n+1)K \lambda^{(1-n)/n}/4 \pi G}$, yielding $$\frac{1}{y^2} \frac{{\rm d}}{{\rm d}y} \left(y^2 \frac{{\rm d}\Theta}{{\rm d}y} \right) = - \Theta^n. \label{eq:LaneEm}$$ This is the Lane-Emden equation for polytropes in hydrostatic equilibrium. As a second-order ordinary differential equation it requires two boundary conditions to complete it. First, we can define the central density as $\rho(r=0)=\rho_{\mathrm{c}} \equiv \lambda$, which gives $$\rho_{\mathrm{c}} = \lambda \Theta^n(y=0) = \lambda \Rightarrow \Theta (y=0) = 1. 
\label{eq:bound1}$$ Then, since $M(r=0)=0$ physically, we have $${\frac{{\rm d}P}{{\rm d}r}\Big|}_{r=0} = - \rho_{\mathrm{c}} \frac{GM(r=0)}{r^2} = 0.\nonumber$$ For the polytropic equation of state introduced above, this is equivalent to $${\frac{{\rm d}P}{{\rm d}r}\Big|}_{r=0} = \gamma K \rho_{\mathrm{c}}^{\gamma - 1} {\frac{{\rm d}\rho}{{\rm d}r}\Big|}_{r=0} = 0,\nonumber$$ and can be re-expressed in terms of the dimensionless quantities $\Theta$ and $y$ as $$\left[\frac{{\rm d}\Theta}{{\rm d}y}\right]_{y=0} = 0. \label{eq:bound2}$$ The outer radius of the star, $R$, corresponds to the point $y=y_1$ at which the density drops to zero, i.e., $\rho(R) = \Theta(y_1) = 0$. Using $\rho \equiv \lambda \Theta^n$ and $y \equiv r/\alpha$, the equation for the total mass of the star, Eq. (\[eq:M(r)\]), becomes $$M = 4 \pi \lambda \alpha^3 \int^{y_1}_{0} y^2 \Theta^n {\rm d}y, \label{eq:Mint1}$$ and then using Eq. (\[eq:LaneEm\]) this becomes $$M = 4\pi\lambda\alpha^3 \int^{y_1}_{0} -\frac{{\rm d}}{{\rm d}y} \left( y^2 \frac{{\rm d}\Theta}{{\rm d}y} \right) {\rm d}y = 4 \pi \lambda \alpha^3 \left[-y^2 \frac{{\rm d}\Theta}{{\rm d}y}\right]_{y_1}, \label{eq:Mint3}$$ Therefore, calculating the total mass requires the value of $y_1$ to be computed by solving the Lane-Emden equation. Recall now that the specific case of a white dwarf yields a polytrope with $\gamma=4/3$, which corresponds to a polytropic index $n=3$. The corresponding Lane-Emden equation is $$\frac{1}{y^2} \frac{{\rm d}}{{\rm d}y} \left(y^2 \frac{{\rm d}\Theta}{{\rm d}y} \right) + \Theta^3 = 0, \label{eq:LaneEm3}$$ which can be solved numerically to find $y_1$ at which $\Theta(y_1) = 0$. Doing so yields a value $y_1=6.89685$ and $-y^2 \Theta^{\prime}(y_1)=2.01824$. 
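These numbers are straightforward to reproduce. A minimal pure-Python integrator for the $n=3$ Lane-Emden equation, Eq. (\[eq:LaneEm3\]), using a fourth-order Runge-Kutta scheme with an assumed step size and a series expansion to start just off the centre:

```python
def lane_emden(n=3, h=1e-4):
    """Integrate the Lane-Emden equation outwards and return
    (y1, -y1^2 Theta'(y1)) at the first zero of Theta."""
    def f(y, th, dth):
        # Theta'' = -Theta^n - (2/y) Theta'
        return dth, -max(th, 0.0)**n - 2.0*dth/y
    # Series expansion about the centre avoids the 1/y singularity at y = 0:
    # Theta = 1 - y^2/6 + n y^4/120 + ...
    y = 1e-3
    th = 1.0 - y*y/6.0 + n*y**4/120.0
    dth = -y/3.0 + n*y**3/30.0
    while th > 0.0:
        k1 = f(y, th, dth)
        k2 = f(y + h/2, th + h/2*k1[0], dth + h/2*k1[1])
        k3 = f(y + h/2, th + h/2*k2[0], dth + h/2*k2[1])
        k4 = f(y + h, th + h*k3[0], dth + h*k3[1])
        th += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dth += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        y += h
    return y, -y*y*dth

y1, w = lane_emden()
print(y1, w)   # ~6.8968 and ~2.0182
```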
Using the definition of $\alpha$: $$4 \pi \lambda \alpha^3 = 4 \pi \lambda {\left( \frac{(n+1)K \lambda^{(1-n)/n}}{4 \pi G} \right)}^{3/2},$$ and plugging in $n=3$ gives $$4 \pi \lambda \alpha^3 = 4 \pi \lambda \left( \frac{4K \lambda^{-2/3} }{4 \pi G } \right) ^{3/2} = 4 \pi { \left( \frac{K}{\pi G} \right) }^{3/2}. \label{eq:lamalph}$$ In the specific case of a white dwarf, $K=\hbar c / 12{\pi}^2 \times {\left( 3 {\pi}^2/ \mu m_{\mathrm{N}} \right)}^{4/3}$, so Eq. (\[eq:lamalph\]) becomes $$4 \pi \lambda \alpha^3 = 4 \pi {\left( \frac{K}{\pi G} \right)}^{3/2} = \frac{\sqrt{3 \pi}}{2} {\left( \frac{\hbar c}{G} \right)}^{3/2} \frac{1}{( \mu m_{\mathrm{N}})^2} \label{eq:lamalph3}.$$ Inserting Eq. (\[eq:lamalph3\]) into Eq. (\[eq:Mint3\]) yields $$M_{\rm Ch} = \frac{\sqrt{3 \pi}}{2} {\left( \frac{\hbar c}{G} \right)}^{3/2} \frac{1}{( \mu m_{\mathrm{N}} )^2} \left[- y^2 \frac{{\rm d}\Theta}{{\rm d}y} \right]_{y_1}. \label{eq:Mint4}$$ Plugging $y_1(n=3)=6.89685$ and $-y^2 \Theta^{\prime}(y_1)=2.01824$ into Eq. (\[eq:Mint4\]) results in $M_{\mathrm{Ch}}=1.44M_{\odot}$. In our case, note that, in addition to the numerical value, $M_{\rm Ch}$ has a specific dependence on $G$: $M_{\rm Ch}\propto G^{-3/2}$. Physically, a smaller value of $G$ means that gravity is weaker, so that the electron degeneracy pressure can support a larger mass. [^1]: We number these as \[A1\]-\[A7\], and will mention the numbers where the corresponding assumptions or approximations are actually used in the text below.
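As a numerical check of Eq. (\[eq:Mint4\]) and of the $M_{\rm Ch}\propto G^{-3/2}$ scaling, the formula can be evaluated directly with standard cgs constants (rounded values assumed below):

```python
import math

# Physical constants in cgs units (rounded standard values)
hbar = 1.0546e-27      # erg s
c    = 2.9979e10       # cm / s
G0   = 6.674e-8        # cm^3 g^-1 s^-2
m_N  = 1.6726e-24      # g
Msun = 1.989e33        # g
mu   = 2.0             # <A/Z> for carbon-oxygen material
w    = 2.01824         # [-y^2 dTheta/dy]_{y_1} for the n = 3 polytrope

def M_Ch(G):
    """Chandrasekhar mass, Eq. (Mint4), as a function of Newton's constant."""
    return math.sqrt(3.0*math.pi)/2.0 * (hbar*c/G)**1.5 / (mu*m_N)**2 * w

print(M_Ch(G0)/Msun)              # ~1.44
print(M_Ch(0.9*G0)/M_Ch(G0))      # 0.9**(-3/2) ~ 1.17: weaker gravity, larger M_Ch
```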
---
abstract: 'A new cavity-chain layout has been proposed for the main linac of the TESLA linear collider [@SUSU]. This superstructure-layout is based upon four 7-cell superconducting standing-wave cavities, coupled by short beam pipes. The main advantages of the superstructure are an increase in the active accelerating length in TESLA and a saving in rf components, especially power couplers, as compared to the present 9-cell cavities. The proposed scheme makes it possible to handle the field-flatness tuning and the HOM damping at sub-unit level, in contrast to standard multi-cell cavities. The superstructure-layout has been studied extensively at DESY since 1999. Computations have been performed for the rf properties of the cavity-chain, the bunch-to-bunch energy spread and multibunch dynamics. A copper model of the superstructure has been built in order to compare with the simulations and for testing the field-profile tuning and the HOM damping scheme. A “proof of principle” niobium prototype of the superstructure is now under construction and will be tested with beam at the TESLA Test Facility in 2001. In this paper we present the latest results of these investigations.'
author:
- |
    N. Baboi, M. Liepe, J. Sekutowicz\
    Deutsches Elektronen-Synchrotron DESY, D-22603 Hamburg, Germany,\
    M. Ferrario, INFN, Frascati, Italy
title: 'Superconducting Superstructure for the TESLA Collider: New Results'
---

INTRODUCTION
============

The cost for a superconducting linear collider can be significantly reduced by minimizing the number of microwave components, and increasing the fill factor in a machine. Here the fill factor is defined as the ratio of the active cavity length to the total cavity length (active length plus interconnection). These two conditions are partially fulfilled when the number of cells ($N$) in a structure fed by one fundamental mode (FM) coupler is increased. 
Unfortunately, there are two limitations on the number of cells in one accelerating structure: firstly, the field flatness (the sensitivity of the field pattern increases proportionally to $N^2$), and secondly, trapped higher-order modes (HOMs). In order to overcome these limitations on $N$, the concept of the superstructure has been proposed for the TESLA main linac [@SUSU]. In this concept four 7-cell cavities (sub-units) are coupled by short beam tubes. The whole chain can be fed by one FM coupler attached to one end beam tube. The length of the interconnections between the cavities is chosen to be half of the wavelength. Therefore the $\pi$-0 mode ($\pi$ cell-to-cell phase advance and 0 cavity-to-cavity phase advance) can be used for acceleration. In the proposed scheme HOM couplers can be attached to the interconnections and to the end beam tubes. All sub-units are equipped with a tuner. Accordingly, the field flatness and the HOM damping can still be handled at the 7-cell sub-unit level. REFILLING OF CELLS AND BUNCH-TO-BUNCH ENERGY SPREAD =================================================== The energy flow through the cell interconnections and the resulting bunch-to-bunch energy spread have been extensively studied for the superstructure with two independent codes: HOMDYN [@Ferrario] and MAFIA [@MAFIA][@Dohlus]. The negligible spread in the energy gain, smaller than $6 \cdot 10^{-5} $ for the whole train of 2820 bunches, proves that the energy flow is large enough to refill the cells in the time between two sequential bunches; see Fig. \[spread\]. The energy spread results from the interference of the accelerating mode with other modes from the FM passband. The difference in energy becomes smaller at the end of the pulse due to the decay of the interfering modes. 
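The half-wavelength choice fixes the geometric length of the interconnections once the operating frequency is known. A minimal sketch, assuming the TESLA fundamental-mode frequency of 1.3 GHz (the frequency is not stated in this text):

```python
c = 299_792_458.0      # speed of light, m/s
f_rf = 1.3e9           # TESLA FM frequency in Hz (assumption of this sketch)
interconnection = c / (2 * f_rf)   # half of the free-space wavelength
print(f"{interconnection * 100:.1f} cm")
```

This length makes adjacent cavities oscillate with zero cavity-to-cavity phase advance while keeping the $\pi$ cell-to-cell advance inside each sub-unit.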
![Calculated energy gain for 2820 bunches accelerated by the proposed superstructure.[]{data-label="spread"}](TUA15pic1.eps){width="65mm"} FIELD FLATNESS TUNING ===================== The $\pi$-0 mode will be used for the acceleration of the beam in the superstructure. Before assembly, each of the four 7-cell cavities will be pre-tuned for a flat field profile and the chosen frequency of the $\pi$-0 mode. The pre-tuning procedure is based on measurements of all modes of the fundamental-mode passband. It allows the profile to be adjusted with an accuracy better than 2-3 $\%$ for a 9-cell TESLA cavity. This error corresponds to a frequency accuracy of the individual cells of $\pm$ 30 kHz. After the cavity chain of a superstructure has been assembled and is operated in the linac at 2K, the frequency of each sub-unit can be corrected in order to equalize the mean value of the field amplitude in all sub-units (not between cells within one sub-unit). This field-profile correction is possible during linac operation, since each 7-cell structure is equipped with its own frequency tuner. The method proposed to equalize the average accelerating field of the sub-units during operation is based on perturbation theory, similar to the standard bead-pull method of L. Maier and J. Slater [@pert]. First, the present fields of all sub-units are measured. For that, the volume of each sub-unit is successively changed by the same amount (the stepping motor of each tuner is moved by the same number of steps) and the resulting frequency change of the $\pi$-0 mode is measured. The change is proportional to the stored energy in that sub-unit of the superstructure. For each sub-unit relative values can be defined and used to calculate the frequency corrections needed to equalize the field. This method has been tested on a room-temperature Cu model of the superstructure (see Fig. \[vergleich\]) and by computer simulations, see Fig. \[tuning\]. 
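The measurement step can be illustrated with a toy calculation (this is not the DESY software, and the shift values are invented for illustration). Since equal volume changes produce frequency shifts proportional to the stored energy of each sub-unit, relative field amplitudes follow from the square roots of the measured shifts:

```python
import math

# Hypothetical measured pi-0 mode frequency shifts (kHz), one per sub-unit,
# each produced by the same tuner volume change
shifts = [4.1, 3.2, 4.8, 3.9]

# amplitude ~ sqrt(stored energy) ~ sqrt(frequency shift)
amps = [math.sqrt(s) for s in shifts]
mean_amp = sum(amps) / len(amps)
relative = [a / mean_amp for a in amps]  # deviations to remove by retuning
```

Sub-units with `relative` above 1 carry too much field and are detuned accordingly; only the ratios matter, so a single pickup probe suffices.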
One should note that the method requires only one pickup probe for all 28 cells, and therefore effectively reduces the number of cables, feedthroughs and electronics needed for the control. ![Field profile before field flatness tuning. Shown is a comparison between the measured field profile (bead pulling on a Cu model of the superstructure) and the field profile calculated from the measured frequency perturbations of the individual cavities.[]{data-label="vergleich"}](TUA15pic2.eps){width="65mm"} ![Example of field flatness tuning by tuning the individual cavities (computer simulation). For the frequency of the individual cells a variation of $\pm$ 30 kHz is assumed.[]{data-label="tuning"}](TUA15pic3.eps){width="65mm"} STATISTICS OF FIELD FLATNESS ============================ As discussed above, the field flatness in a cold superstructure can be handled at the 7-cell sub-unit level by adjusting the frequency of each sub-unit. In order to verify this, the field flatness in a superstructure has been calculated before and after tuning of the individual cavities. The frequencies of the cavities have been corrected according to the proposed tuning method. For the frequency of the individual cells a variation of $\pm$ 30 kHz is assumed, based on the experience with the TESLA 9-cell cavities. The statistics of 10000 calculated field profiles is shown in Fig. \[statistic\]. By adjusting the frequencies of the individual cavities, the deviation from field flatness is significantly reduced. ![Calculated field flatness statistics of 10000 superstructures before (a) and after field flatness tuning by adjusting the frequencies of the individual cavities (b). 
The frequencies of the individual cells vary by $\pm$ 30 kHz.[]{data-label="statistic"}](TUA15pic4.eps){width="65mm"} HOM DAMPING AND MULTIBUNCH EMITTANCE ==================================== ![image](TUA15pic5.eps){width="165mm"} The vertical normalized multibunch emittance at the interaction point of the TESLA collider is desired to be $3 \cdot 10^{-8}$ m$\cdot$rad. Simulations of the emittance growth along the TESLA linac showed that, to achieve this, the dipole modes with dominating impedance ($R/Q$) must be damped to the level of $Q_{ext}<2\cdot 10^5$ [@EPAC2k]. The interconnecting tubes of the superstructure allow HOM couplers to be placed between the 7-cell cavities. Measurements on a Cu model of the superstructure at room temperature have demonstrated that the required damping can be achieved with five HOM couplers: three attached at the interconnections and one at each end [@CUSUSU]; see Fig. \[hom\]. Note that the sum of all listed dipole-mode impedances is almost ten times smaller than the BBU limit. NB PROTOTYPE ============ A first “proof of principle” niobium prototype of the superstructure is under construction [@NBSUSU]. The sub-units are under fabrication and will be vertically tested, similarly to the TESLA 9-cell cavities. The beam test for the prototype is scheduled for Spring 2001. It will allow the energy-spread computations and the RF measurements on the room-temperature models to be verified. This will include the test of the HOM damping, the performance of the HOM couplers at higher magnetic field and the tuning method during operation at 2K. CONCLUSIONS =========== The presented measurements and calculations demonstrate that in the proposed superstructure the refilling of cells, the HOM damping, the field flatness and the field flatness tuning can be handled. For the final proof that the superstructure layout can be used for acceleration, a niobium prototype will be tested with beam at the TESLA Test Facility linac. 
ACKNOWLEDGEMENTS ================ This work has benefited greatly from discussions with the members of the TESLA collaboration. [9]{} J. Sekutowicz, M. Ferrario and Ch. Tang, Phys. Rev. ST Accel. Beams, Vol. 2, No. 6 (1999) M. Ferrario, A. Mosnier, L. Serafini, F. Tazzioli, J. M. Tessier, Particle Accelerators, Vol. 52 (1996) R. Klatt et al., Proc. of Linear Accelerator Conference, Stanford, June 1986 M. Dohlus, private communication L. Maier and J. Slater, Journal of Applied Physics, Vol. 23, No. 1, January 1952, pp. 68-77 N. Baboi, R. Brinkmann, M. Liepe, J. Sekutowicz, EPAC2000, Vienna, to be published H. Chen, G. Kreps, M. Liepe, V. Puntus and J. Sekutowicz, Proc. of the 9th Workshop on rf Superconductivity, Santa Fe, 1999 R. Bandelmann et al., Proc. of the 9th Workshop on rf Superconductivity, Santa Fe, 1999
--- address: - 'Energy Conversion Research Center, Korea Electrotechnology Research Institute (KERI), 12, Bulmosanro 10beon-gil, Seongsan-gu, Changwon, 51543, Republic of Korea' - 'B.R. and J.C. contributed equally to this work' author: - 'Byungki Ryu$^{\dagger,*}$' - 'Jaywan Chung$^{\dagger,*}$' - SuDong Park bibliography: - 'TEPdata3\_DFT.bib' date: - - title: | Supplementary Information for\ “Thermoelectric efficiency has three Degrees of Freedom” --- Thermoelectric Property Data used in the Manuscript =================================================== In this work, we constructed a dataset of TEPs of 276 materials gathered from 264 publications [@biswas_strained_2011; @biswas_high-performance_2012; @fu_realizing_2015; @gelbstein_controlling_2013; @he_ultrahigh_2015; @heremans_enhancement_2008; @hsu_cubic_2004; @hu_shifting_2014; @hu_power_2016; @kim_dense_2015; @lin_tellurium_2016; @liu_thermoelectric_2011; @liu_convergence_2012; @pan_thermoelectric_2016; @pei_convergence_2011; @pei_high_2011-1; @poudel_high-thermoelectric_2008; @rhyee_peierls_2009; @wang_right_2014; @zhao_raising_2012; @zhao_thermoelectrics_2012; @zhao_ultralow_2014; @zhao_ultrahigh_2015; @cui_thermoelectric_2007; @cui_crystal_2008; @eum_transport_2015; @fan_p-type_2010; @han_alternative_2013; @hsu_enhancing_2014; @zheng_mechanically_2014; @hu_tuning_2015; @hwang_enhancing_2013; @ko_nanograined_2013; @zhang_improved_2015; @zhao_bismuth_2005; @lee_control_2010; @lee_enhancement_2013; @lee_crystal_2013; @yan_experimental_2010; @lee_preparation_2013; @lee_preparation_2014; @lee_preparation_2014-1; @lee_preparation_2014-2; @lee_thermoelectric_2014; @sumithra_enhancement_2011; @lukas_transport_2012; @min_surfactant-free_2013; @mun_fe-doping_2015; @ovsyannikov_enhanced_2015; @puneet_preferential_2013; @shin_twin-driven_2014; @son_n-type_2012; @son_effect_2013; @soni_enhanced_2012; @xiao_enhanced_2014; @tang_preparation_2007; @wang_metal_2013; @wu_thermoelectric_2013; @yelgel_thermoelectric_2012; 
@zhang_rational_2012; @wei_minimum_2016; @lan_high_2012; @yu_preparation_2013; @kosuga_enhanced_2014; @scheele_thermoelectric_2011; @ahn_improvement_2009; @ahn_exploring_2010; @ahn_enhanced_2013; @androulakis_thermoelectric_2010; @androulakis_high-temperature_2011; @bali_thermoelectric_2013; @bali_thermoelectric_2014; @wu_superior_2015; @dong_transport_2009; @dow_effect_2010; @falkenbach_thermoelectric_2013; @fan_enhanced_2015; @fang_synthesis_2013; @jaworski_valence-band_2013; @jian_significant_2015; @keiber_complex_2013; @kim_spinodally_2016; @lee_improvement_2012; @lee_contrasting_2014; @li_enhanced_2013; @li_pbte-based_2014; @liu_effect_2013; @lo_phonon_2012; @lu_enhancement_2013; @pei_combination_2011; @pei_high_2011; @pei_self-tuning_2011; @pei_stabilizing_2011; @pei_low_2012; @pei_thermopower_2012; @pei_optimum_2014; @poudeu_high_2006; @rawat_thermoelectric_2013; @wang_large_2013; @wang_tuning_2014; @wu_broad_2014; @wu_strong_2014; @yamini_heterogeneous_2015; @yang_enhanced_2015; @zebarjadi_power_2011; @zhang_enhancement_2012; @zhang_heavy_2012; @zhang_effect_2013; @zhang_enhancement_2015; @al_rahal_al_orabi_band_2015; @banik_mg_2015; @banik_high_2016; @banik_agi_nodate; @chen_thermoelectric_2014; @chen_understanding_2016; @leng_thermoelectric_2016; @pei_interstitial_2016; @tan_high_2014; @tan_codoping_2015; @tan_valence_2015; @tang_realizing_2016; @wang_thermoelectric_2015; @zhang_high_2013; @zhou_optimization_2014; @guan_thermoelectric_2015; @suzuki_supercell_2015; @fahrnbauer_high_2015; @gelbstein_-doped_2007; @gelbstein_powder_2007; @gelbstein_thermoelectric_2010; @hazan_effective_2015; @kusz_structure_2016; @lee_influence_2014; @schroder_nanostructures_2014; @schroder_tags-related_2014; @williams_enhanced_2015; @wu_origin_2014; @aikebaier_effect_2010; @chen_thermoelectric_2012; @dow_thermoelectric_2009; @drymiotis_enhanced_2013; @du_effect_2014; @guin_sb_2015; @han_lead-free_2012; @he_synthesis_2012; @hong_anomalous_2014; @liu_enhanced_2016; 
@mohanraman_influence_2014; @pei_alloying_2011; @wang_synthesis_2008; @wu_state_2015; @zhang_improved_2010; @aizawa_solid_2006; @akasaka_composition_2007; @akasaka_non-wetting_2007; @cheng_mg2si-based_2016; @duan_effects_2016; @isoda_effect_2007; @kajikawa_thermoelectric_1998; @liu_n-type_2015; @luo_fabrication_2009; @mars_thermoelectric_2009; @noda_temperature_1992; @tani_thermoelectric_2005; @tani_thermoelectric_2007-1; @tani_thermoelectric_2007; @yang_preparation_2009; @yin_optimization_2016; @zhang_high_2008; @zhang_situ_2008; @zhang_suppressing_2015; @zhao_synthesis_2009; @joshi_enhanced_2008; @tang_holey_2010; @wang_enhanced_2008; @ahn_improvement_2012; @bhatt_thermoelectric_2014; @fu_band_2015; @kraemer_high_2015; @krez_long-term_2015; @liu_thermoelectric_2007; @mudryk_thermoelectricity_2002; @shi_low_2008; @bai_enhanced_2009; @bao_effect_2009; @chitroub_thermoelectric_2009; @dong_hpht_2009; @duan_synthesis_2012; @dyck_thermoelectric_2002; @he_thermoelectric_2007; @he_great_2008; @laufek_synthesis_2009; @li_thermoelectric_2005; @liang_ultra-fast_2014; @liu_enhanced_2007; @mallik_transport_2008; @mallik_thermoelectric_2013; @mi_thermoelectric_2008; @pei_thermoelectric_2008; @qiu_high-temperature_2011; @rogl_thermoelectric_2010; @rogl_new_2011; @rogl_n-type_2014; @rogl_new_2015; @sales_filled_1996; @shi_multiple-filled_2011; @stiewe_nanostructured_2005; @su_structure_2011; @tang_synthesis_2001; @xu_thermoelectric_2014; @yang_synthesis_2009; @zhang_situ_2008-1; @zhang_high-pressure_2012; @zhao_enhanced_2009; @zhou_thermoelectric_2013; @bali_thermoelectric_2016; @ding_high_2016; @jo_simultaneous_2016; @joo_thermoelectric_2016; @li_enhanced_2016; @li_inhibition_2016; @liu_enhanced_2012; @zhou_strategy_2017; @zhou_scalable_2017; @zhang_discovery_2017; @xu_nanocomposites_2017; @xie_stabilization_2016; @wang_high_2016; @seo_effect_2017; @pei_multiple_2016; @park_extraordinary_2016; @moon_tunable_2016; @zhu_nanostructuring_2007; @choi_thermoelectric_1997; 
@yamanaka_thermoelectric_2003; @yang_nanostructures_2008; @yang_natural_2010; @zhang_effects_2009; @zhou_nanostructured_2008; @sharp_properties_2003; @salvador_transport_2009; @levin_analysis_2011; @yamanaka_thermoelectric_2003-1; @zhao_thermoelectric_2008; @zhao_synthesis_2006; @yu_high-performance_2009; @xiong_high_2010; @toberer_traversing_2008; @chung_csbi4te6:_2000; @tang_high_2008; @mi_improved_2007; @liu_improvement_2008; @li_preparation_2008; @chen_high_2006; @zhong_high_2014; @yu_thermoelectric_2012; @liu_ultrahigh_2013; @liu_copper_2012; @he_high_2015; @gahtori_giant_2015; @day_high-temperature_2014; @ballikaya_thermoelectric_2013; @bailey_enhanced_2016; @li_promoting_2017] to test our method. The TEPs were digitized using the Plot Digitizer [@PlotDigitizer]. The dataset consists of Seebeck coefficient $\alpha$, electrical resistivity $\rho$, and thermal conductivity $\kappa$ at measured temperature $T$. For the numerical computation of efficiency, we use the available temperature ranges of the given material: $T_c$ is defined as the maximum of the lowest measured temperatures and $T_h$ as the minimum of the highest measured temperatures for the given material. As shown in Table \[tep-dataset\], the 276 materials in our dataset have various base-material groups: 59 $\rm Bi_2Te_3$-related materials, 55 $\rm PbTe$-related materials, 40 skutterudite (SKD), 23 $\rm Mg_2Si$-based materials, 18 $\rm GeTe$ materials, 14 $\rm M_2Q$ antifluorite-type chalcogenide materials (where M = Cu, Ag, Au and Q = Te, Se), 12 $\rm SnTe$-related materials, 11 $\rm ABQ_2$-type materials (where A=Group I, B=Bi, Sb, Q=Te, Se), 8 $\rm SnSe$-related materials, 7 $\rm PbSe$-related materials, 7 half-Heusler (HH) materials, 6 $\rm SiGe$-related materials, 3 $\rm In_4 Se_3$-related materials, 3 $\rm PbS$-related materials, 2 oxide materials, 2 clathrate materials, and 6 others. Here the base material denotes the representative material, not the exact composition. 
Also note that for the categorization of base materials, the doping element is ignored. For example, $\rm Bi_2Te_3$, $\rm Sb_2Te_3$, $\rm Bi_2Se_3$ binaries and their ternary alloys are categorized as $\rm Bi_2 Te_3$-related materials. The doping composition is not denoted in the base-material composition. **Group** **\#mat.** **Group** **\#mat.** ---------------- ------------ ---------------- ------------ $\rm Bi_2Te_3$ 59 SnSe 8 PbTe 55 PbSe 7 SKD 40 HH 7 $\rm Mg_2Si$ 23 SiGe 6 GeTe 18 $\rm In_4Se_3$ 3 $\rm M_2Q$ 14 PbS 3 SnTe 12 Oxide 2 $\rm ABQ_2$ 11 clathrate 2 etc. 6 **Total** **276** : TEP Dataset of 276 materials with various material groups. The ‘Group’ and ‘\#mat.’ columns represent the base-material group and the number of materials in the group.[]{data-label="tep-dataset"} For the segmented-leg devices, we consider 18 candidates showing high peak $zT$ values exceeding 1. Their full $zT$ curves are shown in Figure \[zT-for-18-mats\]. Tables \[table-18-mats-temp-range\], \[table-18-mats-max-eff\] and \[table-18-mats-te-dof\] contain more information on the materials, including available temperature range, peak $zT$, numerical efficiency, formula efficiency, and the thermoelectric degrees of freedom. ![The $zT$ curves for 18 selected materials. 
The ‘ref-\#’ is the reference number.[]{data-label="zT-for-18-mats"}](FIGs01_selected18candidates){width="\textwidth"} ID-\# Material or Process \[Reference\] $T_c$ (K) $T_h$ (K) peak $zT$ @$T$ -------- ------------------------------------------------------------- ----------- ----------- ---------------- ID-1 (PbTe)(SrTe):Na [@biswas_strained_2011] 251 818 1.7 @800K ID-2 (PbTe)(SrTe):Na [@biswas_high-performance_2012] 302 915 2.2 @915K ID-4 FeNbSb [@fu_realizing_2015] 301 1200 1.5 @1200K ID-5 $\rm Ge_{0.87}Pb_{0.13}Te$ [@gelbstein_controlling_2013] 329 713 2 @673K ID-6 $\rm Cu_2S_{0.52}Te_{0.48}$ [@he_ultrahigh_2015] 299 997 2.1 @1000K ID-9 $\rm Bi_{0.3}Sb_{1.7}Te_3$ [@hu_shifting_2014] 298 479 1.3 @380K ID-10 $\rm (PbTe)_{0.96}(MgTe)_{0.02}Na_{0.04}$ [@hu_power_2016] 307 900 1.8 @810K ID-12 BST dislocation [@kim_dense_2015] 300 480 1.86 @320K ID-17 $\rm PbTe_{0.85}Se_{0.15}$ [@pei_convergence_2011] 300 847 1.8 @850K ID-18 PbTeNa [@pei_high_2011-1] 300 750 1.4 @750K ID-19 BST nanobulk [@poudel_high-thermoelectric_2008] 300 525 1.4 @373K ID-23 PbTe:Na, quenching (PNAS) [@wang_right_2014] 321 759 2 @773K ID-27 sc-SnSe, $b$-axis [@zhao_ultralow_2014] 303 970 2.6 @923K ID-28 $\rm Sn_{0.985}Na_{0.015}Se$ [@zhao_ultrahigh_2015] 304 773 2 @773K ID-34 $\rm Bi_{0.4}Sb_{1.6}Te_3$ [@fan_p-type_2010] 303 513 1.8 @316K ID-43 KERI BSTAg, HP [@lee_control_2010] 323 573 1.2 @373K ID-85 $\rm (PbTe)_{0.8}(PbS)_{0.2}$ + 3at% Na [@wu_superior_2015] 302 922 2.3 @923K ID-292 $\rm Cu_{1.94}Al_{0.02}Se$ (APL) [@zhong_high_2014] 327 1019 2.62 @1029K : Information of 18 selected materials: available temperature range $T_c$ and $T_h$, $\Delta T = T_h - T_c$, peak $zT$, temperature of the peak $zT$.[]{data-label="table-18-mats-temp-range"} -------- ------- ------- ------------------------------ ------- ------- ------- ------- $\eta_{\rm max}^{\rm const}$ ID-1 13.7% 13.7% 14.4% 14.3% 14.5% 15% 22.9% ID-2 15.9% 15.9% 16.2% 16.1% 16.6% 16.8% 24.9% ID-4 15.3% 15.3% 15.8% 15.8% 15.8% 16.3% 23.8% 
ID-5 12.5% 12.6% 12.9% 13% 13.1% 13.4% 18% ID-6 10.5% 10.5% 10.7% 10.7% 11.1% 11.1% 25.9% ID-9 8.4% 8.4% 8.4% 8.4% 8.4% 8.4% 9.2% ID-10 13.8% 13.8% 14.2% 14.1% 14.4% 14.7% 22% ID-12 9.1% 9.1% 9.1% 9.1% 9% 9% 11.2% ID-17 12.6% 12.7% 13% 12.9% 13.3% 13.5% 21.5% ID-18 10.4% 10.4% 10.8% 10.8% 10.9% 11.2% 16.9% ID-19 9.9% 9.9% 10% 10% 9.9% 9.9% 11.1% ID-23 11.6% 11.6% 12.1% 12.1% 12.2% 12.5% 19.6% ID-27 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 27.9% ID-28 16.2% 16.2% 16.9% 16.9% 16.7% 17.3% 20.9% ID-34 10.1% 10.1% 10.1% 10.1% 10% 10% 12.2% ID-43 8.2% 8.2% 8.2% 8.2% 8.1% 8.1% 10.3% ID-85 17.6% 17.6% 18.1% 17.8% 18.5% 18.8% 25.6% ID-292 14.3% 14.3% 14.9% 14.9% 14.9% 15.4% 27.5% -------- ------- ------- ------------------------------ ------- ------- ------- ------- : Information of 18 selected materials: (a) maximum efficiencies computed using the exact numerical method ($T$ is computed by fixed-point iteration, then power, heat and efficiency are computed), maximum efficiencies computed from the general maximum efficiency formula $\eta_{\rm max}^{\rm gen}$ (see equation ) (b) using *exact* thermoelectric degrees of freedom (DOFs) with exact $T$ ($Z_{\mathrm{gen}},\tau,\beta$), (c) using DOFs with $T^{(0)}$ ($Z_{\mathrm{gen}}^{(0)},\tau^{(0)},\beta^{(0)}$), (d) using DOFs with *one-shot* approximation ($Z_{\mathrm{gen}}^{(0)},\tau_{\rm lin}^{(0)},\beta_{\rm lin}^{(0)}$), (e) using DOFs with only $Z_{\mathrm{gen}}$ while $\tau=\beta=0$, (f) using DOFs with only $Z_{\mathrm{gen}}^{(0)}$ while $\tau=\beta=0$, and (g) using the classical efficiency formula for constant TEP using peak $zT$. Note that when we compute the numerical maximum efficiency we calculate $T$ using the fixed-point iteration with the integral equation of $T$ for a given $J$. Then $J$ is optimized to maximize the efficiency. Note that when we use the general maximum efficiency formula, $T$ and $J$ are computed simultaneously. For $T$, the fixed-point iteration is used. 
For $J$, we use the optimal $\gamma$ formula $\gamma_{\rm max}^{\rm gen}$. []{data-label="table-18-mats-max-eff"} -------- -------- -------- -------- -------- -------- -------- ID-1 0.0015 -0.253 0.192 0.0016 -0.207 0.199 ID-2 0.0018 -0.186 0.068 0.0018 -0.152 0.074 ID-4 0.0010 -0.164 0.197 0.0011 -0.141 0.203 ID-5 0.0022 -0.227 0.094 0.0023 -0.168 0.105 ID-6 0.0008 -0.253 0.027 0.0008 -0.208 0.028 ID-9 0.0029 -0.019 0.135 0.0029 -0.017 0.136 ID-10 0.0015 -0.192 0.102 0.0015 -0.161 0.107 ID-12 0.0033 0.030 0.177 0.0033 0.032 0.178 ID-17 0.0014 -0.231 0.109 0.0015 -0.189 0.112 ID-18 0.0014 -0.271 0.167 0.0014 -0.214 0.172 ID-19 0.0028 -0.015 0.189 0.0028 -0.013 0.190 ID-23 0.0017 -0.254 0.138 0.0017 -0.194 0.142 ID-27 0.0005 0.082 -0.379 0.0005 0.086 -0.382 ID-28 0.0025 -0.154 0.217 0.0026 -0.118 0.225 ID-34 0.0032 0.033 0.164 0.0032 0.036 0.166 ID-43 0.0019 0.028 0.186 0.0019 0.029 0.187 ID-85 0.0021 -0.179 0.079 0.0021 -0.146 0.095 ID-292 0.0013 -0.211 0.178 0.0014 -0.166 0.187 -------- -------- -------- -------- -------- -------- -------- : Information of 18 selected materials: *exact* value and *one-shot* approximation of the thermoelectric degrees of freedom.[]{data-label="table-18-mats-te-dof"} Numerical Efficiency Calculation in Figure 1 ============================================ Numerical maximum efficiencies of ideal thermoelectric devices without thermal loss by radiation or air convection are computed for 276 materials and compared with the peak $zT$ values. The thermoelectric properties are *linearly interpolated* at intermediate temperatures. The exact temperature distribution $T(x)$ of the steady state is determined by solving the differential equations of thermoelectricity with Dirichlet boundary conditions; the end-point temperatures are determined from the available temperature range. 
Then the thermoelectric performances of a thermoelectric leg with length $L$ and cross-sectional area $A$ are calculated as a function of current density $J$ given as $\eta (J) = \frac{P/A}{Q_h/A} = \frac{ J ( \int_c^h \alpha dT - J \int_0^L \rho dx ) }{ - \kappa_h \nabla T_h + J \alpha_h T_h } $, where $P$ and $Q_h$ are the power delivered to the outside and the hot-side heat current, respectively. Then the numerical maximum efficiency ($\eta_{\rm max}$) is calculated, which satisfies the relation $\eta (J) \leq \eta_{\rm max}$. The reduced efficiency $\eta_{\rm red}$ is obtained as $\eta_{\rm red} = \frac{\eta_{\rm max}}{\eta_{\rm Carnot}}$, where $\eta_{\rm Carnot} = \frac{T_h - T_c}{T_h}$. Device Parameters and Operating Conditions ========================================== The thermoelectric (TE) power device mentioned in this paper is a uni-leg device composed of a single leg or a segmented leg sandwiched between a heat source ($T_h$) and a heat sink ($T_c$). In such a device, electric current and heat current flow simultaneously across the leg. For simplicity, we assume the steady-state condition. For a $p$-type material ($\alpha > 0$), the electric current and the heat current flow in the same direction from the hot to the cold side, while the direction of the electric current is reversed in an $n$-type material ($\alpha < 0$). The most important parameters in a TE device are voltage $V$, electrical resistance $R$, and thermal resistance $1/K$, which can describe the electrical and thermal circuits of the TE device. Once these three device parameters are known, we can roughly estimate the thermoelectric performance of the TE device. When there is load resistance $R_{\rm L}$, there will be electric current $I = \frac{V}{R+R_{\rm L}}$. When there is no electric current, there will be heat current $Q_h = - A \kappa \nabla T = K \Delta T$. 
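The maximization of $\eta(J)$ can be checked in the special case of temperature-independent properties, where the classical constant-TEP efficiency formula is exact. The property values below are illustrative, not taken from the dataset; with constant properties the heat equation has the parabolic Joule-heating solution, so $-\kappa_h \nabla T_h = \kappa \Delta T / L - \rho J^2 L / 2$ at the hot side.

```python
import math

alpha, rho, kappa = 2.0e-4, 1.0e-5, 1.5  # V/K, Ohm m, W/(m K); illustrative
Th, Tc, L = 600.0, 300.0, 1.0
dT = Th - Tc

def eta(J):
    # power and hot-side heat per unit cross-sectional area
    P = J * (alpha * dT - J * rho * L)
    Qh = kappa * dT / L - rho * J**2 * L / 2 + J * alpha * Th
    return P / Qh

eta_num = max(eta(J) for J in range(1, 5001))  # scan current density in A/m^2

# classical formula with z*T_bar, exact for constant properties
ZT = alpha**2 / (rho * kappa) * (Th + Tc) / 2
eta_cls = (dT / Th) * (math.sqrt(1 + ZT) - 1) / (math.sqrt(1 + ZT) + Tc / Th)
```

The scanned maximum agrees with the closed-form value to within the grid resolution, which is the consistency the tables above rely on in the constant-TEP column.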
When there is non-zero electric current, there will be heat generation by the Thomson and Joule effects, and the hot-side heat current will be approximately $Q_h \approx K \Delta T + I \frac{V}{\Delta T} T_h - \frac{1}{2} I^2 R$. The approximation becomes exact when there is no temperature dependency in the thermoelectric properties (TEPs). The three parameters $V,R,K$ are easily determined from the TE properties. Note that a leg of the device is equivalent to a series of infinitesimal parts $dx$, and it is trivial to write the induced open-circuit voltage ($V$) as an integration of $-\alpha \nabla T$ over $x$, and the resistances of the TE leg ($R_{TE}$ and $1/ K_{TE}$) as integrations of the resistivities $\rho$ and $1/ \kappa$ over $x$; see Figure \[fig-deviceParameter\]. Also note that the electrical and thermal resistances should be calculated by integration of the corresponding resistivities over $x$, not over $T$. ![Structure of conventional thermoelectric power devices. For simplicity, we draw only a uni-leg with $p$-type materials where the electric current flows from the hot to the cold side. Since the electric current and heat current flow through the leg, the electrical and thermal resistance of the leg should be considered as the sum over an infinitesimal series circuit. Thus, the voltage $V$ and the resistance $R$ should be the sum of the component voltages and resistances, respectively. In the case of thermal conduction, the inverse of the thermal conductivity should be used as the thermal circuit parameter.[]{data-label="fig-deviceParameter"}](FIGs02_deviceParameter){width="80.00000%"} When the material thermoelectric figure of merit $zT$ is small, the electric current density $J$ is so small that $R$ and $K$ can be estimated by $R^{(0)}$ and $K^{(0)}$, which are the electrical resistance and thermal conductance for the zero-current-density case ($J=0$). 
Similarly, since $J$ is small, the temperature can be estimated by the zero-current-density solution $T^{(0)} (x)$, which is the solution of the heat equation $\nabla \cdot (\kappa \nabla T) = 0$ without thermoelectric heat generation. Here $\kappa$ is the thermal conductivity. The heat flow is nearly constant along the thermoelectric leg, so the one-dimensional heat equation suggests $\kappa \frac{dT}{dx}$ is constant. Hence the average thermal conductivity $\kappaBar^{(0)}$ for $J=0$ satisfies $\kappaBar^{(0)} \frac{\Delta T}{L} = \kappa \frac{dT}{dx}$ so it can be evaluated by integration over $T$: $\kappaBar^{(0)} = \int \kappaBar^{(0)} \frac{1}{L} dx = \frac{1}{\Delta T} \int \kappa \frac{dT}{dx} dx = \langle \kappa \rangle_T$ by the change of variable $dx= \frac{\kappa dT}{\kappaBar^{(0)} \frac{\Delta T}{L}}$. Here $\langle \kappa \rangle_T$ denotes the average of the thermal conductivity $\kappa(T)$ over $T$. Meanwhile, the average resistivity under the condition $J=0$ is calculated as $\rhoBar^{(0)} = \frac{1}{L} \int \rho dx = \frac{1}{L} \int \rho \frac{\kappa dT}{\kappaBar^{(0)} \frac{\Delta T}{L}} = \frac{1}{\kappaBar^{(0)} \Delta T} \int \rho \kappa dT = \frac{\left< \rho \kappa \right>_T}{\left< \kappa \right>_T}$. Finally we may rewrite $RK = \rhoBar \,\kappaBar \approx \rhoBar^{(0)} \kappaBar^{(0)} = \left< \rho \kappa \right>_T$ under small $zT$. The above idea to use the device parameters for $J=0$ is the main idea of the one-shot approximation, whose argument is treated thoroughly in $\S$\[sec-one-shot\]. At present, every thermoelectric material has a peak $zT$ smaller than 3, implying that the above idea gives a good approximation $Z_{\rm gen}^{(0)}$ for $Z_{\rm gen}$; see for its definition. However, under large $zT$ or non-zero $J$, the approximation $Z_{\rm gen}^{(0)}$ may have 1 to 10 percent error. 
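The $T$-averages derived above are straightforward to evaluate numerically. A sketch with illustrative linear property curves (not dataset values), using trapezoidal integration:

```python
def trapz(ys, xs):
    # composite trapezoidal rule for sampled data
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

Tc, Th, n = 300.0, 600.0, 300
Ts = [Tc + (Th - Tc) * i / n for i in range(n + 1)]
kappa = [1.0 + 2.0e-3 * T for T in Ts]     # W/(m K), illustrative
rho = [5.0e-6 + 1.0e-8 * T for T in Ts]    # Ohm m, illustrative
dT = Th - Tc

kappa_bar0 = trapz(kappa, Ts) / dT         # <kappa>_T
rho_bar0 = (trapz([r * k for r, k in zip(rho, kappa)], Ts)
            / (kappa_bar0 * dT))           # <rho kappa>_T / <kappa>_T
rho_plain = trapz(rho, Ts) / dT            # unweighted <rho>_T, for comparison
```

Note that $\rhoBar^{(0)}$ exceeds the plain $T$-average of $\rho$ in this example: the $\kappa$-weighting emphasizes the hot end of the leg, where $\rho$ is larger.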
Thermoelectric differential equation in one-dimension ===================================================== The thermoelectric effect is expressed in terms of electric current density $J$ and heat current density $J^Q$: $J = \sigma ( E - \alpha \nabla T)$ and $J^Q = \alpha T J - \kappa \nabla T$ where $E$ is the electric field. Applying the energy conservation law to $J$ and $J^Q$ and assuming the *one-dimensional* circuit case, we can obtain the thermoelectric differential equation[@chung2014nonlocal; @goupil2015continuum] describing the evolution of the temperature distribution $T(x)$ inside a uni-leg thermoelectric device: $$\label{SI-TEQ-1D} \frac{d}{dx} \left(\kappa \frac{dT}{dx} \right) + \rho J^2 -T \frac{d \alpha}{dT} \frac{dT}{dx} J = 0$$ where $x$ is the coordinate along the one-dimensional thermoelectric leg. We have Dirichlet boundary conditions since the temperatures at the ends of the leg are fixed: $$\label{SI-TEQ-BC} T(0) = T_h, \quad T(L) = T_c.$$ In the one-dimensional leg, where the cross-sectional area $A$ is constant along the leg, the electric current is calculated as $I = J \times A $ and the heat current is calculated as $Q = J^Q \times A$. Average Parameters and general figure of merit $Z_{\mathrm{gen}}$ ================================================================= To analyze the thermoelectric equation , the following average material properties are helpful: $$\begin{aligned} {3} \alphaBar &:= \frac{1}{\Delta T} \int_{T_c}^{T_h} \alpha \,dT &=& \frac{V}{\Delta T}, \\ \rhoBar &:= \frac{1}{L} \int_{0}^{L} \rho \,dx &=& \frac{A}{L} R, \\ \frac{1}{\kappaBar} &:= \frac{1}{L} \int_{0}^{L} \frac{1}{\kappa} \,dx &=& \frac{A}{L} \frac{1}{K}.\end{aligned}$$ Note that the average parameters give the induced open-circuit voltage $V$, electrical resistance $R$ and thermal resistance $1/K$ of the leg. 
Using these parameters we also define the *general device figure of merit* $Z_{{\mathrm{gen}}}$ for temperature-dependent material properties: $$\label{def-Zgen} Z_{{\mathrm{gen}}} := \frac{\alphaBar^2}{R K} = \frac{\alphaBar^2}{\rhoBar \,\kappaBar},$$ which generalizes the classical device figure of merit. If the material properties are temperature independent, $Z_{{\mathrm{gen}}}$ reduces to the conventional material parameter $z$. Electric Current Equation ========================= With a given load resistance $R_{\rm L}$, an equation for the electric current density $J = \sigma \left(E - \alpha \frac{dT}{dx}\right)$ can be found by integrating $\rho J$ along the closed circuit: $\oint \rho J \,dx = \oint E \,dx - \oint \alpha \frac{dT}{dx}\,dx = V$. Hence the electric current $I$ satisfies $(R + R_{\rm L}) I = V$ and we have $$\label{SI-TEQ-J} J = \frac{1}{A} \frac{V}{R+R_{\rm L}}.$$ Note that $R = \frac{1}{A} \int_0^L \rho \left( T(x) \right) \,dx$ depends on $T$, and so does $J$. Integral Equations of $T(x)$ and $\nabla T(x)$ ============================================== Due to the nonlinearity ($\kappa$, $\alpha$, $\rho$ depend on $T$) and nonlocality ($J$ depends on an integral of $T$) [@chung2014nonlocal], the equation does not have an analytic solution. Instead, we rewrite the equation as an integral form where fixed-point iteration is possible. The integral equation will give us physical insight for deriving the remaining degrees of freedom $\tau$ and $\beta$. For simplicity, we denote the Joule and Thomson heat terms by $f_T(x)$: $$\label{SI-fT} f_T(x) := \rho J^2 - T \frac{d \alpha}{dT} \frac{dT}{dx} J.$$ Then the equation is $\frac{d}{dx} \left( \kappa \frac{dT}{dx} \right) + f_T = 0$. 
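As a small worked instance of the definition (with illustrative numbers, not dataset values): for a leg with $\alpha(T) = a_0 + a_1 T$ and constant $\rho$, $\kappa$, the average $\alphaBar$ is just $\alpha$ evaluated at the mid-temperature, and $Z_{\mathrm{gen}}$ follows directly; with fully constant properties it collapses to the material $z$.

```python
Th, Tc = 600.0, 300.0

# alpha(T) = a0 + a1*T  =>  alpha_bar = (1/dT) * int alpha dT = a0 + a1*(Th+Tc)/2
a0, a1 = 1.0e-4, 2.0e-7
alpha_bar = a0 + a1 * (Th + Tc) / 2

rho_bar, kappa_bar = 1.0e-5, 1.5   # constant along the leg (illustrative)
Z_gen = alpha_bar**2 / (rho_bar * kappa_bar)

# constant-alpha limit: Z_gen reduces to the material z = alpha^2/(rho*kappa)
z_const = (2.0e-4) ** 2 / (1.0e-5 * 1.5)
```

Here $Z_{\mathrm{gen}} \bar T \approx 1.1$ at the mean temperature, a typical magnitude for the materials tabulated above.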
If the solution $T_{\rm sol}$ of , , is known, we may put $\kappa(x) := \kappa(T_{\rm sol}(x))$ and $f(x) := f_{T_{\rm sol}}(x)$ to find a *linear* equation $$\label{SI-linear-eq} \frac{d}{dx} \left( \kappa(x) \frac{dT}{dx} \right) + f = 0.$$ Since this equation is linear, we can find a solution by decomposing it into a homogeneous solution $T_1$ and a particular solution $T_2$: $T = T_1 + T_2$. Here $T_1$ and $T_2$ are the solutions of $$\begin{aligned} {6} \frac{d}{dx} \left( \kappa(x) \frac{dT_1}{dx} \right) &=& 0, \quad T_1(0) &=& T_h, \quad T_1(L) &=& T_c, \label{SI-T1-eq} \\ \frac{d}{dx} \left( \kappa(x) \frac{dT_2}{dx} \right) + f &=& 0, \quad T_2(0) &=& 0, \quad T_2(L) &=& 0. \label{SI-T2-eq}\end{aligned}$$ This idea is summarized in Figure \[fig-tDecomposition\]. ![The key idea for solving the temperature equation. The solution of the PDE can be decomposed into $T_1(x)$ and $T_2(x)$ with proper boundary conditions. Without the reaction term, the solution becomes simple, while still having physical meaning due to the relatively small contribution of $T_2(x)$ in thermoelectric legs.[]{data-label="fig-tDecomposition"}](FIGs03_tDecomposition){width="80.00000%"} To solve the equation , we integrate it over $x$ to yield $\kappa(x) \frac{dT_1}{dx}(x) = C$ for some constant $C$. Dividing both sides by $\kappa$ and integrating from $0$ to $x$, we have $T_1(x) - T_1(0) = C \int_0^x \frac{1}{\kappa(x)}\,dx$. Imposing the boundary conditions yields $C = -K \frac{T_h - T_c}{A}$ and $$T_1 (x) = T_h - \frac{K \Delta T}{A} \int_0^x \frac{1}{\kappa(x)}\,dx.$$ To solve the equation , we integrate it from $0$ to $x$ to yield $\kappa(x) \frac{dT_2}{dx}(x) -C = -\int_0^x f(s)\,ds =: -F(x)$ for some constant $C$. Dividing both sides by $\kappa$ and integrating from $0$ to $x$, we have $T_2 (x) - T_2 (0) = - \int_0^x \frac{F(x)}{\kappa (x)} \,dx + C \int_0^x \frac{1}{\kappa(x)} \,dx$.
Imposing the zero boundary conditions yields $$T_2 (x) = -\int_0^x \frac{ F(x)}{\kappa(x)} \,dx +\frac{ K \,\delta T}{A} \int_0^x \frac{1}{\kappa(x)} \,dx,$$ where $\delta T := \int_0^L \frac{F(x)}{\kappa (x)} \,dx$ is a scalar quantity. Summing up, we have the solution $T=T_1 + T_2$ of , and its gradient: $$\begin{aligned} T(x) &= \left( T_h - \frac{K \Delta T}{A} \int_0^x \frac{1}{\kappa} \,dx \right) + \left( -\int_0^x \frac{F(x)}{\kappa(x)}\,dx +\frac{ K \,\delta T}{A} \int_0^x \frac{1}{\kappa} \,dx \right) \label{SI-T-integral-form} \\ \frac{dT}{dx}(x) &= \left( -\frac{K \Delta T}{A} \frac{1}{\kappa (x)} \right) + \left( \frac{F(x)}{\kappa(x)} +\frac{ K \,\delta T}{A} \frac{1}{\kappa (x)} \right) \label{SI-gradT}\end{aligned}$$ where $F(x) = \int_0^x f(s)\,ds$ and $\delta T = \int_0^L \frac{F(x)}{\kappa (x)} \,dx$. Since $\kappa(x)=\kappa(T(x))$ and $f(x) = f_{T}(x)$, the equation is an integral form $T = \varphi[T]$, where $\varphi$ is the integral operator on the right-hand side of . To find $T$, we apply fixed-point iteration [@burden2010numerical] to the relation $T = \varphi[T]$. Choosing an initial guess $T_0$ for $T$ (it can be a linear distribution satisfying the Dirichlet conditions or the temperature curve satisfying $J=0$), we iteratively compute a sequence of functions $T_{n+1} = \varphi[T_n]$ for $n\geq 0$. Then we expect $T_n$ to converge to a function $T_\infty$, which is the solution we are looking for, since it satisfies $T_\infty = \varphi[T_\infty]$. Computation reveals that, with a linear $T_0$, $T_n$ converges within a few iterations (fewer than 10).
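The fixed-point iteration described above can be sketched in a few lines. The property curves, geometry and load resistance below are hypothetical placeholders (not values from this work); `phi` evaluates the right-hand side of the integral form, with $F$, the scalar $\delta T$, and the nonlocal current density computed from the current iterate:

```python
import numpy as np

# Hypothetical material properties and geometry (placeholders)
alpha = lambda T: 2.0e-4 + 5.0e-8 * (T - 300.0)       # Seebeck [V/K]
dalpha_dT = lambda T: 5.0e-8 * np.ones_like(T)
rho = lambda T: 1.0e-5 * np.ones_like(T)              # resistivity [ohm m]
kappa = lambda T: 1.5 * np.ones_like(T)               # conductivity [W/(m K)]

L, A, Th, Tc, RL = 1.0e-3, 1.0e-6, 600.0, 300.0, 1.0e-2
x = np.linspace(0.0, L, 2001)

def cumtrap(y, x):
    """Cumulative trapezoidal integral, starting at 0."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * np.diff(x) / 2)))

def phi(T):
    """One step T -> phi[T] of the integral form of the equation."""
    k = kappa(T)
    dTdx = np.gradient(T, x)
    R = cumtrap(rho(T), x)[-1] / A            # resistance, depends on T
    V = cumtrap(-alpha(T) * dTdx, x)[-1]      # open-circuit voltage
    J = V / (A * (R + RL))                    # nonlocal current density
    f = rho(T) * J ** 2 - T * dalpha_dT(T) * dTdx * J   # Joule + Thomson
    F = cumtrap(f, x)
    inv_k = cumtrap(1.0 / k, x)
    K = A / inv_k[-1]                         # thermal conductance
    dT_scalar = cumtrap(F / k, x)[-1]         # the scalar delta T
    return (Th - (K * (Th - Tc) / A) * inv_k
            - cumtrap(F / k, x) + (K * dT_scalar / A) * inv_k)

T = Th + (Tc - Th) * x / L     # linear initial guess
for _ in range(30):
    T = phi(T)
```

By construction every iterate satisfies the Dirichlet boundary values exactly; with these placeholder curves the interior temperature rises above the linear profile because of Joule and Thomson heating.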
Heat current and additional figures of merit $\tau$ and $\beta$ ============================================================== Using the $\frac{dT}{dx}$ in , the hot-side heat current can be written as $$\label{SI-qHot} Q_h = A J_h^Q = I \alpha_h T_h - A \kappa_h \Big(\frac{dT}{dx}\Big)_h = I \alpha_h T_h + K (\Delta T - \delta T).$$ Now we decompose $\delta T$ into two terms, proportional to $I$ and $I^2$ respectively. From , $$\begin{split} F_T(x) = \int_0^x f_T(s)\,ds &= I^2 \int_0^x \frac{1}{A^2} \rho(s)\,ds - I \int_0^x \frac{1}{A} T(s) \frac{d \alpha}{dT}(T(s)) \frac{dT}{dx}(s) \,ds \\ &=: I^2 F_T^{(2)}(x) - I F_T^{(1)}(x). \end{split}$$ Hence $$\begin{split} \delta T = \int_0^L \frac{F_T(x)}{\kappa(x)}\,dx &= I^2 \int_0^L \frac{F_T^{(2)}(x)}{\kappa(x)}\,dx - I \int_0^L \frac{F_T^{(1)}(x)}{\kappa(x)}\,dx \\ &=: I^2 \delta T^{(2)} - I \delta T^{(1)}. \end{split}$$ For *temperature-independent* material properties, we can easily check that $\delta T^{(2)} = \frac{1}{2}\frac{R}{K}$ and $\delta T^{(1)} \equiv 0$ so that the hot-side heat current is $$\QhBar = K \Delta T + I \alphaBar T_h -\frac{1}{2} I^2 R.$$ Our strategy is to consider the $Q_h$ in as a perturbation of $\QhBar$ above. To do so, we replace $\alpha_h$ by $\alphaBar$ in and introduce dimensionless perturbation parameters $\tau$ and $\beta$ whose values vanish for temperature-independent material properties. Precisely, we let $$\begin{aligned} \tau &:= \frac{1}{\alphaBar \Delta T} \left[ (\alphaBar - \alpha_h) T_h -K \,\delta T^{(1)} \right], \label{SI-tau}\\ \beta &:= \frac{2}{R} K \,\delta T^{(2)} - 1.
\nonumber $$ Then we can rewrite the $Q_h$ in as $$\label{SI-Qh-tau-beta} Q_h = K \Delta T + I \alphaBar ( T_h - \tau \Delta T) - \frac{1}{2} I^2 R (1+\beta).$$ Observing that the delivered power $P=I (V-IR)=I(\alphaBar \Delta T-IR)$ equals $Q_h-Q_c$, we have the cold-side heat current: $$\nonumber Q_c = K \Delta T + I \alphaBar ( T_c - \tau \Delta T) +\frac{1}{2} I^2 R (1-\beta).$$ When the average device parameters are fixed, the $Q_h$ in decreases as $\tau$ or $\beta$ increases, while the delivered power $P$ is unchanged. Hence the efficiency $\eta = \frac{P}{Q_h}$ increases as $\tau$ or $\beta$ increases. This implies that each of $\tau$ and $\beta$ is a figure of merit for efficiency, just as $Z_{\mathrm{gen}}$ is. Efficiency prediction using thermoelectric degrees of freedom ============================================================= Here we derive an efficiency formula in terms of the thermoelectric degrees of freedom $Z_{\mathrm{gen}}$, $\tau$, $\beta$ and find the maximum efficiency. Let $\gamma := \frac{R_{\rm L}}{R}$. Then the electric current is $I = \frac{\alphaBar \Delta T}{R (1+ \gamma)} $ and the delivered power is $P=I(\alphaBar \Delta T-IR) = \frac{(\alphaBar \Delta T)^2}{R} \frac{\gamma}{(1+ \gamma)^2}$. Using , the efficiency $\eta = \frac{P}{Q_h} = \frac{P/(K\Delta T)}{Q_h/(K\Delta T)}$ can be written as $$\nonumber \eta (Z_{\mathrm{gen}}, \tau, \beta | T_h, T_c, \gamma) = \frac{ Z_{\mathrm{gen}}\Delta T \frac{\gamma}{(1+ \gamma)^2 } }{ 1 + Z_{\mathrm{gen}}\big( \frac{1}{1+ \gamma} \big) ( T_h - \tau \Delta T ) -\frac{1}{2} Z_{\mathrm{gen}}\Delta T \big( \frac{1}{1+ \gamma} \big)^2 (1+\beta) }.$$ We can easily check that the efficiency is monotonic in $Z_{\mathrm{gen}}$, $\tau$ and $\beta$ for fixed $T_h,T_c$ and $\gamma$. *Assuming $Z_{\mathrm{gen}}$, $\tau$, $\beta$ change little* near the $\gamma$ at the maximum efficiency, we solve $\frac{\partial \eta}{\partial \gamma} = 0$ to estimate the maximum efficiency.
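Under this assumption the optimal load ratio can also be located numerically. The following sketch, with hypothetical parameter values (placeholders, not values from this work), scans the efficiency formula over $\gamma$ and compares the result with the closed-form optimum $\sqrt{1+Z_{\mathrm{gen}}T_m'}$ derived next:

```python
import numpy as np

# Hypothetical device parameters, chosen only to illustrate the formula
Z, tau, beta, Th, Tc = 3.0e-3, 0.02, 0.05, 600.0, 300.0
dT = Th - Tc

def eta(gamma):
    """Efficiency eta(Z_gen, tau, beta | Th, Tc, gamma) from the text."""
    num = Z * dT * gamma / (1.0 + gamma) ** 2
    den = (1.0 + Z * (Th - tau * dT) / (1.0 + gamma)
           - 0.5 * Z * dT * (1.0 + beta) / (1.0 + gamma) ** 2)
    return num / den

# Dense scan over the load ratio gamma = R_L / R
gammas = np.linspace(0.5, 3.0, 250001)
g_opt = gammas[np.argmax(eta(gammas))]

# Closed-form optimum sqrt(1 + Z_gen * Tm') for comparison, with
# Th' = Th - tau*dT and Tc' = Tc - (tau + beta)*dT
Thp, Tcp = Th - tau * dT, Tc - (tau + beta) * dT
g_closed = float(np.sqrt(1.0 + Z * 0.5 * (Thp + Tcp)))
```

The scanned maximizer agrees with the closed form to within the grid spacing, which is a quick consistency check on the algebra.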
For simplicity, we let $$T_h' := T_h -\tau \Delta T, \quad T_c' := T_c - (\tau+\beta)\Delta T, \quad T_m' := \frac{1}{2}(T_h' +T_c').$$ Then the solution of $\frac{\partial \eta}{\partial \gamma} = 0$ is $$\label{gamma-gen-max} \gamma_{\rm max}^{\rm gen} = \sqrt{ 1+Z_{\mathrm{gen}}T_m' }.$$ Hence the maximum efficiency is approximated by $$\label{SI-max-efficiency} \eta_{\rm max} \approx \eta_{\rm max}^{\rm gen} := \frac{\Delta T}{T_h'} \frac{ \sqrt{1+Z_{\mathrm{gen}}T_m'}-1}{\sqrt{1+Z_{\mathrm{gen}}T_m'} +\frac{T_c'}{T_h'}}.$$ This formula generalizes the classical maximum-efficiency formula for temperature-independent material properties: it has the same form as the classical formula, yet it predicts the exact maximum efficiency accurately; see Figure \[fig-RelativeError\]. One-shot approximation $Z_{\mathrm{gen}}^{(0)}$, $\tau_{\rm lin}^{(0)}$ and $\beta_{\rm lin}^{(0)}$ {#sec-one-shot} =================================================================================================== The computation of $Z_{\mathrm{gen}}$, $\tau$ and $\beta$ requires the exact temperature distribution. But they can be estimated directly from the material properties. In this section we derive approximate formulas for $Z_{\mathrm{gen}}$, $\tau$ and $\beta$. The idea is to use the temperature distribution for $J=0$, which is close to the exact temperature distribution because most devices carry a small $J$ owing to their small $zT$.
Let $T^{(0)}$ be the temperature distribution for $J=0$ and define $$\begin{aligned} {3} \rhoBar^{(0)} &:= \frac{1}{L} \int_{0}^{L} \rho(T^{(0)}(x)) \,dx &=& \frac{A}{L} R^{(0)}, \\ \frac{1}{\kappaBar^{(0)}} &:= \frac{1}{L} \int_{0}^{L} \frac{1}{\kappa (T^{(0)}(x))} \,dx &=& \frac{A}{L} \frac{1}{K^{(0)}}.\end{aligned}$$ From with $J=0$, we can check that $$\label{SI-heat-flux-approx} -\kappa(T^{(0)}(x)) \frac{d T^{(0)}}{dx}(x) = \kappaBar^{(0)} \frac{\Delta T}{L}.$$ Hence $$\begin{split} \int_{T_c}^{T_h} \rho(T) \kappa(T)\,dT &= \int_{T_c}^{T_h} \rho(T^{(0)}) \Big(-\frac{\Delta T}{L} \kappaBar^{(0)}\Big) \frac{dx}{dT^{(0)}} \,dT^{(0)} \\ &= \frac{\Delta T}{L} \int_0^L \rho(T^{(0)}(x)) \,\kappaBar^{(0)} \,dx \\ &= \Delta T \,\rhoBar^{(0)} \,\kappaBar^{(0)}. \end{split}$$ Replacing $T$ with $T^{(0)}$ in $Z_{\mathrm{gen}}= \frac{\alphaBar^2}{\rhoBar\,\kappaBar}$, we have a one-shot approximation for $Z_{\mathrm{gen}}$: $$\label{one-shot-Zgen} Z_{\mathrm{gen}}\approx \frac{\alphaBar^2}{\rhoBar^{(0)}\,\kappaBar^{(0)}} = \frac{ \left( \int \alpha \,dT \right)^2 }{\Delta T \,\int \rho \kappa \,dT} =: Z_{\mathrm{gen}}^{(0)}.$$ *To approximate $\tau$, we assume* the Seebeck coefficient is a linear function of $T$: $$\alpha(T) \approx \alpha_{\mathrm{lin}}(T) := \alpha_h + \left( \frac{\alpha_c - \alpha_h}{T_c - T_h} \right) \left(T- T_h \right).$$ In this way we can observe the effect of the gradient of $\alpha$ on $\tau$ more clearly. Since the $\tau$ in contains the $K\,\delta T^{(1)}$ term, we estimate the relevant term: $$\begin{split} F_T^{(1)}(s) &\approx \int_0^s \frac{1}{A} T \frac{d\alpha_{\mathrm{lin}}}{dT}(T(x)) \frac{dT}{dx}\,dx = \int_{T_h}^{T(s)} \frac{1}{A} T \frac{\alpha_c-\alpha_h}{T_c-T_h} \,dT\\ &= \frac{1}{2A} \frac{\alpha_c-\alpha_h}{T_c-T_h} (T(s)^2 - T_h^2) =: \widehat{F^{(1)}}(T(s)).
\end{split}$$ *Using* $-\kappa \frac{dT}{dx} \approx \kappaBar^{(0)} \frac{\Delta T}{L}$ from , $$\begin{split} \delta T^{(1)} &= \int_0^L \frac{F_T^{(1)}(x)}{\kappa(x)}\,dx \approx - \int_0^L \frac{\widehat{F^{(1)}}(T(x))}{\kappaBar^{(0)}} \frac{L}{\Delta T} \frac{dT}{dx}\,dx\\ &= \frac{1}{\kappaBar^{(0)}} \frac{L}{\Delta T} \int_{T_c}^{T_h} \widehat{F^{(1)}}(T)\,dT \\ &= \frac{1}{2 K^{(0)}} \frac{1}{\Delta T} \frac{\alpha_c-\alpha_h}{T_c-T_h} \frac{1}{3} (\Delta T)^2 (-3T_h +\Delta T) \\ &= \frac{\alpha_h-\alpha_c}{6 K^{(0)}} (-3T_h +\Delta T)=: \widehat{\delta T^{(1)}} \end{split}$$ where $K^{(0)} := \frac{A}{L}\kappaBar^{(0)}$. Therefore we have a one-shot approximation for $\tau$: $$\begin{split} \tau &\approx \frac{1}{\overline{\alpha_{\mathrm{lin}}} \Delta T} \left[ (\overline{\alpha_{\mathrm{lin}}} - \alpha_h) T_h -K^{(0)} \,\widehat{\delta T^{(1)}} \right]\\ &= -\frac{1}{3} \frac{\alpha_h-\alpha_c}{\alpha_h+\alpha_c} =: \tau_{\rm lin}^{(0)}. \end{split}$$ *To approximate $\beta$, we assume* that $\rho \kappa$ is a linear function of $T$: $$(\rho\kappa)(T) \approx (\rho\kappa)_{\mathrm{lin}}(T) := (\rho\kappa)_h + \left( \frac{(\rho\kappa)_c - (\rho\kappa)_h}{T_c - T_h} \right) \left(T- T_h \right).$$ *Using* $-\kappa \frac{dT}{dx} \approx \kappaBar^{(0)} \frac{\Delta T}{L}$ from , we approximate the relevant terms for $\beta$: $$\begin{split} F_T^{(2)}(s) &= \int_0^s \frac{1}{A^2}(\rho\kappa)(T(x)) \frac{1}{\kappa(x)} \,dx \approx \frac{-L}{A^2 \kappaBar^{(0)} \Delta T} \int_0^s (\rho\kappa)_{\mathrm{lin}}(T(x)) \frac{dT}{dx}\,dx\\ &= \frac{-L}{A^2 \kappaBar^{(0)} \Delta T} \int_{T_h}^{T(s)} (\rho\kappa)_{\mathrm{lin}}(T) \,dT \\ &= \frac{-L}{A^2 \kappaBar^{(0)} \Delta T} \Big[ (\rho\kappa)_h (T(s)-T_h) + \frac{1}{2} \frac{(\rho\kappa)_c-(\rho\kappa)_h}{T_c-T_h} (T(s)-T_h)^2 \Big]\\ & =: \widehat{F^{(2)}}(T(s)) \end{split}$$ hence $$\begin{split} \delta T^{(2)} &= \int_0^L \frac{F_T^{(2)}(x)}{\kappa(x)}\,dx \approx \int_0^L \widehat{F^{(2)}}(T(x))
\Big(-\frac{L}{\kappaBar^{(0)} \Delta T} \Big) \frac{dT}{dx}\,dx\\ &= \frac{-L}{\kappaBar^{(0)} \Delta T} \int_{T_h}^{T_c} \widehat{F^{(2)}}(T)\,dT \\ &= \frac{1}{6(K^{(0)})^2} \big( 2(\rho\kappa)_h + (\rho\kappa)_c \big) =: \widehat{\delta T^{(2)}}. \end{split}$$ Therefore we have a one-shot approximation for $\beta$: $$\begin{split} \beta &\approx \frac{2}{\frac{L}{A}\rhoBar^{(0)}} K^{(0)} \,\widehat{\delta T^{(2)}} - 1 = \frac{1}{3 \,\rhoBar^{(0)}\kappaBar^{(0)}} (2(\rho\kappa)_h +(\rho\kappa)_c) -1\\ &\approx \frac{1}{\frac{3}{2} ((\rho\kappa)_h+(\rho\kappa)_c)} (2(\rho\kappa)_h +(\rho\kappa)_c) -1\\ &= \frac{1}{3} \frac{(\rho\kappa)_h - (\rho\kappa)_c}{(\rho\kappa)_h +(\rho\kappa)_c} =: \beta_{\rm lin}^{(0)}. \end{split}$$ In summary, we have the following one-shot approximations: $$\label{one-shot-approx} Z_{\mathrm{gen}}\approx Z_{\mathrm{gen}}^{(0)} \equiv \frac{ \left( \int \alpha \,dT \right)^2 }{\Delta T \,\int \rho \kappa \,dT}, \quad \tau \approx \tau_{\rm lin}^{(0)} \equiv -\frac{1}{3}\frac{\alpha_h - \alpha_c}{\alpha_h+\alpha_c}, \quad \beta \approx \beta_{\rm lin}^{(0)} \equiv \frac{1}{3}\frac{\rho_h\kappa_h-\rho_c\kappa_c}{\rho_h\kappa_h+\rho_c\kappa_c}.$$ The *one-shot approximation* derived above is sufficiently accurate in many cases. See Figure \[fig-one-shot-approx\], where we compare the exact $Z_{\rm gen}$, $\tau$, $\beta$ with their one-shot approximations for 276 materials. ![Estimation of thermoelectric degrees of freedom for 276 materials. Numerical $Z_{\mathrm{gen}}$, $\tau$, $\beta$ are computed using the exact $T$ at the maximum efficiency. One-shot approximations $Z_{\mathrm{gen}}^{(0)}$, $\tau^{(0)}$, $\beta^{(0)}$ are computed using the $T^{(0)}$ for $J=0$. Going further, the $\tau_{\rm lin}^{(0)}$ and $\beta_{\rm lin}^{(0)}$ are computed by assuming the linearity of $\alpha$ and $\rho\kappa$; see for their explicit formulas.
[]{data-label="fig-one-shot-approx"}](FIGs04_Ztaubeta){width="80.00000%"} Furthermore, these one-shot approximations can be used to predict the performance of *segmented* devices. In Figure \[fig-segQ\], we consider a two-stage segmented leg with no contact resistance. The segmented leg consists of SnSe [@zhao_ultralow_2014] for the hot side and BiSbTe [@poudel_high-thermoelectric_2008] for the cold side. The exact temperature distribution $T$ inside the leg shows a jump of the gradient at $x=0.6$ due to the inhomogeneity of the material; see Figure \[fig-segQ\](b). Despite the nonlinearity of the $T$, the one-shot approximation using $Z_{\mathrm{gen}}^{(0)}$, $\tau_{\rm lin}^{(0)}$ and $\beta_{\rm lin}^{(0)}$, which does not use the exact $T$, shows high accuracy in predicting the thermoelectric performance; see Figure \[fig-segQ\](c)-(f). The relative error is high near $\gamma=0$, where the reaction term is large due to the large electric current. For large $\gamma$, the error is negligible. Near $\gamma=1$, the error is acceptable: the relative error is less than 5%. The one-shot approximation predicts the maximum efficiency to be 7.68% while the exact value is 7.53%. ![ The thermoelectric performance of a two-stage segmented leg predicted by the one-shot approximation. The numerical exact values are computed by fixed-point iteration and the one-shot values are computed using $Z_{\mathrm{gen}}^{(0)}$, $\tau_{\rm lin}^{(0)}$ and $\beta_{\rm lin}^{(0)}$; see for the explicit one-shot formulas. (a) The geometry of the segmented leg: $\rm SnSe$ [@zhao_ultralow_2014] and $\rm BiSbTe$ [@poudel_high-thermoelectric_2008] are used for the hot- and cold-side materials. $T_h = 970 K$ and $T_c = 300 K$ are used. (b) Exact temperature distribution obtained by solving the integral equation of $T$ with fixed-point iteration.
(c) Power delivered outside, (d) heat current at the hot side, (e) efficiency, and (f) relative errors in power, heat current, and efficiency between the numerical value and the one-shot approximation. []{data-label="fig-segQ"}](FIGs05_seqQ){width="\textwidth"} Maximum efficiency prediction using $\eta_{\rm max}^{\rm gen}$ =============================================================== In Figure \[fig-RelativeError\], we can observe that the maximum efficiency estimation formula $\eta_{\rm max}^{\rm gen} (Z_{\mathrm{gen}}, \tau, \beta)$ in is highly accurate. ![Efficiency estimation for 276 materials using the formula $\eta_{\rm max}^{\rm gen} (Z_{\mathrm{gen}}, \tau, \beta)$ in and the peak $zT$. In using $\eta_{\rm max}^{\rm gen} (Z_{\mathrm{gen}}, \tau, \beta)$, there are five options: using (i) exact $Z_{\mathrm{gen}}, \tau, \beta$ (Ztb), (ii) $Z_{\mathrm{gen}}^{(0)}, \tau^{(0)}, \beta^{(0)}$ (Z0t0b0), (iii) $Z_{\mathrm{gen}}^{(0)}, \tau_{\rm lin}^{(0)}, \beta_{\rm lin}^{(0)}$ (Z0t0linb0lin), (iv) exact $Z_{\mathrm{gen}}$ only, $\tau=0$, $\beta=0$ (Z), (v) $Z_{\mathrm{gen}}^{(0)}$ only, $\tau=0$, $\beta=0$ (Z0). (a) Comparison of the exact maximum efficiency and the estimations. (b) Relative error of the estimations (absolute error divided by the exact maximum efficiency). The peak $zT$ fails in efficiency prediction; the relative error can be over $100\%$. On the other hand, the formula $\eta_{\rm max}^{\rm gen} (Z_{\mathrm{gen}}, \tau, \beta)$ has small relative error; even the simplest formula with $Z_{\mathrm{gen}}^{(0)}, \tau_{\rm lin}^{(0)}, \beta_{\rm lin}^{(0)}$ has the standard error (=root mean square of relative errors) less than 2%. See Table \[Table-rel-err-eff-formula\] for detailed values.
[]{data-label="fig-RelativeError"}](FIGs06_RelativeError){width="\textwidth"} In Table \[Table-rel-err-eff-formula\], various statistics on the relative error of maximum efficiency ($\frac{\eta_{\rm max}^{\rm gen} - \eta_{\rm max}}{ \eta_{\rm max} }$) are given. --------------------- -------- -------- -------- -------- -------- -------- Avg RelErr 0.02% 1.11% 1.08% 1.42% 2.29% 235% StdErr (RMS RelErr) 0.09% 1.38% 1.38% 1.52% 2.47% 1854% max RelErr 1.15% 5.45% 5.23% 5.80% 9.96% 28835% min RelErr -0.61% -1.92% -1.76% -1.78% -2.48% -4% --------------------- -------- -------- -------- -------- -------- -------- : Statistics on the relative error (RelErr) of the maximum efficiency estimation formula $\eta_{\rm max}^{\rm gen} (Z_{\mathrm{gen}}, \tau, \beta)$ in . Average (Avg), root mean square (RMS RelErr or StdErr), maximum (max), and minimum (min) of the relative errors are estimated for 276 materials for thermoelectric power generators working at their available temperatures.[]{data-label="Table-rel-err-eff-formula"} If we use the exact $Z_{\mathrm{gen}}, \tau, \beta$, the standard error (=root mean square of relative errors) of $\eta_{\rm max}^{\rm gen}$ is $9.60 \times 10^{-4}$. If we use $Z_{\mathrm{gen}}^{(0)}, \tau_{\rm lin}^{(0)}, \beta_{\rm lin}^{(0)}$, the standard error is $1.75 \times 10^{-2}$. For the single-crystalline $\rm SnSe$ with a peak $zT$ of 2.6, the relative error of the one-shot method is found to be only $6.82 \times 10^{-3}$. However, when we use a different approximation, such as a linear $T(x)$ or a different averaging scheme for $z$, the error becomes larger than ours due to the nonlinearity of $T$ for this material [@kim2015relationship]. If we only use the $Z_{\mathrm{gen}}^{(0)}$ with zero $\tau$ and $\beta$, the efficiency is still well predicted, with a standard error of $3.37 \times 10^{-2}$. But for some materials the error is relatively large due to the neglect of $\tau$ and $\beta$.
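For reference, the one-shot quantities are cheap to evaluate. The following sketch computes $Z_{\mathrm{gen}}^{(0)}$ by quadrature and the endpoint formulas $\tau_{\rm lin}^{(0)}$, $\beta_{\rm lin}^{(0)}$ for a hypothetical material (placeholder property curves, not data from the 276-material set):

```python
import numpy as np

# Hypothetical property curves (placeholders): alpha increases with T,
# rho*kappa increases mildly with T
alpha = lambda T: 2.0e-4 + 5.0e-8 * (T - 300.0)              # [V/K]
rhokappa = lambda T: 1.5e-5 * (1.0 + 2.0e-4 * (T - 300.0))   # [V^2/K]

Th, Tc = 600.0, 300.0
dT = Th - Tc
Ts = np.linspace(Tc, Th, 10001)

def trap(y, x):
    """Trapezoidal quadrature."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Z_gen^(0) = (int alpha dT)^2 / (dT * int rho*kappa dT)
Z0 = trap(alpha(Ts), Ts) ** 2 / (dT * trap(rhokappa(Ts), Ts))

# Endpoint formulas tau_lin^(0) and beta_lin^(0)
ah, ac = alpha(Th), alpha(Tc)
rkh, rkc = rhokappa(Th), rhokappa(Tc)
tau0_lin = -(ah - ac) / (3.0 * (ah + ac))
beta0_lin = (rkh - rkc) / (3.0 * (rkh + rkc))
```

Because $\alpha$ increases with $T$ here, $\tau_{\rm lin}^{(0)}$ comes out negative, consistent with the sign discussion for $\rm SnSe$-like materials later in this document.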
The largest relative error of 10% is found for [@wu_broad_2014], due to the non-vanishing gradient parameters ($\tau = -0.222 \approx \tau^{(0)} = -0.177 \approx \tau_{\rm lin}^{(0)} = -0.204$, $\beta = 0.2085 \approx \beta^{(0)} = 0.228 \approx \beta_{\rm lin}^{(0)} = 0.185$, when $T_h = 918 K$ and $T_c = 304 K$). Efficiency rank estimation using $Z_{\mathrm{gen}}^{(0)}$ {#sec-eff-rank} ========================================================= The $Z_{\mathrm{gen}}$ is a figure of merit, so a bigger $Z_{\mathrm{gen}}$ usually implies a bigger maximum efficiency. Then if we rank TE devices in order of $Z_{\mathrm{gen}}$, will we get the correct rank in order of exact maximum efficiency? To measure this effect quantitatively, we define the *top-rank-preserving probability* as the ratio of the number of correct top ranks predicted by some estimation parameter to the total number of top ranks. In Table \[Table-rank-preserving\], we observe that the top-rank-preserving probability is high even if we use the simplest estimate $Z_{\mathrm{gen}}^{(0)}$. We computed the maximum thermoelectric efficiency of a 5-stage segmented leg for all possible configurations using the 18 candidate materials in Table \[table-18-mats-temp-range\]. Thus there are $18^5 = 1,889,568$ device structures. No contact resistance is imposed, but it can be easily imposed in our numerical scheme by adding a stage with zero Seebeck coefficient. The result shows that, with 82% probability, the top-1% configurations in order of exact maximum efficiency can be found among the top 1% in order of $Z_{\mathrm{gen}}^{(0)}$. Hence one may perform faster high-throughput screening by computing $Z_{\mathrm{gen}}^{(0)}$ only, without having to compute the numerical maximum efficiency.
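The top-rank-preserving probability itself is straightforward to compute from two score arrays. The sketch below uses synthetic random scores (not the $18^5$ device dataset) purely to illustrate the definition:

```python
import numpy as np

def top_rank_preserving(exact, approx, frac):
    """Fraction of the top-`frac` items by `exact` that also appear
    among the top-`frac` items by `approx`."""
    n = len(exact)
    k = max(1, int(n * frac))
    top_exact = set(np.argsort(exact)[-k:].tolist())
    top_approx = set(np.argsort(approx)[-k:].tolist())
    return len(top_exact & top_approx) / k

# Synthetic scores standing in for exact efficiency and Z_gen^(0)
rng = np.random.default_rng(0)
exact = rng.normal(size=10000)
approx = exact + 0.1 * rng.normal(size=10000)  # noisy surrogate ranking
p = top_rank_preserving(exact, approx, 0.01)
```

A perfect surrogate gives probability 1; the noisier the surrogate ranking, the more of the top set it misses.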
-------------------- ----------- ------ ------ ------ Top 0.1% $<$1,891 87% 73% 73% Top 1% $<$18,897 90% 84% 82% Top 2% $<$37,792 93% 88% 86% Top 4% $<$75,584 94% 89% 90% All configurations 1,889,568 100% 100% 100% -------------------- ----------- ------ ------ ------ : Comparison of top ranks in order of exact maximum efficiency and estimation parameters. The top-rank-preserving probability is the ratio of the number of correct top ranks predicted by the estimation parameter to the total number of top ranks. The 18 candidate materials in Table \[table-18-mats-temp-range\] are used to generate 5-stage segmented legs. Each stage of the leg has the same cross-sectional area $1 \,{\rm mm^2}$ and the same length $1/5 \,{\rm mm}$ (total length is $1 \,{\rm mm}$). The hot- and cold-side temperatures are $T_h=900\,K$ and $T_c=300\,K$. []{data-label="Table-rank-preserving"} ### Additional information {#additional-information .unnumbered} The best efficiency in the setting of Table \[Table-rank-preserving\] is 21.95% while the one-shot approximation $\eta_{\rm max}^{\rm gen}(Z_{\mathrm{gen}}^{(0)}, \tau^{(0)}, \beta^{(0)})$ predicts 22.30%. For the top 100,000 configurations, the root-mean-square error is 0.0415. ### Computation algorithm {#computation-algorithm .unnumbered} The maximum thermoelectric conversion efficiency of a given device configuration is computed using the following procedure. 1. Prepare the thermoelectric property curves using digitized data. Each curve is linearly interpolated at intermediate temperatures and extrapolated as constant beyond the endpoint temperatures. 2. Choose a linear function as the initial guess $T_0$ for the exact temperature distribution. 3. \[step-repeat\] Given a temperature distribution $T_n$, compute the thermoelectric degrees of freedom using the definitions in and . Then estimate the optimal current density $J$ using the formula .
If a given structure is segmented, the material properties are position-dependent as well as temperature-dependent (but there is no additional difficulty in computation). 4. Compute $T_{n+1}$ by evaluating the right-hand side of the integral equation . 5. If $T_{n+1}$ agrees with $T_n$, go to the next step. Otherwise replace $T_n$ by $T_{n+1}$ and go back to the step \[step-repeat\]. 6. Using the converged temperature distribution $T_{n+1}$, compute the maximum efficiency from $\eta_{\rm max}^{\rm gen}(Z_{\mathrm{gen}}, \tau, \beta)$. ### Computation time {#computation-time .unnumbered} On a single-core computer, the computation of the maximum efficiency of a segmented leg takes less than 1 second. Thus the total computation would take about 525 hours (22 days). We used a high-performance-computing (HPC) system consisting of 500 processors, so the computation took about 1 hour. Why peak $zT$ fails for $\rm BiSbTe$-like and $\rm SnSe$-like materials ======================================================================= While the peak $zT$ of $\rm SnSe$-like materials is significantly greater than that of $\rm BiSbTe$-like materials, the efficiency of the latter is significantly greater than that of the former ($\rm SnSe$ has the highest peak $zT$ of 2.6 at 923 K); see Figure 1 in the paper. This extreme failure case of $zT$ can be explained by our additional figure of merit $\tau$. Consider three imaginary materials imitating $\rm BiSbTe$-like, $\rm SnSe$-like, and constant-$z$ materials. For simplicity, we impose some assumptions on their material properties. Their $\rho$ and $\kappa$ are temperature independent and they have the same $\alphaBar$. Their $\alpha$ is linear in temperature: the $\rm BiSbTe$-like material has linearly decreasing $\alpha$, the $\rm SnSe$-like material has linearly increasing $\alpha$, and the constant-$z$ material has constant $\alpha$.
Then, as shown in Figure \[fig-eg-zT-fail\], the peak $zT$ of the $\rm SnSe$-like material is very high. However, due to the temperature-dependent profile of $\alpha$, the $\tau$ of the $\rm SnSe$-like material is negative while the $\tau$ of the $\rm BiSbTe$-like material is positive; see . Since $Z_{\mathrm{gen}}$ is the same for the three materials, $\tau$ is the deciding figure of merit, implying that the $\rm BiSbTe$-like material has a higher maximum efficiency than the $\rm SnSe$-like material. This example shows that the gradient of material properties can affect the maximum efficiency. ![The $zT$, the maximum efficiency, and $\tau$ for three imaginary materials that imitate $\rm BiSbTe$-like, $\rm SnSe$-like, and constant-$z$ materials. The $\alpha$ of the materials is linear, while their $\rho$ and $\kappa$ are constant. The materials have the same $\alphaBar$ and $Z_{\mathrm{gen}}$. For working temperatures from $300 K$ to $900 K$, the highest maximum efficiency is found in the $\rm BiSbTe$-like material due to the positive $\tau$. []{data-label="fig-eg-zT-fail"}](FIGs07_zTparadox){width="\textwidth"} Optimal doping concentration for $\rm Bi_2Te_3$ =============================================== In this section, using *calculated* material properties, we design functionally graded materials (FGM) composed of $\rm Bi_2 Te_3$ to maximize the efficiency. The thermoelectric properties are calculated using density functional theory (DFT) [@hohenberg1964inhomogeneous; @kohn1965self] combined with the Boltzmann transport equation. For the DFT calculations, we use the generalized gradient approximation (GGA) parameterized by PBE (Perdew, Burke, and Ernzerhof) [@perdew1996generalized], and the projector augmented-wave (PAW) pseudopotential [@blochl1994projector]; both of them are implemented in the VASP code [@kresse1996g; @kresse1999ultrasoft].
The experimental lattice parameters for $\rm Bi_2Te_3$ are used, while the internal coordinates are fully relaxed. The electronic band structure is calculated including the spin-orbit interaction. A $k$-point mesh of $36 \times 36 \times 36$ is used. The electronic transport properties are predicted using the DFT band structure coupled with the Boltzmann transport equation within a rigid-band approximation and the constant relaxation time approximation; they are implemented in the BoltzTraP code [@madsen2006boltztrap; @ryu2017thermoelectric]. Note that we use the experimental band gap of 0.18 eV. The phonon thermal conductivity is calculated using the phono3py code [@phono3py; @ryu2016computational]. The force constants are obtained from the 240-atom supercell with two-atom displacements using the VASP code at the single $k$-point $\Gamma$, and then the third-order phonon Hamiltonian is constructed. The three-phonon scattering rates are calculated using Fermi's golden rule. We also include effective boundary scattering of 10 nm in addition to the three-phonon scattering. Then the thermal conductivity is calculated by integrating the conductivity on a phonon $q$-point mesh of $11 \times 11 \times 11$. We calculate the maximum efficiency of functional gradient layers (FGL) based on $\rm Bi_2Te_3$ for the temperature range from 300 K to 600 K. We consider various segmented devices having 1 stage to 8 stages with eight different carrier concentrations ($8 \times 10^{18}, ~1 \times 10^{19}, ~2 \times 10^{19}, ~4 \times 10^{19}, ~8 \times 10^{19}, ~1 \times 10^{20}, ~2 \times 10^{20} ~{\rm cm^{-3}}$). We perform high-throughput computation to find the optimal segmented FGL. There are $8^8$ possible configurations in total. The temperature distribution inside a device is obtained by using fixed-point iteration of the integral equation .
At the same time, the current density is optimized to find the maximum efficiency; see *Computation algorithm* in §\[sec-eff-rank\] for more details. Figure \[fig-gradation1\] shows the thermoelectric properties calculated by DFT, various segmented structures with their efficiencies, and the optimal carrier concentration as a function of position. Figure \[fig-gradation2\] shows that the highest efficiency is obtained for a 5-stage segmented device. For a single stage, a maximum efficiency of 10.5% is found at the doping concentration $4 \times 10^{19} ~{\rm cm^{-3}}$. For multiple stages, the maximum efficiency is found at 5 stages, with the optimal carrier concentration varying from $8 \times 10^{19} \,{\rm cm^{-3}}$ to $1 \times 10^{19} \,{\rm cm^{-3}}$ in going from the hot to the cold side. ![Design process of functionally graded materials for a thermoelectric power generator and its result. The temperature range from 300 K to 600 K is considered. High-throughput computation of efficiency is performed to search for the optimal carrier doping concentration. (Left) Thermoelectric properties of $\rm Bi_2Te_3$ calculated by DFT. (Middle top) Schematic structure of segmented devices; different colors mean different doping concentrations. (Right) Top 10 segmented structures when the number of stages (segments of equal length) is fixed; 1 to 8 stages are considered. (Middle bottom) Optimal carrier doping concentration giving the highest efficiency. []{data-label="fig-gradation1"}](FIGs08_gradation1){width="\textwidth"} ![Top 10 segmented structures ranked in order of maximum efficiency. (Left) No segmentation. (Right) 5-stage segmentation. Each color represents a distinct material. The top-rank structure is shown in the first row: yellow is the first rank for no segmentation. The 1cyan-2green-2yellow-2orange-1red segmented structure in the first row of the right figure is optimal among the $8^8$ configurations, with the highest efficiency of 12.0%.
[]{data-label="fig-gradation2"}](FIGs09_gradation2){width="\textwidth"} Acknowledgement {#acknowledgement .unnumbered} =============== This work was supported by the Korea Electrotechnology Research Institute (KERI) Primary research program through the National Research Council of Science and Technology (NST) funded by the Ministry of Science and ICT (MSIT) of the Republic of Korea \[No. 18-12-N0101-34 (Development of Design Tools of Thermoelectric and Energy Materials)\]. This work was also supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry and Energy (MOTIE) of the Republic of Korea \[No. 20162000000910 (Development of High Performance Thermoelectric Modules by Power Modulation) and No. 20172010000830 (Developments of Thermoelectric Power Generation System using Unused Heats in Industry and Business Model)\].
[**[A Class of Bound Entangled States]{}**]{} Shao-Ming Fei$^{1,2,3}$, Xianqing Li-Jost$^3$ and Bao-Zhi Sun$^1$ $^1$ Department of Mathematics, Capital Normal University, Beijing 100037\ $^2$ Institut f[ü]{}r Angewandte Mathematik, Universit[ä]{}t Bonn, 53115, Bonn\ $^3$ Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig\ Abstract We construct a set of PPT (positive partial transpose) states and show that these PPT states are not separable, thus presenting a class of bound entangled quantum states. Keywords: Bound Entangled states, PPT PACS: 03.65.Bz; 89.70.+c Quantum entangled states are used as key resources in quantum information processing such as quantum teleportation, cryptography, dense coding, error correction and parallel computation [@2; @3]. To quantify the degree of entanglement, a number of entanglement measures have been proposed for bipartite states. However, most proposed measures of entanglement involve extremizations which are difficult to handle analytically. It turns out that verifying the separability of a general mixed state can be extremely difficult. Among the quantum entangled states, there is a special kind of states that cannot be distilled. These states are called bound entangled states. Many powerful separability criteria cannot detect the entanglement of these states, e.g. the bound entangled states given in [@UPB; @HorodeckiPRL99; @9]. A few new bound entangled states have been found recently by using the method of positive maps [@piani]. It has been shown [@8] that any state which is entangled and satisfies the positive partial transpose (PPT) condition [@6] is not distillable. The existence of PPT entangled states was discussed in [@7] and explicit examples were provided in [@9], based on an elegant separability (range) criterion. In [@10] a special class of quantum states ($d$-computable states) was constructed.
The entanglement of formation of these states can be analytically calculated, and it turns out that all the states are entangled. In this paper, following the construction of the $d$-computable states, we first present a class of PPT states and then prove, using the range criterion, that they are entangled. Let ${\cal{H}}$ be an $N$-dimensional complex Hilbert space with orthonormal basis $e_i$, $i=1,\cdots,N$. A general bipartite pure state on ${\cal{H}}\otimes{\cal{H}}$ is of the form, $$\label{psi} |\psi>=\sum_{i,j=1}^Na_{ij}e_i\otimes e_j,\ \ \ \ \ \ \ \ \ a_{ij}\in{\cal{C}}$$ with normalization $\sum\limits_{i,j=1}^Na_{ij}a_{ij}^*=1$. Let $A$ denote the matrix with entries given by $a_{ij}$ in (\[psi\]). Set $$\label{A} A=\left [\matrix { 0 & b_1 & a & -c\cr -b_1 & 0 & c & d\cr -a & -c & 0 & -c_1\cr c & -d & c_1 & 0} \right ],$$ where $a,c,d,b_1,c_1\in{\cal{C}}$. It was shown that all pure states $|\psi>$ with $A$ given by (\[A\]) admit a simple formula for the generalized concurrence $C$, such that the entanglement of formation is a monotonically increasing function of $C$ [@10]. Moreover, the entanglement of formation of all mixed states $\rho$ with decompositions on pure states with $A$ given by (\[A\]) can be analytically calculated. As all the states with $A$ of (\[A\]) are entangled, these mixed states $\rho$ are also entangled. In fact $A$ is an antisymmetric matrix, and any antisymmetric matrix is equivalent, under similarity transformations, to one of the following standard forms: $A_1= \left[\matrix{ 0&\lambda_1&0&0\cr -\lambda_1&0&0&0\cr 0&0&0&\lambda_2\cr 0&0&-\lambda_2&0} \right]$ or $A_2= \left[\matrix{ 0&0&\lambda_1&0\cr 0&0&0&\lambda_2\cr -\lambda_1&0&0&0\cr 0&-\lambda_2&0&0} \right]. 
$ If we set $\lambda_1=\pm b$, $\lambda_2=-c$ in $A_1$, the matrix $A_1$ gives rise to two pure states $$|\psi_{\pm b}>=\left[\matrix{ 0&\pm {b}&0&0&\mp {b}&0&0&0&0&0&0&-{c}&0&0&{c}&0} \right]^t,$$ and hence two projectors $\rho _{\pm b}=|\psi_{\pm b}><\psi_{\pm b}|$, where $t$ denotes transposition. We define $$\rho_b=\frac{1}{2}\rho_{+b}+\frac{1}{2}\rho_{-b}.$$ If we set $\lambda_1=\pm a$, $\lambda_2=d$ in $A_2$ we have $$\rho_a=\frac{1}{2}\rho_{+a}+\frac{1}{2}\rho_{-a},$$ where $\rho_{\pm a}=|\psi_{\pm a}><\psi_{\pm a}|$, $$|\psi_{\pm a}>=\left[\matrix{ 0&0&\pm {a}&0&0&0&0& {d}&\mp {a}&0&0&0&0& -{d}&0&0} \right]^t.$$ We define $$\rho_0=\frac{1}{2}\rho_{a}+\frac{1}{2}\rho_{b}.$$ The state $\rho_0$ is not separable, as its partially transposed matrix has a negative eigenvalue. Below we mix $\rho_0$ with some separable states in such a way that the resulting state is both positive under partial transposition and entangled. Let $I_4$ be the $16\times 16$ matrix whose only non-zero entries are $(I_4)_{1,1}=(I_4)_{6,6}=(I_4)_{11,11}=(I_4)_{16,16}=1$. We consider $\rho=\frac{1-\varepsilon}{4}I_4+\varepsilon\rho_0$, which is of the form: $$\label{rho} \rho={\small{\left[\matrix{ x_1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&x_3&0&0&-x_3&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&x_2&0&0&0&0&0&-x_2&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&-x_3&0&0&x_3&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&x_1&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&x_5&0&0&0&0&0&-x_5&0&0\cr 0&0&-x_2&0&0&0&0&0&x_2&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&x_1&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&x_4&0&0&-x_4&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&-x_5&0&0&0&0&0&x_5&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&-x_4&0&0&x_4&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&x_1} \right]}},$$ where $x_1=\frac{1-\varepsilon}{4}$, $x_2=\frac{\varepsilon}{2}|a|^2$, $x_3=\frac{\varepsilon}{2}|b|^2$, $x_4=\frac{\varepsilon}{2}|c|^2$, $x_5=\frac{\varepsilon}{2}|d|^2$. 
The eigenvalues of the partial transposed matrix with respect to the second subspace, $\rho^{T_2}$, of $\rho$ are $$0,0,0,0,\frac{\varepsilon}{2}|a|^2,\frac{\varepsilon}{2}|a|^2, \frac{\varepsilon}{2}|b|^2,\frac{\varepsilon}{2}|b|^2, \frac{\varepsilon}{2}|c|^2,\frac{\varepsilon}{2}|c|^2, \frac{\varepsilon}{2}|d|^2,\frac{\varepsilon}{2}|d|^2$$ together with the roots of the following equation: $$\label{4} (\lambda-\frac{1-\varepsilon}{4})^4 -\frac{\varepsilon^2}{4}(|a|^4+|b|^4+|c|^4+|d|^4)(\lambda-\frac{1-\varepsilon}{4})^2 +\frac{\varepsilon^4}{16}(|a|^2|d|^2-|b|^2|c|^2)^2=0,$$ i.e. $\lambda=\frac{1-\varepsilon}{4}\pm\frac{\varepsilon}{4} \sqrt{2[(|a|^4+|b|^4+|c|^4+|d|^4)\pm\sqrt{\Delta_1}]}$, where $\Delta_1$ is defined via the discriminant of (\[4\]): $$\Delta=\frac{\varepsilon^4}{16} [(|a|^2-|d|^2)^2+(|b|^2+|c|^2)^2] [(|a|^2+|d|^2)^2+(|b|^2-|c|^2)^2] =\frac{\varepsilon^4}{16}\Delta_1.$$ If $$\label{cond} \frac{1-\varepsilon}{4}-\frac{\varepsilon}{4} \sqrt{2[(|a|^4+|b|^4+|c|^4+|d|^4)+\sqrt{\Delta_1}]}>0,$$ then $\rho^{T_2}$ is positive semidefinite. It is possible to satisfy the condition (\[cond\]) while keeping the state $\rho$ entangled. For simplicity we take $a=b=c=d=\frac{1}{2}$. 
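These eigenvalues can be cross-checked numerically. The sketch below (numpy assumed; all function and variable names are ours) rebuilds $\rho$ for the symmetric choice $a=b=c=d=\frac{1}{2}$ from the projectors defined above, using the trace normalization $x_1=\frac{1-\varepsilon}{4}$ of (\[rho\]), partially transposes the second factor, and locates the point where positivity of $\rho^{T_2}$ is lost:

```python
import numpy as np

a = b = c = d = 0.5          # the symmetric choice discussed in the text

def proj(v):
    """Projector |v><v| onto a real vector v."""
    v = np.array(v, dtype=float)
    return np.outer(v, v)

# the four pure states |psi_{+-b}>, |psi_{+-a}> in the basis e_i (x) e_j
psi_pb = [0,  b, 0, 0, -b, 0, 0, 0,  0, 0, 0, -c, 0,  0, c, 0]
psi_mb = [0, -b, 0, 0,  b, 0, 0, 0,  0, 0, 0, -c, 0,  0, c, 0]
psi_pa = [0, 0,  a, 0, 0, 0, 0, d, -a, 0, 0,  0, 0, -d, 0, 0]
psi_ma = [0, 0, -a, 0, 0, 0, 0, d,  a, 0, 0,  0, 0, -d, 0, 0]

rho0 = 0.25 * (proj(psi_pb) + proj(psi_mb) + proj(psi_pa) + proj(psi_ma))

I4 = np.zeros((16, 16))
I4[[0, 5, 10, 15], [0, 5, 10, 15]] = 1.0   # entries (1,1), (6,6), (11,11), (16,16)

def rho(eps):
    return (1 - eps) / 4 * I4 + eps * rho0

def min_eig_pt2(eps):
    # partial transposition of the second 4-dimensional factor
    pt2 = rho(eps).reshape(4, 4, 4, 4).transpose(0, 3, 2, 1).reshape(16, 16)
    return np.linalg.eigvalsh(pt2).min()

print(min_eig_pt2(0.5))   # ~ 0: rho^{T_2} still positive semidefinite
print(min_eig_pt2(0.6))   # about -0.05: positivity lost beyond eps = 1/2
```

For this choice of parameters the smallest root of (\[4\]) reduces to $(1-2\varepsilon)/4$, which is exactly what the numerical check reproduces.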
In this case $\rho^{T_2}$ is positive semidefinite when $0\leq\varepsilon\leq\frac{1}{2}$, and the state becomes $$\label{rho4} \rho={\scriptsize{\left[\matrix{ \displaystyle\frac{1-\varepsilon}{4}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&\displaystyle\frac{\varepsilon}{8}&0&0&-\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&-\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&-\displaystyle\frac{\varepsilon}{8}&0&0&\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&\displaystyle\frac{1-\varepsilon}{4}&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&-\displaystyle\frac{\varepsilon}{8}&0&0\cr 0&0&-\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&\displaystyle\frac{1-\varepsilon}{4}&0&0&0&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&\displaystyle\frac{\varepsilon}{8}&0&0&-\displaystyle\frac{\varepsilon}{8}&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\cr 0&0&0&0&0&0&0&-\displaystyle\frac{\varepsilon}{8}&0&0&0&0&0&\displaystyle\frac{\varepsilon}{8}&0&0\cr 0&0&0&0&0&0&0&0&0&0&0&-\displaystyle\frac{\varepsilon}{8}&0&0&\displaystyle\frac{\varepsilon}{8}&0\cr 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&\displaystyle\frac{1-\varepsilon}{4}\cr }\right]}}.$$ Recall that if a state $\rho$ acting on Hilbert space ${\cal{H}=\cal{H}}\otimes{\cal{H}}$ is separable, then there exists a set of product vectors $\{|\psi_i>\otimes|\phi_k>\}$, $\{i,k\}\in I$ ($I$ is a finite set of pairs of indices with number of pairs $M=\#I\leq N^2$) and probabilities $p_{ik}$ such that the ensemble $\{|\psi_i>\otimes|\phi_k>,p_{ik}\}$ ($\{|\psi_i>\otimes|\phi_k^*>,p_{ik}\}$) corresponds to the matrix $\rho$ ($\rho^{T_2}$), and the vectors $\{|\psi_i>\otimes|\phi_k>\}$ ($\{|\psi_i>\otimes|\phi_k^*>\}$) span the range of $\rho$ ($\rho^{T_2}$). 
In particular any vector $\{|\psi_i>\otimes|\phi_k>\}$ ($\{|\psi_i>\otimes|\phi_k^*>\}$) belongs to the range of $\rho$ ($\rho^{T_2}$), see [@9]. Now we calculate all the product (unnormalised) vectors belonging to the range of $\rho$. With the basis ordered in the following way $e_1\otimes e_1,\ e_1\otimes e_2,\ e_1\otimes e_3,\ e_1\otimes e_4, \ e_2\otimes e_1,\ e_2\otimes e_2,\ \cdots, \ e_4\otimes e_4$, any vector belonging to the range of $\rho$ can be represented as $$\label{5} \mu=\left[\matrix{ A&B&C&0&-B&D&0&E&-C&0&F&G&0&-E&-G&H }\right]^t,$$ where $A,B,C,D,E,F,G,H\in{\cal C}$. On the other hand a separable $\mu$ is of the form $$\label{qq} \mu_{sep}= \left[\matrix{b_1&b_2&b_3&b_4}\right]^t \otimes \left[\matrix{c_1&c_2&c_3&c_4}\right]^t,$$ $b_1,b_2,b_3,b_4,c_1,c_2,c_3,c_4\in{\cal C}$. Comparing (\[qq\]) with (\[5\]) we have $$\begin{aligned} &&b_1c_4=b_2c_3=b_3c_2=b_4c_1=0\label{7},\\ &&b_1c_2=-b_2c_1\label{8},\\ &&b_1c_3=-b_3c_1\label{9},\\ &&b_2c_4=-b_4c_2\label{10},\\ &&b_3c_4=-b_4c_3\label{11}.\end{aligned}$$ To find a set of basic separable vectors that span the vectors of the form (\[5\]), let us consider the following cases: I\) $b_1b_2\neq0$. Without loss of generality we set $b_1=1$ and $c_1=A,\ c_2=B$. From (\[7\]), we have $c_4=c_3=0$ and $b_3=b_4=0$. From (\[8\]) we have $b_2=-\frac{B}{A}$, and $b_2c_2=D$ by comparing (\[5\]) with (\[qq\]). Therefore $B^2=-AD$, and $B=\pm\sqrt{-AD}$. Hence we have $b_2=\mp\frac{\sqrt{-AD}}{A},\ c_2=\pm\sqrt{-AD}$. Thus we obtain the states $$\label{12} \frac{1}{A}\left[\matrix{A&-\sqrt{-AD}&0&0}\right]^t \otimes \left[\matrix{A&\sqrt{-AD}&0&0}\right]^t$$ and $$\label{13} \frac{1}{A}\left[\matrix{A&\sqrt{-AD}&0&0}\right]^t \otimes \left[\matrix{A&-\sqrt{-AD}&0&0}\right]^t.$$ II\) $b_1b_2=0$. i) $b_1\neq0,\ b_2=0$. We set $b_1=1$. If $b_3b_4\neq0$ or $b_3=0,\ b_4\neq0$, then only the null vector satisfies these conditions. 
If $b_3=b_4=0$, from (\[5\]) and (\[qq\]), we obtain $c_1=A,\ c_2=c_3=c_4=0$ and the vector is of the form $$\label{14} A\left[\matrix{1&0&0&0}\right]^t\otimes\left[\matrix{1&0&0&0}\right]^t.$$ If $b_3\neq0,\ b_4=0$, then, similarly to case I), we have the following vectors: $$\label{15} \frac{1}{A}\left[\matrix{A&0&-\sqrt{-AF}&0}\right]^t \otimes \left[\matrix{A&0&\sqrt{-AF}&0}\right]^t$$ and $$\label{16} \frac{1}{A}\left[\matrix{A&0&\sqrt{-AF}&0}\right]^t \otimes \left[\matrix{A&0&-\sqrt{-AF}&0}\right]^t.$$ ii) $b_1=0,\ b_2\neq0$. We take $b_2=1$. Similarly to the previous case, we have the following two cases: If $b_3=b_4=0$, then $c_1=c_3=c_4=0,\ c_2=D$, and the vector is $$\label{19} D\left[\matrix{0&1&0&0}\right]^t \otimes \left[\matrix{0&1&0&0}\right]^t.$$ If $b_3=0,\ b_4\neq0$ then we have $$\label{22} \frac{1}{D}\left[\matrix{0&D&0&-\sqrt{-DH}}\right]^t \otimes \left[\matrix{0&D&0&\sqrt{-DH}}\right]^t$$ and $$\label{23} \frac{1}{D}\left[\matrix{0&D&0&\sqrt{-DH}}\right]^t \otimes \left[\matrix{0&D&0&-\sqrt{-DH}}\right]^t.$$ iii) $b_1=b_2=0$. If $b_3b_4\neq0$, taking $b_3=1$, we have $$\label{24} \frac{1}{F}\left[\matrix{0&0&F&-\sqrt{-FH}}\right]^t \otimes \left[\matrix{0&0&F&\sqrt{-FH}}\right]^t$$ and $$\label{25} \frac{1}{F}\left[\matrix{0&0&F&\sqrt{-FH}}\right]^t \otimes \left[\matrix{0&0&F&-\sqrt{-FH}}\right]^t.$$ If $b_3\neq0,\ b_4=0$, taking $b_3=1$, we obtain $$\label{26} F\left[\matrix{0&0&1&0}\right]^t \otimes \left[\matrix{0&0&1&0}\right]^t.$$ If $b_3=0,\ b_4\neq0$, taking $b_4=1$, we get $$\label{27} H\left[\matrix{0&0&0&1}\right]^t \otimes \left[\matrix{0&0&0&1}\right]^t.$$ The vectors (\[12\]), (\[13\]), (\[14\]) and (\[19\]) are linearly dependent, so we can exclude (\[13\]). For the same reason, we can remove (\[16\]), (\[23\]) and (\[25\]). 
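The case analysis can be spot-checked numerically. The following sketch (numpy assumed; the scalar prefactors such as $1/A$ are dropped since they do not change spans, and $A,D,F,H$ are drawn at random as generic nonzero complex numbers) verifies that the eight retained product vectors are linearly independent, and that their partial complex conjugates fail to reach the vector $e_1\otimes e_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
A, D, F, H = rng.standard_normal(4) + 1j * rng.standard_normal(4)

s, t, u, w = (np.sqrt(x + 0j) for x in (-A * D, -A * F, -D * H, -F * H))

# the vectors (12), (14), (15), (19), (22), (24), (26), (27), without prefactors
pairs = [
    ([A, -s, 0, 0], [A, s, 0, 0]),
    ([1, 0, 0, 0], [1, 0, 0, 0]),
    ([A, 0, -t, 0], [A, 0, t, 0]),
    ([0, 1, 0, 0], [0, 1, 0, 0]),
    ([0, D, 0, -u], [0, D, 0, u]),
    ([0, 0, F, -w], [0, 0, F, w]),
    ([0, 0, 1, 0], [0, 0, 1, 0]),
    ([0, 0, 0, 1], [0, 0, 0, 1]),
]

kron = lambda b, c: np.kron(np.array(b, complex), np.array(c, complex))
sep = np.column_stack([kron(b, c) for b, c in pairs])            # separable vectors
pcc = np.column_stack([kron(b, np.conj(c)) for b, c in pairs])   # partial complex conjugates

print(np.linalg.matrix_rank(sep))         # 8: an 8-dimensional span

target = kron([1, 0, 0, 0], [0, 1, 0, 0])       # e_1 (x) e_2
coef, residual, *_ = np.linalg.lstsq(pcc, target, rcond=None)
print(residual)                            # nonzero: target lies outside the PCC span
```

The nonzero least-squares residual is the numerical counterpart of the range-criterion violation: $e_1\otimes e_2$ cannot be written as a combination of the partially conjugated product vectors.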
The remaining vectors (\[12\]), (\[14\]), (\[15\]), (\[19\]), (\[22\]), (\[24\]), (\[26\]) and (\[27\]) form a linearly independent set of separable vectors spanning the range of $\rho$. The partial complex conjugations (PCC) of these vectors, e.g. the PCC of (\[12\]), $\frac{1}{A}\left[\matrix{A&-\sqrt{-AD}&0&0}\right]^t \otimes\left[\matrix{A^*&\sqrt{-AD}^*&0&0}\right]^t$, do not span the range of $\rho^{T_2}$, as the vector $$\left[\matrix{1&0&0&0}\right]^t\otimes\left[\matrix{0&1&0&0}\right]^t,$$ which does belong to the range of $\rho^{T_2}$, does not belong to their linear span. Hence for any $0<\varepsilon\leq\frac{1}{2}$ the state $\rho$ violates the separability criterion in [@9]. Thus the states (\[rho4\]) are bound entangled. We have provided a class of inseparable states with positive partial transposition by using the range criterion. Although we have taken $a=b=c=d=\frac{1}{2}$ for simplicity, the state (\[rho\]) is in fact bound entangled as long as $\varepsilon$ is small enough that all the roots of (\[4\]) are positive. It is verified that the trace norm of the realigned matrix of (\[rho4\]) is one. Hence the realignment separability criterion [@Rudolph02] cannot detect the entanglement of this bound entangled state. Moreover, the trace norm of $\rho^{T_2}$ is also one. Therefore neither the lower bound of concurrence nor the lower bound for the entanglement of formation [@11] can detect the entanglement. [**Acknowledgments**]{} We thank the referee for pointing out a mistake in the first version. The work is partially supported by NKBRPC (2004CB318000). [99]{} M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000). D. Bouwmeester, A. Ekert and A. Zeilinger (Eds.), The Physics of Quantum Information: Quantum Cryptography, Quantum Teleportation and Quantum Computation (Springer, New York, 2000). C.H. Bennett *et al.*, Phys. Rev. Lett. **82**, 5385 (1999). P. Horodecki, M. Horodecki, and R. 
Horodecki, Phys. Rev. Lett. **82**, 1056 (1999). P. Horodecki, Phys. Lett. A **232**, 233 (1997). M. Piani, [*A class of $2^N \times 2^N$ bound entangled states revealed by non-decomposable maps*]{}, quant-ph/0411098. M. Horodecki, P. Horodecki and R. Horodecki, Phys. Rev. Lett. **80**, 5239 (1998);\ P. Horodecki, M. Horodecki and R. Horodecki, Phys. Rev. Lett. **82**, 1056 (1999). A. Peres, Phys. Rev. Lett. **77**, 1413 (1996). M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A **223**, 1 (1996). S.M. Fei and X.Q. Li-Jost, Rep. Math. Phys. **53**, 195 (2004);\ S.M. Fei, J. Jost, X.Q. Li-Jost and G.F. Wang, Phys. Lett. A **310**, 333-338 (2003). O. Rudolph, quant-ph/0202121;\ K. Chen and L.A. Wu, Quant. Inf. Comp. **3**, 193 (2003). K. Chen, S. Albeverio and S.M. Fei, Phys. Rev. Lett. **95**, 040504 (2005);\ K. Chen, S. Albeverio and S.M. Fei, Phys. Rev. Lett. **95**, 210501 (2005).
--- abstract: 'We present results of a fully non-local, compressible model of convection for A-star envelopes. This model quite naturally reproduces a variety of results from observations and numerical simulations which local models based on a mixing length do not. Our principal results, which are for models with ${\mbox{$T_{\rm eff}$}}$ between 7200 K and 8500 K, are the following: First, the photospheric velocities and filling factors are in qualitative agreement with those derived from observations of line profiles of A-type stars. Second, the HI and HeII convection zones are separated in terms of convective flux and thermal interaction, but joined in terms of the convective velocity field, in agreement with numerical simulations. In addition, we attempt to quantify the amount of overshooting in our models at the base of the HeII convection zone.' date: 'Accepted 2001 December 15.' title: 'A-star envelopes: a test of local and non-local models of convection' --- \[firstpage\] convection, stars: atmospheres, interiors Introduction ============ Over the last five decades the most frequently used approach to describe stellar convection has been the mixing length theory (MLT; Biermann 1948, Böhm-Vitense 1958). However, the great simplicity achieved by describing convection in terms of local variables is only attained at the cost of trade-offs, the most important of which is the specification of a mixing length that can be derived neither from rigorous theory nor from observations. More recently, turbulence models, e.g. by Canuto et al. (1996, hereafter the CGM model), have been used to improve the MLT expressions. These convection models still provide a local expression for the temperature gradient and contain the specification of a scale length $l$. The latter also holds for non-local versions of the MLT which were proposed to account for convective overshooting. 
However, the intrinsic non-locality of this problem has prohibited a satisfactory solution within the context of models that use any form of local scale length (see Renzini 1987 and Canuto 1993). This difficulty is naturally avoided by numerical simulations which have come into use during the last decade as a tool to study stellar surface convection. Simulations in 3D have mostly been devoted to solar convection (Nordlund & Dravins 1990, Atroshchenko & Gadun 1994, Kim & Chan 1998, Stein & Nordlund 1998), while 2D simulations have been used for more extended computations over the HR diagram (cf. Freytag 1995 and Freytag et al. 1996). Such calculations can include the entire convective part of a stellar envelope only for the case of A-stars (and some types of white dwarfs). Even then, the computational efforts become considerable, especially when realistic microphysics is used and thermally relaxed solutions are required. To use simulations for complete stellar models is thus beyond the range of present computer capabilities (cf. Kupka 2001). Another alternative was pioneered by Xiong (1978) who used the Reynolds stress approach. This approach had previously been applied in atmospheric as well as in engineering sciences. But even in its most recent version (Xiong et al. 1997) his formalism still uses a mixing length to calculate the dissipation rate $\epsilon$ of turbulent kinetic energy. Canuto (1992, 1993) and Canuto & Dubovikov (1998, hereafter CD98) abandoned the use of a mixing length in their Reynolds stress models. 
These models provide both the mean quantities of stellar structure (temperature $T$, pressure $P$, luminosity $L$, and mass $M$ or radius $r$) as well as the second order moments (SOMs) of temperature and velocity fields created by stellar convection (turbulent kinetic energy $\rho K$, temperature fluctuations $\overline{\theta^2}$, convective flux $F_{\rm C}= c_p \rho \overline{w\theta}$, vertical turbulent kinetic energy $\frac{1}{2} \rho \overline{w^2}$, and the dissipation rate $\epsilon$) as the solution of coupled, non-linear differential equations. Their models are thus fully non-local on the level of second order moments. Numerical solutions of these models for the case of idealized microphysics have been presented by Kupka (1999a) and Kupka (1999b, 2001). The same equations, using realistic microphysics, were later solved for the HI convection zone of A-stars (Kupka & Montgomery 2001; these results were first discussed in Canuto 2000). In this paper, we present solutions for complete A-star envelopes. Numerically, this problem is easier than that of convection in the Sun, since A-stars are hotter and therefore have thinner convection zones. In addition, A-stars reveal the shortcomings of local convection models more clearly, as their less efficient convection is much more sensitive to details in the modelling. Depending, for example, on whether an $\alpha$ of 0.5 or 1.5 is chosen in MLT for a main sequence star with ${\mbox{$T_{\rm eff}$}}\sim 7500$ K, an envelope may either have a mostly radiative temperature gradient or still contain a nearly adiabatic region. This holds for any of the convection models which rely on a convective scale length. Hence, the efficiency of convection in the envelopes of A-stars has remained an open problem, and makes them a logical as well as a promising starting point for our study. 
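The quoted sensitivity to $\alpha$ can be made concrete with a back-of-the-envelope sketch. In one standard textbook form of the MLT flux (e.g. Kippenhahn & Weigert; the formula and the parameter values below are purely illustrative and are not part of the non-local model discussed in this paper), the convective flux at fixed superadiabatic gradient scales as $l^2=(\alpha H_p)^2$:

```python
import math

def mlt_flux(rho, c_p, T, g, delta, H_p, alpha, grad_excess):
    """Convective flux in a common textbook form of MLT
    (Kippenhahn & Weigert); purely illustrative, not the CD98 model."""
    l = alpha * H_p                       # the mixing length
    return (rho * c_p * T * math.sqrt(g * delta) * grad_excess ** 1.5
            * l ** 2 / (4.0 * math.sqrt(2.0) * H_p ** 1.5))

# arbitrary (cgs-like) envelope values; only the ratio matters here
args = dict(rho=1e-6, c_p=2.5e8, T=1e4, g=1e4, delta=1.0, H_p=1e8,
            grad_excess=1e-3)

ratio = mlt_flux(alpha=1.5, **args) / mlt_flux(alpha=0.5, **args)
print(ratio)   # 9.0: flux scales as alpha**2 at fixed superadiabaticity
```

Tripling $\alpha$ thus changes the flux carried at a given superadiabaticity by nearly an order of magnitude, which is why the resulting stratification can swing between a mostly radiative gradient and a nearly adiabatic region.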
In the following, we give an outline of the physics and the numerical procedure used to compute our envelope models (the discussion of the moment equation formalism is self-contained, so that readers unfamiliar with it can skip ahead without difficulties). Results are then presented for a sequence of models which differ from each other only in ${\mbox{$T_{\rm eff}$}}$. We include a model with lower gravity in order to illustrate the effect of a change in $\log g$. Finally, we show that the non-local convection model agrees with the known observational constraints and the results of numerical simulations, whereas local models are fundamentally unable to do this. Description of Model {#Sect2} ==================== The convection model used here is an extension of the CD98 model which requires the solution of five differential equations of first order in time and second order in space for $K$, $\overline{\theta^2}$, $J=\overline{w\theta}=F_C/(\rho c_p)$, $\overline{w^2}$, and $\epsilon$, and of an additional equation for the time evolution of $T$ (cf. equations (1)–(5) and (8) in Kupka 1999b). This system is completed by an equation for the total pressure (“hydrostatic equilibrium” including turbulent pressure, equation (7) in Kupka 1999b) and for the mass (“conservation of mass”). We solve this set of differential equations on an unequally spaced mass grid, with the zoning chosen so as to resolve the gradients in the various quantities. Compared to the model discussed in Kupka (1999b) the following changes and extensions have been included: a) instead of using high Peclet number limits we apply the full form of the CD98 model for the SOMs. We thus take advantage of a better theoretical underpinning of the influence of radiative loss rates on two time scales in the equations for $\overline{\theta^2}$ and $J$, $\tau_{\theta}$ and $\tau_{p\theta}$, which are provided by a well-tested turbulence model (see CD98 for a summary). 
b) The Prandtl number is set to $10^{-9}$ as a typical value for the outer part of A-star envelopes (values up to 2 orders of magnitude larger than this do not alter our results). c) With the exception of the pressure correlations $\overline{p'w}$ and $\overline{p'\theta}$ which require further study (see Kupka & Muthsam 2002), the complete form of the “compressibility terms”, equations (42)–(48) of Canuto (1993), is used to extend the CD98 model to the non-Boussinesq case. Hence, we now also include the effect of a non-zero gradient in the turbulent pressure $p_{\rm turb}$ on the superadiabatic gradient $\beta$. d) We use a more advanced model for the third order moments (TOMs) published in Canuto et al. (2001), although with a different form for the fourth order moments (see Kupka 2002). If, instead, the original form for the fourth order moments is used, the models with ${\mbox{$T_{\rm eff}$}}\geqslant 8000$ K, discussed in Figure \[models\], show less efficient convection, with the opposite being true for the cooler models. In both cases, however, the results are qualitatively the same as the results we present here. As in Kupka & Montgomery (2001), we use a relation similar to equation (37f) in Canuto (1992) and thus avoid a downgradient approximation for the flux of $\epsilon$ (such as equation (6) of Kupka 1999b). e) The effect of stratification on the pressure correlation time scales, $\tau_{pv}$ and $\tau_{p\theta}$, was accounted for following Canuto et al. (1994). Likewise, the time scales $\tau_{\theta}$ and $\tau_{p\theta}$ include a correction for the optically thin regime of stellar photospheres (cf. Spiegel 1957), while for consistency, the expression for the radiative flux $F_{\rm r}$ was taken from the stellar structure code we use for our initial models and boundary conditions (see Pamyatnykh 1999, it assumes the diffusion approximation for $\tau\geqslant 2/3$, but differs from it by a “dilution factor” for optical depths $\tau < 2/3$). 
More details on these alterations and comparisons with numerical simulations are discussed in Kupka (2002) and in Kupka & Muthsam (2002). With one exception we have used the original constants of Canuto (1993), CD98, and Canuto et al. (2001). We consider their adjustment to be of little use, because in case of failure it is usually the entire shape of the functional relation which is at variance with measurements or simulations (cf. the MLT example in Sect. \[Sect4\]). The one exception we have made is the high efficiency limit of $\tau_{p\theta}$, for which the CD98 model appears to predict values too low in comparison with simulations for idealized microphysics (see Kupka 2001), and also in comparison with a previous model (Canuto 1993). Most likely this is due to an isotropy assumption in its derivation and we thus use a $\tau_{p\theta}$ increased by a factor of 3 as suggested in Kupka (2001). This problem will be thoroughly discussed in Kupka & Muthsam (2002). A numerical approach to solve the resulting system of equations was briefly described in Kupka (1999a,b); a comprehensive discussion of the code will be given in Kupka (2002). Here we only outline the solution procedure from the viewpoint of stellar structure modelling. We start from an envelope model computed with the code described in Pamyatnykh (1999), where the equation of state and opacity data are from the OPAL project (Rogers et al. 1996, Iglesias & Rogers 1996). The metallicity, ${\mbox{$T_{\rm eff}$}}$, surface $\log g$, and total stellar radius $R_{\star}$ are taken from this model and held constant during relaxation. We place some 200 mass shells from the mid photosphere (with $\tau_{\rm ross}\sim 10^{-3}$) down to well below the HeII convection zone. Having embedded the convection zones within stably stratified layers, we can use the boundary conditions of Kupka (1999b) for the SOMs (cf. Kupka 2002). 
For the mean structure quantities we keep $r$, $T$, and $P$ fixed to their values at the upper photosphere of the input model, while a constant $L$ is enforced at the bottom. The complete system is integrated in time (currently by a semi-implicit method) until a stationary, thermally relaxed state is found. The mass shells can be rezoned to a different relative size to resolve, e.g., steep temperature gradients that may appear and/or disappear during convergence. The radiative envelope below the convection zones may then be obtained from a simple downward integration. Generating a complete stellar model would require fitting such an envelope onto a stellar core, which in turn requires iterating the envelope parameters (${\mbox{$T_{\rm eff}$}}$, $\log g$, $R_{\star}$) to achieve a match of $P$, $T$, and $r$ at the core/envelope interface. Since we have not yet computed evolutionary models, we have not needed to do this, although this would be a straightforward extension of our work. Results {#Sect3} ======= Figure \[models\] shows the central results of this paper: both the HI and HeII convection zones appear quite separate when the quantity which is examined is the convective flux (a), but completely merged in terms of the convective velocity field (b). Thus, to obtain a self-consistent solution, one must solve the equations for the entire region simultaneously. From Figure \[models\]a (and Figure \[fkin\]), we see that the mid to upper photospheres of these models (the crosses indicate the point where $\tau=2/3$) are essentially radiative, as they are in the local CGM and MLT models. Thus, the temperature and density structure of both the local and non-local models are virtually identical at small optical depths, which justifies our use of the local models as an outer boundary condition for the non-local models. In Table \[params\], we list these results. 
Since the HeII and HI convection zones are well-separated in terms of $F_{\rm C}/F_{\rm T}$, we have listed their maximum fluxes separately (columns 3 and 4). For the convective velocity, $v_{\rm C}=(\overline{w^2})^{0.5}$, we have listed just a single maximum, since this quantity is large throughout the entire region (column 6); the same holds for the relative turbulent pressure (column 8). Since all of these maxima occur below the stellar surface, we have also listed the photospheric ($\tau=2/3$) values of $v_{\rm C}$ and $p_{\rm turb}/p_{\rm tot}$ (columns 7 and 9). Finally, in Figure \[fkin\] we plot the kinetic energy flux as a function of $\log T$, for the four different models from Figure \[models\]. Besides the fact that the cooler models have larger fluxes, which is to be expected, we see from the magnitudes of these fluxes that $F_{\rm kin}$ is essentially negligible for the models we have examined. We are thus in a different regime from that of the Sun, where $|F_{\rm kin}/F_{\rm T}|$ may be as large as 20 per cent (cf. Stein & Nordlund 1998, Kim & Chan 1998). In addition to these results, we have also run low- and high-metallicity models ($Z=0.006, 0.06$, respectively). We find that for the low-$Z$ models, $(v_{\rm C})_{\rm max}$ decreases by $\la 3$ per cent while $(F_{\rm C})_{\rm max}$ increases by $\la 10$ per cent, with the opposite trends for the high-$Z$ models. While these changes are not large, we note that they would be enhanced by the use of non-grey atmospheres. On the other hand, reducing $\log g$ (to a value still consistent with a main sequence object) results in much weaker convection caused by a lower density and hence smaller heat capacity of the fluid, as shown by the last model in Table \[params\], which is taken from an (MLT based) evolutionary sequence of a 2.1 $M_\odot$ star. 
Discussion {#Sect4} ==========

  $T_{\rm eff}$ (K)   $\log g$   $(F_{\rm C}/F_{\rm T})_{\rm max}$ (HeII)   $(F_{\rm C}/F_{\rm T})_{\rm max}$ (HI)   OV (in $H_p$)   $(v_{\rm C})_{\rm max}$ (km s$^{-1}$)   $(v_{\rm C})_{\tau=2/3}$ (km s$^{-1}$)   $(p_{\rm turb}/p_{\rm tot})_{\rm max}$   $(p_{\rm turb}/p_{\rm tot})_{\tau=2/3}$
  ------------------- ---------- ------------------------------------------ ---------------------------------------- --------------- --------------------------------------- ---------------------------------------- ---------------------------------------- -----------------------------------------
  8500                4.4        0.023                                      0.019                                    0.44            5.29                                    1.60                                     0.131                                    0.043
  8000                4.4        0.030                                      0.100                                    0.46            5.48                                    1.94                                     0.146                                    0.068
  7500                4.4        0.041                                      0.303                                    0.45            4.61                                    1.64                                     0.105                                    0.053
  7200                4.4        0.051                                      0.612                                    0.46            4.36                                    1.85                                     0.100                                    0.069
  6980                3.53       0.038                                      0.164                                    0.52            5.33                                    1.40                                     0.130                                    0.042

\[params\]

The fact that $F_{\rm kin}$ is positive in the photosphere for each of these models (Figure \[fkin\]) means that the skewness of spectral lines produced in this region is also positive, and that the corresponding filling factor for rising versus falling fluid elements is less than 1/2 (cf. CD98). This is in agreement with the observations of line profiles in A-stars (Landstreet 1998). In the future, quantitative comparisons with such observational data will provide some of the most stringent tests of this model. As previously mentioned, the HI and HeII convection zones may be thought of as being thermally disconnected but dynamically coupled, a situation which is impossible within the context of MLT (or CGM) models. A further shortcoming of MLT is shown in Figure \[mltcomp\]. The convective flux of two MLT models, with mixing lengths of $l = 0.36$ and $0.42 H_p$, respectively, is plotted along with the flux from the non-local solution (upper panel). First, we see that it is impossible for the MLT models to match simultaneously the flux in both the HI and HeII convection zones (at least with the same mixing length). 
Second, even if we try to model only the HI convection zone, fixing the mixing length so as to match the maximum flux results in a convection zone which is much too narrow. In addition, this produces photospheric velocities which are $\sim 3$ orders of magnitude smaller than those of the non-local model and the observations (lower panel, Figure \[mltcomp\], see also Sect. \[Sect5\]). We note that since the upper photosphere is optically thin and therefore locally stable against convection, local convection models will always predict convective fluxes which are extremely small (or zero), even for values of $\alpha$ which are “unreasonably” large. As a further test of our results we have compared them with 2D simulations by Freytag (1995), Freytag et al. (1996), and additional models provided by Freytag (2001, private communication). We find agreement with the following results from our calculations: a) Models over the entire range of A-type main sequence stars with ${\mbox{$T_{\rm eff}$}}$ up to 8500 K have their HI and HeII convection zones dynamically connected. The vertical mean velocities in the overshoot regions around $\log T \sim 4.4$ are of order 1.5 to 3 km s$^{-1}$. b) There is considerable overshooting (OV) below the HeII convection zone. However, the 2D simulations yield a size of the OV region which is 3 times larger in terms of radius and also much larger in terms of $H_p$, for the entire range of models in Figure \[models\] and Table \[params\]. Such differences are anticipated from a comparison of numerical simulations in 2D and 3D (Muthsam et al. 1995, see also Fig. 1 in Kupka 2001). More detailed examples demonstrating that 2D simulations yield upper limits for the (3D) OV extent will be given in Kupka & Muthsam (2002). c) The maximum of $F_{\rm C}$ and the temperature gradient in the HI convection zone for the models with ${\mbox{$T_{\rm eff}$}}\geqslant 8000$ K are in good agreement with those of the 2D simulations. 
However, for the models with lower , the 2D simulations yield higher convective fluxes and lower temperature gradients and, hence, the two convection zones merge thermally at a  which is $\sim 200$ K to $\sim 300$ K higher than in our non-local models. Apart from the differences between 2D and 3D convection, one important reason for discrepancies is the effect of ionization (cf. also Kupka 2002). Briefly summarized, the current convection model assumes an ideal gas equation of state for the purpose of computing the ensemble averages in the expression for the convective flux. Using an improved, although approximate, expression for the convective (enthalpy) flux, we estimate that this assumption introduces errors of order 15–20 per cent in the convective flux. Finally, a potentially significant source of discrepancies between our models and the 2D simulations is the use of a different equation of state and opacities (OPAL, Rogers et al. 1996 vs. ATLAS6, Kurucz 1979) and the non-diffusive law we use for the photospheric radiative flux (see Sect. \[Sect2\]). This does not allow us to make a detailed quantitative comparison of model sequences. Thus, we have had to restrict ourselves to only a qualitative discussion. Conclusions {#Sect5} =========== Using a fully non-local, compressible convection model together with a realistic equation of state and opacities, we have calculated envelope models for stellar parameters appropriate for A-stars. In examining the results of this model, we have found many points of agreement both with observations and with numerical simulations. First, our photospheric velocities are consistent with the lower limit of the typical micro- and macroturbulence parameters found for A-stars (1.5–2 km s$^{-1}$, see Varenne & Monier 1999 and Landstreet 1998). Line blanketing should further increase these values. 
We expect a smoother $v_{\rm C}(r)$ (without small minima as in Figures \[models\]b and \[mltcomp\]) from an improved treatment of fourth order moments and inclusion of $\overline{p'w}$ (cf. Sect. \[Sect2\]). Second, we find that the filling factor for rising fluid elements in the photospheres of our models is less than 1/2, also in agreement with observations of line profiles in A-stars. Third, we find in the temperature range 7200 K to 8500 K that the  and  zones are well-separated in terms of the convective flux but [*not*]{} in terms of the convective velocity field. The two zones are thus in some sense thermally separated but dynamically joined. This feature is also shown by the numerical simulations. Finally, we find an OV at the base of the  convection zone of $\sim 0.45 H_p$. The numerical simulations find an even larger OV, but this may also be due to the fact that they were done in 2D. We note that in all cases we find a nearly radiative temperature gradient in the OV region, whereas the velocities in this region remain quite large, within an order of magnitude of their maxima within the convection zone ($\sim 0.5$ km s$^{-1}$). In addition, the non-local model yields smaller temperature gradients than the local model of Canuto et al. (CGM, 1996). Such a comparison with MLT is more difficult due to the large range of $\alpha$ in current use. Nevertheless, we have found evidence that for main sequence models $\alpha$ has to be decreased from values of $\sim 1.0$ at about 7100 K to $\sim 0.4$ for models with ${\mbox{$T_{\rm eff}$}}\ = 8000$ K in order to obtain a comparable value of $(F_{\rm C})_{\rm max}$ in the  convection zone. In order to match $(F_{\rm C})_{\rm max}$ in the  convection zone, a completely different set of $\alpha$’s (with larger values) would be required. 
As already mentioned, A-stars are excellent choices for this first calculation since they have relatively thin surface convection zones, so that the thermal time scales involved are not so long. In addition, they are interesting stars in their own right, containing high-metallicity stars (the Am stars) as well as two groups of pulsating stars (the roAp and $\delta$ Scuti stars). In the future, it may be possible to use the pulsating stars as probes of the subsurface convection zones, much as has been done in the case of the Sun.

Acknowledgments {#acknowledgments .unnumbered}
===============

This research was performed within project [*P13936-TEC*]{} of the Austrian Fonds zur Förderung der wissenschaftlichen Forschung (FWF), and was supported by the UK Particle Physics and Astronomy Research Council. We thank Dr. B. Freytag for providing us with results from his simulations.

References {#references .unnumbered}
==========

Atroshchenko I.N., Gadun A.S., 1994, A&A, 291, 635
Biermann L., 1948, Z. Astrophys., 25, 135
Böhm-Vitense E., 1958, Z. Astrophys., 46, 108
Canuto V.M., 1992, ApJ, 392, 218
Canuto V.M., 1993, ApJ, 416, 331
Canuto V.M., 2000, 24th meeting of the IAU, Joint Discussion 5, August 2000, Manchester, England
Canuto V.M., Minotti F., Ronchi C., Ypma R.M., Zeman O., 1994, J. Atm. Sci., 51 (No. 12), 1605
Canuto V.M., Goldman I., Mazzitelli I., 1996, ApJ, 473, 550
Canuto V.M., Dubovikov M.S., 1998, ApJ, 493, 834 (CD98)
Canuto V.M., Cheng Y., Howard A., 2001, J. Atm. Sci., 58, 1169
Freytag B., 1995, PhD thesis, University of Kiel
Freytag B., Ludwig H.-G., Steffen M., 1996, A&A, 313, 497
Iglesias C.A., Rogers F.J., 1996, ApJ, 464, 943
Kim Y.-C., Chan K.L., 1998, ApJ, 496, L121
Kupka F., 1999a, in Theory and Tests of Convection in Stellar Structure, eds A. Gimenez, E.F. Guinan, B. Montesinos, ASP Conf. Ser. 173, 157
Kupka F., 1999b, ApJ, 526, L45
Kupka F., 2001, in Proceedings of the COROT/SWG Sept. 2000 meeting, ed. E. Michel, Paris
Kupka F., 2002, ApJ, to be submitted (Paper I+III)
Kupka F., Montgomery M.H., 2001, in Proceedings of the COROT/SWG Sept. 2000 meeting, ed. E. Michel, Paris
Kupka F., Muthsam H.J., 2002, ApJ, to be submitted (Paper II)
Kurucz R.L., 1979, ApJS, 40, 1
Landstreet J.D., 1998, A&A, 338, 1041
Muthsam H.J., Göb W., Kupka F., Liebich W., Zöchling J., 1995, A&A, 293, 127
Nordlund Å., Dravins D., 1990, A&A, 228, 155
Pamyatnykh A.A., 1999, Acta Astronomica, 49, 119
Renzini A., 1987, A&A, 188, 49
Rogers F.J., Swenson F.J., Iglesias C.A., 1996, ApJ, 456, 902
Spiegel E.A., 1957, ApJ, 126, 202
Stein R.F., Nordlund Å., 1998, ApJ, 499, 914
Varenne O., Monier R., 1999, A&A, 351, 247
Xiong D.R., 1978, Chin. Astron., 2, 118
Xiong D.R., Cheng Q.L., Deng L., 1997, ApJS, 108, 529
---
abstract: 'We present a mapping which associates pure $N$-qubit states with a polynomial. The roots of the polynomial characterize the state completely. Using the properties of the polynomial we construct a way to determine the separability and the number of unentangled qubits of pure $N$-qubit states.'
author:
- 'H. Mäkelä and A. Messina'
title: 'Polynomial method to study the entanglement of pure $N$-qubit states'
---

Introduction
============

Considerable effort is spent in developing methods for the detection and classification of entangled states. One important aim is to find ways to detect the separability of mixed states consisting of an arbitrary number of subsystems. While a general, easily computable method to detect the separability of arbitrary mixed multipartite states is still lacking, some partial results exist. Perhaps the most famous separability condition for mixed states is positive partial transposition, also known as the Peres-Horodecki criterion [@Peres96; @Horodecki96]. This method is simple and easy to apply, but it can be used to detect only bipartite separability. Therefore various separability conditions which work in an $N$-party setting have been developed. Examples of these are permutation criteria, where the indices of the density matrix are permuted [@Horodecki06], the use of quadratic Bell-type inequalities [@Seevinck08], algorithmic approaches [@Doherty05], and the use of positive maps [@Horodecki01]. For a more comprehensive list, see [@Guhne09; @Horodecki09]. In the case of pure states the situation is simpler. A pure $N$-partite state is separable if and only if all the reduced density matrices of the elementary subsystems describe pure states. Alternatively, in the bipartite case, separability can be determined by calculating the Schmidt decomposition of the state. Unfortunately, the concept of the Schmidt decomposition cannot be straightforwardly generalized to the case of $N$ separate subsystems [@Peres95; @Thapliyal99].
In addition to these two well-known methods, various other approaches to pure state separability have been discussed. A separability condition based on comparing the amplitudes and phases of the components of the state has been discussed in [@Jorrand03; @Matsueda07]. It has been shown that the separability of pure three-qubit states can be detected by studying two-qubit density operators [@Brassard01] and expectation values of spin operators [@Yu05; @Yu07a]. Separability tests based on studying matrices constructed from the components of the state vector, known as coefficient matrices, have gained attention recently [@Lamata06; @Li08; @Huang09]. In this article we present a mapping which associates the pure states of an $N$-qubit system with a polynomial. The roots of the polynomial determine the state completely and vice versa. We show that this polynomial establishes a simple way to test the separability of pure $N$-qubit states and to study the number of unentangled particles. The idea of associating a state of a quantum mechanical system with a polynomial is not new. Already in 1932 E. Majorana presented a polynomial, nowadays known as the Majorana polynomial, which he used to show that the states of a spin-$S$ particle can be expressed as a superposition of symmetrized states of $2S$ spin-$\frac{1}{2}$ systems [@Majorana32; @Bloch45]. This decomposition, the Majorana representation, remained relatively unknown for a long time. However, it has recently found applications in many different fields, such as in studying the symmetries of spinor Bose-Einstein condensates [@Barnett06; @Barnett07; @Makela07; @Barnett09], in the context of reference frame alignment [@Kolenderski08], in helping to define anticoherent spin states [@Zimba06], and in calculating the spectrum of the Lipkin-Meshkov-Glick model [@Ribeiro07; @Ribeiro08]. It has also been used to give a graphical representation of the states of an $n$-level system [@Bijurkar06].
The states of an $N$-qubit quantum register can be viewed as the spin states of a particle with spin $S=(2^N-1)/2$. Therefore, expressing the pure states of an $N$-qubit system using the Majorana representation requires the use of $2^N-1$ spin-$\frac{1}{2}$ systems. In the approach we present in this article only $N$ two-level systems are needed to characterize the states of this system. The Majorana representation is useful in studying the behavior of spin states under spin rotations, as a spin rotation of a spin-$S$ particle is equivalent to rotating the states of the constituent spin-$\frac{1}{2}$ particles [@Bloch45]. However, when discussing the states of an $N$-qubit quantum register, this property is not very helpful, and therefore the benefits of the Majorana representation cannot be fully exploited. In this case the simplified description presented in this article becomes useful. This article is organized as follows. In Sec. II we introduce a mapping between the pure states of an $N$-qubit quantum register and polynomials. We argue that the roots of a polynomial determine a unique state and vice versa. In Sec. III we calculate the polynomial of separable pure states and derive a necessary and sufficient condition for the separability of an arbitrary pure $N$-qubit state. We also briefly discuss the generalization of the polynomial approach to systems containing $N$ copies of an $h$-level system. In Sec. IV we show how the polynomial can be used to study the number of unentangled qubits. In Sec. V we present the conclusions.

Characteristic polynomial
=========================

We denote the basis of qubit $j$ by $\{|0\rangle_j,|1\rangle_j\}$, so the basis vectors of an $N$-qubit quantum register can be chosen as $|i_0i_1\cdots i_{N-1}\rangle\equiv|i_0\rangle_0\otimes|i_1\rangle_1\otimes\cdots\otimes |i_{N-1}\rangle_{N-1}$, where every $i_j\in \{0,1\}$.
Each natural number $0\leq i\leq 2^N-1$ can be written using binary notation as $i=\sum_{j=0}^{N-1} i_j 2^j$, where $i_j\in \{0,1\}$. Using this we can associate the basis vector $|i_0i_1\cdots i_{N-1}\rangle$ with $|i\rangle_d$. Here the subscript $d$ shows that decimal notation is used to label the basis states. Let $$\label{somestate} \phi=\sum_{i=0}^{2^N-1} \, C_i|i\rangle_d, \quad C_i\in \mathbb{C},$$ be some, possibly unnormalized, state vector of an $N$-qubit system. We associate this vector with the polynomial $$\label{poly} P(\phi;x)\equiv \sum_{i=0}^{2^N-1} C_i x^i,$$ which we call the characteristic polynomial of $\phi$. By the fundamental theorem of algebra, this polynomial can be written in a unique way as $$\label{P} P(\phi;x)=C_k \prod_{j=0}^{k-1}(x-x_j),$$ where $\{x_j\,|\,j=0,1,\ldots,k-1\}$ are the roots and $k$ is the degree of $P(\phi;x)$. If $k=0$ we define $\prod_{j=0}^{-1}(x-x_j)=1$. The set of vectors $\{c\,\phi \,|\, c\in\mathbb{C}, c\not=0\}$ determines a unique set of roots and each set of roots $\{x_0,x_1,\ldots ,x_{k-1}\}$ determines the vector $\phi$ up to normalization and phase. Therefore we have a bijective map between the pure states of an $N$-qubit quantum register and the roots of complex polynomials of degree $k\leq 2^{N}-1$ [^1]. Explicitly, the components of $\phi$ are determined by the roots through the formula $$\label{components} C_i=(-1)^{k-i}\sum_{j_0<j_1<j_2<\cdots <j_{k-1-i}} \!\!\!\!\!\!\!\!\!\!\!\!x_{j_0}x_{j_1}x_{j_2}\cdots x_{j_{k-1-i}},$$ where $i=0,1,2,\ldots,k-1$ and we have chosen $C_k=1$. The roots contain the same amount of information on the system as the state vector $\phi$. In particular, all the entanglement properties of $\phi$ are encoded in the set of roots corresponding to $\phi$. With the help of the roots the state $\phi$ can be given a geometrical representation as $2^N-1$ points on the Bloch sphere, see Ref. [@Makela09]. 
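As a concrete illustration (our own, not part of the original derivation), the correspondence between a state vector and the roots of its characteristic polynomial can be checked numerically. A minimal Python/NumPy sketch, with function names of our choosing:

```python
import numpy as np

def roots_of_state(C):
    """Roots of the characteristic polynomial P(phi; x) = sum_i C_i x^i.
    Trailing zero components are dropped so the degree k is correct;
    numpy.roots expects the highest-order coefficient first."""
    C = np.trim_zeros(np.asarray(C, dtype=complex), trim='b')
    return np.roots(C[::-1])

def state_from_roots(roots):
    """Recover the components C_0..C_k (with the choice C_k = 1) from the
    roots via Vieta's formulas, i.e. Eq. (\ref{components})."""
    return np.poly(roots)[::-1]  # np.poly builds the monic polynomial, highest power first

# Two-qubit example: phi = |00> + 2|10>, i.e. C = (1, 2, 0, 0) in decimal
# labelling, has P(phi; x) = 1 + 2x with the single root x_0 = -1/2.
r = roots_of_state([1, 2, 0, 0])   # array([-0.5])
C = state_from_roots(r)            # (0.5, 1.0), proportional to (1, 2)
```

Round-tripping through `state_from_roots` recovers the state only up to normalization and phase, exactly as stated above.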
Separable pure $N$-qubit states
===============================

In this section we show how the separability of $\phi$ can be detected with the help of $P(\phi;x)$. In order to do so, we first calculate the characteristic polynomial of a separable state. Any separable pure state $\phi_{\textrm{s}}$ can be written as $$\begin{aligned} \label{Productstate} \nonumber \phi_{\textrm{s}}&=\bigotimes_{j=0}^{N-1}\phi_j\\ &=\bigotimes_{j=0}^{N-1}(a_j |0\rangle_j +b_j |1\rangle_j),\quad a_j,b_j\in\mathbb{C}.\end{aligned}$$ Assume that $|l\rangle_d$ is a basis state of an $L$-qubit system and that $|m\rangle_d$ is that of an independent $M$-qubit system. Using the binary expressions for $l$ and $m$ it is easy to see that $$\label{tworegs} |l\rangle_d |m\rangle_d =|l+2^L m\rangle_d$$ holds for the tensor product of $|l\rangle_d$ and $|m\rangle_d$. Here and in what follows we omit the tensor product symbol. Let $\xi^L$ and $\xi^M$ be states of $L$-qubit and $M$-qubit quantum registers, respectively. Then we can write $\xi^L=\sum_{i=0}^{2^L-1}\xi^L_i |i\rangle_d$ and $\xi^M=\sum_{i'=0}^{2^M-1}\xi_{i'}^M |i'\rangle_d$. If $\phi\in (\mathbb{C}^2)^{L+M}$ can be written as $\phi=\xi^L\xi^M$, then $$\begin{aligned} \phi &=\sum_{i=0}^{2^L-1}\sum_{i'=0}^{2^M-1}\xi_i^L \xi_{i'}^M|i\rangle_d |i'\rangle_d\\ &= \sum_{i=0}^{2^L-1}\sum_{i'=0}^{2^M-1}\xi_i^L \xi_{i'}^M|i+2^L i'\rangle_d, \end{aligned}$$ where we have used Eq. (\[tworegs\]). Consequently, the characteristic polynomial of $\phi$ becomes $$\begin{aligned} \nonumber P(\phi;x)&=\sum_{i=0}^{2^L-1}\sum_{i'=0}^{2^M-1}\xi^L_i \xi_{i'}^M x^{i+2^L i'}\\ \nonumber &=\Big(\sum_{i=0}^{2^L-1}\xi_i^L x^i\Big) \sum_{i'=0}^{2^M-1}\xi_{i'}^M (x^{2^L})^{i'}\\ &= P(\xi^L;x)P(\xi^M;x^{2^L}).\label{ProdPoly}\end{aligned}$$ Therefore, if the state of the quantum register is the product of an $L$-qubit state and an $M$-qubit state, the characteristic polynomial factorizes as the product of the polynomials of the two states.
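The factorization (\[ProdPoly\]) is easy to check numerically. In the sketch below (our own Python/NumPy illustration, not part of the paper), the decimal ordering $|l\rangle_d |m\rangle_d =|l+2^L m\rangle_d$ corresponds to `np.kron(xi_M, xi_L)` in NumPy's Kronecker convention; the random seed is arbitrary:

```python
import numpy as np

def poly_eval(C, x):
    """Evaluate P(phi; x) = sum_i C_i x^i from the components C_i."""
    return sum(c * x**i for i, c in enumerate(C))

rng = np.random.default_rng(1)
L, M = 2, 2
xi_L = rng.standard_normal(2**L) + 1j * rng.standard_normal(2**L)
xi_M = rng.standard_normal(2**M) + 1j * rng.standard_normal(2**M)

# |l>_d |m>_d = |l + 2^L m>_d, so the component with index i + 2^L i' of the
# joint state is xi_L[i] * xi_M[i']; in NumPy's ordering this is kron(xi_M, xi_L).
phi = np.kron(xi_M, xi_L)

x = 0.3 - 0.8j
lhs = poly_eval(phi, x)
rhs = poly_eval(xi_L, x) * poly_eval(xi_M, x**(2**L))   # Eq. (ProdPoly)
```

Here `lhs` and `rhs` agree to machine precision for any evaluation point $x$.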
In the polynomial of the $M$-qubit state the variable $x$ is replaced by $x^{2^L}$. Using Eq. (\[ProdPoly\]) it is easy to calculate the characteristic polynomial $P(\phi_{\textrm{s}};x)$ of a separable state $\phi_{\textrm{s}}\equiv\phi_0\phi_1\cdots\phi_{N-1}$ given by Eq. (\[Productstate\]). By defining $\phi_{j;N}\equiv \phi_j\phi_{j+1}\cdots\phi_{N-1}$, so that $\phi_{j;N}=\phi_j\phi_{j+1;N}$, and using Eq. (\[ProdPoly\]) repeatedly we get $$\begin{aligned} \nonumber \label{Poly} P(\phi_{\textrm{s}};x) &=P(\phi_0;x)P(\phi_{1;N};x^2) \\ \nonumber &=P(\phi_0;x)P(\phi_{1};x^2 )P(\phi_{2;N};x^4 )\\ \nonumber &=P(\phi_0;x)P(\phi_{1};x^2)P(\phi_{2};x^4)P(\phi_{3;N};x^8)\\ \nonumber &=\cdots \\ \nonumber &=\prod_{j=0}^{N-1} P(\phi_j;x^{2^{j}})\\ &=\prod_{j=0}^{N-1} (a_j+b_j x^{2^{j}}).\end{aligned}$$ We see that the characteristic polynomial of a separable state can always be written in the form of Eq. (\[Poly\]). On the other hand, there always exists a separable state whose characteristic polynomial is given by Eq. (\[Poly\]), namely the state $\phi_{\textrm{s}}$. From the definition of $P(\phi;x)$ it follows that if $P(\phi;x)=P(\tilde{\phi};x)$, then necessarily $\phi=\tilde{\phi}$. Therefore $\phi_{\textrm{s}}$ is the unique vector which gives rise to the polynomial of Eq. (\[Poly\]). In conclusion, a pure $N$-qubit state $\phi$ is separable if and only if $P(\phi;x)$ can be written as in Eq. (\[Poly\]). The roots of this polynomial are $$\label{roots} x_{jm}=\left(-\frac{a_j}{b_j}\right)^{1/2^j}e^{i\frac{2\pi m}{2^j}},\quad m=0,1,\ldots ,2^j-1,$$ where $b_j$ has to be nonzero. If $b_j$ is zero the degree of the polynomial is decreased by $2^j$ from the maximal degree $2^N-1$. The separability of a state $\phi$ can be determined by calculating the roots of $P(\phi;x)$ and checking whether they are of the form given by Eq. (\[roots\]). These calculations can in practice turn out to be very complicated.
It may be computationally demanding to achieve accurate enough results to reliably see how the roots are distributed in the complex plane. This is partly related to the fact that the degree of the polynomial $P(\phi;x)$ can be $2^N-1$, which grows rapidly with $N$, rendering the calculation of the roots time-consuming for large $N$. However, we will show next that the roots of $P(\phi;x)$ can be expressed in a simple way in terms of the components $\{C_i\}$ of the state vector if $\phi$ is separable. Let $\phi_{\textrm{s}}$ be the separable state given by Eq. (\[Productstate\]). When this vector is written in the form $\phi_{\textrm{s}}=\sum_{i=0}^{2^N-1} \, C_i|i\rangle_d$, the components $C_i$ are easily obtained by noting that $i_j=0$ ($i_j=1$) corresponds to $a_j$ ($b_j$): $$\label{ck} C_i=\prod_{j=0}^{N-1}[(1-i_j)a_j+i_j b_j].$$ Here we have used the binary form of $i$, that is, we have written $i=\sum_{j=0}^{N-1}i_j 2^j$. We assume that $C_k\not=0$ and $C_{k+1}=\cdots=C_{2^N-1}=0$, so that the degree of $P(\phi_{\textrm{s}};x)$ is $k$. By writing $k=\sum_{j=0}^{N-1} k_j 2^j$ we see that if $k_j=1$, then $(k-2^{j})_{l}=k_l-\delta_{jl}$, $l=0,1,\ldots ,N-1$. Using this and Eq. (\[ck\]) it is easy to see that now $a_j/b_j=C_{k-2^j}/C_k$. On the other hand, if $k_j=0$, then $(k+2^j)_l=k_l+\delta_{jl}$ and Eq. (\[ck\]) gives $b_j/a_j=C_{k+2^j}/C_k=0$. Summarizing, $$\label{ratios} \begin{array}{llll} \dfrac{a_j}{b_j}&=&\dfrac{C_{k-2^j}}{C_k} &\qquad\textrm{if } k_j=1,\\ b_j&=& 0 &\qquad\textrm{if } k_j=0. \end{array}$$ Using Eq. (\[roots\]) we immediately see that the $k$ roots of $P(\phi_{\textrm{s}};x)$ are $$\label{xlm} x_{jm}=\left(-\dfrac{C_{k-2^j}}{C_k}\right)^{1/2^j}e^{i\frac{2\pi m}{2^j}},\quad m=0,1,\ldots ,2^j-1,$$ where $j$ takes those values for which $k_j =1$. On the other hand, if the roots and their multiplicities are known, the polynomial can be determined up to a multiplying constant.
In particular, if $x=0$ is a root, then its multiplicity has to be equal to the lowest power of the polynomial. In conclusion, an arbitrary pure state $\phi$ is separable if and only if $$\begin{array}{lll} &\textrm{(Ia) }& \textrm{All the numbers } x_{jm} \textrm{ given by Eq. (\ref{xlm}) are}\\&&\textrm{roots of } P(\phi;x).\\ &\textrm{(Ib) }& \textrm{The number of } x_{jm} \textrm{ equaling zero is equal to}\\&&\textrm{the lowest power of } P(\phi;x). \nonumber \end{array}$$ An alternative formulation is that $\phi$ is separable if and only if the quantity $$S(\phi)\equiv\sum_{j,\, k_j=1}\sum_{m=0}^{2^j-1}|P(\phi;x_{jm})|$$ equals zero and Condition (Ib) holds. Note that if $k=0$ the state is separable. If a state is found to be separable, then the one-particle states it consists of can be explicitly constructed with the help of the ratios $a_j/b_j$ given by Eq. (\[ratios\]). We now present some examples of the detection of separability of states with several freely varying components.

Example 1
---------

As the first example we consider a state defined as $$\xi^N=C_0|0\rangle_d+C_1|1\rangle_d+\cdots +C_{k-2}|k-2\rangle_d+C_k|k\rangle_d,$$ where $C_0,C_k\not=0$ and $k$ is odd. Since $k$ is odd, $k_0=1$ and Eq. (\[xlm\]) shows that $x_{00}=-C_{k-1}/C_k=0$. Because $P(\xi^N;x_{00})=C_0\not=0$, $\xi^N$ cannot be a separable state. In a three-qubit case we see that, for example, $$\begin{aligned} \nonumber \xi^3 &= C_0|000\rangle+C_1|100\rangle+C_2|010\rangle+C_3|110\rangle\\ &+C_4|001\rangle+C_5|101\rangle+C_7|111\rangle\end{aligned}$$ where $C_0 C_7\not =0$ cannot be separable. In order to compare our approach with other separability tests, we now check the separability of $\xi^3$ using an alternative method. There exist various (partial) multipartite separability criteria for mixed states (see, for example, [@Horodecki06; @Seevinck08; @Doherty05; @Horodecki01]).
While these are useful when mixed states are studied, in the case of pure states the most convenient separability check is usually the standard method of calculating the reduced single-qubit density matrices of the $N$-qubit state. This view is supported by the fact that alternative pure state separability tests require examining the properties of matrices that are higher dimensional than the two-by-two reduced single-qubit density matrices [@Brassard01; @Huang09] or require the calculation of the expectation values of operators expressed as tensor products of the Pauli spin matrices [@Yu05; @Yu07a]. This results in a complex calculation if a state containing many freely varying components, such as $\xi^3$, is studied. For these reasons we now examine the separability of $\xi^3$ using the method of partial traces. Here and in what follows we denote the reduced single-qubit density matrix pertaining to qubit $j$ by $\rho_j$; the indexing of qubits runs from $0$ to $N-1$. The vector $\xi^3$ is separable if and only if any two of the three density matrices $\rho_0,\rho_1$, and $\rho_2$ describe pure states. The state $\rho_j$ is pure if and only if $\det(\rho_j)=0$, so if $\det(\rho_j)\not=0$ for at least one $j$, then $\xi^3$ is entangled. As an example we determine $\det(\rho_0)$. A simple calculation shows that the single-qubit reduced density matrix of the first qubit is $$\footnotesize \rho_0=\left(\begin{array}{cc} |C_0|^2+|C_2|^2+|C_4|^2 & C_0 C_1^*+C_2 C_3^*+C_4C_5^*\\ C_0^* C_1+C_2^* C_3+C_4^*C_5 & |C_1|^2+|C_3|^2+|C_5|^2+|C_7|^2 \end{array}\right).$$ Using the inequality $\textrm{Re}(C)\leq |C|$, where $C$ is an arbitrary complex number, it can be shown that the following inequality holds for the determinant of $\rho_0$: $$\begin{aligned} \footnotesize \det(\rho_0) \nonumber &\geq |C_7|^2(|C_0|^2+|C_2|^2+|C_4|^2)+ (|C_0C_3|-|C_1C_2|)^2\\ &+ (|C_0C_5|-|C_1C_4|)^2+(|C_2C_5|-|C_3C_4|)^2.
\end{aligned}$$ This is bounded below by $|C_0C_7|^2>0$, confirming the aforementioned result concerning the separability of $\xi^3$. Therefore a necessary condition for the separability of $\xi^3$ can be straightforwardly obtained using partial traces. However, the polynomial method provides a simpler separability test in the present example. Even more so if instead of $\xi^3$ the separability of the $N$-qubit state $\xi^N$ is studied. Example 2 --------- In the second example we choose $\xi^N$ such that the degree of $P(\xi^N;x)$ is $k=2^N-2$. Then $k_j=1-\delta_{0j}$. We assume that $C_{2^{N-1}-2}(=C_{k-2^{N-1}})=0$, from which it follows that $x_{(N-1)m}=0$ for $m=0,1,\ldots ,2^{N-1}-1$. According to Condition (Ib) the lowest order of the polynomial has to be at least $2^{N-1}$ for the state to be a product state. Thus, if $C_i\not=0$ for at least one $i$ such that $0\leq i <2^{N-1}$, $i\not= 2^{N-1}-2$, then $\xi^N$ must be entangled. In the case of a three-qubit system this result means that $$\begin{aligned} \nonumber \xi^3 &=&C_0|000\rangle+C_1|100\rangle+C_3|110\rangle+C_4|001\rangle\\ &&+C_5|101\rangle+C_6|011\rangle, \quad C_6\not =0,\end{aligned}$$ cannot be a product state if $C_0,C_1,$ or $C_3$ is nonzero. If $C_4=0$ we have $x_{10}=x_{11}=0$, which means that all $x_{jm}$ are equal to zero. Then $\xi^3$ cannot be separable unless all $C_i$ except $C_6$ are zero. The reduced single-qubit density matrices $\rho_0,\rho_1$, and $\rho_2$ can be straightforwardly calculated and are not presented here. The determinant of $\rho_2$ is $$\begin{aligned} \nonumber \det(\rho_2)&=(|C_0|^2+|C_1|^2+|C_3|^2)(|C_4|^2+|C_5|^2+|C_6|^2)\\ &-|C_0 C_4^*+C_1C_5^*|^2\\ \nonumber &\geq |C_6|^2(|C_0|^2+|C_1|^2+|C_3|^2) \\ \label{ex2} &+|C_3|^2(|C_4|^2+|C_5|^2) +(|C_0 C_5|-|C_1 C_4|)^2, \end{aligned}$$ where we have obtained a lower bound for the determinant in the same fashion as in the previous example. 
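Conditions (Ia) and (Ib) are straightforward to automate. The following Python sketch (our own illustration; the function names are not from the paper) implements the test directly from the components $C_i$, using the candidate roots of Eq. (\[xlm\]):

```python
import numpy as np

def poly_eval(C, x):
    """Evaluate P(phi; x) = sum_i C_i x^i."""
    return sum(c * x**i for i, c in enumerate(C))

def is_separable(C, tol=1e-9):
    """Conditions (Ia) and (Ib): all candidate roots x_{jm} of Eq. (xlm)
    must be roots of P(phi; x), and the number of vanishing x_{jm} must
    equal the lowest power appearing in P(phi; x)."""
    C = np.trim_zeros(np.asarray(C, dtype=complex), trim='b')
    k = len(C) - 1                    # degree of P(phi; x)
    if k == 0:
        return True                   # constant polynomial: separable
    cand = []
    for j in range(k.bit_length()):
        if (k >> j) & 1:              # binary digit k_j = 1
            base = (-C[k - 2**j] / C[k]) ** (1 / 2**j)
            cand += [base * np.exp(2j * np.pi * m / 2**j) for m in range(2**j)]
    S = sum(abs(poly_eval(C, x)) for x in cand)        # condition (Ia)
    lowest = next(i for i, c in enumerate(C) if abs(c) > tol)
    n_zero = sum(abs(x) < tol for x in cand)           # condition (Ib)
    return S < tol and n_zero == lowest

assert is_separable([1, 3, 2, 6])                # (|0> + 3|1>) (x) (|0> + 2|1>)
assert not is_separable([1, 0, 0, 0, 0, 0, 0, 1])  # Example 1 with C_0 = C_7 = 1
assert not is_separable([1, 1, 0, 1, 1, 1, 1, 0])  # Example 2 type state, C_0 != 0
```

No general-purpose root finder is needed here: the candidate roots come directly from the components, which is what makes the test cheap compared with factoring a polynomial of degree up to $2^N-1$.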
We reproduce the earlier result that $\xi^3$ is necessarily entangled if $C_0,C_1,$ or $C_3$ is nonzero. In order to determine the separability conditions in the case $C_4=0$ one has to calculate $\det(\rho_0)$ and repeat the above calculation for this quantity. The result agrees with the one obtained using the polynomial approach, that is, if $\xi^3$ is separable and $C_4=0$, then only $C_6$ can be nonzero. We see that also in this case the polynomial approach provides an easier way to check the separability than the method of partial traces.

Example 3
---------

As the final example we study a state given by $$\begin{aligned} \nonumber\label{xi} \xi^N &=\sum_{\substack{i=1\\i\not =0,4,8,\ldots,2^N-4}}^{2^N-1}|i\rangle_d+e^{i\theta}\sum_{i=0}^{2^{N-2}-1}|4i\rangle_d\\ &=\sum_{i=0}^{2^N-1}|i\rangle_d+\left(e^{i\theta}-1\right)\sum_{i=0}^{2^{N-2}-1}|4i\rangle_d. \end{aligned}$$ Now $C_{(2^N-1)-2^j}/C_{2^N-1}=1$ for all $j$, so Eq. (\[roots\]) gives $$x_{jm}=e^{i\frac{(2m+1)\pi}{2^j}},\quad m=0,1,\ldots ,2^j-1,$$ where $j=0,1,2,\ldots, N-1$. Using the sum formula of the geometric series we find that the characteristic polynomial can be written as $$\label{Pxi} P(\xi^N;x)=\frac{x^{2^N}-1}{x-1}+(e^{i\theta}-1)\frac{x^{2^N}-1}{x^4-1}.$$ It is easy to see that for $j=2,3,\ldots,N-1$ $$\label{j2} P(\xi^N;x_{jm})=0,\quad m=0,1,\ldots,2^{j}-1,$$ while $$\label{j0} P(\xi^N;x_{00})=P(\xi^N;x_{10})=P(\xi^N;x_{11})=2^{N-2}(e^{i\theta}-1).$$ The state $\xi^N$ is therefore separable if and only if $\theta=2\pi n$ for some integer $n$. If $\xi^N$ is separable, Eq. (\[ratios\]) shows that $\xi^N=\otimes_{j=0}^{N-1}(|0\rangle_j+|1\rangle_j)$. Now the $N$ reduced single-qubit density matrices of $\xi^N$ can be straightforwardly determined. A lengthy calculation shows that $\textrm{det}(\rho_0)=\textrm{det}(\rho_1)=2^{2N-3}(1-\cos\theta)$ and $\textrm{det}(\rho_j)=0$ when $j=2,3,\ldots, N-1$, confirming the earlier result.
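Eqs. (\[j2\]) and (\[j0\]) and the determinant values just quoted can be verified numerically. A short Python check (our own sketch, here with $N=4$ and an arbitrary $\theta=0.7$; qubit 0 is the least significant bit of the decimal index):

```python
import numpy as np

N, theta = 4, 0.7
dim = 2**N

# State of Eq. (xi): C_i = e^{i theta} when i is a multiple of 4, else C_i = 1.
C = np.ones(dim, dtype=complex)
C[::4] = np.exp(1j * theta)

def P(x):
    """P(xi^N; x) evaluated directly from the components."""
    return sum(c * x**i for i, c in enumerate(C))

# x_{jm} = exp(i(2m+1)pi/2^j) are roots of P for j >= 2, Eq. (j2) ...
for j in range(2, N):
    assert all(abs(P(np.exp(1j * (2*m + 1) * np.pi / 2**j))) < 1e-9
               for m in range(2**j))

# ... while P(x_00) = P(x_10) = P(x_11) = 2^(N-2) (e^{i theta} - 1), Eq. (j0):
expected = 2**(N - 2) * (np.exp(1j * theta) - 1)
assert np.isclose(P(-1.0), expected)   # x_00 = e^{i pi}
assert np.isclose(P(1j), expected)     # x_10
assert np.isclose(P(-1j), expected)    # x_11

# Partial-trace cross-check: det(rho_0) = 2^(2N-3) (1 - cos theta).
M = C.reshape(dim // 2, 2)             # column index = qubit 0, row = the rest
rho0 = M.T @ M.conj()
assert np.isclose(np.linalg.det(rho0), 2**(2*N - 3) * (1 - np.cos(theta)))
```

The determinants here are computed for the unnormalized state, matching the convention of the text.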
In the present example the polynomial method does not seem to provide as obvious a calculational simplification as in the previous two examples.

Generalization to $h$-level systems
-----------------------------------

We now briefly discuss a generalization of the separability test to a system consisting of $N$ copies of an $h$-level system. We write the basis of a single $h$-level system as $\{|0\rangle_h,|1\rangle_h,\ldots,|h-1\rangle_h\}$ and choose the basis vectors for the $N$-partite system as $|i\rangle_d=|i_0i_1\cdots i_{N-1}\rangle_h$, where $i=\sum_{j=0}^{N-1}i_j h^j$ and $i_j\in \{0,1,2,\ldots, h-1\}$. An arbitrary pure state can be expressed as $$\begin{aligned} \label{initialbasis} \phi =\sum_{i=0}^{h^N-1} C_i|i\rangle_d. \end{aligned}$$ Let $\phi_{\textrm{s}}^h=\phi_0^h\phi_1^h\cdots\phi_{N-1}^h$ be a separable state, where $\phi_j^h=a_j|0\rangle_h+b_j|1\rangle_h+c_j|2\rangle_h+\cdots +q_j|h-1\rangle_h$. A straightforward calculation shows that $$\begin{aligned} \label{Ph} &&P(\phi_{\textrm{s}}^h;x)= \!\!\prod_{j=0}^{N-1} \left(a_j+b_j x^{h^{j}}+\cdots +q_j x^{(h-1)h^{j}} \right).\end{aligned}$$ In order to establish a separability test, one has to express the roots of this polynomial in terms of the coefficients $C_0,C_1,\ldots,C_{h^N-1}$. This is possible but complicated if $2<h<6$. If $h\geq 6$, the roots cannot in general be calculated analytically and therefore cannot be written using the coefficients $C_i$. Thus the separability test can be extended to systems containing fewer than six levels, but it is more complicated to apply than in the two-level case. An extension is not feasible if the number of levels is equal to or larger than six.

Number of unentangled qubits
============================

Entangled states can be classified based on the number of unentangled one-qubit states. The state $\phi$ is said to contain $n$ unentangled qubits if it can be written as a product of $n$ single-qubit states $\phi_l$ and an $(N-n)$-qubit state $\phi^{N-n}$.
In order to study the number of unentangled particles, we determine the characteristic polynomial of a state which separates as a product of a one-qubit state and an $(N-1)$-qubit state. We write $\phi=\phi_j \phi^{N-1}$, where $\phi_j=a_j|0\rangle_j+b_j|1\rangle_j$ is the state of the qubit $j$ and $\phi^{N-1}$ gives the state of the rest of the qubits. As before, the degree of the polynomial is denoted by $k$. Using Eq. (\[tworegs\]) we see that the characteristic polynomial of the basis states reads $$\begin{aligned} \nonumber P(|i\rangle_d;x)&=P(|i_0 i_1\cdots i_{N-1}\rangle;x)\\ \label{Pk} &= x^{i_0 2^0}x^{i_1 2^{1}} x^{i_2 2^{2}} \cdots x^{i_{N-1} 2^{N-1}}. \end{aligned}$$ We write the $(N-1)$-qubit state as $$\phi^{N-1}=\!\!\!\!\!\!\!\!\!\!\sum_{i_l \in\{0,1\},l\not =j } \!\!\!\!\!\!\! C_{i_0\cdots i_{j-1};i_{j+1}\cdots i_{N-1}} |i_0\cdots i_{j-1} i_{j+1}\cdots i_{N-1}\rangle,$$ so using Eq. (\[Pk\]) we find that $$\begin{aligned} \nonumber P(\phi;x)&=b_j(x^{2^j}-x_{jm}^{2^j})\sum_{i_l \in\{0,1\},l\not =j } \!\!\!\!\!\!\! C_{i_0\cdots i_{j-1};i_{j+1}\cdots i_{N-1}}\\ \label{no2toj} &\times x^{i_0 2^0+\cdots +i_{j-1} 2^{j-1}+i_{j+1} 2^{j+1}+\cdots +i_{N-1} 2^{N-1}}\end{aligned}$$ where we have assumed that $b_j\not =0$, which is equivalent to $k_j=1$. We have also written $(a_j + b_j x^{2^j})=b_j(-x_{jm}^{2^j}+x^{2^j})$. Note that $x_{jm}^{2^j}$ is independent of $m$. If $b_j=0$, we get an expression which is obtained by multiplying the sum of Eq. (\[no2toj\]) by $a_j$. Equation (\[no2toj\]) shows that the polynomial $P(\phi;x)/(x^{2^j}-x_{jm}^{2^j})$ contains only those powers of $x$ which do not have $2^j$ in their binary representation and that $x_{jm}$ is a root of $P(\phi;x)$ for each $m=0,1,\ldots ,2^j-1$. 
Therefore, if $k_j=1$, necessary conditions for qubit $j$ to be unentangled with respect to the rest of the qubits are $$\begin{array}{lll} &\textrm{(IIa) }& P(\phi;x_{jm})=0\textrm{ for every }m=0,1,\ldots ,2^j-1.\\ &\textrm{(IIb) }& 2^j\textrm{ does not appear in the binary representations}\\ &&\textrm{of the exponents of }x \textrm{ in } P(\phi;x)/(x^{2^j}-x_{jm}^{2^j}). \nonumber \end{array}$$ If $k_j=0$ there is only one condition, namely, $$\begin{array}{lll} &\textrm{(IIc) }& 2^j\textrm{ does not appear in the binary representations}\\ &&\textrm{of the exponents of }x \textrm{ in } P(\phi;x). \nonumber \end{array}$$ It is easy to see that these are also sufficient conditions. The number of unentangled qubits can be obtained by checking Conditions (IIa) and (IIb) for every qubit $j$ for which $k_j=1$ and Condition (IIc) for the rest of the qubits. It is possible to extract information about the number of unentangled qubits without using Conditions (IIb) and (IIc): an upper bound for this quantity is obtained by adding the number of indices $j$ for which $k_j=0$ to the number of qubits for which (IIa) holds. This corresponds to assuming that either (IIb) or (IIc) holds for every qubit.

Example 1
---------

As an example of the use of this method we consider the state given by Eq. (\[xi\]). Now $k=2^N-1$ and therefore $k_j=1$ for every $j$. Equations (\[j2\]) and (\[j0\]) together with Condition (IIa) show that the number of unentangled qubits is at most $N-2$ $(N)$ if $\theta\not =2\pi n$ $(\theta =2\pi n)$. In order to simplify the polynomial $P(\xi^N;x)$ we note that $$x^{2^N}-1=(x^2-1)(x^2+1)(x^4+1)\cdots (x^{2^{N-1}}+1).$$ With the help of this and Eq.
(\[Pxi\]) we get $$\begin{aligned} \nonumber &P(\xi^N;x)=(x+1)(x^2+1)(x^4+1)\cdots (x^{2^{N-1}}+1)\\ &+(e^{i\theta}-1)(x^4+1)(x^8+1)\cdots (x^{2^{N-1}}+1).\end{aligned}$$ Now $x^{2^j}-(x_{jm})^{2^j}=x^{2^j}+1$ when $j\geq 2$ and using the above equation one can see that Condition (IIb) holds for $j=2,3,\ldots,N-1$ regardless of the value of $\theta$. Furthermore, (IIb) holds for every $j$ if $\theta=2\pi n$. In conclusion, the qubits $j=2,3,\ldots,N-1$ are always unentangled with respect to the rest of the qubits and if $\theta=2\pi n$ the state is separable. The same result can be obtained using the reduced single-qubit density matrices $\rho_j$. The number of unentangled qubits is equal to the number of $\rho_j$ for which $\textrm{det}(\rho_j)=0$. The values of these determinants have been presented in Example 3 and reproduce the aforementioned result. A necessary step in the calculation of the number of unentangled qubits is to apply Condition (IIa) to all qubits $j$ for which $k_j=1$. This is equivalent to checking the separability of the state. In addition to this, Conditions (IIb) and (IIc) have to be checked. On the other hand, in the case of single-qubit reduced density matrices $\rho_j$, the determination of the number of unentangled qubits does not require any additional operations in comparison with testing the separability. In both cases $\textrm{det}(\rho_j)$ has to be calculated. This suggests that the method of reduced single-qubit density matrices is preferable if the number of unentangled qubits is studied. Conclusions =========== We have defined a mapping which associates a polynomial with each pure $N$-qubit state. The roots of this polynomial determine the state completely and vice versa. The structure of the polynomial is inspired by the one used in the Majorana representation [@Majorana32; @Bloch45]. The separability of a state can be studied by examining the properties of the roots of the corresponding polynomial.
In particular, we have presented a method which establishes a necessary and sufficient condition for a given pure $N$-qubit state $\phi$ to be separable. This method provides a new point of view on pure state separability and gives an alternative to the conventional separability test of calculating the reduced single-qubit density matrices of the state. The separability of $\phi$ can be determined by checking whether the numbers $x_{jm}$, defined in equation (\[xlm\]), are roots of the polynomial $P(\phi;x)$ of equation (\[poly\]). Both the numbers $x_{jm}$ and the polynomial $P(\phi;x)$ can be easily obtained as a function of the components of the state $\phi$. We have illustrated through examples that in some cases the polynomial separability test is easier and faster to apply than the method of reduced single-qubit density matrices. We have also shown how the number of unentangled qubits can be obtained with the help of the polynomial $P(\phi;x)$. It seems, however, that for this task the method of single-qubit density matrices is preferable. The authors are grateful to V.I. and M.A. Man’ko for helpful discussions. A.M. acknowledges partial support by MIUR Project II04C0E3F3 Collaborazioni Interuniversitarie ed Internazionali Tipologia C. H.M. wants to thank E. Kyoseva, B.W. Shore, and N.V. Vitanov for comments on an earlier version of the manuscript and EC Projects CAMEL and EMALI for financial support. [11]{} A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996). M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**223**]{}, 1 (1996). M. Horodecki, P. Horodecki, and R. Horodecki, Open Sys. Inf. Dyn. [**13**]{}, 103 (2006). M. Seevinck and J. Uffink, Phys. Rev. A [**78**]{}, 032101 (2008). A. C. Doherty, P. A. Parrilo, and F. M. Spedalieri, Phys. Rev. A [**71**]{}, 032333 (2005). M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**283**]{}, 1 (2001). O. Gühne and G. Tóth, Phys. Rep. [**474**]{}, 1 (2009). R. Horodecki, P. Horodecki, M.
Horodecki, and K. Horodecki, Rev. Mod. Phys. [**81**]{}, 865 (2009). A. Peres, Phys. Lett. A [**202**]{}, 16 (1995). A. V. Thapliyal, Phys. Rev. A [**59**]{}, 3336 (1999). P. Jorrand and M. Mhalla, Int. J. Found. Comput. Sci. [**14**]{}, 797 (2003). H. Matsueda and D. W. Cohen, Int. J. Theor. Phys. [**46**]{}, 3169 (2007). G. Brassard and T. Mor, J. Phys. A:Math. Gen. [**34**]{}, 6807 (2001). C.-S. Yu and H.-S. Song, Phys. Rev. A [**72**]{}, 022333 (2005). C.-S. Yu and H.-S. Song, Eur. Phys. J. D [**42**]{}, 147 (2007). L. Lamata, J. León, D. Salgado and E. Solano, Phys. Rev. A [**74**]{}, 052336 (2006). D.-F. Li, X.-R. Li, H.-T. Huang and X.-X. Li, Comm. Theor. Phys [**49**]{} 1211 (2008). Y. Huang, J. Wen and D. Qiu, J. Phys. A: Math. Theor. [**42**]{}, 425306 (2009). E. Majorana, Nuovo Cimento [**9**]{}, 43 (1932). F. Bloch and I. I. Rabi, Rev. Mod. Phys. [**17**]{}, 237 (1945). R. Barnett, A. Turner, and E. Demler, Phys. Rev. Lett. [**97**]{}, 180412 (2006). R. Barnett, A. Turner, and E. Demler, Phys. Rev. A [**76**]{}, 013605 (2007). H. Mäkelä and K.-A. Suominen, Phys. Rev. Lett. [**99**]{}, 190408 (2007). R. Barnett, D. Podolsky, and G. Refael, Phys. Rev. B [**80**]{}, 024420 (2009). P. Kolenderski and R. Demkowicz-Dobrzanski, Phys. Rev. A [**78**]{}, 052333 (2008). J. Zimba, Electron. J. Theor. Phys. [**3**]{}, 143 (2006). P. Ribeiro, J. Vidal and R. Mosseri, Phys. Rev. Lett. [**99**]{}, 050402 (2007). P. Ribeiro, J. Vidal and R. Mosseri, Phys. Rev. E [**78**]{}, 021106 (2008). R. Bijurkar, arXiv:quant-ph/0604210. H. Mäkelä and A. Messina, arXiv:0910.0630. [^1]: We could naturally define the characteristic polynomial as $P(\phi;x)\equiv \sum_{i=0}^{2^N-1} C_i g_i x^i$, where $\{g_i\}$ is a set of $2^N$ arbitrarily chosen nonzero complex numbers. However, in order to simplify the ensuing calculations we choose $g_i=1$ for each $i$. The choice $g_i={2^N-1\choose i}^{1/2}$ corresponds to the Majorana representation, see Refs. [@Majorana32; @Bloch45]. 
--- abstract: 'In this article we show that the Czech mathematician Václav [Šimerka]{} discovered the factorization of $\frac19 (10^{17}-1)$ using a method based on the class group of binary quadratic forms more than 120 years before Shanks and Schnorr developed similar algorithms. [Šimerka]{} also gave the first examples of what later became known as Carmichael numbers.' address: 'Mörikeweg 1, 73489 Jagstzell, Germany' author: - 'F. Lemmermeyer' title: | Václav Šimerka:\ Quadratic Forms and Factorization --- According to Dickson [@Dick I. p. 172], the number $$N = 11111111111111111 = \frac{10^{17}-1}9$$ was first factored by Le Lasseur in 1886, and the result was published by Lucas in the same year. Actually the factorization of $N$ already appeared as a side result in a forgotten memoir [@Sim1] of Václav[^1] [Šimerka]{}, in which he presented his ideas on composition of positive definite forms, computation of class numbers, and the prime factorization of large integers such as $N$. In fact, consider the binary quadratic form $$Q = (2, 1, 1388888888888889)$$ with discriminant $\Delta = -N$. If we knew that $h = 107019310$ was (a multiple of) the order of $[Q]$ in ${{\operatorname{Cl}}}(-N)$, then a simple calculation would reveal that $$Q^{h/2} \sim (2071723, 2071723, 1341323520),$$ from which we could read off the factorization $$N = 2071723 \cdot 5363222357.$$ This idea for factoring integers was later rediscovered by Daniel Shanks in the 1970s; subsequent work on this idea led Shanks to introduce the notion of infrastructure, which has played a major role in algorithmic number theory since then. In [@Sim1], [Šimerka]{} explains Gauss’s theory of composition using the language from Legendre’s Théorie des Nombres. The rest of his article [@Sim1] is dedicated to the calculation of the order of a quadratic form in the class group, and an application to factoring integers.
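Today the numbers in this introduction are trivial to check; the following sketch (plain Python, written for this review) verifies that $Q$ and the claimed power both have discriminant $-N$, and reads the splitting of $N$ off the form $(a,a,c)$, whose discriminant factors as $-a(4c-a)$:

```python
N = (10**17 - 1) // 9            # the repunit 11111111111111111

A, B, C = 2, 1, 1388888888888889          # the form Q
assert B * B - 4 * A * C == -N

a, b, c = 2071723, 2071723, 1341323520    # the form Q^(h/2)
assert b * b - 4 * a * c == -N

# For a form (a, a, c) the discriminant is a^2 - 4ac = -a(4c - a),
# so -Delta = N splits as a * (4c - a):
p, q = a, 4 * c - a
assert p * q == N and 1 < p < N
print(p, q)                               # 2071723 5363222357
```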
In this article we will review [Šimerka]{}’s work and explain some of his calculations so that the readers may convince themselves that [@Sim1] contains profound ideas and important results. A Short Biography ================= Václav [Šimerka]{} was born on Dec. 20, 1819, in Hochwesseln (Vysokém Veselí). He studied philosophy and theology in Königgrätz, was ordained in 1845 and worked as a chaplain in Žlunice near Jičín. He started studying mathematics and physics in 1852 and became a teacher at the gymnasium of Budweis. He did not get a permanent appointment there, and in 1862 became priest in Jenšovice near Vysoké Mýto. Today, [Šimerka]{} is remembered for his textbook on algebra (1863); its appendix contained an introduction to calculus and is the first Czech textbook on calculus. [Šimerka]{} died in Praskačka near Königgrätz (Praskačce u Hradce Králové) on Dec. 26, 1887. [Šimerka]{}’s contributions to the theory of factoring have not been noticed at all, and his name does not occur in any history of number theory except Dickson’s: see [@Dick II, p. 196] for a reference to [Šimerka]{}’s article [@Sim3], which deals with the diophantine problem of rational triangles. In [@Dick III, p. 67], Dickson even refers to [@Sim1] in connection with the composition of binary quadratic forms. In [@Sim2], [Šimerka]{} gave a detailed presentation of a large part of Legendre’s work on sums of three squares. In [@SimFN], [Šimerka]{} proved that $7 \cdot 2^{14} + 1 \mid F_{12}$ and $5 \cdot 2^{25} + 1 \mid F_{23}$ (these factors had just been obtained by Pervouchin), where $F_n$ denotes the $n$-th Fermat number. In [@SimF], [Šimerka]{} listed the Carmichael numbers [@CN] $$n = 561, 1105, 1729, 2465, 2821, 6601, 8911$$ long before Korselt [@Kors] gave criteria hinting at their existence and Carmichael [@Carm] gave what was believed to be the first example. All of [Šimerka]{}’s examples are products of three prime factors, and there are no others below $10\,000$.
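[Šimerka]{}'s list is complete in the stated range, which is easy to confirm with Korselt's criterion ($n$ composite, squarefree, and $p-1 \mid n-1$ for every prime $p \mid n$); a short sketch, with trial-division factoring that is ample for this range:

```python
def is_carmichael(n):
    """Korselt's criterion: n composite, squarefree, and q - 1 divides
    n - 1 for every prime q dividing n."""
    m, q, primes = n, 2, []
    while q * q <= m:
        if m % q == 0:
            primes.append(q)
            m //= q
            if m % q == 0:        # not squarefree
                return False
        else:
            q += 1
    if m > 1:
        primes.append(m)
    if len(primes) < 2:           # primes (and 1) are not Carmichael
        return False
    return all((n - 1) % (q - 1) == 0 for q in primes)

print([n for n in range(2, 10000) if is_carmichael(n)])
# [561, 1105, 1729, 2465, 2821, 6601, 8911]
```

Each of the seven numbers indeed has exactly three prime factors, as stated above.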
For more on [Šimerka]{}, see [@Cupr; @Kop; @Panek]. The Šimerka Map =============== Let us now present [Šimerka]{}’s ideas from [@Sim1] in a modern form. At the end of this section, we will explain [Šimerka]{}’s language. Let $Q$ be a positive definite binary quadratic form with discriminant $\Delta$. If $Q$ primitively represents a (necessarily positive) integer $a$, then $Q$ is equivalent to a unique form $(a,B,C)$ with $-a < B \le a$. Let $$a = p_1^{a_1} \cdots p_r^{a_r}$$ denote the prime factorization of $a$. For each prime $p_j \mid a$, fix an integer $-p_j < b_j \le p_j$ with $B \equiv b_j \bmod p_j$ and set $$s_j = \begin{cases} +1 & \text{ if } b_j \ge 0, \\ -1 & \text{ if } b_j < 0. \end{cases}$$ Thus if $a = Q(x,y)$, then we can define $${\operatorname{\check{s}}}(a,Q) = \prod p_j^{s_ja_j}.$$ [**Example.**]{} The principal form $Q_0 = (1,0,5)$ with discriminant $-20$ represents the following values: $$\begin{array}{c|ccccccc} {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}a & 1 & 5 & 6 & 9 & 14 & 21 & 21 \\ \hline {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}Q & (1,0,5) & (5,0,1) & (6,2,1) & (9,4,1) & (14,6,1) & (21,8,1) & (21,20,5) \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}{\operatorname{\check{s}}}(a,Q_0) & 1 & 5 & 2 \cdot 3 & 3^2 & 2 \cdot 7 & 3 \cdot 7 & 3^{-1} \cdot 7 \end{array}$$ Forms equivalent to $Q = (2,2,3)$ give us the following values: $$\begin{array}{c|ccccc} {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}a & 2 & 3 & 7 & 87 & 87 \\ \hline {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}Q & (2,2,3) & (3,-2,2) & (7,6,2) & (87,26,2) & (87,32,3) \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}{\operatorname{\check{s}}}(a,Q) & 2 & 3^{-1} & 7 & 3 \cdot 29 & 3 \cdot 29^{-1} \end{array}$$ The ideal theoretic interpretation of the [Šimerka]{} map is the following: there is a correspondence between binary quadratic forms $Q$ with discriminant $\Delta < 0$ and ideals ${\mathfrak a}(Q)$ in a suitable order of the quadratic number
field ${{\mathbb Q}}(\sqrt{\Delta}\,)$. Equivalent forms correspond to equivalent ideals, and integers $a$ represented by $Q$, say $Q(x,y) = a$, correspond to norms of elements $\alpha \in {\mathfrak a}(Q)$ via $a = N\alpha/N{\mathfrak a}(Q)$. Integers represented primitively by $Q$ are characterized by the fact that $\alpha \in {\mathfrak a}(Q)$ is not divisible by a rational prime number. If we fix prime ideals ${{\mathfrak p}}_j = {\mathfrak a}(Q_j)$ for $Q_j = (p_j, B_j, C)$ with $0 \le B_j \le p_j$ and formally set ${{\mathfrak p}}_j^{-1} = {\mathfrak a}(Q_j')$ with $Q_j' = (p_j, -B_j, C)$, then ${\operatorname{\check{s}}}(a,Q) = p_1^{a_1} \cdots p_r^{a_r}$ is equivalent to $(\alpha) = {{\mathfrak p}}_1^{a_1} \cdots {{\mathfrak p}}_r^{a_r} {\mathfrak a}(Q)$. Assume that $a = p_1 \cdots p_r$, and that $Q = (a,B,C)$. Then $$(a,B,C) = (p_1,B,p_2\cdots p_rC) \cdot (p_2,B,p_1p_3 \cdots p_rC) \cdots (p_r,B,p_1 \cdots p_{r-1}C).$$ If we write $b_j \equiv B \bmod 2p_j$ with $-p_j < b_j \le p_j$, then $${\operatorname{\check{s}}}(a,Q) = {\operatorname{\check{s}}}(p_1,Q_1) \cdots {\operatorname{\check{s}}}(p_r,Q_r)$$ by definition of ${\operatorname{\check{s}}}$. We start by showing that the value set of ${\operatorname{\check{s}}}$ is closed with respect to inversion. To this end we use the notation $(A,B,C)^{-1} = (A,-B,C)$. Then it follows right from the definition of ${\operatorname{\check{s}}}$ that if ${\operatorname{\check{s}}}(a,Q) = r$, then ${\operatorname{\check{s}}}(a,Q^{-1}) = r^{-1}$. Now we claim \[Sm\] Let $\Delta$ be a fundamental discriminant. Assume that $Q_1(x_1,y_1) = a_1$ and $Q_2(x_2,y_2) = a_2$, and that $Q_3 \sim Q_1Q_2$. Then there exist integers $a_3, x_3, y_3$ such that $Q_3(x_3,y_3) = a_3$ and ${\operatorname{\check{s}}}(a_3,Q_3) = {\operatorname{\check{s}}}(a_1,Q_1) \cdot {\operatorname{\check{s}}}(a_2,Q_2)$.
Writing $Q_1 = (a_1,B_1,C_1) = (p_1,B_1,a_1C_1/p_1) \cdots (p_r,B_1,a_1C_1/p_r)$ and $Q_2 = (a_2,B_2,C_2) = (q_1,B_2,a_2C_2/q_1) \cdots (q_s,B_2,a_2C_2/q_s)$, where $a_1 = p_1 \cdots p_r$ and $a_2 = q_1 \cdots q_s$ are the prime factorizations of $a_1$ and $a_2$, we see that it is sufficient to prove the result for prime values of $a_1$ and $a_2$. There are several cases: 1. $Q_1 = (p,b_1,c_1)$, $Q_2 = (q,b_2,c_2)$ with $p \ne q$: for composing these forms using Dirichlet’s method, we choose an integer $b$ satisfying the congruences $$b \equiv b_1 \bmod 2p, \quad \text{and} \quad b \equiv b_2 \bmod 2q.$$ Then $Q_1 \sim (p,b,qc')$ and $Q_2 \sim (q,b,pc')$, and we find $Q_1Q_2 = (pq,b,c')$ as well as ${\operatorname{\check{s}}}(pq,Q_1Q_2) = {\operatorname{\check{s}}}(p,Q_1){\operatorname{\check{s}}}(q,Q_2)$ by the definition of ${\operatorname{\check{s}}}$. 2. $Q_1 = (p,b_1,c_1)$, $Q_2 = (p,-b_1,c_1) = Q_1^{-1}$: here Dirichlet composition shows $Q_1Q_2 = (1,b_1,pc_1) \sim Q_0$, and since ${\operatorname{\check{s}}}(Q_2) = {\operatorname{\check{s}}}(Q_1)^{-1}$ we also have $1 = {\operatorname{\check{s}}}(1,Q_1Q_2) = {\operatorname{\check{s}}}(p,Q_1){\operatorname{\check{s}}}(p,Q_2)$. 3. $Q_1 = (p,b_1,c_1)= Q_2$: if $p \nmid \Delta$, then $p \nmid b_1$, and we can easily find an integer $b \equiv b_1 \bmod 2p$ with $b^2 \equiv \Delta \bmod 4p^2$. But then $Q_1 \sim (p,b,pc')$ and, by Dirichlet composition, $Q_1^2 = (p^2,b,c')$. As before, the definition of ${\operatorname{\check{s}}}$ immediately shows that ${\operatorname{\check{s}}}(p^2,Q_1^2) = {\operatorname{\check{s}}}(p,Q_1)^2$. If $p \mid \Delta$ and $p$ is odd, on the other hand, then $p \mid b_1$. Since $\Delta$ is fundamental, the form $Q_1$ is ambiguous, hence $Q_1^2 \sim Q_0$. Since ${\operatorname{\check{s}}}(Q_1) = 1$, the multiplicativity is clear. This completes the proof. Let $Q_0$ denote the principal form with discriminant $\Delta < 0$.
Then the elements ${\operatorname{\check{s}}}(a,Q_0)$ form a subgroup ${{\mathcal R}}$ of ${{\mathbb Q}}^\times$. It remains to show that if $Q$ represents $a$ and $b$, then it represents $ab$ in such a way that ${\operatorname{\check{s}}}(ab,Q_0) = {\operatorname{\check{s}}}(a,Q_0) {\operatorname{\check{s}}}(b,Q_0)$. Again we can reduce this to the case of prime values of $a$ and $b$, and in this case the claim follows from the proof of Lemma \[Sm\]. Assume that $a$ is represented properly by $Q$, and that $a'$ is represented properly by $Q'$. If $Q \sim Q'$, then $${\operatorname{\check{s}}}(a,Q) \equiv {\operatorname{\check{s}}}(a',Q') \bmod {{\mathcal R}}.$$ Since equivalent forms represent the same integers it is sufficient to show that if a form $Q$ properly represents numbers $a$ and $b$, then ${\operatorname{\check{s}}}(a,Q) \equiv {\operatorname{\check{s}}}(b,Q) \bmod {{\mathcal R}}$. Assume that $Q = (A,B,C)$, and set ${\operatorname{\check{s}}}(a,Q) = r$ and ${\operatorname{\check{s}}}(b,Q) = s$. If $a$ and $b$ are coprime, then ${\operatorname{\check{s}}}(ab,Q_0) = r \cdot s^{-1} \in {{\mathcal R}}$, where $Q_0$ is the composition of $Q$ and $Q^{-1}$. This implies the claim. If $a$ and $b$ have a factor in common, then there is an integer $c$ such that $n=ab/c^2$ is represented by $Q_0$ in such a way that ${\operatorname{\check{s}}}(n,Q_0) = r \cdot s^{-1} \in {{\mathcal R}}$, and the claim follows as above. These propositions show that ${\operatorname{\check{s}}}$ induces a homomorphism $${\operatorname{\check{s}}}: {{\operatorname{Cl}}}(\Delta) {\longrightarrow}{{\mathbb Q}}^\times/{{\mathcal R}}$$ from the class group ${{\operatorname{Cl}}}(\Delta)$ to ${{\mathbb Q}}^\times/{{\mathcal R}}$, which we will also denote by ${\operatorname{\check{s}}}$, and which will be called the [Šimerka]{} map. Let $\Delta < 0$ be a fundamental discriminant. Then the [Šimerka]{} map is an injective homomorphism of abelian groups. 
We have to show that ${\operatorname{\check{s}}}$ is injective. To this end, let $[Q]$ denote a class with $a = {\operatorname{\check{s}}}(Q) \in {{\mathcal R}}$. Then there is a form $Q_0' = (A,B,C) \sim Q_0$ with ${\operatorname{\check{s}}}(A,Q_0) = a$. But then $Q_1 = Q \cdot (A,-B,C)$ is a form equivalent to $Q$ with ${\operatorname{\check{s}}}(Q_1) = 1$. This in turn implies that $Q_1$ represents $1$, hence is equivalent to the principal form by the classical theory of binary quadratic forms. [Šimerka]{}’s idea is to use a set of small prime numbers $S = \{p_1, \ldots, p_r\}$ which are smaller than $\sqrt{-\Delta/3}$ (and a subset of these if $|\Delta|$ is large), to find integers $a_j$ primitively represented by $Q$ whose prime factors are all in $S$, and to use linear combinations to find a relation in ${{\mathcal R}}$, which gives him an integer $h$ such that $Q^h \sim 1$. It is then easy to determine the exact order of $Q$. [Šimerka]{}’s Language {#šimerkas-language .unnumbered} ---------------------- [Šimerka]{} denotes binary quadratic forms $Ax^2 + Bxy + Cy^2$ by $(A,B,C)$ and considers forms with even as well as with odd middle coefficients. The principal form with discriminant $\Delta$ is called an end form[^2] (Endform, Schlussform), and ambiguous[^3] forms are called middle forms (Mittelformen). The subgroup generated by a form $Q$ is called its period, the exponent of a form $Q$ in the class group is called the length of its period. [Šimerka]{} represents a form $f = (A,B,C)$ by a small prime number $p$ represented by $f$; the powers $f1 = f$, $f2$, $f3$ of $f$ then represent $p$, $p^2$, $p^3$ etc., and the exponent $m$ of the $m$-th power $fm$ is called the pointer (Zeiger[^4]) of $f$. What we denote by ${\operatorname{\check{s}}}(Q^m) \equiv a \bmod {{\mathcal R}}$, [Šimerka]{} wrote as $fm = a$. [Šimerka]{} introduced this notation in [@Sim1 Art. 10]; instead of ${\operatorname{\check{s}}}(Q) = 2$ for $Q = (2,0,c)$ he simply wrote $(2,0,d) = 2$.
He explained the general case as follows: > So ist z.B. $(180,-17,193) = \frac{3^2 \times 5}{2^2}$ weil $180 = 2^2 \times 3^2 \times 5$ und $-17 \equiv -1 \pmod 4$, $-17 \equiv 1 \pmod 6$, $-17 \equiv 3 \pmod {10}$.[^5] One of the tricks he used over and over again is the following: $$\label{E11} (A,B,C) \sim (A, B \pm 2A, A \pm B + C) \sim (A \pm B + C, -B \mp 2A, A)$$ shows that if $Q = (A,B,C)$ represents an integer $m = Q(1,\pm 1) = A \pm B + C$, then ${\operatorname{\check{s}}}(Q)$ can be computed from $Q \sim (m, \mp 2A - B, A)$. Similarly, we have $$(A,B,C) \sim (A \pm B + C, B \pm 2C, C).$$ [Šimerka]{}’s Calculations ========================== In this section we will reconstruct a few of [Šimerka]{}’s calculations of (factors of) class numbers and factorizations. $\Delta = -10079$ {#delta--10079 .unnumbered} ----------------- [Šimerka]{} first considers a simple example (see [@Sim1 p. 58]): he picks a discriminant $\Delta$ for which $1 - \Delta$ is divisible by $2$, $3$, $5$ and $7$, namely $\Delta = -10079$. Consider the form $Q = (5,1,504)$ with discriminant $\Delta$.
The small powers of $Q$ provide us with the following factorizations: $$\begin{array}{c|c|c} {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}n & Q^n & {\operatorname{\check{s}}}(Q^n) \\ \hline {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}1 & \sim(504,-1,5) & 2^{-3} \cdot 3^{-2} \cdot 7^{-1} \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}3 & (36,17,72) & 2^2 \cdot 3^{-2} \\ & \sim (72,-17,36) & 2^{-3} \cdot 3^2 \end{array}$$ This implies $$\begin{aligned} {\operatorname{\check{s}}}(Q^6) & \equiv {\operatorname{\check{s}}}(Q^3) {\operatorname{\check{s}}}(Q^3) \equiv 2^2 \cdot 3^{-2} \cdot 2^{-3} \cdot 3^2 \equiv 2^{-1}, \\ {\operatorname{\check{s}}}(Q^{15}) & \equiv {\operatorname{\check{s}}}(Q^3)^3 {\operatorname{\check{s}}}(Q^3)^2 \equiv 2^6 \cdot 3^{-6} \cdot 2^{-6} \cdot 3^4 \equiv 3^{-2}, \\ {\operatorname{\check{s}}}(Q^{32}) & \equiv {\operatorname{\check{s}}}(Q^{-1}) {\operatorname{\check{s}}}(Q^{-3}) {\operatorname{\check{s}}}(Q^6)^6 \equiv 7. \end{aligned}$$ Now $7 = {\operatorname{\check{s}}}(R)$ for $R = (7,1,360)$: this is easily deduced from $\Delta \equiv 1 \equiv 1^2 \bmod 7$. From $R^2 \sim (49,-41,60)$ [Šimerka]{} reads off ${\operatorname{\check{s}}}(Q^{64}) \equiv 2^2 \cdot 3^{-1} \cdot 5$. But then ${\operatorname{\check{s}}}(Q^{63}) \equiv 2^2 \cdot 3^{-1}$ and therefore $${\operatorname{\check{s}}}(Q^{75}) \equiv {\operatorname{\check{s}}}(Q^{63}) \cdot {\operatorname{\check{s}}}(Q^6)^2 \equiv 2^2 \cdot 3^{-1} \cdot 2^{-2} \equiv 3^{-1} \bmod {{\mathcal R}}.$$ This implies ${\operatorname{\check{s}}}(Q^{150}) \equiv {\operatorname{\check{s}}}(Q^{15})$ and therefore ${\operatorname{\check{s}}}(Q^{135}) \equiv 1 \bmod {{\mathcal R}}$. Since neither $Q^{45}$ nor $Q^{27}$ are principal, the class of $Q$ has order $135$. For showing that $h(\Delta) = 135$, [Šimerka]{} would have to determine the pointers of all primes $p < \sqrt{-\Delta/3} \approx 58$. The fact that $h$ is odd would then also show that $-\Delta$ is a prime number.
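With a computer, class numbers of this size can be checked directly by counting reduced forms: each class of discriminant $\Delta < 0$ contains exactly one form $(a,b,c)$ with $-a < b \le a \le c$ and $b \ge 0$ whenever $a = c$. A brute-force sketch (our own code, not Šimerka's method):

```python
from math import gcd, isqrt

def class_number(D):
    """Count reduced primitive forms (a, b, c) with b^2 - 4ac = D < 0."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    for a in range(1, isqrt(-D // 3) + 1):   # reduced forms have a <= sqrt(-D/3)
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a or (a == c and b < 0):
                continue
            if gcd(gcd(a, b), c) == 1:       # primitive forms only
                h += 1
    return h
```

Applied to the discriminants of this paper (e.g. `class_number(-10079)` or `class_number(-121271)`), this lets the reader verify the orders and class numbers discussed in these sections directly.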
$\Delta = - 121271$ {#delta---121271 .unnumbered} ------------------- For larger discriminants, [Šimerka]{} suggests the following method: > Bei grossen Determinanten, oder wo die vorige Methode nicht zum Ziele führt, nimmt man die Zeiger einiger kleiner Primzahlen als unbekannt an, scheidet dann jene Grössen aus den Producten der Bestimmungsgleichungen aus, und sucht die anderen Primzahlen in Bestimmungsgleichungen durch jene unbekannten Zeiger darzustellen.[^6] [Šimerka]{} chooses the discriminant $\Delta = -121271$; in the course of the calculation it becomes clear that $\Delta = 99^2 - 2^{17}$, and quite likely the discriminant was constructed in this way. This is supported by [Šimerka]{}’s remark on [@Sim1 p. 64] that if $D = a^m - b^2$ is a (positive) determinant and if $a$ is odd, then the exponent of the form $(a,2b,a^{m-1})$ is divisible by $m$, as can be seen from the “period” $$(a,2b,a^{m-1}), (a^2,2b,a^{m-2}), \ldots, (a^m,2b,1).$$ Observe that this statement only holds under the additional assumption that these forms be reduced, i.e., that $0 < 2b \le a$. Examples are $D = 3^3 - 1 = 26$ and $h(-4 \cdot 26) = 6$, or $D = 3^5 - 4 = 239$ and $h(-4 \cdot 239) = 15$. A similar observation was made by Joubert [@Joub] just a few years after [Šimerka]{}. The connection between classes of order $n$ and solutions of the diophantine equation $a^m - Dc^2 = b^2$ was investigated recently in [@HL]. Let us write $Q_2 = (2, 1, 15159)$ and $Q_3 = (3,1,10106)$. Then $Q_2^2 \sim (4,5,7581)$ and ${\operatorname{\check{s}}}(Q_2^2) \equiv 3 \cdot 7^{-1} \cdot 19^{-2}$. Since ${\operatorname{\check{s}}}(Q_3) \equiv 3$, we find ${\operatorname{\check{s}}}(Q_2^{-2} Q_3) \equiv 7 \cdot 19$. $Q_2^3 \sim (8,13,3795)$ gives ${\operatorname{\check{s}}}(Q_2^3) \equiv 3^{-1} \cdot 5 \cdot 11^{-1} \cdot 23$ and ${\operatorname{\check{s}}}(Q_2^3 Q_3) \equiv 5 \cdot 11^{-1} \cdot 23$. 
We can summarize [Šimerka]{}’s calculations as follows: $$\begin{array}{r|c|c} {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}n & Q_2^n \sim & {\operatorname{\check{s}}}(Q_2^n) \bmod {{\mathcal R}}\\ \hline {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}2 & (4,5,7581) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (7581,-5,4 ) & 3 \cdot 7^{-1} \cdot 19^{-2} \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}3 & (8,13,3795) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (3795,-13,8) & 3^{-1} \cdot 5 \cdot 11^{-1} \cdot 23 \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}4 & (16,29,1908) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (1953,-61,16) & 3^{-2} \cdot 7^{-1} \cdot 31 \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}5 & (32,29,954) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (957,35,32) & 3^{-1} \cdot 11 \cdot 29 \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (1015,-93,32) & 5^{-1} \cdot 7 \cdot 29 \end{array}$$ $$\begin{array}{r|c|c} {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}n & Q_2^n \sim & {\operatorname{\check{s}}}(Q_2^n) \bmod {{\mathcal R}}\\ \hline {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}6 & (64,29,477) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (477,-29,64) & 3^2 \cdot 53 \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (675,227,64) & 3^{-3} \cdot 5^{-2} \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}7 & (128,157,285) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (285,-157,128) & 3^{-1} \cdot 5 \cdot 19^{-1} \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (483,355,128) & 3^{-1} \cdot 7 \cdot 23^{-1} \end{array}$$ Note that if ${\operatorname{\check{s}}}(Q_2^n) \equiv 2^{-1}u$ for some odd number $u$, then ${\operatorname{\check{s}}}(Q_2^{n+1}) \equiv u$. 
Thus ${\operatorname{\check{s}}}(Q_2^4) \equiv 2^{-2} \cdot 3^2 \cdot 53$ implies ${\operatorname{\check{s}}}(Q_2^6) \equiv 3^2 \cdot 53$, and in such cases we have listed only the relation that does not involve a power of $2$. The computation of $Q_2^7$ reveals $\Delta = 99^2 - 2^{17}$, and shows that ${\operatorname{\check{s}}}(Q_2^7) \equiv 2^{-8}$, which gives ${\operatorname{\check{s}}}(Q_2^{15}) \equiv 1$. Now [Šimerka]{} continues as follows: the relations $${\operatorname{\check{s}}}(Q_2^2) \equiv 3 \cdot 7^{-1} \cdot 19^{-2} \quad \text{and} \quad {\operatorname{\check{s}}}(Q_2^7) \equiv 3^{-1} \cdot 5 \cdot 19^{-1}$$ give $${\operatorname{\check{s}}}(Q_2^{12}) \equiv {\operatorname{\check{s}}}((Q_2^7)^2Q_2^{-2}) \equiv 3^{-2} \cdot 5^2 \cdot 19^{-2} \cdot 3^{-1} \cdot 7 \cdot 19^{2} = 3^{-3} \cdot 5^2 \cdot 7.$$ Using the relations $${\operatorname{\check{s}}}(Q_2^{12} Q_3^3) \equiv 5^2 \cdot 7, \quad \text{and} \quad {\operatorname{\check{s}}}(Q_2^6 Q_3^3) \equiv 5^{-2},$$ [Šimerka]{} deduces $$\label{ES7} {\operatorname{\check{s}}}(Q_2^3 Q_3^6) \equiv {\operatorname{\check{s}}}(Q_2^{18} Q_3^6) \equiv 7.$$ This allows him to eliminate the $7$s from his relations, which gives $$\begin{aligned} {\operatorname{\check{s}}}(Q_2^{-4} Q_3^7) & \equiv {\operatorname{\check{s}}}(Q_2^{-7}) {\operatorname{\check{s}}}(Q_3) {\operatorname{\check{s}}}(Q_2^3 Q_3^6) \equiv 23, \\ {\operatorname{\check{s}}}(Q_2^7 Q_3^8) & \equiv {\operatorname{\check{s}}}(Q_2^4) {\operatorname{\check{s}}}(Q_3^2) {\operatorname{\check{s}}}(Q_2^3 Q_3^6) \equiv 31.\end{aligned}$$ For the actual computation of the order of $Q_3$, only the relation (\[ES7\]) will be needed. 
[Šimerka]{} also investigates the powers of $Q_3$ and finds $$\begin{array}{r|c|c} {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}n & Q_3^n \sim & {\operatorname{\check{s}}}(Q_3^n) \bmod {{\mathcal R}}\\ \hline {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}1 & (3,1,10106) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (10108,5,3) & 2^2 \cdot 7 \cdot 19^2 \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}3 & (27,43,1140) & \\ & (1210,-97,27) & 2^{-1} \cdot 5 \cdot 11^{-2} \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (1162, 65,27) & 2 \cdot 7^{-1} \cdot 83 \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}4 & (81, 43, 380) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (380,-43,81) & 2^2 \cdot 5^{-1} \cdot 19^{-1} \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (418,119,81) & 2^{-1} \cdot 11 \cdot 19 \end{array}$$ $$\begin{array}{r|c|c} {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}n & Q_3^n \sim & {\operatorname{\check{s}}}(Q_3^n) \bmod {{\mathcal R}}\\ \hline {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}5 & (243,205,168) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (616,541,168) & 2^3 \cdot 7^{-1} \cdot 11^{-1} \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}6 & (729,205, 56) & \\ {\raisebox{0em}[2.3ex][1.3ex]{\rule{0em}{2ex} }}& (56,-205,729) & 2^{-3} \cdot 7 \end{array}$$ [Šimerka]{} observes $${\operatorname{\check{s}}}(Q_2^2 Q_3^9) \equiv {\operatorname{\check{s}}}(Q_3^3) {\operatorname{\check{s}}}(Q_2^{-1}) {\operatorname{\check{s}}}(Q_2^3 Q_3^6) \equiv 83,$$ but does not use this relation in the sequel.
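The equivalent forms in the second rows of these tables come from the neighbouring-form identity (\[E11\]), and the products of forms are Dirichlet compositions. In the case of coprime leading coefficients both operations fit in a few lines; the sketch below (our Python, with names of our own choosing) reproduces some of the table entries:

```python
from math import gcd

def flip_plus(A, B, C):      # (A,B,C) ~ (A+B+C, -(2A+B), A), cf. (E11)
    return (A + B + C, -(2 * A + B), A)

def flip_minus(A, B, C):     # (A,B,C) ~ (A-B+C, 2A-B, A)
    return (A - B + C, 2 * A - B, A)

def compose(Q1, Q2):
    """Dirichlet composition when gcd(a1, a2) = 1: make the middle
    coefficients agree via CRT, then multiply the leading ones."""
    a1, b1, c1 = Q1
    a2, b2, c2 = Q2
    D = b1 * b1 - 4 * a1 * c1
    assert D == b2 * b2 - 4 * a2 * c2 and gcd(a1, a2) == 1
    t = (b2 - b1) // 2 * pow(a1, -1, a2) % a2   # b1, b2 share D's parity
    b = b1 + 2 * a1 * t                          # b = b1 (2a1), b = b2 (2a2)
    return (a1 * a2, b, (b * b - D) // (4 * a1 * a2))

# two entries of the table of powers of Q_2 (discriminant -121271):
print(flip_plus(16, 29, 1908))    # (1953, -61, 16)
print(flip_minus(32, 29, 954))    # (957, 35, 32)
```

As a further check, `compose((2, 2, 3), (3, -2, 2))` with discriminant $-20$ yields a form equivalent to the principal form, in accordance with the example tables of the Šimerka map section.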
He continues with $${\operatorname{\check{s}}}(Q_2 Q_3^4) \equiv 11 \cdot 19, \quad {\operatorname{\check{s}}}(Q_2^3 Q_3^{-5}) \equiv 7 \cdot 11,$$ from which he derives the following relations: $$\begin{aligned} {\operatorname{\check{s}}}(Q_3^{-11}) & \equiv {\operatorname{\check{s}}}(Q_2^3 Q_3^{-5}) {\operatorname{\check{s}}}(Q_2^{-3} Q_3^{-6}) \equiv 11, & {\operatorname{\check{s}}}(Q_2 Q_3^{15}) & \equiv {\operatorname{\check{s}}}(Q_2 Q_3^4){\operatorname{\check{s}}}(Q_3^{11}) \equiv 19, \\ {\operatorname{\check{s}}}(Q_2^8 Q_3^{16}) & \equiv {\operatorname{\check{s}}}(Q_2^7) {\operatorname{\check{s}}}(Q_3) {\operatorname{\check{s}}}(Q_2 Q_3^{15}) \equiv 5, & {\operatorname{\check{s}}}(Q_2^{22} Q_3^{35}) & \equiv {\operatorname{\check{s}}}(Q_2^{16} Q_3^{32}) {\operatorname{\check{s}}}(Q_2^6Q_3^3) \equiv 1.\end{aligned}$$ Raising the last relation to the $15$th power yields ${\operatorname{\check{s}}}(Q_3^{525}) \equiv 1$. Checking that $Q_3^{75}$, $Q_3^{105}$ and $Q_3^{175}$ are not principal then shows that $Q_3$ has order $h = 525 = 3 \cdot 5^2 \cdot 7$. In fact, [pari]{} tells us that this is the class number of $\Delta = -121271$. Class Number Calculations ========================= Let us remark first that [Šimerka]{} does not compute class numbers but rather the order of a given form in the class group. Note that this is sufficient for factoring the discriminant. [Šimerka]{} is well aware of the fact that his method only produces divisors of the class number: in [@Sim1 art. 13], he writes > Was die Länge $\theta$ anbelangt, sucht man $fm = 1$ zu erhalten, wo dann entweder $\theta = m$ oder ein Theiler von $m$ ist. Die wichtigsten Glieder der Perioden sind die zu kleinen Primzahlen gehörigen Formen.
Welches die grösste Primzahl wäre, deren Zeiger man kennen müsse, um vor Irrthum sicher zu sein, konnte ich bis jetzt nicht ermitteln, jedenfalls ist sie kleiner als $\sqrt{D/3}$ bei den unpaaren, und als $2 \sqrt{D/3}$ bei den paaren Formen, wahrscheinlich aber reichen dazu nur wenige Primzahlen hin.[^7] In the example $\Delta = -121271$ above we have seen that the powers of $Q_2$ only give a subgroup of order $15$ in the class group, whereas the powers of $Q_3$ include all forms representing the primes $$p = 2, 3, 5, 7, 11, 19, 23, 29, 31, 53, 83.$$ For verifying that $h(-121271) = 525$, one would have to find the pointers for the other primes $p$ with $(\Delta/p) = +1$ and $p < 202$ as well, namely those of $$p = 47, 61, 73, 79, 89, \ldots, 197.$$ Since the pointers of all small primes are known, this is only a little additional work. The fact that the class number is odd then implies that $-\Delta = 121271$ is a prime. $\Delta = - 4 \cdot 265371653$ {#delta---4-cdot-265371653 .unnumbered} ------------------------------ Consider the forms $$Q_3 = (3,2, 88457218), \quad Q_{11} = (11,10,24124698), \quad \text{and} \quad Q_{13} = (13,10, 20413206).$$ Using a computer it is easily checked that $Q_3 \sim Q_{11}^{5} Q_{13}^{-3}$, but this relation was apparently not noticed by [Šimerka]{}. It would follow easily from $$\begin{aligned} Q = Q_{11}^5 & = (6591, -6568, 41899), & Q(0,1) & = 11 \cdot 13 \cdot 293, \\ Q = Q_{13}^3 & = (2197, -2174, 121326), & Q(1,-1) & = 3 \cdot 11 \cdot 13 \cdot 293, \end{aligned}$$ but perhaps the prime $293$ was not an element of [Šimerka]{}’s factor base. A computer also finds the following relations among the small powers of these three forms: $$\begin{aligned} Q_{11}^{13} Q_{13}^{11} & = (1058, 918, 251023); & {\operatorname{\check{s}}}( Q_{11}^{13} Q_{13}^{11} ) & \equiv 2 \cdot 23^{-2}, \\ Q_3^{14} Q_{11}^{12} Q_{13} & = (529, -140, 501657); & {\operatorname{\check{s}}}( Q_3^{14} Q_{11}^{12} Q_{13} ) & \equiv 23^{-2}.
\end{aligned}$$ Composition shows that $$\begin{aligned} Q_3^{-14} Q_{11} Q_{13}^{10} & \equiv Q_{11}^{13} Q_{13}^{11} Q_3^{-14} Q_{11}^{-12} Q_{13}^{-1} \\ & = (1058, 918, 251023) (529, 140, 501657) = (2, 918, 132791167), \end{aligned}$$ and squaring yields $$Q_3^{-28} Q_{11}^2 Q_{13}^{20} \sim Q_0.$$ Similarly, $$\begin{aligned} Q_3^3 Q_{11}^{15} Q_{13}^{11} & = (16389, -16010, 20102), & {\operatorname{\check{s}}}(Q_3^3 Q_{11}^{15} Q_{13}^{11}) & \equiv 2 \cdot 19 \cdot 23^2, \\ Q_3^{12} Q_{11}^{15} Q_{13}^8 & = (6859, 5028, 39611), & {\operatorname{\check{s}}}(Q_3^{12} Q_{11}^{15} Q_{13}^8) & \equiv 19^3, \\ \intertext{which implies} Q_3^{3} Q_{11}^{15} Q_{13}^{11} \cdot Q_{11}^{13} Q_{13}^{11} & \sim (19, 12, 13966931), & {\operatorname{\check{s}}}(Q_3^{3} Q_{11}^{28} Q_{13}^{22}) & \equiv 19, \end{aligned}$$ and so $$1 \equiv {\operatorname{\check{s}}}(Q_3^{3} Q_{11}^{28} Q_{13}^{22})^{3}/{\operatorname{\check{s}}}(Q_3^{12} Q_{11}^{15} Q_{13}^8) \equiv {\operatorname{\check{s}}}(Q_3^{-3} Q_{11}^{69} Q_{13}^{58}).$$ Eliminating $Q_3 \sim Q_{11}^{5} Q_{13}^{-3}$ from the relations $$Q_3^{-28} Q_{11}^2 Q_{13}^{20} \sim Q_3^{-3} Q_{11}^{69} Q_{13}^{58} \sim Q_0$$ then implies $$Q_{11}^{-138} Q_{13}^{104} \sim Q_0 \quad \text{and} \quad Q_{11}^{54} Q_{13}^{67} \sim Q_0,$$ hence $$Q_{11}^{14862} \sim Q_0.$$ It is then easily checked that $Q_3$ and $Q_{11}$ have order $14862$ in the class group, whereas $Q_{13}$ is a sixth power and has order $2477$. A quick calculation with [pari]{} reveals that $h(\Delta) = 14862$. [Šimerka]{} must have proceeded differently, as he records the relations $$Q_3^{119} Q_{11}^{11} Q_{13}^8 \sim Q_0, \quad Q_3^{1276} Q_{11}^{94} Q_{13}^{26} \sim Q_0, \quad Q_3^{385} Q_{11}^{31} Q_{13}^4 \sim Q_0.$$ It is not impossible that by playing around with small powers of $Q_3$, $Q_{11}$ and $Q_{13}$, [Šimerka]{}’s calculations can be reconstructed. 
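Both class numbers in these examples can be double-checked without any relation hunting, since for a negative discriminant $\Delta$ the class number equals the number of primitive reduced forms $(a,b,c)$ with $b^2 - 4ac = \Delta$, $-a < b \le a \le c$, and $b \ge 0$ whenever $a = c$. The following Python sketch (the function name `class_number` is ours) performs this count; it confirms $h(-121271) = 525$ almost instantly, while for a discriminant of the size of $-4 \cdot 265371653$ the naive enumeration already becomes painful, which is precisely where the relation method earns its keep.

```python
from math import gcd, isqrt

def class_number(D):
    """Count primitive reduced forms (a, b, c) of discriminant D < 0:
    b^2 - 4ac = D, -a < b <= a <= c, and b >= 0 whenever a == c."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    # a reduced form satisfies 3a^2 <= |D|, so a <= sqrt(|D|/3)
    for a in range(1, isqrt(-D // 3) + 1):
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a or (b < 0 and a == c):
                continue
            if gcd(gcd(a, b), c) == 1:   # primitive forms only
                h += 1
    return h

print(class_number(-23))      # 3
print(class_number(-121271))  # 525, agreeing with pari
```

Note that the loop bound $a \le \sqrt{|\Delta|/3}$ is the same $\sqrt{D/3}$ that appears in [Šimerka]{}’s remark on the primes whose pointers must be known.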
It is more difficult to reconstruct [Šimerka]{}’s factorization of $N = \frac19(10^{17}-1)$, since he left no intermediate results at all (apparently he was forced to shorten his manuscript drastically before publication). [Šimerka]{} knew that it is often not necessary to determine the class number for factoring integers; in [@Sim1 Art. 17] he observed: > Bei Zahlenzerlegungen nach dieser Methode findet man oft $f 2a = m^2$, oder es lässt sich aus den Bestimmungsgleichungen eine solche Form ableiten; dann hat man $\frac{f 2a}{m^2} = (\frac{fa}m)^2 = 1$, und es kann $fa:m$ blos eine Schluss- oder Mittelform sein. Gewöhnlich ist das letztere der Fall. [^8] To illustrate this idea we present an example that cannot be found in [Šimerka]{}’s article. Let $\Delta = -32137459$ and consider the form $Q = (5,1, 1606873)$ with discriminant $\Delta$. It is quickly seen that $Q^{26}(1,0) = 11^2$. This observation immediately leads to a factorization of $\Delta$: the form $Q^{26}$ represents $11^2$, hence $Q^{13}$ represents $11$, as does $Q_{11} = (11, 3, 730397)$. Thus $(Q^{13}Q_{11}^{-1})^2$ represents $1$, which implies that $Q^{13}Q_{11}^{-1}$ is ambiguous (see [@Sim1 S. 36]). In fact, $Q^{13}Q_{11}^{-1} = (1511, 1511, 5695)$, which gives the factorization $\Delta = - 1511 \cdot 21269$. Shanks ====== The factorization method based on the class group of binary quadratic forms was rediscovered by Shanks [@Sha], who, however, used a completely different method for computing the class group: he estimated the class number $h$ using truncated Dirichlet L-series and then found the correct value of $h$ with his baby step – giant step method. Attempts at speeding up the algorithm led, within just a few years, to Shanks’s discovery of the infrastructure and his square form factorization method SQUFOF. The factorization method described by [Šimerka]{} was rediscovered by Schnorr [@Schn]; the [Šimerka]{} map is defined in [@Schn Lemma 4] (see also [@Sey Thm. 
3.1]), although in a slightly different guise: a quadratic form $Q = (a,b,c)$ is factored into “prime forms” $I_p = (p,b_p,C)$, where $B = b_p$ is the smallest positive solution of $B^2 \equiv \Delta \bmod 4p$ for $\Delta = -N \equiv 1 \bmod 4$. Thus the equation corresponding to our $${\operatorname{\check{s}}}(Q) = \prod_{i=1}^n p_i^{\pm e_i} \quad \text{looks like} \quad Q = \prod_{i=1}^n (I_{p_i})^{\pm e_i}$$ in [@Sey], “where the plus sign in the exponent $e_i$ holds if and only if $b \equiv b_{p_i} \bmod 2p_i$.” Variations of this method were later introduced by McCurley and Atkin. [Šimerka]{}’s method is superior to Schnorr’s for calculations by hand since it allows him to use the factorizations of $Q(0,1)$ and $Q(1,\pm 1)$. The main difference between the two methods is that [Šimerka]{} factors the forms $Q_p^n$ for small prime numbers $p$ and small exponents $n$, whereas Schnorr factors products $Q_1^{n_1} \cdots Q_r^{n_r}$ of forms $Q_j = (p_j,*,*)$ for primes in his factor base and exponent vectors $(n_1,\ldots, n_r)$ chosen at random. [Šimerka]{}’s question in Section 4 concerning the number of primes $p$ such that the forms $(p,B,C)$ generate the class group was answered under the assumption of the Extended Riemann Hypothesis by Schoof [@Schoof Cor. 6.2], who showed that the primes $p \le c \log^2|\Delta|$ suffice; Bach [@Bach] showed that, for fundamental discriminants $\Delta$, we can take $c = 6$. The basic idea of combining relations, which is also used in factorization methods based on continued fractions, quadratic sieves or the number field sieve, is not due to [Šimerka]{} but rather occurs already in the work of Fermat and played a role in his challenge to the English mathematicians, notably Wallis and Brouncker. In this challenge, Fermat explained that if one adds to the cube $343 = 7^3$ all its proper divisors, then the sum $1 + 7 + 7^2 + 7^3 = 400 = 20^2$ is a square, and asked for another cube with this property. 
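Fermat’s observation, and the multiplicativity of the divisor sum $\sigma$ that underlies its solution, are easy to reproduce numerically. Here is a small Python sketch (the trial-division helper `sigma` is ours); it checks $\sigma(7^3) = 400 = 20^2$ and also that the number $751530$ discussed below answers the challenge:

```python
from math import isqrt

def sigma(n):
    """Sum of all divisors of n (including n itself), by trial division."""
    total = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            term, power = 1, 1
            while n % d == 0:       # strip the full power of d out of n
                n //= d
                power *= d
                term += power       # term = 1 + d + d^2 + ... + d^k
            total *= term           # sigma is multiplicative
        d += 1
    if n > 1:
        total *= 1 + n              # leftover prime factor
    return total

print(sigma(7**3))                  # 400, Fermat's example
s = sigma(751530**3)
print(isqrt(s)**2 == s)             # True: another cube with a square divisor sum
```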
Fermat’s solution is best explained by studying a simpler problem first, namely that of finding a number $n$ with $\sigma(n^2) = m^2$, where $\sigma(n) = \sum_{d \mid n} d$ is the sum of all divisors of a number. Making a table of $\sigma(p)$ for small prime powers $p$ one observes that $\sigma(2^4) = \sigma(5^2) = 31$, hence $\sigma(20^2) = 31^2$. The solution[^9] of Fermat’s challenge also exploits the multiplicativity of $\sigma(n)$: with little effort one prepares a table for the values of $\sigma(p^3)$ for small primes $p$ such as the following: $$\begin{array}{r|c} p & \sigma(p^3) \\ \hline 2 & 3 \cdot 5 \\ 3 & 2^3 \cdot 5 \\ 5 & 2^2 \cdot 3 \cdot 13 \\ 7 & 2^4 \cdot 5^2 \\ 11 & 2^3 \cdot 3 \cdot 61 \end{array} \qquad \qquad \begin{array}{r|c} p & \sigma(p^3) \\ \hline 13 & 2^2 \cdot 5 \cdot 7 \cdot 17 \\ 17 & 2^2 \cdot 3^2 \cdot 5 \cdot 29 \\ 19 & 2^3 \cdot 5 \cdot 181 \\ 23 & 2^4 \cdot 3 \cdot 5 \cdot 53 \\ 29 & 2^2 \cdot 3 \cdot 5 \cdot 421 \end{array} \qquad \qquad \begin{array}{r|c} p & \sigma(p^3) \\ \hline 31 & 2^6 \cdot 13 \cdot 37 \\ 37 & 2^2 \cdot 5 \cdot 19 \cdot 137 \\ 41 & 2^2 \cdot 3 \cdot 7 \cdot 29^2 \\ 43 & 2^3 \cdot 5^2 \cdot 11 \cdot 37 \\ 47 & 2^5 \cdot 3 \cdot 5 \cdot 13 \cdot 17 \end{array}$$ Then it is readily seen that $n = 751530 = 2 \cdot 3 \cdot 5 \cdot 13 \cdot 41 \cdot 47$ solves Fermat’s problem, since the corresponding entries of the table multiply to a square. Concluding Remarks {#concluding-remarks .unnumbered} ================== [Šimerka]{}’s contributions to the theory of quadratic forms and the factorization of numbers might well have remained unknown had his articles not been available online. In particular, his memoirs [@Sim1; @Sim2; @Sim59] can be accessed via google books[^10], and the articles that appeared in the journal Časopis are available on the website of the GDZ[^11] in Göttingen. I would also like to remark that a prerequisite for understanding the importance of [@Sim1] is a basic familiarity with composition of binary quadratic forms. I do not know where [Šimerka]{} acquired his knowledge of number theory. 
[Šimerka]{} was familiar with Legendre’s “Essai sur la théorie des nombres” and Gauss’s “Disquisitiones Arithmeticae”, as well as with publications by Scheffler [@Sch] on diophantine analysis[^12], and by Dirichlet [@Dir] and Lipschitz [@Lip] on the class number of forms with nonsquare discriminants. Since Lipschitz’s article appeared in 1857, [Šimerka]{} must have had access to Crelle’s Journal while he was teaching in Budweis. [Šimerka]{}’s article [@Sim1] contains other ideas that we have not discussed. In particular, in [@Sim1 Art. 12] he tries to get to grips with decompositions of noncyclic class groups into “periods” (cyclic subgroups); in this connection he gives the example $\Delta = -2184499$ with class group of type[^13] $(5,5,11)$. In [@Sim1 Art. 18], [Šimerka]{} solves diophantine equations of the form $pz^m = ax^2 + bxy +cy^2$. [99]{} E. Bach, [*Explicit bounds for primality testing and related problems*]{}, Math. Comp. [**55**]{} (1990), 355–380 R.D. Carmichael, [*On composite numbers $P$ which satisfy the Fermat congruence $a^{P-1} \equiv 1 \bmod P$*]{}, Amer. Math. Monthly [**19**]{} (1912), 22–27 K. Čupr, [*Málo známé jubileum*]{}, Časopis pro pěstování matematiky a fysiky [**43**]{} (1914), 482–489 L. Dickson, [*History of the Theory of Numbers*]{}, vol. I (1919); vol. II (1920); vol. III (1923) P.G.L. Dirichlet, [*Recherches sur diverses applications de l’analyse infinitésimale à la théorie des nombres*]{}, J. Reine Angew. Math. [**21**]{} (1839), 1–12 A. v. Ettingshausen, [*Die combinatorische Analysis als Vorbereitungslehre zum Studium der theoretischen höhern Mathematik*]{}, Vienna 1825 J.E. Hofmann, [*Neues über Fermats zahlentheoretische Herausforderungen von 1657*]{}, Abh. Preuss. Akad. Wiss. 1943, Nr. 9, 52pp P. Joubert, [*Sur la théorie des fonctions elliptiques et son application à la théorie des nombres*]{}, C.R. Acad. Sci. Paris [**50**]{} (1860), 774–779 S. Hambleton, F. Lemmermeyer, [*Arithmetic of Pell surfaces*]{}, Acta Arith. 
[**146**]{} (2011), 1–12 A. Kopáčková, [*Václav Šimerka a počátky matematické analýzy v české školské matematice*]{}, preprint A. Korselt, [*Problème Chinois*]{}, L’interméd. Math. [**6**]{} (1899), 142–143 R. Lipschitz, [*Einige Sätze aus der Theorie der quadratischen Formen*]{}, J. Reine Angew. Math. [**53**]{} (1857), 238–259 A. Pánek, [*Život a působení p. Václava Šimerky*]{}, Časopis pro pěstování matematiky a fysiky [**17**]{} (1888), 253–256 H. Scheffler, [*Die unbestimmte Analytik*]{}, Hannover 1853, 1854 C.P. Schnorr, [*Refined analysis and improvements on some factoring algorithms*]{}, J. Algorithms [**2**]{} (1982), 101–127 R. Schoof, [*Quadratic fields and factorisation*]{}, Computational Methods in Number Theory (R. Tijdeman & H. Lenstra, eds.), Mathematisch Centrum, Amsterdam, Tract 154, 1982, 235–286 M. Seysen, [*A probabilistic factorization algorithm with quadratic forms of negative discriminant*]{}, Math. Comp. [**48**]{} (1987), 757–780 D. Shanks, [*Class number, a theory of factorization and genera*]{}, Proc. Symp. Pure Math. [**20**]{}, AMS 1971 W. Šimerka, [*Die Perioden der quadratischen Zahlformen bei negativen Determinanten*]{}, Sitzungsber. Kaiserl. Akad. Wiss., Math.-Nat.wiss. Classe [**31**]{} (1858), 33–67; presented May 14, 1858 W. Šimerka, [*Die trinären Zahlformen und Zahlwerthe*]{}, Sitzungsber. Kaiserl. Akad. Wiss., Math.-Nat.wiss. Classe [**38**]{} (1859), 390–481 W. Šimerka, [*Lösung zweier Arten von Gleichungen*]{}, Sitz.ber. Wien [**33**]{} (1859), 277–284 W. Šimerka, [**]{} Arch. Math. Phys. [**51**]{} (1866), 503–504 V. Šimerka, [*Poznámka*]{} (Number theoretic note), Časopis [**8**]{} (1879), 187–188 V. Šimerka, [*Zbytky z arithmetické posloupnosti*]{} (On the remainders of an arithmetic progression), Časopis [**14**]{} (1885), 221–225 N. Sloane, [*Online Encyclopedia of Integer Sequences*]{}, A002997 at [http://oeis.org/A002997]{} J. 
Wallis, [*Commercium Epistolicum de questionibus quibusdam mathematicis nuper habitum*]{}, 1658 [^1]: In his German publications, [Šimerka]{} used the germanized name Wenzel instead of Václav. [^2]: Computing the powers of a form $Q$, one finds $Q$, $Q^2$, …, $Q^h \sim Q_0$ before everything repeats. The last form in such a “period” of reduced forms is thus always the principal form. [^3]: The word ambiguous was coined by Poullet-Delisle in the French translation of Gauss’s Disquisitiones Arithmeticae; it became popular after Kummer had used it in his work on higher reciprocity laws. [Šimerka]{} knew Legendre’s “diviseurs quadratiques bifides” as well as Gauss’s “forma anceps”. [^4]: This word is apparently borrowed from the book [@Ett] on combinatorial analysis by Andreas von Ettingshausen, professor of mathematics at the University of Vienna. Ettingshausen used the word “Zeiger” (see [@Ett p. 2]) as the German translation of the Latin word “index”. [Šimerka]{} refers to [@Ett] in [@Sim1 p. 55]. [^5]: Thus we have, for example, $(180,-17,193) = \frac{3^2 \times 5}{2^2}$ because $180 = 2^2 \times 3^2 \times 5$ and $-17 \equiv -1 \pmod 4$, $-17 \equiv 1 \pmod 6$, $-17 \equiv 3 \pmod {10}$. [^6]: For large determinants, or in cases where the preceding method is not successful, we take the indices of some small primes as unknowns, eliminate those numbers from the products of the determination equations, and seek to represent these unknown indices by the other primes in these determination equations. [^7]: As for the length $\theta$ of the period, one tries to find $fm = 1$, and then either $\theta = m$, or $\theta$ is a divisor of $m$. The most important members of the period are those belonging to small prime numbers. 
I have not yet found what the largest prime number is whose pointer must be known in order not to commit an error; in any case it is smaller than $\sqrt{D/3}$ for odd forms, and than $2 \sqrt{D/3}$ for the even forms, but most likely just a few prime numbers are sufficient. [^8]: In factorizations with this method one often finds $f2a = m^2$, or such a form can be derived from certain determination equations; then we have $\frac{f 2a}{m^2} = (\frac{fa}m)^2 = 1$, and $fa:m$ can only be an end or a middle form. Most often, the latter possibility occurs. [^9]: Sufficiently many hints can be found in Frenicle’s letter in [@Wallis XXXI], and in subsequent letters by Wallis and Schooten. See also the detailed exposition given by Hofmann [@Hof]. [^10]: See [http://books.google.com]{} [^11]: see [http://gdz.sub.uni-goettingen.de/dms/load/toc/?PPN=PPN31311028X]{} [^12]: This is an interesting book, which contains not only the basic arithmetic of the integers up to quadratic reciprocity, but also topics such as continued fractions in Gaussian integers, which are discussed using geometric diagrams, and the quadratic reciprocity law in ${{\mathbb Z}}[i]$. [^13]: [Šimerka]{} remarks that this is a “remarkably rare case”. In fact, the smallest discriminant with a noncyclic $5$-class group is $\Delta = -11199$, and the minimal $m$ with $\Delta = -4m$ and noncyclic $5$-class group is $m = 4486$.
--- abstract: | We prove the non-existence of recurrent words with constant Abelian complexity containing 4 or more distinct letters. This answers a question of Richomme et al.\ Keywords: Combinatorics on words, abelian complexity, words on graphs author: - | James Currie[^1] and Narad Rampersad[^2]\ Department of Mathematics and Statistics\ University of Winnipeg\ 515 Portage Avenue\ Winnipeg, Manitoba R3B 2E9 (Canada)\ [j.currie@uwinnipeg.ca](j.currie@uwinnipeg.ca)\ [n.rampersad@uwinnipeg.ca](n.rampersad@uwinnipeg.ca) title: Recurrent words with constant Abelian complexity --- Introduction ============ One of the central notions in combinatorics on words is that of the subword complexity of an infinite word. Richomme, Saari, and Zamboni [@RSZ09] have recently begun a systematic study of the Abelian analogue of the subword complexity of infinite words. In this paper we resolve one of the open problems from their study by showing the non-existence of recurrent words with constant Abelian complexity containing 4 or more distinct letters. Let $\Sigma$ be a finite alphabet and let $\Sigma^*$ be the set of all finite words over the alphabet $\Sigma$. Consider the equivalence relation $\sim$ on $\Sigma^*$, defined by $$u\sim v\mbox{ if }u\mbox{ is an anagram of }v.$$ Thus $1232\sim 2132$. We write $[u]$ for the equivalence class of $u$ under $\sim$. For example, $[121]=\{112, 121, 211\}$. We call $[u]$ an [**Abelian word**]{}. If $u$ is a factor of a word $w$, we call $[u]$ an [**Abelian factor**]{} of $w$. The length of an Abelian factor is the length of any one of its representatives. If $w$ is an infinite word, the [**subword complexity function**]{} of $w$ is the function $f : \mathbb{N} \to \mathbb{N}$, where for $m = 1,2,\ldots$, the value of $f(m)$ is the number of factors of $w$ of length $m$. 
Similarly, the [**Abelian complexity function**]{} of $w$ is the function $\tilde{f} : \mathbb{N} \to \mathbb{N}$, where for $m = 1,2,\ldots$, the value of $\tilde{f}(m)$ is the number of Abelian factors of $w$ of length $m$. An infinite word $w = w_0w_1\cdots$, where $w_i \in \Sigma$ for $i = 0,1,2,\ldots$, is [**ultimately periodic**]{} if there exist a non-negative integer $c$ and a positive integer $p$ such that $w_i = w_{i+p}$ for all $i \geq c$. A classical result of Morse and Hedlund [@MH40] shows that an infinite word $w$ is ultimately periodic if and only if its complexity function $f$ is eventually constant. If $w$ is not ultimately periodic, then $f(m) \geq m+1$ for all $m$. The well-studied [**Sturmian words**]{} are precisely the aperiodic words of minimal complexity (i.e., those words for which $f(m) = m+1$ for all $m \geq 1$). Coven and Hedlund [@CH73] showed that any Sturmian word has constant Abelian complexity. In particular, for any Sturmian word, one has $\tilde{f}(m) = 2$ for all $m \geq 1$. Sturmian words are necessarily over a binary alphabet; it is therefore natural to ask if over an $n$-letter alphabet, where $n \geq 3$, there is an infinite word $w$ with Abelian complexity function $\tilde{f}(m) = n$ for all $m \geq 1$. Without further qualification, this question is not very interesting, as one easily sees that the word $$123\cdots (n-1)nnnnnnnn\cdots$$ over the alphabet $\{1,2,\ldots,n\}$ has exactly $n$ Abelian factors of each length $m \geq 1$. This observation leads us to the following definition. We say that a word $w$ is [**recurrent**]{} if every factor of $w$ occurs infinitely often in $w$. Any Sturmian word is recurrent, so such words provide examples of recurrent words with constant Abelian complexity over a binary alphabet. Richomme, Saari, and Zamboni showed that there are recurrent words over a $3$-letter alphabet with exactly $3$ Abelian factors of each length $m \geq 1$, thereby answering a question of Rauzy. 
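These definitions are easy to experiment with on long finite prefixes. The sketch below (function names are ours) counts the distinct Parikh vectors among the factors of length $m$; it illustrates the Coven–Hedlund result on the Fibonacci word, a standard Sturmian word, and the trivial example $123\cdots(n-1)nnnn\cdots$ for $n = 4$:

```python
from collections import Counter

def abelian_complexity(w, m):
    """Number of Abelian factors of length m of the finite word w,
    i.e. the number of distinct Parikh vectors of its length-m factors."""
    return len({tuple(sorted(Counter(w[i:i + m]).items()))
                for i in range(len(w) - m + 1)})

def fibonacci_word(n):
    """Prefix of length n of the Fibonacci word, a Sturmian word:
    limit of s_1 = 0, s_2 = 01, s_{k+1} = s_k s_{k-1}."""
    a, b = "0", "01"
    while len(b) < n:
        a, b = b, b + a
    return b[:n]

w = fibonacci_word(2000)
print([abelian_complexity(w, m) for m in range(1, 8)])  # [2, 2, 2, 2, 2, 2, 2]

v = "1234" + "4" * 96   # the word 123...(n-1)nnn... for n = 4
print([abelian_complexity(v, m) for m in range(1, 8)])  # [4, 4, 4, 4, 4, 4, 4]
```

On a finite prefix this only approximates the complexity of the infinite word, but for these two examples a prefix of length $2000$ already exhibits every Abelian factor of each small length.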
They also posed a question of their own, namely, “Does there exist a recurrent word over a $4$-letter alphabet with exactly $4$ Abelian factors of each length?”, and conjectured that the answer should be “no”. We show that this is indeed the case. Moreover, our main result also applies to alphabets of size greater than $4$. \[main\] Let $n \geq 4$ be an integer. There is no recurrent word over an $n$-letter alphabet with exactly $n$ Abelian factors of each length $\geq 1$. Proof of Theorem \[main\] ========================= Fix a positive integer $n\ge 4$. Let $\Sigma$ be the alphabet $\{1, 2, 3, \ldots, n\}$. Let $w$ be a finite or infinite word. Consider the graph $G$ with vertex set $\Sigma$, and an edge $ij$ whenever at least one of $ij$ and $ji$ is a factor of $w$. Note that $G$ may contain loops, but not multiple edges.[^3] From now on suppose that $w$ is a fixed recurrent word, having constant Abelian complexity $n$. \[one cycle\]Graph $G$ consists of a spanning tree and one additional edge. Thus $G$ contains a unique cycle $C$ (which is possibly a loop). [[**Proof:** ]{}]{}Since $w$ has Abelian complexity $n$, it contains all $n$ letters. It follows that $G$ must be connected, since the successive letters of $w$ trace out a walk in $G$ visiting every vertex. This implies that $G$ contains a spanning tree. The spanning tree contains $n-1$ edges. Since the factors of $w$ of length 2 represent exactly $n$ Abelian words, $G$ contains exactly $n$ edges. $\Box$ Let $b\in\Sigma$. Define $T(b)=\{[abc]:abc\mbox{ is a factor of }w\mbox{ for some }a,c\in\Sigma\}$. We call an element of $T(b)$ a [**triple associated with**]{} $b$. Since $w$ is recurrent, every letter of $\Sigma$ occurs in $w$ as the middle letter of at least one factor of length 3. This means that each letter of $\Sigma$ has at least one triple associated with it. \[distinct triples\]Suppose that $a\ne b$ but triple $[abc]$ is associated with both $a$ and $b$. Then exactly one of the following occurs: 1. [$abc$ is a triangle in $G$]{} 2. 
[$a=c$ and $G$ contains the loop $aa$.]{} 3. [$b=c$ and $G$ contains the loop $bb$.]{} [[**Proof:** ]{}]{}Since $abc$ is associated to $b$, at least one of $abc$ and $cba$ is a factor of $w$. It follows that $ab$ and $bc$ are edges of $G$. Since $abc$ is a triple associated to $a$, at least one of $bac$ and $cab$ is a factor of $w$, so that $ca$ is also an edge of $G$. If $a$, $b$ and $c$ are distinct, then $G$ contains triangle $abc$. If two of them are the same, then since $a\ne b$, one of $bc$ and $ca$ is a loop.$\Box$ \[degree 3\] Suppose that $b$, $c$ and $d$ are distinct neighbours of $a$ in $G$. Then $|T(a)|\ge 2$. [[**Proof:** ]{}]{}Since $b$ is a neighbour of $a$, either $ba$ or $ab$ is a factor of $w$. By recurrence, either $bax$ or $xab$ will therefore be a factor of $w$ for some $x\in\Sigma$, and $[xab]$ is a triple associated with $a$. Similarly, $a$ must have associated triples $[cay]$, $[daz]$ for some letters $y,z\in \Sigma$. If $[bax]\ne[cay]$, then we are done. Otherwise, $x=c$ and $y=b$ so that $[daz]\ne[bax].\Box$ \[degree 2\]Suppose that $a$ has distinct neighbours $b$ and $c$ in $G$. Either $[bac]\in T(a)$ or $|T(a)|\ge 2$. [[**Proof:** ]{}]{}One of $ab$ and $ba$ is a factor of $w$. Suppose $ab$ is a factor of $w$. (The other case is similar.) By recurrence, $w$ has a factor $xab$ for some $x$. If $x=c$, then $[bac]$ is associated to $a$, and we are done. Otherwise, $[xab]\in T(a)$, and $T(a)$ also includes a triple involving $c.\Box$ Case 1: Cycle $C$ is an $m$-cycle, $m\ge 4$. {#case-1-cycle-c-is-an-m-cycle-mge-4. .unnumbered} -------------------------------------------- By Lemma \[one cycle\], $C$ is the unique cycle in $G$. This implies that $G$ contains no loops or triangles, so that triples associated with distinct vertices are distinct by Lemma \[distinct triples\]. At least one triple is associated with each of the $n$ vertices of $G$. 
Since the Abelian complexity of $w$ is exactly $n$, the total number of triples associated with the vertices of $G$ is $n$. We conclude that $|T(a)|=1$ for each $a\in\Sigma$. From Lemma \[degree 3\] we conclude that each vertex of $C$ has degree exactly 2, so that $C$ is a connected component of $G$. Since $G$ is connected, $G=C$ is an $n$-cycle. Without loss of generality let the vertices be connected in the natural order $123\cdots n1$. By Lemma \[degree 2\], we conclude that the triples associated with the vertices of $G$ are $[123],[234],[345],\ldots,[(n-2)(n-1)n],[(n-1)n1],[n12]$. Since $w$ corresponds to a walk on $G$ respecting the possible triples, we conclude that $w$ is a suffix of $(123\cdots n)^\omega$ or of $(n\cdots 321)^\omega$ and thus has period $n$. However, this means that $w$ contains exactly one factor of length $n$, up to anagrams. This is a contradiction. Case 2: Cycle $C$ is a loop. {#case-2-cycle-c-is-a-loop. .unnumbered} ---------------------------- Without loss of generality, let the loop edge be $11$. At least one triple is associated with each of the $n$ vertices of $G$. A triple of the form $[111]$ could only ever be associated to $1$. Also, if $b$ and $c$ are neighbours of $1$ and $[b1c]$ is associated to $b$, then $1bc$ or $cb1$ is a factor of $w$. This implies that $bc$ is an edge of $G$ so that $1bc$ is a triangle (if $b\ne c$) or $bc$ is a loop (if $b=c$). Since $11$ is the only cycle in $G$ by Lemma \[one cycle\], this is impossible. It follows that triples of the form $[111]$ or $[b1c]$ with $b$ and $c$ neighbours of $1$ can only ever be associated to $1$. It now follows that $1$ can be associated to at most a single triple of the form $[111]$ or $[b1c]$ where $b,c$ are neighbours of $1$; if $1$ is associated to two such triples $T_1$ and $T_2$, then each of the $n-1$ other vertices of $G$ is associated to a triple, and these triples are distinct from $T_1$ and $T_2$, and from each other by Lemma \[distinct triples\]. 
Then, however, we have at least $n+1$ distinct triples, violating the Abelian complexity of $w$. We make cases based on whether $1$ is associated to a triple of the form $111$ or $b1c$ where $b$ and $c$ are neighbours of $1$. ### Case 2a: Vertex $1$ is associated to a triple of the form $[111]$. {#case-2a-vertex-1-is-associated-to-a-triple-of-the-form-111. .unnumbered} Each vertex of $G-\{1\}$ is associated to some triple other than $[111]$, and those triples are distinct from each other and from $111$. Let $b$ be a neighbour of $1$. At least one of $b1$ and $1b$ is a factor of $w$. Since $1$ is not associated to any triple $[b1c]$ where $b$ and $c$ are neighbours of 1, it follows that $b11$ or $11b$ must be a factor of $w$. Since the Abelian complexity of $w$ is $n$, we conclude that $[11b]=[1b1]$ is the unique triple associated with $b$. From Lemma \[degree 2\], it follows that $1$ is the only neighbour of $b$. Graph $G$ is therefore the star with center $1$; the edges of $G$ are precisely $E(G)=\{1k:1\le k\le n\}$. Let $m$ be least such that $w$ has a factor $d1^me$ where $d,e\ne 1$. Without loss of generality, say that $21^m3$ is a factor of $w$. Let $b$ be any vertex of $G-\{1\}.$ Since $1b$ is an edge of $G$, $w$ has a factor $b1$ or $1b$, hence a factor $1^2b1^m$ or $1^mb1^2$. (Recall that 1 is the only neighbour of $b$ in $G$.) It follows that up to anagrams, the $n$ factors of $w$ of length $m+3$ are $121^m3$, $1^m21^2$, $1^m31^2$, $1^m41^2,\ldots$, $1^mn1^2$. In particular, $w$ has no factor $1^{m+2}$, and in any factor of the form $b1^kc$ with $b,c\ne 1$ and $k\le m$ we must have $\{b,c\}=\{2,3\}$ and $k= m$. Now consider the shortest factor of $w$ containing a letter from $\{2,3\}$ and a letter from $\{4, 5, \ldots, n\}$. By our last remark, it must have the form $b1^kc$ or $c1^kb$ where $b\in\{2,3\}$, $c\in\{4, 5, \ldots, n\}$ and $k\ge m+1$. Since $w$ has no factor $1^{m+2}$, $k=m+1$, and we have found an $(n+1)^{st}$ Abelian factor of $w$. 
This is a contradiction. ### Case 2b: Vertex $1$ is associated to exactly one triple of the form $[b1c]$ where $b$ and $c$ are neighbours of $1$. {#case-2b-vertex-1-is-associated-to-exactly-one-triple-of-the-form-b1c-where-b-and-c-are-neighbours-of-1. .unnumbered} In this case, $[111]$ is not associated with $1$; i.e., $111$ is not a factor of $w$. Each vertex of $G-\{1\}$ is again associated to some triple other than $[b1c]$, and these triples are distinct from each other. Let $d$ be a neighbour of $1$ other than $b$ or $c$. Vertex $1$ cannot be associated to another triple $[d1e]$ where $e$ is a neighbour of $1$. Therefore, at least one of $d11$ and $11d$ is a factor of $w$. It follows that $[11d]=[1d1]$ is the unique triple associated with $d$. We conclude that $1$ is the only neighbour of $d$; viz., $d$ is a leaf. We see also that (except possibly once at the beginning of $w$) $d$ always appears in $w$ in the context $11d11$. Now consider the shortest factor of $w$ containing a letter from $\{b,c\}$ and a neighbour of $1$ other than $b$ or $c$. By our last remark, and relabeling $b$ and $c$ if necessary, this factor must have the form $c1^kd$ or $d1^kc$, $k\ge 2$, $d$ some neighbour of $1$ other than $b$ and $c$. If $[11c]$ is not associated to $c$, then $[11c]$ and $[b1c]$ are associated only to 1, and we can count $n+1$ distinct triples associated to vertices of $G$. This violates the Abelian complexity of $w$. We conclude that $[11c]$ must be the unique triple associated with $c$, and $c$ is a leaf. ### Case 2bi: Vertex $b$ is a leaf. {#case-2bi-vertex-b-is-a-leaf. .unnumbered} In this case, each neighbour of 1 is a leaf; graph $G$ is the star with center $1$. The edges of $G$ are precisely $E(G)=\{1k:1\le k\le n\}$. Let $m$ be least such that $w$ has a factor $x1^md$ or $d1^mx$ where $x \in\{b,c\}, d\notin\{1,b,c\}$. Since $[b1c]$ is the unique triple associated only with 1, $m\ge 2$. On the other hand $111$ is not a factor of $w$, so $m=2$. 
Without loss of generality, assume that $b=2$, $c\ne 3$, and $2113$ or $3112$ is a factor of $w$. Since $1$ is the only neighbour of $2$, it follows that $12113$ or $31121$ is a factor of $w$. We have already seen that $11d11$ is a factor of $w$ if $d\ne 1, b, c$. It follows that, up to anagrams, the following $n+1$ factors of length 4 appear in $w$: $$121c, 2113, 1121, 1131, 1141, \ldots, 11n1.$$ This is a contradiction. ### Case 2bii: Vertex $b$ has degree at least 2. {#case-2bii-vertex-b-has-degree-at-least-2. .unnumbered} Since our Abelian complexity is $n$, and triple $[b1c]$ is associated only with $1$, for any vertex $d$ of $G-\{1\}$, $|T(d)|=1$. By Lemma \[degree 3\], deg$(d)\le 2$. We may therefore assume that the edges of $G$ are $$11, 1n, 1(n-1), 1(n-2), \ldots, 1(r+1), 1r, r(r-1), (r-1)(r-2), (r-2)(r-3),\ldots, 32$$ and the triples are $$[11n],[11(n-1)],[11(n-2)],\ldots$$ $$[11(r+1)],[(r+1)1r],[1r(r-1)],[r(r-1)(r-2)],\ldots,[432],[323].$$ (We have $c=r+1$, $b=r$.) It follows that up to anagrams, $w$ has the $n$ length 4 factors $$\begin{array}{ll}11n1, 11(n-1)1,\ldots, 11(r+1)1&\mbox{ ($n-r$ factors)}\\ 1(r+1)1r, (r+1)1r(r-1)&\mbox{ (2 factors)}\\ 1r(r-1)(r-2), r(r-1)(r-2)(r-3),\ldots, 5432, 4323&\mbox{ ($r-2$ factors)}. \end{array}$$ Now, however, consider the shortest factor of $w$ containing letters from both $\{n, n-1,\ldots, r+2\}$ and $\{r+1,r\}$. This must have the form $x1^ky$ or $y1^kx$ where $x\in \{n, n-1,\ldots, r+2\}$ and $y\in\{r+1,r\}$. Since $[b1c]$ is the unique triple associated only to $1$, we cannot have $k=1$. Since $111$ is not a factor of $w$, we must have $k=2$. This gives an $(n+1)^{st}$ length 4 Abelian word in $w$, namely $x11y$. This is a contradiction. ### Case 2c: Every triple associated with 1 has the form $[11b]$, $b\ne 1$. {#case-2c-every-triple-associated-with-1-has-the-form-11b-bne-1. .unnumbered} In this case, $w$ has no factors 111 or $b1c$ with $b,c\ne 1$. 
Therefore, if $b$ is any neighbour of $1$, either $b11$ or $11b$ is a factor of $w$. If $1$ has no non-leaf neighbour, $G$ is a star centered at $1$; for $2\le k\le n$, the only length three factors of $w$ containing $k$ are among $11k$, $1k1$ and $k11$. The triples associated with the vertices of $G$ are then precisely those of the form $[11k]$, $k\ne 1$, and there are only $n-1$ of them. This is a contradiction. Therefore, let $b$ be a non-leaf neighbour of $1$, and let $b'\ne 1$ be a neighbour of $b$. The shortest factor of $w$ containing $1$ and $b'$ must be $1bb'$ or $b'b1$, so $[1bb']$ is a triple associated with $b$. Now every vertex of $G-\{1\}$ has at least one triple associated with it, and all such associated triples must be distinct. Moreover, $b$ has triples $[11b]$ and $[1bb']$ associated with it. We have now listed $n$ distinct triples associated with the vertices of $G-\{1\}$. If 1 had another non-leaf neighbour $c\ne b$, then an $(n+1)^{st}$ triple $[1cc']$ would be associated to $c$. Since this is impossible, it follows that $b$ is the only non-leaf neighbour of $1$. Without loss of generality, let the neighbours of $1$ be exactly $2,3,\ldots, r=b$, and let $r+1$ be a neighbour of $r$. The $r$ triples $[112],[113],\ldots,[11r],[1r(r+1)]$ will be associated to vertices $1,2,\ldots, r$, while the triples associated with vertices $(r+1), (r+2),\ldots, n$ must be distinct from these and from each other. This means that exactly one triple is associated to each of vertices $(r+1), (r+2),\ldots, n$, so that by Lemma \[degree 3\], they each have degree at most 2. Without loss of generality we may thus assume that the edges of $G$ are $$11, 12, 13, \ldots, 1r, r(r+1), (r+1)(r+2), (r+2)(r+3),\ldots, (n-1)n$$ for some $r$, $1<r\le n$. The $n$ triples associated to vertices of $G$ must thus be precisely $$[112],[113],[114],\ldots,[11r],[1r(r+1)],[r(r+1)(r+2)],\ldots,[(n-2)(n-1)n], [(n-1)n(n-1)].$$ For $2\le k\le r-1$, the only neighbour of vertex $k$ is vertex $1$. 
Since $w$ has no factors 111 or $b1c$ with $b,c\ne 1$, it follows that $w$ has a factor $11k1$ for $2\le k\le r-1$. In addition to these $r-2$ factors of length 4, the specification of triples forces $w$ to have (up to reversal) the $n-r+1$ factors $$11r(r+1),1r(r+1)(r+2),r(r+1)(r+2)(r+3),(r+1)(r+2)(r+3)(r+4),$$ $$\ldots, (n-3)(n-2)(n-1)n, (n-2)(n-1)n(n-1).$$ In addition, since $11r$ or $r11$ is a factor of $w$, so is a word $c11r$ or $r11c$, where $c$ is some neighbour of $1$ in $G$. This brings the count of Abelian factors of length 4 to $(r-2)+(n-r+1)+1=n$. Suppose now that $d$ is a neighbour of $1$ other than $r$ and $c$. Then $w$ contains a factor $d11$ or $11d$, hence a word $d11e$ or $e11d$, where $e$ is a neighbour of $1$. This brings the number of length 4 Abelian factors of $w$ to $n+1$, which is a contradiction. It follows that the only neighbours of 1 are $r$ and $c$. (Note that perhaps $r=c$.) The length 4 Abelian factors of $w$ are thus $$[c11r], [11r(r+1)], [1r(r+1)(r+2)],[r(r+1)(r+2)(r+3)],[(r+1)(r+2)(r+3)(r+4)],\ldots,$$ $$[(n-3)(n-2)(n-1)n], [(n-2)(n-1)n(n-1)].$$ In the case that $c\ne r$, this forces $w$ to be a suffix of $$\left(c11r(r+1)(r+2)(r+3)(r+4)\cdots\right.$$ $$\left.(n-2)(n-1)n(n-1)(n-2)\cdots(r+4)(r+3)(r+2)(r+1)r11\right)^\omega,$$ and $w$ is periodic, with period $2n$. However, this means that $w$ contains exactly one factor of length $2n$, up to anagrams, which is a contradiction.\ In the case that $c= r$, this forces $w$ to be a suffix of $$\left(r(r+1)(r+2)(r+3)(r+4)\cdots\right.$$ $$\left.(n-2)(n-1)n(n-1)(n-2)\cdots(r+4)(r+3)(r+2)(r+1)r11\right)^\omega,$$ and again $w$ is periodic, with a contradiction. Case 3: Cycle $C$ is a $3$-cycle. {#case-3-cycle-c-is-a-3-cycle. .unnumbered} --------------------------------- Let the vertices of $C$ be $a, b, c$. By Lemma \[distinct triples\], the only triple which can be associated with more than one vertex is $[abc]$. 
Each vertex of $G-\{a,b,c\}$ is associated with some triple, and these must all be distinct. This accounts for $n-3$ triples. Since $G$ is connected, $w$ must contain some factor of the form $xab$, $xba$, $xbc$, $xcb$, $xca$ or $xac$, for some $x\notin \{a,b,c\}$. Suppose without loss of generality that $[xab]$ is associated to $a$ for some $x\not\in\{a,b,c\}$. Then $a$ has degree at least $3$, so that $|T(a)|\ge 2$ by Lemma \[degree 3\]. Since $|T(b)|,|T(c)|\ge 1$ but the total number of distinct triples associated to vertices of $G$ is $n$, at least two of $a$, $b$ and $c$ have an associated triple in common. That triple must be $[abc]$. So far, we have found that $[xab],[abc]\in T(a)\cup T(b)\cup T(c).$ ### Case 3a: $T(a)\cup T(b)\cup T(c)=\{[abc],[bax]\}$. {#case-3a-tacup-tbcuptcabcbax. .unnumbered} The shortest factor of $w$ starting with $x$ and ending in one of $b$ or $c$ will be $xab$. (Such a factor exists because $w$ is recurrent.) Let $uxab$ be a prefix of $w$. As the only triple associated to $b$ is $[abc]$, $w$ has $uxabc$ as a prefix. Again, $T(c)=\{[abc]\}$, so $uxabca$ is a prefix of $w$. The only triple in $T(a)$ having $c$ as one of its letters is $[abc]$, so $uxabcab$ is a prefix of $w$. Continuing in this way, we find that $w=ux(abc)^\omega$. This is impossible, since $x$ must appear in $w$ infinitely often, by recurrence. ### Case 3b: $|T(a)\cup T(b)\cup T(c)|=3$. {#case-3b-tacup-tbcuptc3. .unnumbered} The argument of [**Case 3a**]{} can still be applied if we add to $T(a)\cup T(b)\cup T(c)$ another triple from $bab$, $cbc$ or $aca$; none of these triples allows us to break the circular order $a-b-c-a$ on $\{a,b,c\}$ which commences with $xab$. Similarly, adding to $T(a)\cup T(b)\cup T(c)$ a triple $[ybc]$ where $y\notin\{a,c\}$ is a neighbour of $b$ would lead to the same contradiction. Again, a triple $[yca]$ where $y\notin\{b,a\}$ is a neighbour of $c$, or a triple $[yab]$ where $y\notin\{b,c\}$ is a neighbour of $a$, leads to a contradiction.
We may therefore assume $w$ contains a factor $aba$, $bcb$, $cac$, or a factor of the form $aby$, $bcy$ or $cay$, $y\not\in \{a,b,c\}$. Since $T(a)\cup T(b)\cup T(c)$ contains three distinct triples, and a distinct triple is associated to each of the $n-3$ vertices of $G-\{a,b,c\}$, we deduce that each vertex of $G-\{a,b,c\}$ is only associated with a single triple, and thus has degree at most 2 by Lemma \[degree 3\]. We recall that $G$ contains exactly one cycle. Graph $G$ therefore consists of the triangle $abc$, together with one or more paths radiating from its vertices. ### Case 3bi: Word $w$ contains a factor $aba$, $bcb$, $cac$. {#case-3bi-word-w-contains-a-factor-aba-bcb-cac. .unnumbered} In this case, the only vertex of $G-\{a,b,c\}$ which is adjacent to any of $a$, $b$ and $c$ is $x$. Graph $G$ consists of triangle $abc$ together with a single path attached at $a$. Relabel $x=a_1$, and let the edges of $G$ be $$ab, bc, ca, aa_1, a_1a_2, a_2a_3,\ldots, a_{r-1}a_r$$ where $r=n-3$.[^4] By Lemma \[degree 2\], the triples of $G$ are precisely $$[baa_1],[aa_1a_2],[a_1a_2a_3],[a_2a_3a_4],\ldots,[a_{r-2}a_{r-1}a_r],[a_{r-1}a_ra_{r-1}],[abc],T$$ where $T$ is one of $[aba]$, $[bcb]$ and $[cac].$ The only two triples containing both $a$ and $a_1$ are $[baa_1]$ and $[aa_1a_2]$. We conclude that $w$ contains Abelian factor $[baa_1a_2]$. Reasoning similarly, we find that for $r\ge 4$, the following $n-3$ length 4 Abelian factors must be in $w$: $$[baa_1a_2],[aa_1a_2a_3],[a_1a_2a_3a_4],\ldots,[a_{r-3}a_{r-2}a_{r-1}a_r],[a_{r-2}a_{r-1}a_ra_{r-1}].$$ (The stipulation $r\ge 4$ is only for notational convenience. If $r=3$, let $a_0=a$, and the 3 length 4 Abelian factors are $[baa_1a_2],[aa_1a_2a_3],[a_1a_2a_3a_2]$. If $r=2$, let $a_0=a$, $a_{-1}=b$, and the 2 length 4 Abelian factors are $[baa_1a_2],[aa_1a_2a_1].$ Finally, if $r=1$, a length 4 Abelian factor is $[baa_1a].$) Now consider a factor $v$ of $w$ of the form $a_1\{a,b,c\}^*a_1$, containing $ac$ or $ca$ as a factor.
Such a $v$ exists by recurrence. Since the only triple joining $abc$ and $G-\{a,b,c\}$ is $[baa_1]$, $v$ can be written in the form $a_1abv_1baa_1$ where $v_1\in\{a,b,c\}^*$ and $ac$ or $ca$ appears in $v_1$. The circular order of $abv_1ba$ changes exactly once, from $a-b-c-a$ to $a-c-b-a$, at triple $T$. Thus $v_1$ cannot both begin and end with $a$, lest $aba$ appear twice in $abv_1ba$. Thus $v_1$ must either begin or end with $c$, so that $a_1abc$ or $cbaa_1$ is a factor of $w$, yielding Abelian factor $[a_1abc]$ in either case. Notice that we have shown that $abv_1ba$ cannot both begin and end with a palindrome.[^5] It thus follows that $[tz]$ is also an Abelian factor of $w$, where $t$ is the palindrome $T$ and $z$ is the letter of $\{a,b,c\}$ not appearing in $t$. Suppose now that $v_1$ begins with $a$. The case where $v_1$ ends in $a$ is similar. Then $a_1aba$ is a factor of $w$, and we have enumerated all $n$ length 4 Abelian factors of $w$: the $n-3$ previously listed, plus $[a_1abc],[tz]=[abac]$ and $[a_1aba]$. The circular order of $\{a,b,c\}$ in $abv_1ba$ changes exactly once (with $aba$), so that $abv_1ba\in aba(cba)^+$. It follows that $bacba$ is a suffix of $abv_1ba$, and $w$ also contains Abelian factor $[bacb]$. This is a contradiction. We conclude that $v_1$ cannot begin or end with $a$, and hence must begin and end with $c$. Since $[ca]$ is an Abelian factor of $v$, we cannot have $abv_1ba=abcba$. Thus far, $w$ has Abelian factors $[a_1abc]$ and $[tz]$ in addition to the $n-3$ length 4 Abelian factors previously listed. Let $y$ be the central letter of palindrome $t$ and write $abv_1ba=v_2yv_3$ where $v_2y$ is a prefix of $(abc)^\omega$ and $yv_3$ is a suffix of $(cba)^\omega$. We must have $|v_2|\equiv|v_3|$ (mod 3). Also, $|v_2|,|v_3|\ge 2$. Suppose that $|v_2|>|v_3|$. Then $|v_2|\ge|v_3|+3\ge 5.$ In this case, $abv_1$ has a prefix $abcabc$, and $w$ contains Abelian factors $[abca]$, $[bcab]$, $[cabc]$.
One of these is $[tz]$, but this still gives $n+1$ length 4 Abelian factors of $w$, which is impossible. We similarly rule out $|v_2|<|v_3|$. Note that we may also assume that $|v_2|\le 4$. Since $|abv_1ba|>5$, we find that $3\le |v_2|=|v_3|\le 4$. If $|v_2|=3$, then $abv_1ba=abcacba$, $t=cac$ and $[abca]$ and $[bcac]$ are Abelian factors of $w$. We have now specified all length 4 Abelian factors of $w$; none of these is $[a_1t]$, the central letter in $t$ is not $c$ and the set of length 4 Abelian factors of $w$ turns out to be determined by $t$ and $|v|=2|v_2|+3$. Similarly, if $|v_2|=4$, then $abv_1ba=abcabacba$, $t=aba$ and $[abca]$ and $[bcab]$ are Abelian factors of $w$. Again the set of all length 4 Abelian factors of $w$ is determined by $t$ and $|v|$, none of the Abelian factors is $[a_1t]$ and $c$ is not the central letter in $t$. Since the two different possible lengths for $v$ give different sets of Abelian factors in $w$, it follows that $w$ contains exactly one factor $v$ of the form $a_1\{a,b,c\}^*a_1$ containing $[ac]$ as an Abelian factor. Now let $v'$ be any factor of $w$ of the form $a_1\{a,b,c\}^+a_1$. Word $v'$ must have prefix $a_1ab$ and suffix $baa_1$. However, since $[a_1t]$ is not an Abelian factor of $w$, $v'$ cannot have $a_1aba$ as a prefix or $abaa_1$ as a suffix. Word $v'$ therefore has $a_1abc$ as a prefix and $cbaa_1$ as a suffix. Again, $v'\ne a_1abcbaa_1$, since the central letter of $t$ is not $c$. We deduce that $v'$ has prefix $a_1abca$ or suffix $acbaa_1$, and must contain $[ac]$ as an Abelian factor. In summary, $w$ contains exactly one factor of the form $v=a_1\{a,b,c\}^*a_1$. If $r=1$, this shows that $w$ is periodic, giving a contradiction. If $r\ge 2$, our earlier specification of the $n-3$ triples of $w$ along the path $baa_1\cdots a_r$ shows that $w$ contains a single factor of the form $a_1(\Sigma-\{a,b,c\})^*a_1$, namely $a_1a_2\cdots a_{r-1}ra_{r-1}\cdots a_2a_1$. 
Since $aa_1a$ is not a factor of $w$, we again deduce that $w$ is periodic, giving a contradiction. ### Case 3bii: Word $w$ contains a factor $aby$, $bcy$ or $cay$, $y\not\in \{a,b,c\}$. {#case-3bii-word-w-contains-a-factor-aby-bcy-or-cay-ynotinabc. .unnumbered} We consider first the case where $w$ contains a factor $aby$, $y\not\in \{a,b,c\}$. Since $b$, $c$ and $x$ are neighbours of $a$, and $abc$ is the only cycle in $G$, we cannot have $y=x$. Graph $G$ consists of triangle $abc$ together with two paths attached at $a$ and $b$. Relabel $x=a_1$, $y=b_1$ and let the edges of $G$ be $$ab, bc, ca, aa_1, a_1a_2, a_2a_3,\ldots, a_{r-1}a_r, bb_1, b_1b_2, \ldots, b_{s-1}b_s$$ where $r+s=n-3$. By Lemma \[degree 2\], the $n$ triples of $G$ are $$[baa_1],[aa_1a_2],[a_1a_2a_3],[a_2a_3a_4],\ldots,[a_{r-2}a_{r-1}a_r],[a_{r-1}a_ra_{r-1}],[abc]$$ and $$[abb_1],[bb_1b_2],[b_1b_2b_3],[b_2b_3b_4],\ldots,[b_{s-2}b_{s-1}b_s],[b_{s-1}b_sb_{s-1}].$$ The following $n-3$ length 4 Abelian factors must be in $w$: $$[baa_1a_2],[aa_1a_2a_3],[a_1a_2a_3a_4],\ldots,[a_{r-3}a_{r-2}a_{r-1}a_r],[a_{r-2}a_{r-1}a_ra_{r-1}]$$ and $$[abb_1b_2],[bb_1b_2b_3],[b_1b_2b_3b_4],\ldots,[b_{s-3}b_{s-2}b_{s-1}b_s],[b_{s-2}b_{s-1}b_sb_{s-1}].$$ Let $v$ be a shortest factor of $w$ of the form $zuc$, where $z\in\{a_1,b_1\}$ and $u\in\{a,b,c\}^*$. A prefix of $v$ must be $a_1ab$ or $b_1ba$. Suppose $v$ has prefix $a_1ab$. Then $v\in a_1ab\{a,b,c\}^*c$. Letters $a,b,c$ must have circular order $a-b-c-a$ in $v$, since there are no palindromes in $v$ to change the circular order, and $v$ starts $a_1abc$. Let $p$ be a prefix of $w$ of the form $qa_1(abc)^j$ with $j$ as large as possible. If $j\ge 2$, then $abcabc$ is a factor of $w$, so that $w$ contains 4 more Abelian factors: $$[a_1abc],[abca],[bcab],[cabc].$$ This is impossible, since then $w$ has $n+1$ distinct length 4 Abelian factors. It follows that $j=1$, and $qa_1abcabb_1$ is a prefix of $w$. (Recall that the only triples associated with $a, b$ or $c$ are $[abc],[a_1ab],[abb_1]$.)
Now, however, $w$ contains Abelian factors $$[a_1abc],[abca],[bcab],[cabb_1],$$ again giving a contradiction. Now consider the case where $w$ contains a factor $bcy$, $y\not\in \{a,b,c\}$. Since $abc$ is the only cycle in $G$, $y$ is not a neighbour of $b$ or $a$. Since $ac$ is an edge of $G$, either $ac$ or $ca$ is a factor of $w$. Suppose $ac$ is a factor of $w$. (The other case is similar.) Recall that the only triples associated with one of $a$, $b$ or $c$ are $[abc],[xab]$ and $[bcy]$. The only one of these containing both $a$ and $c$ is $[abc]$. If $ac$ is a factor of $w$, then it must therefore be preceded and followed by $b$, and occurs in the context $bacb$. Since neither of $[xab]$ and $[bcy]$ is associated with $b$, $cb$ is followed by $a$. Again, $ba$ is preceded by $c$, so $ac$ occurs in the context $cbacba$. Since $w$ is recurrent, it cannot have $(cba)^\omega$ as a suffix. It follows that $w$ must have a factor $cbacbax$. This implies that $[cbac],[bacb],[acba],[cbax]$ are length 4 Abelian factors of $w$. As in previous cases, the paths attached to vertices $a$ and $c$ of triangle $abc$ furnish another $n-3$ distinct length 4 Abelian factors, giving a contradiction. The final case occurs when $w$ contains a factor $cay$, $y\not\in \{a,b,c\}$. In this case $G$ consists of a triangle with two disjoint paths attached at $a$. In the usual way, we find $n-3$ length 4 Abelian factors of $w$, each containing at least two path vertices (i.e. vertices of $G-\{a,b,c\}$). If $abcabc$ or $cbacba$ were a factor of $w$, $w$ would then contain 4 additional length 4 Abelian factors $[abca]$, $[bcab]$, $[cabc]$ and $[bcay]$, giving a contradiction. We therefore conclude that the only factor of $w$ of the form $x\{a,b,c\}^*y$ is $xabcay$, and the only factor of $w$ of the form $y\{a,b,c\}^*x$ is $yacbax$.
Thus, if $L_1$ is the leaf at the end of the path starting with $a-x$ and $L_2$ is the leaf at the end of the path starting with $a-y$, $w$ has only one factor of the form $L_1(\Sigma-\{L_1,L_2\})^*L_2$, and only one factor of the form $L_2(\Sigma-\{L_1,L_2\})^*L_1$, so that $w$ is periodic, oscillating between $L_1$ and $L_2$. The periodicity of $w$ gives a contradiction.$\Box$ [^1]: The author is supported by an NSERC Discovery Grant. [^2]: The author is supported by an NSERC Postdoctoral Fellowship. [^3]: For definiteness of notation, let us say that we never call $a$ a neighbour of itself; however, we will count a loop based at $a$ as contributing 1 to the degree of $a$. Thus the degree of a vertex $a$ in $G$ will be the number of distinct neighbours of $a$, plus the number of loops based at $a$; since we do not allow multiple edges, the number of loops based at $a$ is 0 or 1. [^4]: If $r=1$, we use the convention $a_0=a_2=a$. [^5]: Throughout, when we say “palindrome” we mean one of the three palindromes $aba, bcb, cac$.
--- author: - Michael Chiu - 'Kenneth R. Jackson[^1]' - 'Alexander Kreinin[^2]' date: 'August 29, 2017' title: Correlated Multivariate Poisson Processes and Extreme Measures --- Introduction {#sec:intro} ============ Analysis and simulation of dependent Poisson processes is an important problem with many applications in Insurance, Finance, Operational Risk modelling and many other areas (see [@Aue], [@Bock2], [@Chav], [@kreinin2], [@EmbPuc], [@Panj], [@Shev] and references therein). In the modelling of multivariate Poisson processes, the specification of the dependence structure is an intriguing problem. In some applications, such as Operational Risk, the realized correlations between components of multivariate Poisson processes can be negative, and these negative correlations cannot be ignored, as exemplified in the correlation matrix below. $$\begin{bmatrix} 1.0 & 0.14 & 0.29 & 0.32 & 0.15 & 0.16 & 0.03 \\ 0.14 & 1.0 & 0.55 & -0.12 & 0.49 & 0.52 & -0.16 \\ 0.29 & 0.55 & 1.0 & 0.11 & 0.27 & 0.17 & -0.31 \\ 0.32 & -0.12 & 0.11 & 1.0 & -0.12 & -0.23 & 0.19 \\ 0.15 & 0.49 & 0.27 & -0.12 & 1.0 & 0.49 & -0.17 \\ 0.16 & 0.52 & 0.17 & -0.23 & 0.49 & 1.0 & -0.02 \\ 0.03 & -0.16 & -0.31 & 0.19 & -0.17 & -0.02 & 1.0 \\ \end{bmatrix}$$ In the literature, several different bivariate processes with Poisson marginal distributions are available for applications in actuarial science and quantitative risk management. One of the most popular models is the common shock model [@CSM], in which several common Poisson processes drive the dependence between the components of the multivariate Poisson process. The resulting correlation structure is time invariant and cannot exhibit negative correlations. An alternative, more flexible approach to this problem is based on the Backward Simulation (BS) introduced in [@kreinin] for bivariate Poisson processes.
The BS of correlated Poisson processes and an approach to the calibration problem using transformations of Gaussian variables were proposed in [@kreinin2]. In [@kreinin], the idea of BS was extended to the class of multivariate processes containing both Poisson and Wiener components. It was also proved that the linear time structure of correlations is observed in both the Poisson and the Poisson-Wiener model. Further steps in the bivariate case were proposed in [@TBak], where the BS was combined with copula functions. This method allows one to extend the correlation pattern by using Marshall-Olkin type copula functions, which are simple to simulate. In this paper, we continue the analysis and development of the BS method for the class of multivariate Poisson processes. By a multivariate Poisson process we understand any vector-valued process all of whose components are (one-dimensional) Poisson processes. The idea of our approach is to use the relationship between the time structure of correlations and the extreme measures, that is, the joint distributions with maximal or minimal correlation coefficients of the components of the multivariate process at the terminal simulation time. We describe the class of admissible correlation structures given the parameters of the marginal Poisson processes, and exploit convex combinations of the extreme measures to represent the multivariate Poisson process with given correlations of the components. We believe that our approach can simplify the solution of the calibration problem and extend the variety of correlation patterns of multivariate Poisson processes. There is a connection between our problem and the Optimal Transport literature (see [@MKP1] for a general overview of the area and [@MKP_ruschendorf_1; @MKP_ruschendorf_2] for a more probabilistic focus).
Our computation of the extreme measures at the terminal simulation time can be viewed as a solution to a special multi-objective Monge-Kantorovich Mass Transportation Problem (MKP), with quadratic cost functions. However, this connection is not discussed in the present paper. In this paper, we are mainly concerned with the construction of the multivariate Poisson processes. The rest of the paper is organized as follows. In Section \[chiu\_sec:em\_monot\_2d\] we begin by discussing the background and motivation for the 2-dimensional problem. We introduce extreme measures and generalize the results of the bivariate problem to higher dimensions in Section \[sec:ejd\_hd\]. In Section \[sec:algor\] we describe a general algorithm for the computation of the joint distribution of the extreme measures. Section \[sec:calibration\] is concerned with the calibration problem. We discuss the simulation problem in Section \[sec:simulation\] and propose a Forward-Backward extension of the BS method. The paper is concluded with some directions for future research in Section \[sec:chiu\_future\_work\]. Extreme Measures and Monotonicity of the Joint Distributions {#chiu_sec:em_monot_2d} ============================================================= We begin with a description of the Common Shock Model (CSM) [@CSM] and the motivation of the approach proposed in [@kreinin2]. Afterwards, we discuss the results obtained in [@kreinin] for the case of two Poisson processes and describe the computation of the extreme measures in the case $J=2$. The CSM has become very popular within actuarial applications as well as in Operational Risk modeling [@Dian]. This model is based on the following idea. Suppose we want to construct two dependent Poisson processes. Consider three independent Poisson processes $\nu^{(1)}_t$, $\nu^{(2)}_t$, $\nu^{(3)}_t$ with the intensities $\lambda_1$, $\lambda_2$, $\lambda_3$. 
Let $N^{(1)}_t = \nu^{(1)}_t + \nu^{(2)}_t$ and $N^{(2)}_t = \nu^{(3)}_t + \nu^{(2)}_t$, which are also Poisson processes, formed by the superposition operation. Then, the Poisson processes $N^{(1)}_t$ and $N^{(2)}_t$ are dependent with the Pearson correlation coefficient $$\rho(N^{(1)}_t,N^{(2)}_t) = \frac{\lambda_2}{\sqrt{(\lambda_1 + \lambda_2)(\lambda_2 + \lambda_3)}}.$$ Clearly, the correlation coefficient can only be positive. A more advanced approach to the construction of negatively correlated Poisson processes is based on the idea of the backward simulation of the Poisson processes [@kreinin]. The conditional distribution of the arrival moments of a Poisson process, conditional on the value of the process at the terminal simulation time, $T$, is uniform. Then, using a joint distribution maximizing or minimizing correlation between the components at time, $T$, one can construct a Poisson process with a linear time structure of correlations in the interval $t\in [0, T]$. Thus, the problem of constructing the $2$-dimensional Poisson process with the extreme correlation of the components at time $T$ is reduced to that of random variables having Poisson distributions with the parameters $\lambda T$ and $\mu T$, where $\lambda$ and $\mu$ are parameters of the processes. It is not difficult to see that maximization (minimization) of the correlation coefficient of two random variables (r.v.), $X$ and $Y$, given their marginal distributions, is equivalent to maximization (minimization) of $\mathbb{E}[XY]$, if the r.v. have finite first and second moments and positive variances. The admissible range of the correlation coefficients can be computed using the Extreme Joint Distributions (EJD) Theorem in [@kreinin] (see Theorem \[thm:ejd\_2d\] in this section). 
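The common shock correlation formula above can be verified numerically. The following Python sketch (the intensities and the truncation level `kmax` are illustrative choices, with the truncation level taken large enough that the neglected Poisson tail mass is negligible) builds the joint distribution of $(N^{(1)}_t, N^{(2)}_t)$ at $t=1$ directly from the construction $N^{(1)} = \nu^{(1)} + \nu^{(2)}$, $N^{(2)} = \nu^{(3)} + \nu^{(2)}$ and compares the resulting correlation with $\lambda_2/\sqrt{(\lambda_1 + \lambda_2)(\lambda_2 + \lambda_3)}$:

```python
import math

def pois_pmf(lam, kmax):
    # truncated Poisson pmf; the tail beyond kmax is negligible for these lam
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(kmax + 1)]

def csm_joint(l1, l2, l3, kmax):
    # joint pmf of (N1, N2) = (v1 + v2, v3 + v2), v1, v2, v3 independent Poisson
    p1, p2, p3 = pois_pmf(l1, kmax), pois_pmf(l2, kmax), pois_pmf(l3, kmax)
    joint = {}
    for a, pa in enumerate(p1):
        for b, pb in enumerate(p2):
            for c, pc in enumerate(p3):
                key = (a + b, c + b)
                joint[key] = joint.get(key, 0.0) + pa * pb * pc
    return joint

def corr(joint):
    ex = sum(i * q for (i, j), q in joint.items())
    ey = sum(j * q for (i, j), q in joint.items())
    vx = sum(i * i * q for (i, j), q in joint.items()) - ex * ex
    vy = sum(j * j * q for (i, j), q in joint.items()) - ey * ey
    exy = sum(i * j * q for (i, j), q in joint.items())
    return (exy - ex * ey) / math.sqrt(vx * vy)

l1, l2, l3 = 0.7, 0.5, 1.1            # illustrative intensities, t = 1
rho_num = corr(csm_joint(l1, l2, l3, kmax=25))
rho_formula = l2 / math.sqrt((l1 + l2) * (l2 + l3))
```

The agreement reflects the identity ${\text{cov}}(N^{(1)}_t, N^{(2)}_t) = {\text{var}}(\nu^{(2)}_t) = \lambda_2 t$, which also makes explicit why the CSM correlation cannot be negative.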
The key statement, the characterization of the EJDs, is equivalent to the Frechet-Hoeffding theorem [@Fre] for distributions on the positive quadrant of the two-dimensional lattice, ${{\mathbb{Z}}}^{(2)}_{+}=\{(i, j): i,j=0,1,2,\dots\}$. However, taking into account the numerical aspect of the problem, we prefer to use equations, derived in [@kreinin], written in terms of the probability density function, not in terms of the cumulative distribution function. Given marginal distributions of the non-negative, integer-valued random variables $X_1$ and $X_2$, with finite first and second moments, there exist two joint distributions, $F^{*}(i, j)$ and $F^{**}(i, j)$, minimizing and maximizing the correlation, $\rho={\text{corr}}(X_1, X_2)$, respectively. \[def:chiu\_extrem\_measure\_2d\] The probability measures corresponding to the joint distributions $F^{*}$ and $F^{**}$ are called extreme probability measures. The EJD theorem in [@kreinin] allows one to construct the extreme measures $p^*$ and $p^{**}$, given marginal distributions of $X_1$ and $X_2$, with the minimal negative correlation $\rho^*$ and maximal positive correlation $\rho^{**}$, respectively. The extreme correlation coefficient uniquely defines the extreme measure. Given a probability measure, $p$, corresponding to the joint distribution of the vector $(X_1, X_2)$ on ${{\mathbb{Z}}}^{(2)}_{+}$, we define the functional $f_\rho(p)={\text{corr}}(X_1, X_2)$. Then we have $\rho^* = f_\rho(p^*)$ and $\rho^{**} = f_\rho(p^{**})$. The functional $f_{\rho}$ preserves convex combinations: since $p^*$ and $p^{**}$ have the same marginals, the means and variances of $X_1$ and $X_2$ do not change under mixing, so ${\text{corr}}(X_1, X_2)$ is linear in $\mathbb{E}[X_1X_2]$. Indeed, taking a convex combination of the extreme measures, $p= \theta p^* + (1-\theta)p^{**}$, $(0\le \theta\le 1)$, we obtain $$f_\rho (p) = \theta f_\rho(p^*) + (1-\theta) f_\rho(p^{**}).
\label{eq:lin_rho}$$ Thus, for any $\rho \in [\rho^*,\rho^{**}]$, we can find a probability measure $p$ for a joint distribution of the vector $(X_1,X_2)$ such that $f_\rho(p) = {\text{corr}}(X_1,X_2) = \rho$ and $p$ has the required marginal distributions for $X_1$ and $X_2$. Connection to Optimization Problem {#connection-to-optimization-problem .unnumbered} ================================== Computation of the extreme measures in the case $J=2$ was accomplished in [@kreinin] using a very efficient EJD algorithm having linear complexity with respect to the number of points in the support of the marginal distributions. It is interesting to note that this algorithm is applicable to a more general class of linear optimization problems on a lattice. In the case $J>2$, the corresponding optimization problem becomes multi-objective with $M=J(J-1)/2$ objective functions. Let us first recall the case $J=2$. Let $(X_1, X_2)$ be a random vector with support $\mathbb{Z}^{(2)}_{+}$ and given marginal probabilities ${{\mathbb{P}}}(X_1=i)=P_1(i)$ and ${{\mathbb{P}}}(X_2=j)=P_2(j)$. Denote $$h(p) := \mathbb{E}[X_1 X_2] = \sum^\infty_{i=0}\sum^\infty_{j=0} ij\,p(i,j) \label{eq:objective_fn_2D}$$ where $p(i,j)={{\mathbb{P}}}( X_1 = i, X_2 = j)$. The measure $p^{**}$ is the solution to the problem $ h(p)\to \max$ with the constraints shown below in (\[eq:optimization\_problem\_2D\]) on the marginal distributions of $p^{**}$. Similarly, the extreme measure $p^{*}$ is the solution to the optimization problem $ h(p)\to \min$ with the same constraints [@kreinin]. 
For the sake of brevity, we write these two problems as $$\begin{aligned} & \quad h(p) \rightarrow {\operatorname*{extr}}\label{eq:optimization_problem_2D} \\ \text{subject to} \nonumber \\ & \quad \sum^\infty_{j=0} p(i,j) = P_1(i), \quad i=0,1,\dots \nonumber \\ & \quad \sum^\infty_{i=0} p(i,j) = P_2(j), \quad j=0,1,\dots \nonumber \\ & \quad p(i,j) \geq 0 \quad i,j=0, 1, 2, \dots \nonumber\end{aligned}$$ where $\sum_{i=0}^\infty P_1(i)=\sum_{j=0}^\infty P_2(j)=1$. The symbol ${\operatorname*{extr}}$ denotes $\max$ in the case of measure $p^{**}$ and $\min$ in the case of $p^*$. It is not difficult to see that Problem (\[eq:optimization\_problem\_2D\]) is infinite dimensional; its numerical solution requires construction of a compact subset of the lattice for the computation of the approximate solution [@kreinin]. A solution to the infinite dimensional optimization problem (\[eq:optimization\_problem\_2D\]) is the joint distribution describing one of the extreme measures, given the marginal distributions of the random variables. The EJD algorithm discussed in [@kreinin] allows one to find a unique solution to the problem to any user specified accuracy. Taking the marginal distributions to be Poissonian, we find the extreme measures, $p^*$ and $p^{**}$, describing the joint distribution of the processes, $N_T=(N^{(1)}_T, N^{(2)}_T)$, with the extreme correlation of the components at time $T$. The convex combination of these measures can be calibrated to the desired value of the correlation coefficient, $\rho$. Then, applying the BS method, we obtain the sample paths of the processes. Note that the EJD algorithm is applicable to a more general class of linear optimization problems: there is no need to assume normalization conditions as long as $P_1(i)\ge 0$ and $P_2(j)\ge 0$ for all $i \geq 0$ and $j \geq 0$ and these functions are summable: $\sum_{i=0}^\infty P_1(i)< \infty$ and $\sum_{j=0}^\infty P_2(j)< \infty$.
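For illustration, the following Python sketch solves problem (\[eq:optimization\_problem\_2D\]) in closed form via the Frechet-Hoeffding comonotone and antimonotone couplings (stated as Theorem \[thm:ejd\_2d\] below) for two truncated and renormalized Poisson marginals, checks that both couplings preserve the marginals, and calibrates a convex combination of the two extreme measures to a target correlation. The intensities, the truncation level and the target value are illustrative choices:

```python
import math

def pois_pmf(lam, n):
    w = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(n)]
    s = sum(w)
    return [x / s for x in w]              # truncated and renormalised

def cdf(p):
    out, s = [], 0.0
    for x in p:
        s += x
        out.append(s)
    return out

def extreme_measure(p1, p2, maximal=True):
    # Frechet-Hoeffding couplings:
    #   p**(i,j) = [min(F1(i), F2(j)) - max(F1(i-1), F2(j-1))]^+
    #   p*(i,j)  = [min(F1(i), 1 - F2(j-1)) - max(F1(i-1), 1 - F2(j))]^+
    F1, F2 = cdf(p1), cdf(p2)
    out = {}
    for i in range(len(p1)):
        F1i, F1m = F1[i], (F1[i - 1] if i else 0.0)
        for j in range(len(p2)):
            F2j, F2m = F2[j], (F2[j - 1] if j else 0.0)
            if maximal:
                m = min(F1i, F2j) - max(F1m, F2m)
            else:
                m = min(F1i, 1.0 - F2m) - max(F1m, 1.0 - F2j)
            if m > 0.0:
                out[(i, j)] = m
    return out

def corr(p):
    ex = sum(i * q for (i, j), q in p.items())
    ey = sum(j * q for (i, j), q in p.items())
    vx = sum(i * i * q for (i, j), q in p.items()) - ex * ex
    vy = sum(j * j * q for (i, j), q in p.items()) - ey * ey
    exy = sum(i * j * q for (i, j), q in p.items())
    return (exy - ex * ey) / math.sqrt(vx * vy)

p1, p2 = pois_pmf(1.2, 30), pois_pmf(2.0, 30)
p_max = extreme_measure(p1, p2, maximal=True)    # p**, comonotone
p_min = extreme_measure(p1, p2, maximal=False)   # p*, antimonotone
rho_max, rho_min = corr(p_max), corr(p_min)

# calibration: mix the extreme measures to hit any rho in [rho_min, rho_max]
target = 0.25
theta = (rho_max - target) / (rho_max - rho_min)
mix = {k: theta * p_min.get(k, 0.0) + (1 - theta) * p_max.get(k, 0.0)
       for k in set(p_min) | set(p_max)}
```

Because both couplings have the same marginals, $f_\rho$ mixes linearly, so the correlation under `mix` equals `target` exactly (up to rounding).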
Monotone Distributions {#sec:montone_distr .unnumbered} ====================== Extreme measures are closely connected to monotone distributions in the case $J=2$. It was proved in [@kreinin] that the joint distribution is comonotone in the case of maximal correlation and antimonotone in the case of minimal (negative) correlation. Let us review the properties of extreme measures used in what follows. Consider a set $\mathcal{S} = \{ s_n \}_{n \geq 0} \subset \mathbb{R}^2$, where $s_n = (x_n,y_n)$, and define the two subsets $\mathcal{R}_+ = \{(x,y) \in \mathbb{R}^2 : x \cdot y \geq 0 \}$ and $\mathcal{R}_- = \{(x,y) \in \mathbb{R}^2 : x \cdot y \leq 0 \}$. The set $\mathcal{S}$ is comonotone if $s_i - s_j \in \mathcal{R}_+$ for all $i, j$; similarly, $\mathcal{S}$ is antimonotone if $s_i - s_j \in \mathcal{R}_-$ for all $i, j$. \[def:monotone\_sets\_2d\] We say that a distribution $P$ is comonotone (antimonotone) if its support is a comonotone (antimonotone) set. It is also useful to recall the following classical statement on monotone sequences of real numbers, usually attributed to Hardy.[^3] Consider two vectors $x \in \mathbb{R}^N$ and $y \in \mathbb{R}^N$. Their inner product is $$\langle x, y \rangle := \sum^N_{k=1} x_k y_k.$$ Denote by $\mathfrak{S}_N$ the set of all permutations of $N$ elements. \[lem:H\] For any monotonically increasing sequence, $x_1 \leq x_2 \leq \dots \leq x_N$, and a vector $y \in \mathbb{R}^N$, there exist permutations $\pi_+$ and $\pi_-$ solving the optimization problems $$\langle x, \pi_+ y \rangle = \max_{\pi \in \mathfrak{S}_N} \langle x, \pi y \rangle$$ and $$\langle x, \pi_- y \rangle = \min_{\pi \in \mathfrak{S}_N} \langle x, \pi y \rangle.$$ The permutations $\pi_+$ and $\pi_-$ sort $y$ in ascending and descending order, respectively. Lemma \[lem:H\] motivates the introduction of monotone distributions in the $2$-dimensional case.
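Lemma \[lem:H\] is the classical rearrangement inequality; for a small example it can be checked against brute force over all permutations (the vectors below are arbitrary illustrations):

```python
from itertools import permutations

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

x = [1, 2, 4, 7]        # monotonically increasing, as in the lemma
y = [3, -1, 0, 5]       # arbitrary

all_vals = [inner(x, p) for p in permutations(y)]
val_asc = inner(x, sorted(y))                  # pi_+ sorts y in ascending order
val_desc = inner(x, sorted(y, reverse=True))   # pi_- sorts y in descending order
```

Here `val_asc` attains the maximum of `all_vals` and `val_desc` the minimum, as the lemma asserts.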
\[thm:ejd\_2d\] The joint distribution $p^{**}$ for $X_1$ and $X_2$ having maximal positive correlation coefficient $\rho^{**}$, given marginal distributions $F_1(i)$ and $F_2(j)$, is comonotone. The probabilities $p^{**} (i,j)=\mathbb{P}(X_1=i, X_2=j)$ satisfy the equation $$\begin{aligned} p^{**} (i,j) & = [\min(F_1(i),F_2(j)) - \max(F_{1}(i-1),F_2(j-1))]^+ \quad i,j = 0,1,2,\dots \label{eq:mr_J} \end{aligned}$$ where $[\,x\,]^+ = \max(x,0)$ and $F_i(\cdot)$ denote the marginal CDFs, with $F_i(-1) = 0$. The joint distribution $p^*$ for $X_1$ and $X_2$ having minimal negative correlation coefficient $\rho^*$ is antimonotone. In this case $$p^*(i,j) = [\min(F_1(i),\bar{F}_2(j-1)) - \max(F_1(i-1),\bar{F}_2(j))]^+ \quad i,j = 0,1,2,\dots \label{eq:max_negat}$$ where $\bar{F}_i(j) = 1 - F_i(j)$ and $\bar{F}_i(-1)=1$. Theorem \[thm:ejd\_2d\] is equivalent to the Frechet theorem in the case where the marginal distributions are discrete. Poisson marginal distributions are a particular case of Theorem \[thm:ejd\_2d\]. This result is applicable to much more general classes of distributions. In particular, one can describe the joint probabilities corresponding to $p^*$ and $p^{**}$ in the case where the components of the vector have a negative binomial distribution. The EJD algorithm for computation of the joint probabilities is also applicable to more general cases. If both marginal distributions have finite second moments, the joint distribution can be approximated to any user specified accuracy. Extreme Measures in Higher Dimensions {#sec:ejd_hd} ===================================== Let us now generalize the main result, Theorem \[thm:ejd\_2d\], discussed in Section \[chiu\_sec:em\_monot\_2d\]. We consider a random vector $\vec{X}=(X_1,\dots,X_J)$ on the positive quadrant of the $J$-dimensional lattice, ${{\mathbb{Z}}}^{(J)}_+$. Each coordinate of $\vec{X}$ has a discrete distribution with support $\mathbb{Z}_+$.
We also assume that each random variable $X_k$, $k=1,2,\dots,J$, has finite second moment and positive variance. In this case, the correlation coefficients, $\rho_{k,l}={\text{corr}}(X_k, X_l)$, are defined for all $1\le k< l\le J$. We denote the marginal distribution of the r.v. $X_k$ by $F_k$: $$F_k(i)=\mathbb{P}(X_k\le i), \quad i\in \mathbb{Z}_+; \quad k=1, 2, \dots, J.$$ Let us now define the extreme measures on the $J$-dimensional lattice. If $J=2$, the extreme measures are described by the joint distributions maximizing and minimizing the correlation coefficient of $X_1$ and $X_2$; the corresponding probability density functions satisfy Theorem \[thm:ejd\_2d\]. If the number of components $J$ is at least $3$, the definition of the extreme measure is less obvious. Denote the (joint) distribution function of $\vec{X}$ by $F(\vec{i})$: $F(i_1, i_2, \dots, i_J)={{\mathbb{P}}}(X_1\le i_1, X_2\le i_2, \dots, X_J\le i_J)$, and the corresponding probability density function by $p(\vec{i})$. By $p_{k,l}(i_k, i_l)$ we denote the probability density function of the $2$-dimensional projection, $(X_k, X_l)$, of $\vec{X}$, $(1\le k < l\le J)$: $$p_{k,l}(i_k,i_l) = {{\mathbb{P}}}(X_k = i_k, X_l = i_l).$$ We say that the density $p(\vec{i})$, $$p(i_1, \dots, i_J)=\mathbb{P}(X_1= i_1, \dots, X_J= i_J), \quad i_k\in \mathbb{Z}_+, \quad k=1, 2, \dots, J,$$ determines an extreme measure on the $J$-dimensional lattice if and only if for all $k$ and $l$, $(1\le k< l\le J)$, the associated density $p_{k,l}$ determines an extreme measure on ${{\mathbb{Z}}}^{(2)}_{+}$ in the sense of Definition \[def:chiu\_extrem\_measure\_2d\]. \[def:extr\_meas\_gen\] Our goal is to describe the extreme measures given the marginal distributions, $F_k$, and compute the associated extreme correlation matrices, $\mathbf{\rho}=[\rho_{k,l}]$. Let us first find the number of extreme measures.
\[lem:numb\_ed\] For any given set of marginal distributions, $F_k$, on $\mathbb{Z}_+$ $(k=1, 2, \dots, J)$, the number of extreme measures is $N=2^{J-1}$. The proof of Lemma \[lem:numb\_ed\] for $J=2$ is obvious. Let us prove it for $J\ge 3$. For each $2$-dimensional projection $(X_k, X_l)$, the corresponding joint distribution must be either comonotone or antimonotone. Fix the first r.v., $X_1$. Each of the remaining random variables $X_2$, $X_3$, $\dots, X_J$ is either comonotone or antimonotone with $X_1$. Denote the number of comonotone r.v. by $J_c$; the number of r.v. antimonotone with $X_1$ then satisfies $$J_a=J-1 - J_c.$$ Since each of the $J-1$ remaining random variables can be placed in either group independently, and this choice determines the measure, the total number of extreme measures is $N=2^{J-1}$. Clearly, $N$ does not depend on the choice of the first r.v. Let us now introduce the monotonicity structure of the extreme measures. Take the first r.v., $X_1$, and consider the r.v. $X_2$, $X_3$, $\dots, X_J$. Define the vectors of binary variables $\vec{e^n}=(e_1, e_2, \dots, e_J)$, $n=1,\dots,N$, such that $e_1=0$ and, for $j=2, 3, \dots, J$, $$e_j= \begin{cases} 1,&\text{if $X_1$ and $X_j$ are antimonotone,}\cr 0,&\text{if $X_1$ and $X_j$ are comonotone.}\cr \end{cases}$$ We call $\vec{e^n}$ the monotonicity vector corresponding to the $n$-th extreme measure; its components are called monotonicity indicators. Figure \[chiu\_fig:extremal\_structure\] illustrates this concept. In this example, all coordinates but the last are comonotone with the first r.v., $X_1$. The last coordinate, $X_J$, is antimonotone. The monotonicity indicators in this case are $e_k=0$, for $k=1, 2, \dots, J-1$, and $e_J=1$.
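The counting argument can be made explicit: fixing $e_1=0$, the remaining $J-1$ indicators range over all binary patterns. A small sketch (the function name is ours):

```python
from itertools import product

def monotonicity_vectors(J):
    """Enumerate all monotonicity vectors e = (e_1, ..., e_J):
    e_1 = 0 by convention; e_j = 1 iff X_j is antimonotone with X_1.
    There are exactly 2**(J - 1) of them."""
    return [(0,) + tail for tail in product((0, 1), repeat=J - 1)]
```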
[Diagram omitted: correspondence between the values of the coordinates $X_1, X_2, \dots, X_J$ and $X'_1, X'_2, \dots, X'_J$, illustrating comonotone and antimonotone coordinates.]{data-label="chiu_fig:extremal_structure"} Optimization Problem: $J\ge 3$. {#optimization-problem-jge-3. .unnumbered} ------------------------------- Since each $2$-dimensional projection of the random vector $\vec{X}$ is associated with an extreme measure, the optimization problem in this case is multiobjective. The number of optimization criteria is $M=J(J-1)/2$, one for each pair of r.v.s $(X_i,X_j), \,\,\, 1 \leq i < j \leq J.$ The number of constraints is equal to the number of marginal distributions, $J$. The variables in this problem are the probabilities $$p(\vec{i})={{\mathbb{P}}}(X_1=i_1, X_2=i_2, \dots, X_J=i_J), \quad i_j\in \mathbb{Z}_+,$$ and, therefore, must satisfy the inequalities $0\le p(\vec{i})\le 1$.
Let us define the sets of integers $$\mathcal{I}_{k}=\{j: 1\le j\le J, \; j\neq k\}$$ and $$\mathcal{I}_{k,l}=\{j: 1\le j\le J, \; j\neq k, \; j\neq l\}.$$ Then the marginal probabilities, $P_k(i_k)$, can be written as $$P_k(i_k) = \mathbb{P}(X_k = i_k) = \sum_{i_j\ge 0,\; j\in \mathcal{I}_k} p(i_1, \dots, i_J), \quad i_k\in {{\mathbb{Z}}}_+,$$ where the sum runs over all coordinates $i_j$ with $j\in\mathcal{I}_k$. The probabilities of the $2$-dimensional projections $$p_{k, l}(i_k, i_l)={{\mathbb{P}}}(X_k=i_k, X_l=i_l), \quad 1\le k<l\le J, \quad i_k, i_l\in {{\mathbb{Z}}}_+,$$ are computed as $$p_{k, l}(i_k, i_l)= \sum_{i_j\ge 0,\; j\in \mathcal{I}_{k,l}} p(i_1, \dots, i_J).$$ Similarly, the objective functions, $h_{k,l}(p)=\mathbb{E}[X_k X_l]$, take the form $$h_{k,l}(p)= \sum_{i_k=1}^\infty \sum_{i_l=1}^\infty i_k i_l p_{k,l}(i_k, i_l), \quad 1\le k<l\le J.$$ The optimization problem can then be written as $$\begin{aligned} & h_{k,l}(p) \rightarrow \,\, \textrm{extr} \quad 1\leq k<l \leq {J}, \label{eq:multi_d_prob}\\ \text{subject to} \nonumber \\ & \sum_{i_j\ge 0,\; j\in \mathcal{I}_k} p(i_1, \dots, i_J) = P_k(i_k), \quad i_k \in {{\mathbb{Z}}}_+, \quad k=1,\dots,J, \nonumber\\ & {p\,(i_1,\dots,i_{J})}\geq 0, \nonumber\end{aligned}$$ where $P_j(\cdot)$ are the given marginal probabilities $(j=1, 2, \dots, J)$. The main theorem {#the-main-theorem .unnumbered} ---------------- Let us now formulate the main result of the paper. It is convenient to introduce the following notation.
$$\tilde{F}_j(i_j; e_j) = \begin{cases} F_j(i_j) \quad & \text{if} \,\,\, e_j = 0 \\ 1 - F_j(i_j) \quad & \text{if} \,\,\, e_j = 1 \end{cases}$$ where the marginal distributions, $F_j(\cdot)$, satisfy $$F_j(i) = \sum_{k=0}^{i} P_j(k).$$ \[thm:ejd\_nd\] Given marginal distributions $F_1$, $F_2$, $\dots, F_J$ on ${{\mathbb{Z}}}_+$ and a binary vector $\vec{e^n}$, the extreme measure with the monotonicity structure $\vec{e^n}$ is defined by the probabilities $$\begin{aligned} p^{\vec{e^n}}(\vec{i}) = \big[ \min(\tilde{F}_1 & (i_1- e_{1};e_{1}),\dots, \tilde{F}_J(i_J-e_{{J}};e_{{J}})) \label{eq:ejd_neq_dim} \\ & - \max(\tilde{F}_1(i_1+(e_{1}-1);e_{1}),\dots, \tilde{F}_J(i_J+ (e_{{J}}-1);e_{{J}})) \big]^+ \nonumber \end{aligned}$$ We give a sketch of the proof here for the general case $J \geq 2$. A more complete proof for the case $J=2$ is given in [@kreinin]. Let us first show that, if $J=2$, then Equation (\[eq:ejd\_neq\_dim\]) is equivalent to (\[eq:mr\_J\]), in the case of maximal correlation, and to (\[eq:max\_negat\]), in the case of minimal correlation. Indeed, in the first case, the distributions of $X_1$ and $X_2$ must be comonotone. Hence, $e_1=e_2=0$ and $\tilde F_k(i; e_k)= F_k(i)$ for $k=1, 2$ and all $i \geq 0$, so (\[eq:ejd\_neq\_dim\]) reduces to (\[eq:mr\_J\]). In the antimonotone case, $e_1 = 0$ but $e_2=1$. Thus, $\tilde F_1(i;e_1) = F_1(i)$, while $\tilde F_2(i_2-e_2;e_2)= 1-F_2(i_2-1) = \bar F_2(i_2-1)$ and $\tilde F_2(i_2+e_2-1;e_2)= 1-F_2(i_2) = \bar F_2(i_2)$, so (\[eq:ejd\_neq\_dim\]) reduces to (\[eq:max\_negat\]). Let us now consider the general case, $J\ge 3$. There are two groups of coordinates of $\vec{X}$: comonotone and antimonotone. Denote their indices by $$\mathcal{I_C} = \{j: e_j=0\}\,\,\, \text{and}\,\,\, \mathcal{I_A} = \{j: e_j=1\}.$$ Let us now generate a large sample from the distribution $p^{\vec{e}}$ and sort it in ascending order with respect to the first coordinate.
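A direct transcription of (\[eq:ejd\_neq\_dim\]) into code is straightforward. The sketch below (names are ours) evaluates $p^{\vec{e}}(\vec{i})$ from callables returning the marginal CDFs, with the convention $F_j(-1)=0$:

```python
def extreme_joint_prob(i, e, cdfs):
    """p^e(i): minimum of the shifted transformed CDFs minus their
    maximum, clipped at zero.  cdfs[j](k) must return F_j(k) with
    F_j(-1) = 0; e is the monotonicity vector."""
    def F_tilde(j, k):
        # \tilde F_j(k; e_j): F_j(k) if comonotone, 1 - F_j(k) if antimonotone
        return cdfs[j](k) if e[j] == 0 else 1.0 - cdfs[j](k)
    top = min(F_tilde(j, i[j] - e[j]) for j in range(len(i)))
    bot = max(F_tilde(j, i[j] + e[j] - 1) for j in range(len(i)))
    return max(top - bot, 0.0)
```

Summing these probabilities over the lattice recovers total mass one and the prescribed marginals.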
It was shown in [@kreinin] that, after sorting, the comonotone coordinates of $\vec{X}$ are permuted in ascending order while the antimonotone coordinates are permuted in descending order. Suppose that the indices $1 = k_1 < k_2< k_3<\dots<k_C$ belong to $ \mathcal{I_C}$ and the complementary set of indices is $\mathcal{I_A} =\{l_1, l_2, \dots, l_A\}$. A permuted sample is represented in (\[chiu\_eq:sample\_vectors\_marginals\]). $$\begin{aligned} X_1 &: \overbrace{0,\dots,0}^{N_{1}(0)},\dots,\overbrace{i-1,\dots,i-1}^{N_{1}(i-1)},\,\,\,\overbrace{i,i,\dots,i}^{N_{1}(i)},\dots\overbrace{k,k\dots,k}^{N_{1}(k)},\dots \nonumber \\ X_{\,k_2} &: \underbrace{0,0,\dots,0}_{N_{k_2}(0)},\dots, \underbrace{i-1,\dots,i-1}_{N_{k_2}(i-1)},\,\,\underbrace{i,\dots,i}_{N_{k_2}(i)},\,\,\dots, \nonumber \\ & \hspace{4.25cm} \vdots \label{chiu_eq:sample_vectors_marginals} \\ X_{\,l_A} & : \dots\underbrace{k,k,\dots,k,}_{N_{\,l_A}(k)}\underbrace{k-1,\dots,k-1}_{N_{\,l_A}(k-1)},\dots\underbrace{2,2,2,\dots2}_{N_{\,l_A}(2)},\dots \nonumber\end{aligned}$$ where $N_k(m)$ denotes the number of realizations of $m$ in the $k$th coordinate, $X_k$, of $\vec{X}$. The first position, $I_{k}^{\,C}(m)$, where the number $m$ appears in the sorted sample of the r.v. $X_{k}$ is $$I_{k}^{\,C}(m)= 1 + \sum_{i=0}^{m-1} N_{k}(i), \quad k \in \mathcal{I_C}.$$ The last position, $E_{k}^{\,C}(m)$, where the number $m$ appears in the sorted sample of the r.v. $X_{k}$ is $$E_{k}^{\,C}(m)=\sum_{i=0}^{m} N_{k}(i), \quad k\in \mathcal{I_C}.$$ As the sample size $N_S\to\infty$, we have $$\lim_{N_S\to\infty} \frac{N_k(m)}{N_S} = p_k(m) \quad \textbf{a.s.} \label{eq_Nkm_as}$$ Therefore, for $k\in \mathcal{I_C}$ $$\lim_{N_S\to\infty} \frac{I_k^{\,C}(m)}{N_S} = F_k(m-1) \quad \textbf{a.s.} \label{eq_Ikm_as}$$ and $$\lim_{N_S\to\infty} \frac{E_k^{\,C}(m)}{N_S} = F_k(m) \quad \textbf{a.s.}
\label{eq_Ekm_as}$$ In the case of the group of antimonotone coordinates, $l\in \mathcal{I_A}$, the first position, $I_{l}^{\,A}(m)$, where a number $m$ appears in the sorted sample of the r.v. $X_{l}$ is $$I_{l}^{\,A}(m)= 1 + N_S - \sum_{i=0}^{m} N_{l}(i), \quad l \in \mathcal{I_A}.$$ The last position, $E_{l}^{\,A}(m)$, where a number $m$ appears in the sorted sample of the r.v. $X_{l}$ is $$E_{l}^{\,A}(m)= N_S - \sum_{i=0}^{m-1} N_{l}(i), \quad l\in \mathcal{I_A}.$$ As $N_S\to\infty$, we have for $l\in \mathcal{I_A}$ $$\lim_{N_S\to\infty} \frac{I_l^{\,A}(m)}{N_S} = 1- F_l(m) \quad \textbf{a.s.} \label{eq_Ilm_as}$$ and $$\lim_{N_S\to\infty} \frac{E_l^{\,A}(m)}{N_S} = 1-F_l(m-1) \quad \textbf{a.s.} \label{eq_Elm_as}$$ The empirical measure of the event $$\{\vec{X}=\vec{i}\} =\Big\{ \bigcap_{k\in \mathcal{I_C}} \{X_k=i_k\} \Big\} \, \bigcap \Big\{ \bigcap_{l\in \mathcal{I_A}} \{X_l=i_l\}\Big\}$$ is $\mathbf{\mu_{N_S}}(\{\vec{X}=\vec{i}\})$, which coincides with the empirical measure of the intersection of the intervals $$\Big\{\bigcap_{k\in \mathcal{I_C}} [I_{k}^{\, C}(i_k), E_{k}^{\,C}(i_k)]\Big\} \bigcap \Big\{\bigcap_{l\in \mathcal{I_A}} [I_{l}^{\,A}(i_l), E_{l}^{\,A}(i_l)]\Big\}.$$ The latter can be written as follows. The right end of the intersection of the intervals is $$\mathcal{R}=\min\Big( \min_{k\in \mathcal{I_C}}( E_{k}^{\,C}(i_k) ), \min_{l\in \mathcal{I_A}}( E_{l}^{\,A}(i_l) ) \Big)$$ and the left end is $$\mathcal{L} = \max\Big( \max_{k\in \mathcal{I_C}} (I_{k}^{\,C}(i_k)), \max_{l\in \mathcal{I_A}} (I_{l}^{\,A}(i_l)) \Big).$$ Then we obtain $$\begin{aligned} & &\mathbf{\mu_{N_S}}(\{\vec{X}=\vec{i}\}) = \frac{(\mathcal{R}-\mathcal{L})^+}{N_S} \label{eq_edf} \end{aligned}$$ Note that the length of the intersection of intervals is $0$ in the case $\mathcal{R}\le \mathcal{L}$.
As $N_S\to\infty$, we obtain from Equations (\[eq\_Ikm\_as\])–(\[eq\_Elm\_as\]) $$\begin{aligned} \lim_{N_S\to\infty} \mathbf{\mu_{N_S}}(\{\vec{X}=\vec{i}\}) = \big[ \min( & \tilde{F}_1 (i_1- e_{1};e_{1}),\dots, \tilde{F}_J(i_J-e_{J};e_{J})) \\ - & \max(\tilde{F}_1(i_1+(e_{1}-1);e_{1}),\dots, \tilde{F}_J(i_J+ (e_{J}-1);e_{J})) \big]^+.\end{aligned}$$ Finally, note that $$\lim_{\, N_S\to\infty} \mathbf{\mu_{\,N_S}}(\{\vec{X}=\vec{i}\})=p^{\,\vec{e}}(\vec{i}) \quad \textbf{a.s.}$$ Thus (\[eq:ejd\_neq\_dim\]) is derived and the theorem is proved. EJD Algorithm in Higher Dimensions {#sec:algor} ================================== ### Approximation of Extreme Distributions {#approximation-of-extreme-distributions .unnumbered} In practice, the marginal distributions $F_j(k), \, (j=1,\dots,J)$ must be truncated, i.e., approximated by distributions $ \tilde{F}_j(k)$ with finite support, $k\in [0, I_*]$, such that $$\max_{i\leq I_*} | F_j(i) - \tilde{F}_j(i) | \leq \varepsilon, \quad 1 - F_j({I_*}) \leq \varepsilon, \quad \text{and for $k>I_*, \tilde{F}_j(k)=1$,}$$ where $F_j(n) = \sum^n_{i=0}\, p_j(i)$ and $\tilde F_j(n) = \sum^n_{i=0}\, \tilde p_j(i)$. It follows from Theorem \[thm:ejd\_nd\] that $\tilde{p}^{\,\vec{e^n}}(\vec{i})$ satisfies $$\sup_{i_1 \geq 0,\dots, i_J \geq 0} | \, p^{\,\vec{e^n}}(\vec{i}) - \tilde{p}^{\,\vec{e^n}}(\vec{i})\,| \leq \varepsilon. \label{ineq_p}$$ Moreover, if the second moments of the marginal distributions are finite then, for any pair of indices $l$ and $m$, $(1\le l< m\le J)$, the covariance $\operatorname{Cov}(X_l, X_m)$ is also approximated: $$\sup_{l, m }\Big\vert \, \sum i_ l i_m \Bigl( p^{\,\vec{e^n}}(\vec{i}) - \tilde{p}^{\,\vec{e^n}}(\vec{i})\Bigr) \, \Big\vert \leq 3\varepsilon. \label{ineq_cpv}$$ Inequalities (\[ineq\_p\]) and (\[ineq\_cpv\]) were derived in [@kreinin] in the case $J=2$, where we also explained how to choose $I_*$ given $\varepsilon$ and the second moments of the marginal distributions.
The same line of argument from [@kreinin] extends easily to the general case $J\ge 3$. These inequalities are used in the numerical example below illustrating the computation of the joint probabilities of the $3$-dimensional Poisson process. Let us now describe the Extreme Joint Distribution (EJD) algorithm, an efficient algorithm for the computation of the probabilities $p^{\vec{e^n}}(\vec{i})$ for $J\geq2$. A simpler version of this algorithm was given in [@kreinin] for $J=2$. The preliminary step, the truncation of the marginal distributions by distributions with finite support, is identical to that in [@kreinin]. The main step is the recursive computation of the probabilities $p^{\,\vec{e^n}}(\vec{i})$, which can be done as described in the algorithm below. Note that, in the algorithm, $p^{\vec{e}}(\vec{x})$ is assigned a value (in Step 5) only if $\vec{x}$ is in the support of $p^{\vec{e}}$ and the support point $\vec{x}$ is saved (in Step 3). If $\vec{x}$ is not a saved support point (i.e., not saved in Step 3), then $p^{\vec{e}}(\vec{x}) = 0$. To simplify the description of the algorithm below, we assume that all the marginal probabilities are positive. Step 0a. Set $k=0$\ Step 0b. For each $j=1:J$\ If $e_{j} = 1$,\ Set $F_j(i) = 1 - F_j(i)$\ Set $\Delta_j$ = -1 and $x_j^0 = \max \{i : P_j(i) > 0 \}$\ else\ Set $\Delta_j$ = 1 and $x_j^0 = 0$\ Step 0c. Set $z_0 = \min(F_1(0),\dots,F_{J}(0))$ and $p^{\, \vec{e}}(x_1^0,\dots,x_J^0)=z_0$\ Step 1. Set $k=k+1$\ Step 2. For each $j = 1:J$\ If $z_{k-1} = F_j(i_j)$ for some $i_j$,\ Set $x_j^k = i_j + \Delta_j$\ else\ Set $x_j^k = x_j^{k-1}$\ Step 3. Save the $k$-th support point $\vec{x}_k = (x_1^k,\dots,x_{J}^k)$\ Step 4. Set $z_k = \min(F_1(x_1^k),\dots,F_{J}(x_{J}^k))$\ Step 5. Set $p^{\,\vec{e}}(x_1^k,\dots,x_{J}^k) = z_k - z_{k-1}$\ Step 6.
If $z_k < 1$, go to Step 1; otherwise stop. ### Numerical Example {#numerical-example .unnumbered} We consider an example illustrating the computation of extreme measures with Poisson marginal distributions in the case $J=3$. We explore their support, joint probabilities, and resulting correlations. Henceforth, we shall refer to the extreme measures of a $3$-dimensional Poisson process with intensities $\boldsymbol\mu = (\mu_1, \mu_2,\mu_3) = (3,5,7)$ as the “extreme measure example”. The tolerance level for the marginal distributions is $\varepsilon=0.01$. We begin with the support of the distributions. As in the case $J=2$, the support of an extreme measure looks like a staircase and is sparse. Figure \[chiu\_fig:multi\_d\_support\] illustrates the supports of all four extreme measures of the example, where the associated monotonicity structures of the extreme measures are $\vec{e}^{\, 1}$, $\vec{e}^{\, 2}$, $\vec{e}^{\, 3}$, $\vec{e}^{\, 4}$: 1. $\vec{e}^{\, 1} = (0,0,0)$ corresponds to the extreme measure in which all components exhibit extreme positive correlation 2. $\vec{e}^{\, 2} = (0,1,0)$ corresponds to the extreme measure in which the second component has extreme negative correlation with the other coordinates 3. $\vec{e}^{\, 3} = (0,0,1)$ corresponds to the extreme measure in which the third component has extreme negative correlation with the other coordinates 4. $\vec{e}^{\, 4} = (0,1,1)$ corresponds to the extreme measure in which the first component has extreme negative correlation with the other coordinates Recall that the number of extreme measures for a given dimension $J$ is $N = 2^{J-1} = 4$ in this case (Lemma \[lem:numb\_ed\]). We also refer to extreme measures as extreme points. We display the $N=4$ extreme measures in blue in Figure \[chiu\_fig:multi\_d\_support\]. To highlight the monotonicity of the support of each extreme measure, we also show in Figure \[chiu\_fig:multi\_d\_support\] the $2$-dimensional projections onto the x-y, x-z and y-z planes.
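The recursive computation in Steps 0–6 can be mimicked by walking the merged breakpoints of the (transformed) marginal CDFs: between consecutive levels every coordinate is constant, so each level interval contributes one support point. The sketch below (names ours; finite-support marginals, with an interior point of each level interval used to read off the coordinates) is one way to realize this for $J\ge 2$:

```python
def ejd_support(pmfs, e):
    """Support points and probabilities of the extreme measure with
    monotonicity vector e, for marginals given as probability vectors."""
    # marginal CDFs
    cdfs = []
    for pmf in pmfs:
        F, c = [], 0.0
        for p in pmf:
            c += p
            F.append(c)
        cdfs.append(F)
    # merged breakpoint levels in (0, 1]: F_j(i) for comonotone
    # coordinates, 1 - F_j(i) for antimonotone ones
    levels = {1.0}
    for j, F in enumerate(cdfs):
        for v in F:
            w = v if e[j] == 0 else 1.0 - v
            if w > 1e-12:
                levels.add(round(w, 12))
    probs, lo = {}, 0.0
    for z in sorted(levels):
        u = 0.5 * (lo + z)  # interior point of the level interval (lo, z]
        point = []
        for j, F in enumerate(cdfs):
            # comonotone: smallest i with F_j(i) > u;
            # antimonotone: smallest i with F_j(i) > 1 - u
            target = u if e[j] == 0 else 1.0 - u
            point.append(next(k for k, v in enumerate(F) if v > target))
        key = tuple(point)
        probs[key] = probs.get(key, 0.0) + (z - lo)
        lo = z
    return probs
```

For the two-dimensional case this reproduces the closed-form probabilities of Theorem \[thm:ejd\_2d\].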
![The blue curve in each graph is the support of an extreme measure in the case $J=3$, with Poisson marginal distributions. The red, teal, and green curves represent the projection of the 3D support onto the x-y, x-z, and y-z planes. These four graphs completely describe the support of the extreme measures in the case $J=3$.[]{data-label="chiu_fig:multi_d_support"}](supports.eps){width="1.3\linewidth" height="10cm"} The resulting extreme correlation matrices are as follows: $${\mathbf C^{\vec{e}^{\, 1}}}= \begin{pmatrix} 1.0 & 0.93688 & 0.931861 \\ 0.93688 & 1.0 & 0.967188 \\ 0.931861 & 0.967188& 1.0 \\ \end{pmatrix}$$ $${\mathbf C^{\vec{e}^{\, 2}}}= \begin{pmatrix} 1.0 & -0.81193 & 0.931861 \\ -0.81193 & 1.0 & -0.90135 \\ 0.931861 & -0.90135 & 1.0 \\ \end{pmatrix}$$ $${\mathbf C^{\vec{e}^{\, 3}}}= \begin{pmatrix} 1.0 & 0.93688 & -0.84624 \\ 0.93688 & 1.0 & -0.90135 \\ -0.84624 & -0.90135 & 1.0 \\ \end{pmatrix}$$ $${\mathbf C^{\vec{e}^{\, 4}}}= \begin{pmatrix} 1.0 & -0.81193 & -0.84624 \\ -0.81193 & 1.0 & 0.967188 \\ -0.84624 & 0.967188& 1.0 \\ \end{pmatrix}$$ where $C^{\vec{e}^{\, i}}$ is the correlation matrix corresponding to the monotonicity structure defined by the vector $\vec{e}^{\, i}, (i=1,2,3,4)$. 
----------------- ----------------------------------- ----------------- ----------------------------------- $(i_1,i_2,i_3)$ $p^{\vec{e}^{\, 1}}(i_1,i_2,i_3)$ $(i_1,i_2,i_3)$ $p^{\vec{e}^{\, 2}}(i_1,i_2,i_3)$ (0,0,0) 0.0009 (0,10,0) 0.0000 (0,0,1) 0.0058 (0,9,0) 0.0002 (0,1,1) 0.0006 (0,8,0) 0.0009 (0,1,2) 0.0223 (0,7,0) 0.0034 (0,1,3) 0.0108 (0,6,0) 0.0120 (0,2,3) 0.0094 (0,5,0) 0.0332 (1,2,3) 0.0320 (0,5,1) 0.0029 (1,2,4) 0.0429 (0,4,1) 0.0902 (1,3,4) 0.0483 (0,3,1) 0.0563 (1,3,5) 0.0262 (0,3,2) 0.1242 (2,3,5) 0.0659 (0,2,2) 0.0446 (2,4,5) 0.0357 (1,2,2) 0.0553 (2,4,6) 0.1225 (1,2,3) 0.1708 (3,4,6) 0.0173 (1,1,3) 0.0532 (3,5,6) 0.0092 (1,1,4) 0.0885 (3,5,7) 0.1490 (2,1,4) 0.0795 (3,5,8) 0.0172 (2,1,5) 0.0494 (3,6,8) 0.0313 (2,0,5) 0.0514 (4,6,8) 0.0819 (2,0,6) 0.0036 (4,6,9) 0.0331 (3,0,6) 0.0468 (4,7,9) 0.0531 (3,0,7) 0.0145 (5,7,9) 0.0152 (4,0,7) 0.0071 (5,7,10) 0.0361 (4,0,8) 0.0081 (5,8,10) 0.0349 (4,0,9) 0.0001 (5,8,11) 0.0146 (5,0,9) 0.0026 (6,8,11) 0.0158 (5,0,10) 0.0005 (6,9,11) 0.0147 (6,0,10) 0.0003 (6,9,12) 0.0198 (6,0,11) 0.0002 (7,9,12) 0.0017 (7,0,11) 0.0000 (7,10,12) 0.0048 (7,0,12) 0.0001 ----------------- ----------------------------------- ----------------- ----------------------------------- : Support and joint probabilities of the extreme measure corresponding to monotonicity structures $\vec{e}^{\, 1}$ and $\vec{e}^{\, 2}$[]{data-label="table:3d_ex_1"} ----------------- ----------------------------------- ----------------- ----------------------------------- $(i_1,i_2,i_3)$ $p^{\vec{e}^{\, 3}}(i_1,i_2,i_3)$ $(i_1,i_2,i_3)$ $p^{\vec{e}^{\, 4}}(i_1,i_2,i_3)$ (0,0,13) 0.0000 (0,10,13) 0.0000 (0,0,12) 0.0001 (0,10,12) 0.0000 (0,0,11) 0.0002 (0,9,12) 0.0000 (0,0,10) 0.0008 (0,9,11) 0.0002 (0,0,9) 0.0027 (0,8,11) 0.0001 (0,0,8) 0.0081 (0,8,10) 0.0008 (0,0,7) 0.0216 (0,7,10) 0.0000 (0,0,6) 0.0504 (0,7,9) 0.0027 (0,0,5) 0.0514 (0,7,8) 0.0007 (0,1,5) 0.0494 (0,6,8) 0.0074 (0,1,4) 0.1680 (0,6,7) 0.0047 (0,1,3) 0.0151 (0,5,7) 0.0169 (1,1,3) 0.0381 
(0,5,6) 0.0191 (1,2,3) 0.1708 (0,4,6) 0.0313 (1,2,2) 0.0999 (0,4,5) 0.0590 (1,3,2) 0.0591 (0,3,5) 0.0419 (2,3,2) 0.0651 (0,3,4) 0.1386 (2,3,1) 0.0563 (0,2,4) 0.0294 (2,4,1) 0.0626 (0,2,3) 0.0151 (3,4,1) 0.0276 (1,2,3) 0.2089 (3,5,1) 0.0029 (1,2,2) 0.0172 (3,5,0) 0.0308 (1,1,2) 0.1418 (4,5,0) 0.0024 (2,1,2) 0.0651 (4,6,0) 0.0120 (2,1,1) 0.0638 (4,7,0) 0.0009 (2,0,1) 0.0550 (5,7,0) 0.0026 (3,0,1) 0.0305 (5,8,0) 0.0005 (3,0,0) 0.0308 (6,8,0) 0.0004 (4,0,0) 0.0153 (6,9,0) 0.0002 (5,0,0) 0.0031 (7,9,0) 0.0000 (6,0,0) 0.0005 ----------------- ----------------------------------- ----------------- ----------------------------------- : Extreme measures corresponding to monotonicity structures $\vec{e}^{\, 3}$ and $\vec{e}^{\, 4}$[]{data-label="table:3d_ex_2"} In Tables \[table:3d\_ex\_1\] & \[table:3d\_ex\_2\], we list the values of the joint probabilities $p^{\vec{e}}(\vec{i})$ for the extreme measures. Each table contains 2 of the 4 extreme measures. The columns are grouped such that they display the support and the corresponding joint probabilities corresponding to each example extreme measure. Calibration of Correlations {#sec:calibration} =========================== In the case $J = 2$, given a correlation coefficient $\rho$ in the admissible correlation range $[\rho^*, \rho^{**}]$, we can use the following approach to find a probability measure $p$ having correlation $\rho$ and satisfying the marginal constraints. The approach is as follows. First find the unique $w \in [0,1]$ such that $$\rho = w \rho^* + (1-w) \rho^{**}$$ Then set $p = w\, p^* + (1 - w)\, p^{**}$, where $p^*$ and $p^{**}$ are the extreme measures with correlations $\rho^*$ and $\rho^{**}$, respectively. Note that $p$ has correlation $\rho$ and that it also satisfies the marginal constraints, as it is a convex combination of $p^*$ and $p^{**}$, both of which also satisfy the marginal constraints. 
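The two-step recipe above can be sketched as follows (function names ours; joint pmfs represented as dictionaries mapping $(i,j)$ to probability):

```python
def calibration_weight(rho, rho_min, rho_max):
    """Unique w in [0, 1] with rho = w*rho_min + (1 - w)*rho_max."""
    if not rho_min <= rho <= rho_max:
        raise ValueError("rho is outside the admissible range")
    return (rho_max - rho) / (rho_max - rho_min)

def mix_measures(w, p_min, p_max):
    """Convex combination w*p_min + (1 - w)*p_max of two joint pmfs;
    it inherits the common marginals and has correlation
    w*rho_min + (1 - w)*rho_max."""
    support = set(p_min) | set(p_max)
    return {k: w * p_min.get(k, 0.0) + (1.0 - w) * p_max.get(k, 0.0)
            for k in support}
```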
Note also that, if $\rho$ is not in the admissible correlation range $[\rho^*, \rho^{**}]$, then it cannot be the correlation of a probability measure $p$ satisfying the marginal constraints. If $J>2$, the same idea is applicable. However, we now have a system of equations with $N_w$ weights to solve for a given correlation matrix $$\mathcal{C}_g = w_1\,\mathcal{C}_1 + \dots + w_{N_w}\,\mathcal{C}_{N_w}, \label{eq:calibration_n_dim_corr}$$ where the $\mathcal{C}_n$ are correlation matrices associated with the extreme distributions, $w_n \geq 0$ and $\sum_{n=1}^{N_w} w_n = 1$. Taking the extreme measures with the same set of marginal distributions, we construct the convex combination $$\label{eq:calibration_n_dim_measure} p^w = w_1 p^{e_1} \, + \dots + w_{N_w} p^{e_{N_w}},$$ where $p^w$ has correlation matrix $\mathcal{C}_g$ and satisfies the marginal constraints. The calibration problem is now reduced to finding a minimal $N_w$ to form a convex combination of extreme measures. Indeed, the number of extreme measures is $2^{J-1}$, while the number of correlation coefficients is only $M = J(J-1)/2$. In matrix form, (\[eq:calibration\_n\_dim\_corr\]) can be written as $$Aw = \hat{\mathcal{C}}_g,$$ where $A$ is of dimension ${M}$-by-${N}$, the $i^{\mathrm{th}}$ column of $A$ is a vectorized version of the upper triangular part of the extreme correlation matrix $\mathcal{C}_i$, and $\hat{\mathcal{C}}_g$ is a vectorized version of the matrix $\mathcal{C}_g$. As the dimensionality $J$ of the multivariate Poisson process increases, the system becomes increasingly underdetermined. To find the weights, $w_n$, one can solve the following constrained system of equations $$\begin{aligned} \label{eq:calibration_problem} & Aw = \hat{\mathcal{C}}_g \\ & \textbf{1}^T w = 1 \nonumber \\ & w_n \geq 0 \qquad n=1, 2, \dots, N. \nonumber\end{aligned}$$ An approach to solving (\[eq:calibration\_problem\]) is outlined on pages 376–379 of [@nocedal].
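Whatever solver is used, a candidate weight vector is easy to validate against the constrained system (\[eq:calibration\_problem\]); a small sketch (function name ours, matrices as lists of rows):

```python
def check_calibration(A, w, c_hat, tol=1e-9):
    """Check that w solves the constrained system: A w = c_hat,
    1^T w = 1, and w >= 0 componentwise."""
    if any(wn < -tol for wn in w):
        return False
    if abs(sum(w) - 1.0) > tol:
        return False
    for row, target in zip(A, c_hat):
        if abs(sum(a * wn for a, wn in zip(row, w)) - target) > tol:
            return False
    return True
```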
If (\[eq:calibration\_problem\]) does not have a solution, this implies that the correlation matrix $\mathcal{C}_g$ cannot be generated from a multivariate Poisson process with the prescribed marginal distributions. Once we have found a $w$ satisfying the constraints (\[eq:calibration\_problem\]), we can reduce the number of nonzero components in $w$ to $N_w \leq M+1$ using, for example, a technique similar to the one used in the proof of Carathéodory’s theorem, to obtain a vector of $N_w$ nonzero weights satisfying (\[eq:calibration\_n\_dim\_corr\]) and the positivity constraints on $w$. A matrix $\mathcal{C}$ is called admissible if it is a symmetric, positive semi-definite (PSD) matrix with ones on the diagonal and each entry satisfies $\rho_{ij}^* \leq c_{ij} \leq \rho_{ij}^{**}$, where $\rho_{ij}^*$ and $ \rho_{ij}^{**}$ are the extreme correlations for the $2$-dimensional problem for $(X_i, X_j)$. Notice that the correlation matrices corresponding to the extreme measures are admissible. \[thm:convex\_combo\_valid\_corr\_matrix\] A convex combination of admissible correlation matrices is also an admissible correlation matrix. This fact readily follows from the observation that a convex combination of PSD matrices is a PSD matrix and that, if all the matrices satisfy the correlation constraints, so does their convex combination. The probabilities $p^{\vec{e}}(\vec{i} )$ describing the extreme measures and their supports are very different from the case of independent r.v.’s $X_j$. In particular, if $\rho=0$, the support of the measure $p^w$ is the union of the supports of the $p^{\vec{e}}$. By adding an additional extreme point $p^0$ corresponding to the case of independent components of $\vec{X}$, one can obtain a more general solution. We do not discuss this problem further in this paper. Example Calibration {#example-calibration .unnumbered} ------------------- We continue with the example extreme measure (cf.
Section \[sec:algor\]) and attempt to calibrate to a target correlation matrix, $C_\ast$, given by $$C_{\ast}= \begin{pmatrix} 1.0 & -0.8 & -0.5 \\ -0.8 & 1.0 & 0.5 \\ -0.5 & 0.5& 1.0 \\ \end{pmatrix}$$ Recall that in our example $J = 3$, the Poisson marginal distributions have intensities $\boldsymbol\mu = (\mu_1,\mu_2,\mu_3) = (3,5,7)$, and there are $N = 2^{J-1} = 4$ extreme points. In this case, the constrained system corresponding to (\[eq:calibration\_problem\]) can be constructed from the unique entries of the correlation matrix corresponding to each extreme point of the example extreme measure given in Section \[sec:algor\] and takes the following form: $$\begin{pmatrix} 0.93688 & -0.81193 & 0.93688 & -0.81193 \\ 0.931861 & 0.931861 & -0.84624 & -0.84624 \\ 0.967188 & -0.90135 & -0.90135 & 0.967188 \\ 1 & 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} = \begin{pmatrix} -0.8 \\ -0.5 \\ 0.5 \\ 1 \end{pmatrix}$$ A unique solution to this is $w = (0.0287993, 0.205588, 0.0436342, 0.721979)$. Now let $$p_* = w_1 \cdot p^{\vec{e}^1} + w_2 \cdot p^{\vec{e}^2} + w_3 \cdot p^{\vec{e}^3} + w_4 \cdot p^{\vec{e}^4},$$ where $p^{\vec{e}^i}$ is the extreme measure associated with the extreme correlation matrix $C^{\vec{e}^i}$, $i = 1,2,3,4$, listed in Section \[sec:algor\]. Note that $p_*$ has correlation matrix $C_*$ and $p_*$ also satisfies the marginal constraints, since each of the $p^{\vec{e}^i}$, $i = 1,2,3,4$, satisfies the marginal constraints. Simulation {#sec:simulation} ========== Up until this point, we have discussed the computation of the multivariate Poisson distribution at some terminal time $T$ via the EJD algorithm. This allows us to achieve extreme correlations between the components of the multivariate Poisson process at time $T$. We also obtain bounds on the elements of the admissible correlation matrix.
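The positive semi-definiteness part of the admissibility check introduced above can be screened numerically. The sketch below (a pure-Python Cholesky-style test with a tolerance; the function name is ours) decides PSD-ness for a small symmetric matrix, e.g. a convex combination of admissible correlation matrices:

```python
def is_psd(C, tol=1e-10):
    """PSD test for a small symmetric matrix via Cholesky factorization
    with a tolerance: a (near-)zero pivot is accepted only if the rest
    of its column vanishes."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = C[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s < -tol:
            return False            # negative pivot: not PSD
        if s <= tol:
            # zero pivot: remaining entries of the column must vanish
            for i in range(j + 1, n):
                if abs(C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) > 1e-6:
                    return False
            continue
        L[j][j] = s ** 0.5
        for i in range(j + 1, n):
            L[i][j] = (C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return True
```

A full admissibility check would additionally verify the unit diagonal and the entrywise bounds $\rho_{ij}^* \leq c_{ij} \leq \rho_{ij}^{**}$.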
The computation of the extreme measures allows us to construct any admissible multivariate Poisson process.[^4] We briefly discuss the BS approach, which allows us to simulate correlated multivariate Poisson processes on $[0,T]$ having an admissible correlation matrix at time $T$. Finally, we introduce the Forward continuation of the BS method. This extension allows us to construct sample paths of the multivariate Poisson process on the whole time axis. Backward Simulation {#backward-simulation .unnumbered} ------------------- There are two general approaches to simulating the sample paths of multivariate Poisson processes—a Forward approach and a Backward approach. Under the Forward simulation approach, the Fréchet–Hoeffding theorem can be used to generate the inter-arrival times of the components; the attainable correlation boundaries for the components of the multivariate Poisson process are then tighter than the correlation boundaries attained using the BS approach. Furthermore, the time structure of correlation is richer in the Backward case. See [@kreinin] for a more detailed comparison. The Backward approach relies on the conditional uniformity of the arrival times of the Poisson process. More precisely, the conditional distribution of the (unordered) arrival moments, $T_i$, of a Poisson process in the interval $[0,T]$, conditional on the number of events in the interval, is uniform [@Feig]. The converse statement, characterizing the class of Poisson processes, is the foundation of the BS method [@kreinin]. Consider a process $N_t, (t\ge 0)$ defined as $$N_t = \sum^{N_*}_{i=1} \mathbbm{1}(T_i \leq t), \quad 0 \leq t \leq T,$$ where the $T_i$ are independent, identically distributed random variables uniformly distributed in the interval $[0, T]$. Notice that $N_T = N_*$. \[thm\_1d\] Let $N_*$ have a Poisson distribution with parameter $\lambda T$. Then $N_t$ is a Poisson process with intensity $\lambda$ in the interval $[0,T]$.
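Theorem \[thm\_1d\] translates directly into a simulation recipe: draw the terminal count first, then scatter the arrivals uniformly. A sketch (names ours; the Poisson variate is generated by CDF inversion):

```python
import math
import random

def backward_poisson_path(lam, T, rng):
    """Backward simulation on [0, T]: draw N_* ~ Poisson(lam*T) by CDF
    inversion, then N_* i.i.d. uniform arrival times, sorted."""
    u, k = rng.random(), 0
    p = math.exp(-lam * T)   # P(N_* = 0)
    c = p
    while u > c:
        k += 1
        p *= lam * T / k     # P(N_* = k) from P(N_* = k - 1)
        c += p
    return sorted(rng.random() * T for _ in range(k))
```

$N_t$ is then the number of arrivals not exceeding $t$, and $N_T$ equals the drawn terminal count.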
Let us now formulate the generalization of Theorem \[thm\_1d\]. Suppose that the coordinates of the random vector $$N_\ast =\bigl( N^{(1)}_\ast , \dots, N^{(J)}_\ast \bigr)$$ have Poisson distributions, $N^{(j)}_\ast \sim\Pois(\lambda_j T)$. Denote the correlation coefficient of $N^{(i)}_\ast$ and $N^{(j)}_\ast$ by $\rho_{ij}$. Consider the processes $$N^{(j)}_t = \sum_{i=1}^{N^{(j)}_\ast} {\mathbbm{1}}{\bigl(T_i^{(j)}\le t\bigr)}, \,\,j=1, 2, \dots, J,$$ where the random variables, $T_i^{(j)}, (i=1, 2, \dots, N^{(j)}_\ast)$, are mutually independent, uniformly distributed in the interval $[0, T]$. Then $\mathbf{N_t} =\bigl( N^{(1)}_t , \dots, N^{(J)}_t \bigr) $ is a multivariate Poisson process in the interval $[0, T]$ and $${\text{corr}}(N^{(i)}_t, N^{(j)}_t) = \rho_{ij} t T^{-1}, \quad 0\le t\le T. \label{eq_corr_t}$$ The proof can be found in [@kreinin]. Let us now formulate the BS method: - Given a finite vector of weights, $w_n$, $(n=1, 2, \dots, N_w)$, satisfying the conditions $w_n\ge 0$, $\sum_{n=1}^{N_w} w_n=1$, generate an index $n$ by sampling from the probability distribution defined by $w$, to choose an extreme measure, $p^{\vec{e^n}}$. - Generate a random vector $\mathbf{N_T}=(N_T(1), \dots, N_T(J))$ from the extreme measure $p^{\vec{e^n}}$. - Generate the arrival moments of the multivariate process $\mathbf{N_t}, (0\le t\le T)$. This can be accomplished by straightforward simulation of the uniform distribution and sorting the resulting samples of the random variables $T_j$ in ascending order. Forward Continuation of the Backward Simulation {#forward-continuation-of-the-backward-simulation .unnumbered} ----------------------------------------------- The BS technique allows for the construction of sample paths of a multivariate Poisson process in an interval, $[0, T]$. In this section we consider an extension of the technique, which we call Forward-Backward simulation. We outline this approach for $J=2$.
Consider a sequence of time intervals $[0, T)$, $[T, 2T)$, $\dots, [mT, (m+1)T]$. Suppose that a bivariate Poisson process, $(X_t, Y_t)$, has already been simulated in the interval $[0, T)$ using the BS technique. For any $\tau$, $0\le\tau < T$, the increments $X_{T+\tau}- X_T$ are independent of $X_T$ and $Y_{T+\tau}-Y_T$ are independent of $Y_T$. Let us define the joint distribution of the increments as $$(X_{T+\tau} - X_T, Y_{T+\tau}- Y_T) \eqod (\hat X_\tau, \hat Y_\tau), \quad 0<\tau\le T,$$ where $\hat X_\tau$ and $\hat Y_\tau$ are independent versions of $X_\tau$ and $Y_\tau$, respectively, $\hat X_\tau \eqod X_\tau$ and $\hat Y_\tau \eqod Y_\tau$. Then we find $$\operatorname{Cov}(X_{T+\tau}, Y_{T+\tau}) = \operatorname{Cov}(X_{T}, Y_{T}) + \operatorname{Cov}(X_{\tau}, Y_{\tau}).$$ Taking into account that $$\operatorname{Cov}(X_{\tau}, Y_{\tau})=\operatorname{Cov}(X_T, Y_T) \cdot \frac{\tau^2}{T^2},$$ we obtain $$\rho(T+\tau)=\rho(T) \frac{T^2+\tau^2 }{T(T+\tau) }.$$ In particular, we have $\rho(2T)=\rho(T)$ and $\operatorname{Cov}(X_{2T}, Y_{2T})=2\operatorname{Cov}(X_{T}, Y_{T})$. Suppose now that $\rho(t)$ is defined for all $t\le nT$, and consider the case $t=nT+\tau\in [nT, (n+1)T)$. We have $$\operatorname{Cov}(X_{nT}, Y_{nT})=n\operatorname{Cov}(X_{T}, Y_{T})$$ and $$\operatorname{Cov}(X_{nT+\tau}, Y_{nT+\tau})=\operatorname{Cov}(X_{T}, Y_{T}) \cdot \Big(n +\frac{\tau^2}{T^2}\Big).$$ This latter relation implies $$\rho(nT+\tau)=\rho(T) \frac{n+\tau^2\cdot T^{-2} }{n+\tau T^{-1} },$$ and we obtain asymptotic stationarity of the correlation coefficient: $$\lim_{n\to\infty} \rho(nT+\tau)=\rho(T) \quad \text{for all } \tau\in[0, T].$$ Thus, the processes $X_t$ and $Y_t$ exhibit asymptotically stationary correlations as $t\rightarrow\infty$. An illustration is given in Figure \[chiu\_fig:fwd\_cont\], where the maximal (red line) and minimal (blue line) values of the correlation coefficient, ${\text{corr}}(X_t, Y_t)$, are depicted.
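The behaviour of the correlation coefficient under forward continuation can be tabulated directly from the last two displays (the function name is ours; the value returned is the ratio $\rho(t)/\rho(T)$):

```python
def corr_ratio(t, T):
    """rho(t)/rho(T) for the forward continuation of backward
    simulation; on [0, T] the ratio is t/T, and for t = n*T + tau
    it equals (n + (tau/T)**2) / (n + tau/T)."""
    n, tau = divmod(t, T)
    n = int(n)
    if n == 0:
        return tau / T
    return (n + (tau / T) ** 2) / (n + tau / T)
```

The ratio dips below one inside each interval $(nT, (n+1)T)$ and returns to one at the endpoints, approaching a constant correlation as $n$ grows.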
It would be interesting to generalize this result to the class of mixed Poisson processes. The main difficulty is that the increments of mixed Poisson processes are not independent. ![Forward Continuation of Backward Simulation: ${\text{corr}}(X_t, Y_t)$, $\mu_1 = 3$, $\mu_2 = 5$. []{data-label="chiu_fig:fwd_cont"}](bwdfwd_combined){width="1\linewidth" height="9cm"} Final remarks {#sec:chiu_future_work} ============= We presented an approach to the simulation of multivariate Poisson processes in the case where the dimension of the problem is $J > 2$, and we described the admissible parameters for the calibration problem. We also extended the BS approach by introducing the Forward Continuation of BS. There are several directions for future research. One is to extend the EJD approach to more general processes, such as mixed Poisson processes and even multivariate jump-diffusion processes. Another avenue of future research concerns efficient solutions of the multivariate calibration problem. It would also be interesting to study the interplay between the optimization problem and the EJD algorithm for computing the probabilities of the extreme measures, and to find an interpretation of this algorithm in terms of the optimal transport problem. Exploring the synthesis of Forward and Backward simulation for more general processes is also worthwhile. Aue, F. and Kalkbrener, M. (2006). at work: Deutsche bank’s approach to quantifying operational risk. , 1(4):49–93. Bae, T. and Kreinin, A. (2017). A backward construction and simulation of correlated poisson processes. , 87(8):1593–1607. B[ö]{}cker, K. and Kl[ü]{}ppelberg, C. (2010). Multivariate models for operational risk. , 10(8):855–869. Chavez-Demoulin, V., Embrechts, P., and Ne[š]{}lehov[á]{}, J. (2006). Quantitative models for operational risk: extremes, dependence and aggregation. , 30(10):2635–2658. Duch, K., Jiang, Y., and Kreinin, A. (2014). 
New approaches to operational risk modeling. , 58(4):3–1. Embrechts, P. and Puccetti, G. (2006). Aggregating risk capital, with an application to operational risk. , 31(2):71–90. Feigin, P. D. (1979). On the characterization of point processes with the order statistic property. , 16(2):297–304. Fr[é]{}chet, M. (1960). Sur les tableaux dont les marges et des bornes sont donn[é]{}es. , pages 10–32. Kreinin, A. (2016). Correlated poisson processes and their applications in financial modeling. , pages 191–232. Lindskog, F. and McNeil, A. (2001). Poisson shock models: applications to insurance and credit risk modeling. , pages 1280–1289. Nocedal, J. and Wright, S. J. (2006). . Springer. Panjer, H. H. (2006). , volume 620. John Wiley & Sons. Powojowski, M. R., Reynolds, D., and Tuenter, H. J. (2002). Dependent events and operational risk. , 5(2):65–73. Rachev, S. T. and R[ü]{}schendorf, L. (1998a). , volume 1. Springer Science & Business Media. Rachev, S. T. and R[ü]{}schendorf, L. (1998b). , volume 2. Springer Science & Business Media. Shevchenko, P. V. (2011). . Springer Science & Business Media. Villani, C. (2008). , volume 338. Springer Science & Business Media. [^1]: This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada [^2]: Corresponding author. [^3]: This result motivates and is used in the proof of Theorem 2.2 in [@kreinin] and provides an explanation as to why one coordinate of the support always increases (decreases) in the comonotone (antimonotone) case. [^4]: That is a multivariate Poisson process with correlations between the components satisfying the admissible correlation bounds.
--- abstract: | The two-fragment electrodisintegration of $^4$He into proton and triton is calculated in Plane Wave Impulse Approximation (PWIA). The three- and four-nucleon wave functions involved are obtained by solving the Alt-Grassberger-Sandhas (AGS) integral equations, with the Malfliet-Tjon potential as the underlying NN-interaction. Our results are in remarkable agreement with the experimental data and, in contrast to alternative approaches, do not exhibit any dip in the five-fold differential cross section at a missing momentum of $\sim$ 450MeV/$c$.\ address: - 'Physics Department, University of South Africa, P.O.Box 392,Pretoria 0003, South Africa' - 'Physikalisches Institut, Universität Bonn, D-53115 Bonn, Germany' author: - 'S. A. Sofianos, G. Ellerkmann' - 'W. Sandhas' title: 'Integral Equation Results for the $^4$He(e,e$^\prime$p)$^3$H Reaction at High Missing Momenta' --- The two-fragment electrodisintegration process $^4$He(e,e$^\prime$p)$^3$H has been the subject of several experimental investigations for various kinematics (see, for example, [@Brand88; @Brand91; @Goff; @Leeuth; @Leeuwe98]). On the theoretical side quite some effort has been devoted to calculating this process. However, the exact treatment of four-nucleon electrodisintegration observables is computationally very demanding and, thus, has usually been simplified by approximations and model assumptions  [@Leeuwe98; @Schia90; @Laget94; @Howell97]. In Plane-Wave Impulse Approximation (PWIA) all these calculations exhibit a characteristic dip, actually zero, in the five-fold differential cross-section around a missing momentum of $\sim$ 450MeV/$c$, which does not show up in the experimental data. Laget [@Laget94] performed calculations including final state interaction (FSI) effects and meson exchange currents (MEC) by means of a Feynman diagrammatic approach. 
Although this resulted in a partial filling of the dip, these investigations also underestimate the data considerably in this region. Similar results were obtained when FSI was taken into account via an effective nucleon-trinucleon interaction [@Schia90; @Howell97; @Sand98]. In a completely different approach Nagorny [*et al.*]{}[@Nagorny] included the electromagnetic field within the strongly interacting system in a relativistic gauge invariant way, the FSI being incorporated via the pole contribution of the p$^3$H$\rightarrow$ p$^3$H scattering matrix [@Zay]. The agreement with the data is again fairly satisfactory, but the zero is exhibited as well [@Leeuwe98]. In detail, at missing momenta less than 300MeV/$c$ all calculations show a good agreement with the data [@Brand91; @Leeuwe98; @Howell97]. Surprisingly, the PWIA performs reasonably well in this region where the FSI could be expected to be more important than in the higher missing momenta region. In contrast, in the region $300$MeV/$c$$<Q<600$MeV/$c$ the results strongly depend on the way the FSI effects are included. For example, as pointed out in [@Leeuwe98] the Laget results underestimate the cross section by a factor of 4 and those of Schiavilla by a factor of 2. At missing momenta above 600MeV/$c$, where the MEC contribution is becoming important, the agreement with the data is again fair. We, therefore, conclude that the zero in the PWIA cross section is not necessarily a manifestation of strong FSI or MEC effects. Instead, one should look for other explanations, such as the dependence of the results on the model used, the NN forces employed, the determination of the bound state wave functions etc. In the present work the wave functions involved were calculated within the exact three- and four-nucleon AGS formalism [@ags3; @ags4]. 
We mention that the same wave functions have successfully been employed already in calculations of the two-fragment photodisintegration of the $\alpha$–particle [@ell1; @ell2]. We consider the two-fragment reaction in which the scattered electron and the ejected nucleon are measured in coincidence. The corresponding electron-proton coincidence cross section is given by $$\frac{{\rm d}^5\sigma}{{\rm d}E_f d\Omega_p {\rm d}\Omega_e} = \frac{\sigma_{\rm M}}{(\hbar c)^3 (2\pi)^3} \frac{\rho_f}{4E_iE_f\cos^2 \displaystyle{\frac{\theta}{2}}} |{\cal M}({\bf q})|^2 \label{tcross}$$ where $\sigma_{\rm M}$ is the Mott differential cross section, $$\sigma_{\rm M} = \frac{e^4\cos^2 \displaystyle{\frac{\theta}{2}}}{4 E_i^2 \sin^4 \displaystyle{\frac{\theta}{2}}}\,.$$ $E_i(E_f)$ is the energy of the incoming (outgoing) electron and $\rho_f$ is the relativistic density of states. The transition matrix, properly antisymmetrized with respect to the four nucleons [@boe80], is given by $${\cal M}({\bf q}) = 2 \; ^{(-)}\langle{\bf q};\Psi_{III}|H|\Psi_{IV}\rangle\,, \label{tmatrix}$$ where $H$ is the Hamiltonian describing the interaction between the electron and the nucleons. The ejected proton moves away with momentum ${\bf q}$ with respect to the residual three-nucleon bound state $|\Psi_{III}\rangle$. The kinematics of this process is shown in Fig. 1. The Hamiltonian for the interaction between an electron and four nucleons is that of McVoy and van Hove [@McVoy], which has been previously employed in the electrodisintegration of the trinucleon system by Lehman and collaborators [@Lehman_I; @Lehman] and by Epp and Griffy [@Epp]. 
This Hamiltonian, correct to the order of $\hbar^2 Q^2/M^2c^2$, is $$\begin{aligned} H & = & -\frac{4\pi e^2}{q_\mu^2}\langle v_f|\sum_{j=1}^4 \left\{F_{1N}(q_\mu^2)~e^{-i {\bf Q} \cdot {\bf x}_j} \phantom{\frac{q^2}{8M}} \right.\nonumber \\ & - & \frac{F_{1N}(q_\mu^2)}{2M} [({\bf p}_j \cdot \mbox{\boldmath $\alpha$})~ e^{-i {\bf Q} \cdot {\bf x}_j} + e^{-i{\bf Q} \cdot {\bf x}_j}~ ({\bf p}_j \cdot \mbox{\boldmath $\alpha$})] \nonumber\\ & - & i \left[ \frac{F_{1N}(q_\mu^2) + \kappa F_{2N}(q_\mu^2)}{2M}\right] \mbox{\boldmath $\sigma$}_j\cdot ({\bf x}_j \times \mbox{\boldmath $\alpha$})~ e^{-i {\bf Q} \cdot {\bf x}_j} \nonumber\\ & + & \left. \frac{q_\mu^2}{8M^2}~[F_{1N}(q_\mu^2) + 2\kappa F_{2N}(q_\mu^2)]~ e^{-i {\bf Q} \cdot {\bf x}_j} \right \} |u_i\rangle\,. \label{hamiltonian} \end{aligned}$$ Here ${\bf x}_j$ and ${\bf p}_j$ are the position and momentum operators of the $j$-th nucleon, ${\mbox{\boldmath $\sigma$}}_j$ is the nucleon spin operator, ${\mbox{\boldmath $\alpha$}}$ is the Dirac matrix acting on the free electron spinors $|v_i\rangle $ and $|v_f\rangle$, while $q_\mu^2$ is the exchanged four-momentum squared. $F_{1N}$ and $F_{2N}$ are the form factors of the nucleon, $\kappa$ is the anomalous moment of the nucleon in nuclear magnetons, and $M$ is the nucleon mass. For proton knock-out, the transition matrix Eq. 
(\[tmatrix\]) reads $${\cal M} = - \langle v_f| v_i\rangle {\cal M}_Q + \langle v_f|{\mbox{\boldmath $\alpha$}}|v_i\rangle \cdot \left ( {\bf M}_{{\rm el}} +{\bf M}_{{\rm mag}}\right)\,,$$ where $$\begin{aligned} {\cal M}_Q & = &2 \,{}^{(-)}\langle {\bf q}; \Psi_{III}| {\cal H}_Q |\Psi_{IV} \rangle\,, \label{Mq}\\ {\bf M}_{\rm el} & = &2 {}^{(-)}\langle {\bf q}; \Psi_{III}| {\bf H}_{{\rm el}} |\Psi_{IV} \rangle \,, \label{Mel} \\ {\bf M}_{\rm mag} & = &2 {}^{(-)}\langle {\bf q}; \Psi_{III}| {\bf H}_{{\rm mag}}|\Psi_{IV} \rangle \,.\label{Magn} \end{aligned}$$ The Hamiltonians ${\cal H}_Q$, ${\bf H}_{{\rm el}}$, and ${\bf H}_{{\rm mag}}$ are given by $$\begin{aligned} \label{h1} {\cal H}_Q & = & F_{\rm ch}^p(1+q_\mu^2/8M^2) \, \sum_{j=1}^4 e^{-i {\bf Q} \cdot {\bf x}_j}\,\lambda_j \,,\\ \label{hel} {\bf H}_{{\rm el}} & = & (F_{\rm ch}^p/2M) \, \sum_{j=1}^4 ({\bf p}_j e^{-i {\bf Q} \cdot {\bf x}_j}+e^{-i {\bf Q} \cdot {\bf x}_j} {\bf p}_j) \,\lambda_j\,, \\ \label{hmag} {\bf H}_{\rm mag} & = & (i/2M)F_{\rm mag}^p \sum_{j=1}^4 \, e^{-i {\bf Q} \cdot {\bf x}_j} \mbox{\boldmath $\sigma$}_j\times{\bf Q} \lambda_j\,, \label{jel} \end{aligned}$$ Here the superscript $p$ refers to the proton and $\lambda_j=(1+\tau_z^j)/2$ is the isospin operator for nucleon $j$ while $F_{\rm ch}^p$ and $F_{\rm mag}^p$ are the charge and magnetic form factors of the proton defined by $$\begin{aligned} F_{\rm ch}^p & = & F_{1p} + (q_\mu^2/4M^2)\kappa_p F_{2p} \\ F_{\rm mag}^p & = & F_{1p} + \kappa_p F_{2p}\,. \end{aligned}$$ The analytical fit to the proton form factors $F_{1p}$ and $F_{2p}$ given by Janssens [*et al.*]{} [@Janssens] is used in the calculations. Squaring the matrix element, summing and averaging over the electron spin, and inserting the resulting expression in Eq. 
(\[tcross\]), we obtain $$\begin{aligned} \frac{d^5\sigma}{dE_f\,d\Omega_p\,d\Omega_e} & = &\frac{\sigma_{\rm M}}{(\hbar c)^3 (2\pi)^3}~ \frac{|{\bf p}_p|E_p}{1 - \displaystyle \frac{E_p}{E_{^3 H}} \frac{{\bf p}_p \cdot {\bf p}_{^3 H}}{|{\bf p}_p|^2}} \nonumber \\ \left\{|{\cal M}_Q|^2 \right. &-& \frac{1}{2} \sec^2 \frac{\theta}{2} ({\cal M}_Q^* {\bf J}+{\bf J}^*{\cal M}_Q) \cdot(\hat{k}_i + \hat{k}_f) \nonumber \\ &+& \frac{1}{2} \sec^2 \frac{\theta}{2} ({\bf J}\cdot \hat{k}_i {\bf J}^* \cdot \hat{k}_f + {\bf J}\cdot \hat{k}_f {\bf J}^* \cdot \hat{k}_i) \nonumber \\ &+& \left. |{\bf J}|^2 \tan^2 \frac{\theta}{2}\right\}\,, \label{xsec} \end{aligned}$$ where ${\bf J}={\bf M}_{{\rm el}} +{\bf M}_{{\rm mag}} $. The determination of the coincidence cross section is thus reduced to the determination of the nuclear matrix elements ${\cal M}_Q$ and ${\bf J}$. In this work we use the dominant electric operators (\[h1\]) and (\[hel\]). In PWIA the nuclear matrix elements (\[Mq\]) and (\[Mel\]) read $${\cal B}_Q({\bf q}) = 2 \, \langle {\bf q}|\langle \Psi_{III}| \sum_{j=1}^4 \, \exp(-i {\bf Q} \cdot{\bf x}_j)\,\lambda_j |\Psi_{IV}\rangle \label{bornq}$$ and $$\begin{aligned} {\bf B}_{{\rm el}} ({\bf q}) &=& 2 \, \langle{\bf q}|\langle \Psi_{III}| \sum_{j=1}^4 \,\left( {\bf p}_j \exp(-i {\bf Q}\cdot{\bf x}_j) \right. \nonumber\\ & + & \left. \exp(-i {\bf Q}\cdot{\bf x}_j) {\bf p}_j\right) \,\lambda_j |\Psi_{IV}\rangle \ , \label{bornj} \end{aligned}$$ where ${\bf q} = ({\bf p}_1 + {\bf p}_2 + {\bf p}_3 - 3 {\bf p}_4)/4$. The operators appearing in Eqs. (\[bornq\]) and (\[bornj\]) are the same as those of Eqs. (\[h1\]) and (\[hel\]), except that the nucleonic form factors $F_{\rm ch}^p$ and $F_{\rm mag}^p$ are not noted here. To proceed we express the operators ${\bf x}_j$ and ${\bf p}_j$ in Jacobi coordinates and neglect, as in the photodisintegration case [@ell1; @ell2], those acting within $|\Psi_{III}\rangle$. 
The remaining terms, containing ${\bf q}$ and its canonically conjugate counterpart, are treated without further approximation. A straightforward calculation then reduces (\[bornq\]) to $$\begin{aligned} {\cal B}^{\prime}_Q({\bf q})& = & 2 \,\langle {\bf q} + \frac{1}{4} {\bf Q}|\langle \Psi_{III}| (\lambda_1 + \lambda_2 + \lambda_3) |\Psi_{IV}\rangle \nonumber \\ & + & 2 \, \langle {\bf q} -\frac{3}{4} {\bf Q}|\langle \Psi_{III}| \lambda_4 |\Psi_{IV}\rangle , \label{bornprexq} \end{aligned}$$ whereas (\[bornj\]) is replaced by $$\begin{aligned} {\bf B}^{\prime}_{{\rm el}} ({\bf q})& = & \frac{4}{3} \, {\bf q} \, \langle {\bf q}+\frac{1}{4} {\bf Q}|\langle \Psi_{III}| (\lambda_1 + \lambda_2 + \lambda_3) |\Psi_{IV}\rangle \nonumber \\ & - & 4 \, {\bf q} \, \langle {\bf q}-\frac{3}{4} {\bf Q}|\langle \Psi_{III}| \lambda_4 |\Psi_{IV}\rangle + \, {\bf Q} \, {\cal B}^{\prime}_Q({\bf q}). \label{bornprexj} \end{aligned}$$ The construction of the above matrix elements requires the knowledge of the bound states $|\Psi_{III}\rangle$ and $|\Psi_{IV}\rangle$. For their calculation the exact three- and four-nucleon AGS integral equations are employed [@ags3; @ags4]. The latter consist of the coupled set of (before antisymmetrization) $18\times 18$ four-body AGS equations. They contain in their kernel all subsystem information via the two-body T-matrices and the three-body and (2+2)-body AGS transition operators. By this approach, the full coupling and the corresponding interference of the (2+2)- and (3+1)-channels in the four-body system are taken into account explicitly, and thus exactly and completely. In order to reduce the original three- and four-body relations to (one-dimensional) integral equations, the W-matrix method [@wm86] and the energy-dependent pole approximation (EDPA) [@edpe78] are used. In the purely nuclear case these approximations have led to very accurate results (see e.g. [@wmc1; @wmc2; @edpe]). 
Furthermore, they have been successfully used in calculations of the photodisintegration of $^3$H, $^3$He [@Sand98; @Schadow] and $^4$He [@ell1; @ell2; @Sand98]. The graphical representation of matrix elements like Eqs. (\[bornq\]) and (\[bornj\]), adapted to the four-body AGS formalism, can be found in [@boe80]. As in [@ell1; @ell2], the Malfliet-Tjon potential I and III [@mt6970] is chosen, as it is both sufficiently realistic and simple enough to be employed in four-nucleon calculations. This property is of particular importance in our calculations where the computation of the matrix elements, despite the approximations used, is still tedious and of considerable numerical complexity. The corresponding binding energies are 8.595 MeV for $^3$H and 30.1 MeV for $^4$He [@ell1]. The results obtained within the AGS formalism for the $^4$He(e,e$^\prime$p)$^3$H five-fold differential cross section as a function of the missing momentum ${\bf Q}$ are shown in Fig. 2. The kinematics and the experimental data are those of [@Leeuth; @Leeuwe98] for the $\omega=215$MeV case. For comparison we also included in the figure the PWIA results of [@Howell97], obtained for wave functions constructed via the integrodifferential equation approach (IDEA) of Ref. [@IDEA]. The PWIA results of Laget (see Refs. [@Leeuth; @Leeuwe98; @Laget94]) are also shown, being obtained for the Urbana potential and for wave functions constructed with the variational Monte Carlo (MC) method. The agreement of our AGS calculations with the experimental data, especially in the region where the PWIA results of the other two methods show their characteristic dip, is remarkable. This holds true also in comparison with the other results reported in [@Leeuwe98]. Fig. 3 shows the five-fold differential cross section for the [$^4$He(e,e$^\prime$p)$^3$H]{} reaction for the Saclay kinematics [@Goff]. For comparison the Laget results [@Laget94] are also plotted. 
The agreement of our results with experiment is again remarkable. The overall small discrepancies may be reduced by using a better NN force, further improvements in the PWIA matrix elements, and inclusion of the FSI in a rigorous way. A particular advantage of the AGS-type approach in this respect lies in the fact that the incorporation of the FSI is exactly the same for the four-nucleon scattering, the photodisintegration of $^4$He, and the electrodisintegration of $^4$He (see e.g. [@Sand98] and Refs. therein). Most important: in this approach the underlying integral equations explicitly incorporate the (2+2)-channels, not fully included in other approaches, and their coupling to the (3+1)-channels. That underlying interference of competing channels is the most relevant feature of four-body theory as compared to three-body theory. In other words, the complexity of four-body rearrangement processes is fully taken into account. In conclusion, our results show that already in PWIA quite a good description of the experimental data can be achieved. The main reason for this agreement appears to be the use of wave functions obtained from the AGS integral equations with their complete coupling scheme. Another reason is the way of calculating the nuclear matrix elements. Namely, those parts of the electromagnetic operators (\[h1\]) and (\[hel\]), which act between the relative motion of the two outgoing nuclear fragments, are taken into account exactly. The sensitivity to the input NN-potential and to the above-mentioned 2+2 rearrangement terms are under investigation. Financial support from the University of South Africa and the Foundation for Research Development of South Africa is appreciated. J. F. J. van den Brand [*et al.*]{}, Phys. Rev. Lett. [**60**]{}, 2006 (1988). J. F. J. van den Brand [*et al.*]{}, Phys. Rev. Lett. [**66**]{}, 409 (1991); Nucl. Phys. [**A 534**]{}, 637 (1991). J. M. Le Goff [*et al.*]{}, Phys. Rev. C [**50**]{}, 2278 (1994). J. J. 
van Leeuwe, Ph.D. thesis, University of Utrecht, ISBN 90-393-1204-4, 1996 and Refs. therein. J. J. van Leeuwe [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 2543 (1998). R. Schiavilla, Phys. Rev. Lett. [**65**]{}, 835 (1990). J. M. Laget, Nucl. Phys. [**A579**]{}, 333 (1994). L. L. Howell, Ph.D. thesis, University of South Africa, 1997, (unpublished); M. Braun, L. L. Howell, S. A. Sofianos, and W. Sandhas, Phys. Rev. C 59, 2396 (1999). W. Sandhas, W. Schadow, G. Ellerkmann, L. L. Howell, and S. A. Sofianos, Nucl. Phys. [**A631**]{}, 210c (1998). S. I. Nagorny, Yu. A. Kasatkin, E. V. Inopin, and I. K. Kirichenko, Sov. J. Nucl. Phys. [**49**]{}, 465 (1989). A. A. Zayatz, V. A. Zolenko, Yu. A. Kasatkin, and S. I. Nagorny, Sov. J. Nucl. Phys. [**55**]{}, 178 (1992). E. O. Alt, P. Grassberger, and W. Sandhas, Nucl. Phys. [**B2**]{}, 167 (1967) P. Grassberger and W. Sandhas, Nucl. Phys. [**B2**]{}, 181 (1967). G. Ellerkmann, W. Sandhas, S. A. Sofianos, and H. Fiedeldey, Phys. Rev. C [**53**]{}, 2638 (1996). G. Ellerkmann, Ph.D. thesis, University of Bonn, 1995 (unpublished). W. Böttger, A. Casel, and W. Sandhas, Phys. Lett. [**92B**]{}, 11 (1980). K. M. McVoy and L. Van Hove, Phys. Rev. [**125**]{}, 1034 (1962). D. R. Lehman, Phys. Rev. Lett. [**23**]{}, 1339 (1969); Phys. Rev. C [**3**]{}, 1827 (1971). C. R. Heimbach, D. R. Lehman, and J. S. O’Connell, Phys. Rev. C [**16**]{}, 2135 (1977). C. D. Epp and T. A. Griffy, Phys. Rev. C [**1**]{}, 1633 (1970). T. Janssens [*et al.*]{}, Phys. Rev. [**142**]{}, 922 (1966). E. A. Bartnik, H. Haberzettl, and W. Sandhas, Phys. Rev. C [**34**]{}, 1520 (1986). S. A. Sofianos, N. J. McGurk, and H. Fiedeldey, Nucl. Phys. [**A318**]{}, 295 (1978). E. A. Bartnik, H. Haberzettl, Th. Januschke, U. Kerwath, and W. Sandhas, Phys. Rev. C [**36**]{}, 1678 (1987). T. N. Frank, H. Haberzettl, Th. Januschke, U. Kerwath, and W. Sandhas, Phys. Rev. C [**38**]{}, 1112 (1988). S. A. Sofianos, H. Fiedeldey, H. Haberzettl, and W. Sandhas, Phys. Rev. 
C [**26**]{}, 228 (1982). W. Schadow and W. Sandhas, Phys. Rev. C [**55**]{}, 1074 (1997); Nucl. Phys. [**A631**]{}, 588c (1998). R. A. Malfliet and J. A. Tjon, Nucl. Phys. [**A127**]{}, 161 (1969); Ann. Phys. (N.Y.) [**61**]{}, 425 (1970). M. Fabre de la Ripelle, H. Fiedeldey, S. A. Sofianos, Phys. Rev. C [**38**]{}, 449 (1988).
--- abstract: 'In this note, we discuss a closed-form necessary and sufficient condition for any two-qubit state to show hidden nonlocality w.r.t the Bell-CHSH inequality. This is then used to numerically compute the relative volume of states showing hidden Bell-CHSH non-locality among all two-qubit states with one-sided reduction maximally mixed.' author: - Rajarshi Pal - Sibasish Ghosh bibliography: - 'qip.bib' title: 'A closed-form necessary and sufficient condition for any two-qubit state to show hidden nonlocality w.r.t the Bell-CHSH inequality' --- Introduction: ============= Nonlocality, other than being one of the most characteristic features of quantum mechanics, has also been established as a resource for quantum information processing ([@nonlocality-review]). Particularly, in recent years [*device independent quantum information processing*]{} has emerged, where quantum nonlocality is considered to be the main resource as opposed to entanglement [@nonlocality-review]. The characterization and quantification of quantum non-locality are thus of prime importance from an information-theoretic point of view. Nonlocality of certain quantum states can be [*revealed*]{} by post-selection through local filters before performing a standard Bell test. This phenomenon (called ‘hidden nonlocality’) has received widespread attention ([@nonlocality-review], [@BHQ13]) in the study of quantum non-locality and its interrelation with entanglement ever since the first examples of it were produced in [@POP95], [@Gis96]. However, in spite of the progress made so far, it is not known, for any Bell inequality, what the necessary and sufficient conditions (in closed form) are for a quantum state to show hidden non-locality. In this work we fill this gap by providing a closed-form necessary and sufficient condition for any two-qubit state to show hidden nonlocality with a single copy w.r.t the Bell-CHSH inequality. Our main result is given by Theorem 1 in the next section. 
Hidden Bell-CHSH nonlocality ============================ [**Defn.**]{} Consider a local filtering transformation taking any two-qubit state $\rho$ to another two-qubit state $$\rho'=\frac{(A \otimes B)\rho (A^{\dagger} \otimes B^{\dagger})}{Tr(A^{\dagger}A \otimes B^{\dagger}B\rho) } \label{local-filtering-eq}$$ Then, $\rho$ is said to show hidden non-locality w.r.t the Bell-CHSH inequality iff $\rho'$ violates the Bell-CHSH inequality [@CHSH] for at least one choice of $A$, $B$. [**Theorem 1.**]{} Let $R$ be the real $4 \times 4$ matrix with $R_{ij}=Tr(\rho \sigma_i \otimes \sigma_j), i,j=0,1,2,3$ (where $\sigma_0=I_2$). Further let $C_{\rho}=MRMR^T$ with $M=\mbox{diag}(1,-1,-1,-1)$. Let $\lambda_i(C_{\rho})$, $i=0,1,2,3$, denote the eigenvalues of $C_{\rho}$ in descending order for an arbitrary two-qubit state $\rho$. Then, $\rho$ shows hidden nonlocality w.r.t the Bell-CHSH inequality iff $$\lambda_1(C_{\rho}) + \lambda_2(C_{\rho}) > \lambda_0(C_{\rho}). \label{hid-nlc-cond}$$ The maximum Bell violation, obtained from the optimal filtered (or quasi-distilled) Bell-diagonal state, is $2\sqrt{\frac{(\lambda_1(C_{\rho}) + \lambda_2(C_{\rho}))}{\lambda_0(C_{\rho})}}$. [*Proof:*]{} Under a local filtering transformation taking any two-qubit state $\rho$ to an unnormalized state $$\rho'=(A \otimes B)\rho (A^{\dagger} \otimes B^{\dagger}), \label{local-filtering-eq0}$$ the real $4 \times 4$ matrix $R$ transforms as [@Ver01] $$\label{snlb-1} R'\equiv Tr(\rho'\sigma_i \otimes \sigma_j)=L_A R L_B^T |det(A)||det(B)|$$ with the Lorentz transformations $L_A$ and $L_B$ given by $$\begin{aligned} L_A= \frac{T(A \otimes A^*)T^{\dagger}}{|det(A)|}, \nonumber \\ L_B= \frac{T(B \otimes B^*)T^{\dagger}}{|det(B)|},\end{aligned}$$ and $ T= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & i & -i & 0 \\ 1 & 0 & 0 & -1 \end{bmatrix}$, with the normalisation factor $R'_{00}=Tr(\rho')= Tr(A^{\dagger}A \otimes B^{\dagger}B\rho)$. 
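The Lorentz property $L_A^T M L_A = M$ of these filter-induced transformations, and the resulting behaviour of $C_\rho = MRMR^T$ under the transformation law (\[snlb-1\]) (its eigenvalues only get rescaled by $|det(A)|^2|det(B)|^2$), can be verified numerically. A sketch assuming numpy; the Bell-diagonal test state and the random filters are illustrative choices, not taken from the paper:

```python
import numpy as np

# Pauli matrices (sigma_0 = identity) and the Minkowski signature matrix M.
sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
M = np.diag([1.0, -1.0, -1.0, -1.0])
Tm = np.array([[1, 0, 0, 1],
               [0, 1, 1, 0],
               [0, 1j, -1j, 0],
               [1, 0, 0, -1]], dtype=complex) / np.sqrt(2)

def R_matrix(rho):
    """R_ij = Tr(rho sigma_i (x) sigma_j)."""
    return np.array([[np.trace(rho @ np.kron(sig[i], sig[j])).real
                      for j in range(4)] for i in range(4)])

def lorentz(A):
    """L_A = T (A (x) A*) T^dagger / |det(A)|; real for invertible A."""
    return (Tm @ np.kron(A, A.conj()) @ Tm.conj().T
            / abs(np.linalg.det(A))).real

def spectrum_C(R):
    """Eigenvalues of C = M R M R^T in descending order."""
    return np.sort(np.linalg.eigvals(M @ R @ M @ R.T).real)[::-1]

# Bell-diagonal test state rho = (I + sum_i t_i sigma_i (x) sigma_i)/4.
t = np.array([0.5, -0.3, 0.2])
rho = (np.eye(4) + sum(t[i] * np.kron(sig[i + 1], sig[i + 1])
                       for i in range(3))) / 4
R = R_matrix(rho)

# Random full-rank local filters A, B.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
LA, LB = lorentz(A), lorentz(B)
dets = abs(np.linalg.det(A)) * abs(np.linalg.det(B))
Rp = dets * (LA @ R @ LB.T)

lam, lamp = spectrum_C(R), spectrum_C(Rp)
```

Here the eigenvalue ratios $\lambda_i(C_{\rho'})/\lambda_0(C_{\rho'})$ come out independent of the chosen filters, which is what the criterion of Theorem 1 relies on.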
[**Remark:**]{} Note that the filters $A$, $B$ must be of full rank for the Lorentz transformations to be finite. It was shown in [@Ver01] and [@Ver-lor] that, by suitably choosing $A$ and $B$ and hence proper orthochronous Lorentz transformations $L_A$, $L_B$, for [*any*]{} $\rho$ we can have $R'$ to be either diagonal, corresponding to a Bell-diagonal state $\rho'$, or of the form $$R'=R_{\rho'} = \begin{bmatrix} a & 0 & 0 & b \\ 0 & d & 0 & 0 \\ 0 & 0 & d & 0 \\ c & 0 & 0 & (b+c-a) \end{bmatrix}$$ with the corresponding $\rho'$ (unnormalized) being $$\label{rho-after-local-filtering} \rho'= \frac{1}{2 }\begin{bmatrix} b+c & 0 & 0 & 0 \\ 0 & a-b & d & 0 \\ 0 & d & (a-c) & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} .$$ The possible sets of real values of $b$, $c$ and $d$ are given by $$\begin{aligned} &({\rm i})& b=c=\frac{a}{2}, \nonumber \\ &({\rm ii})& (d=0=c) \mbox{ and } (b=a), \nonumber \\ &({\rm iii})& (d=0=b) \mbox{ and } (c=a), \nonumber \\ &({\rm iv})& (d=0) \mbox{ and } (a=b=c) . \label{value-cases}\end{aligned}$$ Case (i) corresponds to rank-three or rank-two states, while the other cases correspond to either the product state $|00\rangle \langle 00|$ or the state $|0\rangle \langle 0| \otimes \frac{I}{2}$. From eqn. (\[snlb-1\]) it follows that the spectrum of $MR'MR'^T$ is given by $$\label{c-matrix-transformation} \lambda(MR'MR'^T) = |det(A)|^2 |det(B)|^2 \lambda(ML_ARL_B^TML_BR^TL_A^T) =|det(A)|^2 |det(B)|^2 \lambda(MRMR^T),$$ where we have used $L_A^TML_A=M=L_B^TML_B$. Now, as the filters $A$ and $B$ are of full rank, i.e., $det(A),det(B) \ne 0$, we have for each $i \in \{0,1,2,3\}$ $$\frac{\lambda_i({C_{\rho'}})}{\lambda_0(C_{\rho'})} = \frac{\lambda_i({C_{\rho}})}{\lambda_0(C_{\rho})}. \label{eigfrac}$$ Let us consider the following cases now. \(a) $R'=\mbox{diag}(s_0,s_1,s_2,s_3)$. 
$\rho'$ corresponds to a Bell-diagonal state which in turn violates the Bell-CHSH inequality ([@Hor95]) after normalization provided $$1< \frac{s_1^2}{s_0^2} + \frac{s_2^2}{s_0^2}= \frac{\lambda_1(C_{\rho'}) + \lambda_2(C_{\rho'})}{\lambda_0(C_{\rho'})}=\frac{\lambda_1(C_{\rho}) + \lambda_2(C_{\rho})}{\lambda_0(C_{\rho})} \nonumber$$ (by eqn. (\[eigfrac\])). This proves Theorem 1 for this case. \(b) $\rho'$ is of the non-Bell-diagonal form with $d \ne 0$ in eqn. (\[rho-after-local-filtering\]) (case (i) of eqn. (\[value-cases\])). It is easy to see by partial transposition that $\rho'$ must be entangled. Further, by using filters of the form $A=\mbox{diag}(\sqrt{\frac{(a-c)}{(a-b)}}\frac{1}{n},1)$ and $B=\mbox{diag}(\frac{1}{n},1)$ we have $$\begin{aligned} \rho_1 &=& (A \otimes B) \rho (A^{\dagger} \otimes B^{\dagger}) \nonumber \\ &=& \frac{1}{2} \left(\frac{(b+c)(a-c)}{(a-b)n^4}| 00 \rangle \langle 00| + \frac{(a-c)}{n^2} (|01 \rangle \langle 01| + |10 \rangle \langle 10|) + \frac{d\sqrt{(a-c)}}{n^2\sqrt{(a-b)}} (|01 \rangle \langle 10| + |10 \rangle \langle 01|) \right) . \end{aligned}$$ By taking a very large positive number $n$, $\rho_2 = \frac{\rho_1}{Tr(\rho_1)}$ can be made to approach arbitrarily close to the Bell-diagonal state $$\begin{aligned} \rho_3 &=& \frac{1}{2} ((|01 \rangle \langle 01| + |10 \rangle \langle 10|) + \frac{d}{\sqrt{(a-b)(a-c)}}(|01 \rangle \langle 10| + |10 \rangle \langle 01|)) \nonumber \\ &=& \frac{1}{4} ( I \otimes I + \frac{d}{\sqrt{(a-c)(a-b)}} \sigma_1 \otimes \sigma_1 + \frac{d}{\sqrt{(a-c)(a-b)}} \sigma_2 \otimes \sigma_2 - \sigma_3 \otimes \sigma_3 ).\end{aligned}$$ Now, from eqn. (\[rho-after-local-filtering\]) we have $\lambda(C_{\rho'})= [(a-b)(a-c), (a-b)(a-c), d^2, d^2 ]$. From theorem 3 of ref. [@VW02] we also know that the optimal Bell-violation among the states connected to $\rho$ by local filtering transformations is obtained from the ‘quasi-distilled’ state $\rho_3$. Hence by using eqn. 
(\[eigfrac\]) we obtain an optimal Bell violation of amount $$1 + \frac{d^2}{(a-b)(a-c)} = \frac{\lambda_1(C_{\rho'}) + \lambda_2(C_{\rho'})}{\lambda_0(C_{\rho'})} = \frac{\lambda_1(C_{\rho}) + \lambda_2(C_{\rho})}{\lambda_0(C_{\rho})} > 1$$ (note that $(a-b)(a-c) \geq d^2$ by virtue of positivity of $\rho'$). Thus states for which $\rho'$ is not Bell-diagonal (the $d \ne 0$ case) will [*always*]{} violate the Bell-CHSH inequality after a suitable local filtering transformation. \(c) $\rho'$ is of the non-Bell-diagonal form with $d = 0$ in eqn. (\[rho-after-local-filtering\]) (cases (ii), (iii) and (iv) of eqn. (\[value-cases\])). These states, being of product form, must come from a separable $\rho$ (local filtering with full-rank filters is invertible), and from eqns. (\[rho-after-local-filtering\]) and (\[c-matrix-transformation\]) we have $\lambda_i(C_{\rho})=\lambda_i(C_{\rho'})=0$ for all $i$. Thus Theorem 1 holds. Conversely, when eqn. (\[hid-nlc-cond\]) is satisfied we can either filter or quasi-distill $\rho$ to a Bell-diagonal state with optimal Bell-violation $2\sqrt{\frac{(\lambda_1(C_{\rho}) + \lambda_2(C_{\rho}))}{\lambda_0(C_{\rho})}}$. $\square$ Applications ============ Using Theorem 1, we have numerically computed the relative volume of states showing hidden Bell-CHSH non-locality among all two-qubit states with one-sided reduction maximally mixed. The latter form a six-parameter family isomorphic to the set of all qubit channels. The relative volumes of states which do [*not*]{} show hidden Bell-CHSH non-locality and of separable states turn out to be about $0.39$ and $0.24$, respectively, while that of states which satisfy the Bell-CHSH inequality [*without*]{} post-selection through local filters is about $0.81$. 
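The numerical test behind such volume estimates is a direct implementation of Theorem 1. A minimal numpy sketch (the Werner family below is only an illustration, not the maximally-mixed-marginal family discussed above):

```python
import numpy as np

sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
M = np.diag([1.0, -1.0, -1.0, -1.0])

def shows_hidden_chsh(rho):
    """Theorem 1: rho shows hidden Bell-CHSH nonlocality iff
    lambda_1 + lambda_2 > lambda_0 for C_rho = M R M R^T."""
    R = np.array([[np.trace(rho @ np.kron(sig[i], sig[j])).real
                   for j in range(4)] for i in range(4)])
    lam = np.sort(np.linalg.eigvals(M @ R @ M @ R.T).real)[::-1]
    return lam[1] + lam[2] > lam[0]

def werner(p):
    """Werner state p |psi-><psi-| + (1 - p) I/4."""
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
    return p * np.outer(psi, psi.conj()) + (1 - p) * np.eye(4) / 4
```

For Werner states the criterion reduces to $2p^2 > 1$, i.e. the familiar $p > 1/\sqrt{2}$ CHSH threshold.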
Thus allowing post-selection through local filters considerably reduces the difference between entangled and non-local states, and it will be interesting to see how much further it is reduced as one considers additional inequalities such as $I_{3322}$ [@Collins-Gisin-2003]. Conclusion ========== In this note we have described a closed-form necessary and sufficient condition for any two-qubit state to show hidden nonlocality w.r.t the Bell-CHSH inequality. We believe this is a useful step in the quantification of nonlocality and will aid in further studies of quantum non-locality as a resource and in its comparison with entanglement.
--- abstract: | This article studies the volume of compact quotients of reductive homogeneous spaces. Let $G/H$ be a reductive homogeneous space and $\Gamma$ a discrete subgroup of $G$ acting properly discontinuously and cocompactly on $G/H$. We prove that the volume of $\Gamma \backslash G/H$ is the integral, over a certain homology class of $\Gamma$, of a $G$-invariant form on $G/K$ (where $K$ is a maximal compact subgroup of $G$). As a corollary, we obtain a large class of homogeneous spaces the compact quotients of which have rational volume. For instance, compact quotients of pseudo-Riemannian spaces of constant curvature $-1$ and odd dimension have rational volume. This contrasts with the Riemannian case. We also derive a new obstruction to the existence of compact Clifford–Klein forms for certain homogeneous spaces. In particular, we obtain that $\SO(p,q+1)/\SO(p,q)$ does not admit compact quotients when $p$ is odd, and that $\SL(n,\R)/\SL(m,\R)$ does not admit compact quotients when $m$ is even. address: | University of Luxembourg, Campus Kirchberg\ 6, rue Richard Coudenhove-Kalergi\ L-1359 Luxembourg author: - Nicolas Tholozan bibliography: - 'biblio.bib' title: 'Volume and non-existence of compact Clifford–Klein forms' --- Introduction {#introduction .unnumbered} ============ The problem of understanding compact quotients of homogeneous spaces has a long history that can be traced back to the “Erlangen program” of Felix Klein [@Klein1872]. In the second half of the last century, the breakthroughs of Borel [@Borel63], Mostow [@Mostow68], Margulis [@Margulis91] and many others led to a rather good understanding of quotients of *Riemannian* homogeneous spaces. Comparatively, little is known about the non-Riemannian case, and in particular about quotients of pseudo-Riemannian homogeneous spaces. In this paper we will mainly focus on reductive homogeneous spaces, i.e. quotients of a semi-simple Lie group $G$ by a closed reductive subgroup $H$.
The $G$-homogeneous space $X=G/H$ carries a natural $G$-invariant pseudo-Riemannian metric (induced by the Killing metric of $G$) and therefore (up to taking a covering of degree $2$) a $G$-invariant volume form $\vol_X$. A quotient of $X$ by a discrete subgroup $\Gamma$ of $G$ acting properly discontinuously and cocompactly is called a *compact Clifford–Klein form* of $X$, or (when it does not lead to any confusion) a *compact quotient* of $X$. The study of compact reductive Clifford–Klein forms was initiated in the 1980s by Kulkarni [@Kulkarni81] and Kobayashi [@Kobayashi89]. A lot of things remain to be understood, despite the significant works of Benoist [@Benoist96], Kobayashi [@Kobayashi89; @Kobayashi92; @Kobayashi93; @Kobayashi96; @Kobayashi98], Labourie [@BenoistLabourie92], Mozes and Zimmer [@LMZ95], Margulis [@Margulis97], and more recently the works of Kassel [@Kassel08; @Kassel10], Guéritaud [@GueritaudKassel], Guichard and Wienhard [@GGKW].\ In this paper we will address the following two questions, to which no general answer is known: Which reductive homogeneous spaces admit compact Clifford–Klein forms? Let $G/H$ be a reductive homogeneous space and $\Gamma$ a discrete subgroup of $G$ acting properly discontinuously and cocompactly on $G/H$. Is the volume of $\Gamma \backslash G/H$ rational (up to a scaling constant independent of $\Gamma$)? A particularly interesting family of homogeneous spaces is that of the *pseudo-Riemannian homogeneous spaces of constant curvature*, a unified definition of which was given by Wolf in [@Wolf62]. Recall that the pseudo-Riemannian homogeneous space of signature $(p,q)$ and constant negative curvature is the space $$\H^{p,q} = \SO_0(p,q+1)/\SO_0(p,q)~.$$ In this setting our results are summarized in the following: Let $p$ and $q$ be positive integers. Then: - If $p$ is odd, then $\H^{p,q}$ does not admit any compact Clifford–Klein form.
- If $p$ is even, then the volume of any compact Clifford–Klein form of $\H^{p,q}$ is a rational multiple of the volume of the sphere of dimension $p+q$. Prior to this work, the first point was only known when both $p$ and $q$ are odd [@Kulkarni81], as well as when $p \leq q$ [@Wolf62]. The second point follows from the Chern–Gauss–Bonnet formula when $p+q$ is even but is new when $p$ is even and $q$ is odd.\ Let us now give a more detailed overview of the results contained in this paper. Volume of Compact Clifford–Klein forms {#volume-of-compact-cliffordklein-forms .unnumbered} -------------------------------------- It is well-known that the volume of a closed hyperbolic manifold of dimension $2n$ is essentially an integer, due to the Chern–Gauss–Bonnet formula. This argument generalizes to compact quotients of a reductive homogeneous space $G/H$ whenever one can show that the volume is a *Chern–Weil class* associated to the canonical principal $H$-bundle over $G/H$ (see Section \[s:Rigidity\]). If $G/H$ is a symmetric space, this is known to happen if and only if $G$ and $H$ have the same complex rank. This argument has no chance to work for homogeneous spaces of odd dimension (because Chern–Weil classes have even degree), nor for homogeneous spaces of the form $H\! \times \! H/\Delta(H)$ (where $\Delta(H)$ denotes the diagonal embedding of $H$), for which all the Chern–Weil invariants are trivial. It is known for instance that the volume of a closed hyperbolic $3$-manifold is usually not rational. In contrast, we proved in a recent paper (see [@Tholozan5]) that the volume of a closed *anti-de Sitter* $3$-manifold (i.e. a compact quotient of $\H^{2,1}$) is a rational multiple of $\frac{\pi^2}{2}$, answering a question that was raised in [@QuestionsAdS]. The anti-de Sitter space $\H^{2,1}$ can be seen as the *group space* $\SO_0(2,1)$ (i.e. 
the Lie group $\SO_0(2,1)$ with the action of $\SO_0(2,1) \times \SO_0(2,1)$ by left and right multiplication, see Definition \[d:GroupSpace\]). Its compact Clifford–Klein forms are known to exist and to have a rich deformation space (see [@Salein00], [@KasselThese] or [@Tholozan3]). Kulkarni and Raymond proved in [@KulkarniRaymond85] that these compact Clifford–Klein forms have the form $$j\!\times\!\rho(\Gamma) \backslash \SO_0(2,1)~,$$ where $\Gamma$ is a cocompact lattice in $\SO_0(2,1)$, $j$ the inclusion and $\rho$ another representation of $\Gamma$ into $\SO_0(2,1)$. Moreover, Guéritaud and Kassel proved in [@GueritaudKassel] that these quotients have the structure of a $\SO(2)$-bundle over $\Gamma \backslash \H^2$ (see Theorem \[t:FibrationGK\]). In [@Tholozan5], we proved the following formula: $$\label{eq:VolSO(n,1)} \Vol \left( j\!\times\!\rho(\Gamma) \backslash \SO_0(2,1)\right) = \frac{\pi^2}{2}\left(\euler(j) + \euler(\rho)\right)~,$$ where $\euler$ denotes the Euler class. This formula was later recovered by Alessandrini–Li [@AlessandriniLi15] and Labourie [@LabouriePrivate] using different methods. It may seem surprising that a “Chern–Weil-like” invariant such as the Euler class appears when computing the volume of a $3$-manifold. The first aim of this paper is to explain this phenomenon better and generalize it to a much broader setting. The main issue is that we don’t have a structure theorem similar to the one of Guéritaud–Kassel in general (see Theorem \[t:FibrationGK\] and the conjecture that follows). We will overcome this problem with the following argument: denoting by $L$ a maximal compact subgroup of $H$ and $K$ a maximal compact subgroup of $G$ containing $L$, we see that $\Gamma\backslash G/H$ is homotopy equivalent to $\Gamma \backslash G/L$, which is a $K/L$-bundle over $\Gamma \backslash G/K$. Let $q$ be the dimension of $K/L$ and $p+q$ the dimension of $G/H$.
A classical use of spectral sequences shows that $\Gamma$ has homological dimension $p$ and that $\HH_p(\Gamma, \Z)$ is generated by an element $[\Gamma]$ (Proposition \[p:HomDimGamma\]). Since $G/K$ is contractible, $\HH_p(\Gamma, \Z)$ is naturally isomorphic to $\HH_p(\Gamma\backslash G/K, \Z)$ and $[\Gamma]$ can thus be realized as a singular $p$-cycle in $\Gamma\backslash G/K$. We will prove the following: \[t:VolumeCliffordKlein\] Let $G/H$ be a reductive homogeneous space, with $G$ and $H$ connected and of finite center. Let $L$ be a maximal compact subgroup of $H$ and $K$ a maximal compact subgroup of $G$ containing $L$. Set $p = \dim G/H - \dim K/L$. Then there exists a $G$-invariant $p$-form $\omega_{G,H}$ on $G/K$ such that, for any torsion-free discrete subgroup $\Gamma \subset G$ acting properly discontinuously and cocompactly on $G/H$, we have $$\Vol\left( \Gamma \backslash G/H\right) = \left| \int_{[\Gamma]} \omega_{G,H} \right|~.$$ It turns out that, in many cases, the form $\omega_{G,H}$ is a “Chern–Weil form”, though the volume form of $G/H$ is not (see Section \[s:InclusionSymSpaces\]). This implies that the volume of any compact quotient of $G/H$ is a rational multiple of the volume of $G_U/H_U$, where $G_U$ and $H_U$ respectively denote the compact Lie groups dual to $G$ and $H$ (see Section \[s:Rigidity\]). In particular, we will obtain the following: \[t:RationalityVolume\] For the following pairs $(G,H)$, the volume of compact quotients of $G/H$ is a rational multiple of the volume of $G_U/H_U$: - $G = \SO(p,q+1)$, $H = \SO(p,q)$, $p$ even, $q>0$. - $G = \SL(2n,\R)$, $H = \SL(2n-1,\R)$, $n > 0$. - $G$ a Hermitian Lie group, $H$ any semi-simple subgroup. Cases $(1)$ and $(2)$ concern families of symmetric spaces that have attracted a lot of interest. However, they potentially carry no information. 
Indeed, the symmetric space $\SL(2n,\R)/\SL(2n-1,\R)$ is conjectured not to admit any compact quotient (see next subsection), and the only known compact quotients of $\H^{p,q} = \SO(p,q+1)/\SO(p,q)$ for $p\geq 3$ are the so-called *standard* quotients constructed by Kulkarni in [@Kulkarni81], for which the theorem reduces to a classical statement about volumes of quotients of Riemannian symmetric spaces. Non-standard quotients are only known in the case of $\H^{2,1}$, which was treated in [@Tholozan5] (see Equation \[eq:VolSO(n,1)\]) and [@AlessandriniLi15]. Case $(3)$, on the other hand, shows in particular that the volume of a compact quotient of the group space $\SU(d,1)$ is a rational multiple of the volume of $\SU(d+1)$. These compact quotients are known to exist and some of them have rich deformation spaces, as was proven by Kobayashi \cite{}, Kassel [@Kassel10], and Guéritaud–Guichard–Kassel–Wienhard [@GGKW]. Like quotients of $\SO_0(2,1)$, they are known to have (up to a finite cover) the form $$j\!\times\!\rho(\Gamma) \backslash \SU(d,1)~,$$ where $\Gamma$ is a uniform lattice in $\SU(d,1)$, $j: \Gamma \to \SU(d,1)$ is the inclusion and $\rho:\Gamma \to \SU(d,1)$ is another representation (see Theorem \[t:QuotientsSU(d,1)\] for a more precise statement). For such Clifford–Klein forms, we will actually give a more precise formula. Recall that $\SU(d,1)$ acts transitively on the complex hyperbolic space $\H^d_\C$ and preserves a Kähler form $\omega$. If $\Gamma$ is a uniform lattice in $\SU(d,1)$ and $\rho:\Gamma \to \SU(d,1)$ a representation, we define $$\tau_k(\rho) = \int_{\Gamma \backslash \H^d_\C} \omega^{d-k} \wedge f^*\omega^k~,$$ where $f: \H^d_\C \to \H^d_\C$ is any smooth $\rho$-equivariant map. \[t:VolumeQuotientsSU(d,1)Intro\] Let $\Gamma$ be a lattice in $\SU(d,1)$, $j:\Gamma \to \SU(d,1)$ the inclusion and $\rho: \Gamma \to \SU(d,1)$ another representation such that $j\! \times\! \rho(\Gamma)$ acts properly discontinuously and cocompactly on $\SU(d,1)$.
Then $$\Vol\left( j\!\times \! \rho(\Gamma) \backslash \SU(d,1)\right) = \Vol(\SU(d+1)) \sum_{k=0}^d \tau_k(\rho)~.$$ A new obstruction to the existence of compact quotients {#a-new-obstruction-to-the-existence-of-compact-quotients .unnumbered} ------------------------------------------------------- Contrary to the Riemannian setting, compact pseudo-Riemannian Clifford–Klein forms do not always exist, and it is a long-standing problem to characterize which reductive homogeneous spaces admit compact quotients. This question led to many important works of Kulkarni [@Kulkarni81], Kobayashi [@Kobayashi89; @Kobayashi92; @Kobayashi96], Benoist [@Benoist96], Labourie [@BenoistLabourie92], Mozes and Zimmer [@LMZ95; @LabourieZimmer95], Margulis [@Margulis97] and Shalom [@Shalom00]. We refer to [@KobayashiYoshino05] or [@Constantine12] for a more thorough survey. Let us recall here two famous conjectures that emerged from these works. The homogeneous space $\H^{p,q} = \SO_0(p,q+1)/\SO_0(p,q)$ ($p,q>0$) admits a compact Clifford–Klein form if and only if one of the following holds: - $p$ is even and $q=1$, - $p$ is a multiple of $4$ and $q=3$, - $p=8$ and $q=7$. The homogeneous space $\SL(n,\R)/\SL(m,\R)$ ($1<m<n$) never admits a compact Clifford–Klein form. In this paper, we obtain a powerful cohomological obstruction, allowing us to make significant advances toward these conjectures. In Section \[s:NonExistence\], we prove that in many cases the form $\omega_{G,H}$ of Theorem \[t:VolumeCliffordKlein\] vanishes, directly implying that the reductive homogeneous space $G/H$ does not admit a compact Clifford–Klein form. In particular, we obtain the following: \[t:AdvanceKobayashiConj\] For the following pairs $(G,H)$, the homogeneous space $G/H$ does not have any compact Clifford–Klein form.
- $G= \SO_0(p,q+r)$, $H= \SO_0(p,q)$, $p,q,r>0$, $p$ odd; - $G= \SL(n,\R)$, $H= \SL(m,\R)$, $1<m<n$, $m$ even; - $G = \SL(p+q,\C)$, $H = \SU(p,q)$, $p,q>0$; - $G = \Sp(2(p+q),\C)$, $H = \Sp(p,q)$; - $G = \SO(2n,\C)$, $H = \SO^*(2n)$; - $G = \SL(p+q,\R)$, $H = \SO_0(p,q)$, $p,q>1$; - $G = \SL(p+q, \mathcal{H})$, $H = \Sp(p,q)$, $p,q>1$. (Here $\mathcal{H}$ denotes the field of quaternions.) All of these cases are partly new. They were obtained independently by Morita in [@Morita16]. We give more details about how these results relate to earlier works in Section \[ss:EarlierResults\] and to Yosuke Morita’s work in Section \[ss:Morita\]. Finally, our obstruction will allow us to prove the following theorem, which was conjectured by Kobayashi (see [@Kobayashi96 Conjecture 4.15]): \[t:KobayashiRankConj\] Let $G$ be a connected semi-simple Lie group, $H$ a connected semi-simple subgroup of $G$, $L$ a maximal compact subgroup of $H$ and $K$ a maximal compact subgroup of $G$ containing $L$. If $$\rk(G) - \rk(K) < \rk(H) - \rk(L)~$$ (where $\rk$ denotes the complex rank), then $G/H$ does not have a compact Clifford–Klein form. Note that Morita [@MoritaPreprint] independently proved that this theorem is implied by a previous result of his [@Morita15]. Organization of the paper {#organization-of-the-paper .unnumbered} ------------------------- In Section \[s:HomologicalFibration\], we explain why compact reductive Clifford–Klein forms behave like fibrations over an Eilenberg–MacLane space “at the homology level”. In Section \[s:FiberwiseIntegration\] we construct the form $\omega_{G,H}$ as the contraction of a $(p+q)$-form on $G/L$ along the fibers $gK/L$ and we prove Theorem \[t:VolumeCliffordKlein\]. In Section \[s:CompactDual\], we study the form corresponding to $\omega_{G,H}$ on the compact dual symmetric space $G_U/K$ and show that this form is “Poincaré-dual” to the inclusion of $H_U/L$ in $G_U/K$. 
In Section \[s:InclusionSymSpaces\], we derive a condition under which the form $\omega_{G,H}$ vanishes and a condition under which it is a “Chern–Weil” class. In Section \[s:Rigidity\], we explain why, when $\omega_{G,H}$ is a Chern–Weil class, the volume of compact Clifford–Klein forms is rational, concluding the proof of Theorem \[t:RationalityVolume\]. In Section \[s:GroupSpaces\], we describe the form $\omega_{G,H}$ in the case of group spaces and deduce Theorem \[t:VolumeQuotientsSU(d,1)Intro\]. In Section \[s:NonExistence\] we give three different ways of proving the vanishing of the form $\omega_{G,H}$, leading to Theorems \[t:AdvanceKobayashiConj\] and \[t:KobayashiRankConj\]. Finally, in Section \[s:LocalFibrations\], we prove that the vanishing of the form $\omega_{G,H}$ is also an obstruction to the existence of certain local foliations of $G/H$ by compact homogeneous subspaces, and we formulate a conjecture about the geometry of compact reductive Clifford–Klein forms. Acknowledgements {#acknowledgements .unnumbered} ---------------- I am very thankful to Gabriele Mondello and Gregory Ginot for helping me understand spectral sequences, to Bertrand Deroin for suggesting the use of Thom’s representation theorem in the proof of Theorem \[t:VolumeCliffordKlein\], to Yosuke Morita for many insightful discussions about our respective works, to Toshiyuki Kobayashi for remarks on a previous version of this article, and to Yves Benoist for encouraging me to improve this previous version. Clifford–Klein forms are fibrations at the homology level {#s:HomologicalFibration} ========================================================= Throughout this paper, $G$ will denote a connected Lie group and $H$ a closed connected subgroup of $G$. We also fix a maximal compact subgroup $L$ of $H$ and a maximal compact subgroup $K$ of $G$ containing $L$. According to the Cartan–Iwasawa–Malcev theorem, $L$ and $K$ are well-defined up to conjugation.
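As a concrete instance of this setup (a standard computation, consistent with the introduction), the space $\H^{p,q}$ corresponds to

```latex
G = \SO_0(p,q+1), \qquad H = \SO_0(p,q), \qquad
K = \SO(p)\times\SO(q+1), \qquad L = \SO(p)\times\SO(q),
```

so that $K/L \simeq S^q$ and $\dim G/H - \dim K/L = (p+q) - q = p$.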
We denote respectively by $\g$, $\h$, $\k$ and $\l$ the Lie algebras of $G$, $H$, $K$ and $L$. We will assume that the action of $G$ on the homogeneous space $X=G/H$ preserves a volume form. Recall that this is equivalent to requiring that $$\Det(G)_{|H} = \Det(H)~,$$ where $\Det(G)$ and $\Det(H)$ denote respectively the *modular functions* of $G$ and $H$. Starting from Section \[s:CompactDual\], we will assume $G$ and $H$ to be reductive and therefore unimodular, in which case this condition is automatically satisfied. A *compact Clifford–Klein form* of $X$ is a quotient of $X$ by a discrete subgroup $\Gamma$ of $G$ acting properly discontinuously and cocompactly. The $G$-invariant volume form $\vol_X$ then descends to a volume form on $\Gamma \backslash X$ (that we still denote by $\vol_X$) and we can define the *volume* of $\Gamma \backslash X$ by $$\Vol\left( \Gamma \backslash X \right) = \left | \int_{\Gamma \backslash X} \vol_X \right|~.$$ Recall that, since $K$ and $L$ are maximal compact subgroups of $G$ and $H$ respectively, the homogeneous spaces $G/K$ and $H/L$ are contractible. Let us fix a torsion-free discrete subgroup $\Gamma$ of $G$ acting properly discontinuously and cocompactly on $G/H$, and denote by $M$ the Clifford–Klein form $$M= \Gamma \backslash G/H~.$$ We introduce two auxiliary Clifford–Klein forms: $$E = \Gamma \backslash G/L$$ and $$B = \Gamma \backslash G/K~.$$ ($E$ and $B$ are smooth manifolds since $\Gamma$ is discrete and torsion-free.) We remark the following facts: - $E$ fibers over $M$ with fibers isomorphic to $H/L$. Since $H/L$ is contractible, this fibration is a homotopy equivalence. - $E$ also fibers over $B$ with fibers isomorphic to $K/L$. - Since $G/K$ is contractible, $B$ is a *classifying space* for $\Gamma$. From the first point, we deduce in particular that the homology of $M$ is the same as the homology of $E$. The third point implies that the homology of $B$ is the homology of $\Gamma$.
Finally, the second point implies that the homologies of $B$, $E$ and $K/L$ are linked (in an elaborate way) by the *Leray–Serre spectral sequence*. We will use the following classical consequence: \[p:HomDimGamma\] Let $q$ denote the dimension of $K/L$ and $p+q$ the dimension of $G/H$. Then the group $\Gamma$ has homological dimension $p$ and $$\HH_p(\Gamma, \Z) \simeq \HH_{p+q}(M,\Z) \simeq \Z~.$$ Let $p'$, $q'$ and $r'$ denote respectively the homological dimensions of $B$, $K/L$ and $E$. By Serre’s theorem, the spectral sequence given by $$\E_{k,l}^2 = \HH_k \left(B, \HH_l(K/L,\Z)\right)$$ converges to $\HH_{k+l}(E,\Z)$. A classical consequence is that $$r' = p' + q'$$ and that $$\label{eq:SpectralSequence} \HH_{p'+q'}(E,\Z) \simeq \HH_{p'}\left(B, \HH_{q'}(K/L, \Z) \right)~.$$ Since $K/L$ is a closed oriented manifold of dimension $q$, we have $q'=q$ and $\HH_q(K/L,\Z) \simeq \Z$. Since $E$ is homotopy equivalent to $M$, which is a closed oriented manifold of dimension $p+q$, we also have $r'= p+q$. Therefore $p'= p$. Moreover, since $L$ is connected, the action of $\Gamma$ on $G/L$ preserves an orientation of the fibers of the fibration $$G/L \to G/K$$ and $\Gamma$ thus acts trivially on $\HH_q(K/L, \Z)$. From \eqref{eq:SpectralSequence}, we obtain $$\Z \simeq \HH_{p+q}(E,\Z) \simeq \HH_p(B,\Z)~.$$ The proposition follows since $E$ is homotopy equivalent to $M$ and $B$ is a classifying space for $\Gamma$. To go further, we need to explicitly describe the isomorphism $\HH_{p+q}(E,\Z) \simeq \HH_p(B,\Z)$. Let $[\Gamma]$ denote a generator of $\HH_p(B,\Z) \simeq \HH_p(\Gamma,\Z)$, and $\pi$ the fibration of $E$ over $B$. Roughly speaking, if one thinks of $[\Gamma]$ as a closed submanifold of $B$ of dimension $p$, then the isomorphism $\HH_p(B,\Z) \overset{~}{\to} \HH_{p+q}(E,\Z)$ maps $[\Gamma]$ to $\pi^{-1}([\Gamma])$, which is a submanifold of $E$ of dimension $p+q$. However, we don’t know whether $[\Gamma]$ can be represented by a submanifold.
One way to overcome this difficulty would be to work with simplicial complexes. However, since we will use differential geometry later, it is more convenient to use Thom’s realization theorem: There exists a closed oriented $p$-manifold $B'$, a smooth map $\phi:B' \to B$ and an integer $k$ such that $$k [\Gamma] = \phi_*[B']~,$$ where $[B']$ denotes the fundamental class of $B'$. Let $\pi': E'\to B'$ be the pull-back of the fibration $\pi:E \to B$ by $\phi$ and $\hat{\phi}: E' \to E$ the lift of $\phi$. The total space of the fibration $E'$ is a closed orientable $(p+q)$-manifold. Let $[E]$ denote a generator of $\HH_{p+q}(E)$ and $[E']$ denote the fundamental class of $E'$. Then, up to switching the orientation of $E'$, we have $$k [E] = \hat{\phi}_*[E']~.$$ The Leray–Serre spectral sequence shows that the fibrations $\pi$ and $\pi'$ respectively induce isomorphisms $$\pi^*: \HH_p(B) \to \HH_{p+q}(E)$$ and $${\pi'}^*: \HH_p(B') \to \HH_{p+q}(E')~.$$ By naturality of the Serre spectral sequence, we have the following commuting diagram: $$\xymatrix{ \HH_p(B') \ar[d]_{{\pi'}^*} \ar[r]^{\phi_*} & \HH_p(B) \ar[d]^{\pi^*} \\ \HH_{p+q}(E') \ar[r]^{\hat{\phi}_*} & \HH_{p+q}(E)~. }$$ Now, $B'$ and $E'$ are closed oriented manifolds of dimension $p$ and $p+q$ respectively. Since ${\pi'}^*$ is an isomorphism, it maps the fundamental class of $B'$ to the fundamental class of $E'$ (up to switching the orientation of $E'$). Since $\phi_*[B'] = k [\Gamma]$, we thus have $$\hat{\phi}_*[E'] = k [E]~.$$ To summarize, we proved that the rational homology of $E$ in dimension $p+q$ is generated by a cycle that “fibers” over a $p$-cycle of $B$. Fiberwise integration of the volume form {#s:FiberwiseIntegration} ======================================== Let $E'$, $B'$, $\phi$, $\hat{\phi}$ and $\pi$, $\pi'$ be as in the previous section. Denote by $\psi$ the projection from $E$ to $M$. 
Recall that the volume form $\vol_X$ on $X=G/H$ induces a volume form on $M$ that we still denote by $\vol_X$. Since $\psi$ is a homotopy equivalence, we have $$\Vol(M) = \left| \int_M \vol_X \right| = \left| \int_{[E]} \psi^*\vol_X \right|~.$$ Since $k[E] = \hat{\phi}_*[E']$, we have $$\left| \int_{[E]} \psi^*\vol_X \right| = \frac{1}{k}\left| \int_{E'} \hat{\phi}^*\psi^*\vol_X \right|~.$$ Now, since $E'$ fibers over $B'$, we can “average” the form $\hat{\phi}^*\psi^*\vol_X$ along the fibers to obtain a $p$-form on $B'$ whose integral will give the volume of $M$. Let $x$ be a point in $G/K$ and let $F$ denote the fiber $\pi^{-1}(x)$. Choose some volume form $\vol_F$ on $F$ and let $\xi$ denote the section of $\Lambda^q TF$ such that $\vol_F(\xi)=1$. At every point $y$ of $F$, the $p$-form obtained by contracting $\psi^*\vol_X$ with $\xi$ has $T_y F$ in its kernel and therefore induces a $p$-form $\omega_y$ on $T_x G/K$. \[d:OmegaH\] The form $\omega_{G,H}$ on $G/K$ is defined at the point $x$ by $$(\omega_{G,H})_x = \int_F \omega_y\ \d \vol_F(y)~.$$ One easily checks that this definition does not depend on the choice of $\vol_F$. Since the maps $\psi$ and $\pi$ are equivariant with respect to the actions of $G$, the forms $\psi^* \vol_X$ and $\omega_{G,H}$ are $G$-invariant. By a slight abuse of notation, we still denote by $\omega_{G,H}$ the induced $p$-form on $B = \Gamma \backslash G/K$. \[p:FiberIntegration\] For any submanifold $V$ of dimension $p$ in $G/K$, we have $$\int_V \omega_{G,H} = \int_{\pi^{-1}(V)} \psi^*\vol_X~.$$ This is presumably a classical result of differential geometry. Let $U$ be an open subset of $V$ over which the fibration $\pi$ is trivial. Let us identify $\pi^{-1}(U)$ with $K/L \times U$. We can locally write the form $\psi^*\vol_X$ as $f(y,x) \vol_F \wedge \vol_U$ for some function $f$ on $K/L\times U$ and some volume forms $\vol_F$ and $\vol_U$ on $K/L$ and $U$ respectively.
Let $\xi$ be the section of $\Lambda^q T K/L$ such that $\vol_F(\xi) = 1$. The contraction of $\psi^*\vol_X$ with $\xi$ is thus $f(y,x) \vol_U$. By construction, we thus have $$(\omega_{G,H})_x = \left(\int_F f(y,x) \d \vol_F(y)\right) \vol_U~,$$ and therefore $$\begin{aligned} \int_{\pi^{-1}(U)} \psi^*\vol_X & = & \int_{F\times U} f(y,x) \d \vol_F(y) \d \vol_U(x) \\ \ & = & \int_U \omega_{G,H}~.\end{aligned}$$ In particular, if $V$ is a sphere of dimension $p$ in $G/K$ that can be homotoped to a point $z$, then $\pi^{-1}(V)$ can be homotoped to the fiber $\pi^{-1}(z)$. We thus have $$\int_V \omega_{G,H} = \int_{\pi^{-1}(V)} \psi^*\vol_X = 0~,$$ since $\psi^*\vol_X$ is closed (so its integral over $\pi^{-1}(V)$ is invariant under this homotopy) and its integral over the fiber vanishes for dimension reasons. This shows that $\omega_{G,H}$ is closed. In the following, we will assume that $G$ is semi-simple, in which case any $G$-invariant form on $G/K$ is closed, according to a well-known theorem of Cartan. We can now conclude the proof of Theorem \[t:VolumeCliffordKlein\]. Indeed, we have $$\begin{aligned} \Vol(M) & = & \frac{1}{k} \left| \int_{E'} \hat{\phi}^* \psi^* \vol_X \right|\\ \ & = & \frac{1}{k} \left| \int_{B'} \phi^* \omega_{G,H} \right| \quad \textrm{by Proposition \ref{p:FiberIntegration}}\\ \ & = & \left| \int_{[\Gamma]} \omega_{G,H} \right|~.\\\end{aligned}$$ Let us conclude this section by giving a more explicit way to compute the form $\omega_{G,H}$ when $G$ is a connected semi-simple Lie group with finite center. Recall that in that case, the tangent space of $G/K$ at the point $x_0 = K$ can be identified with the orthogonal of $\k$ in $\g$ with respect to the Killing form of $\g$. Moreover, the form $\omega_{G,H}$ is uniquely determined by its restriction to $T_{x_0}G/K$.
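In the trivialization used in the proof of Proposition \[p:FiberIntegration\], the identity is simply Fubini’s theorem. The following toy numerical check (on a trivial bundle $F\times U = [0,1]\times[0,1]$ with an arbitrary, hypothetical density $f$; none of these objects come from the paper) illustrates that integrating over the total space agrees with first averaging along the fiber and then integrating over the base:

```python
import numpy as np

# Trivial bundle F x U = [0,1] x [0,1]; locally psi*vol_X = f(y,x) dvol_F dvol_U.
n = 400
y = (np.arange(n) + 0.5) / n        # fiber coordinate on F (midpoint rule)
x = (np.arange(n) + 0.5) / n        # base coordinate on U
Y, X = np.meshgrid(y, x, indexing="ij")
f = np.exp(-X * Y) * (1.0 + Y**2)   # hypothetical density f(y, x)

# Integral of psi*vol_X over the total space ...
total = f.mean()                    # midpoint rule on the unit square

# ... equals the integral over U of the fiberwise integral, i.e. of omega_{G,H}.
omega = f.mean(axis=0)              # contraction/average along the fiber F
assert np.isclose(omega.mean(), total)
```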
If $\frak{v}$ is a subspace of $\g$ of dimension $d$ in restriction to which the Killing form $\Kill_G$ is non-degenerate, we denote by $\omega_{\frak{v}}$ the $d$-form on $\g$ given by composing the orthogonal projection on $\frak{v}$ with the volume form on $\frak{v}$ induced by the restriction of the Killing form. Finally, let us provide $K/L$ with the left-invariant volume form $\omega_{K/L}$ induced by the restriction of the metric on $G/H$. \[l:ComputationOmegaGH\] The form $\omega_{G,H}$ at the point $x_0$ is given by $$(\omega_{G,H})_{x_0} = \int_{K/L} \Ad_u^* \omega_{\k^\perp \cap \h^\perp}\ \d \omega_{K/L}(u)~.$$ In the construction of $\omega_{G,H}$ (Definition \[d:OmegaH\]), we choose $\omega_{K/L}$ as our volume form on $F_{x_0} = K/L$. Let $\xi$ be the $q$-vector field on $K/L$ such that $\omega_{K/L}(\xi) = 1$. At $y_0 = L$, the pull-back of $\vol_X$ by the projection $\psi : G/L \to G/H$ identifies with the form $\omega_{\h^\perp}$ on $\g$. Since the $q$-vector $\xi$ at $y_0$ is given by $e_1\wedge \ldots \wedge e_q$, where $(e_1,\ldots , e_q)$ is an orthonormal frame of $\k \cap \h^\perp$, we have $$(i_\xi\omega_{\h^\perp})_{y_0} = \omega_{\k^\perp \cap \h^\perp}~.$$ By left invariance, we also have $$(i_\xi\psi^*\vol_X)_{u\cdot y_0} = u_*\omega_{\k^\perp \cap \h^\perp}~.$$ Now, identifying $T_{u\cdot y_0}G/L$ with $u_* \l^\perp$, the differential of $\pi: G/L \to G/K$ is given at $u\cdot y_0$ by $$\begin{aligned} \d \pi_{u\cdot y_0}(u_* v) & = & \dt \pi(u \exp(t v) \cdot y_0) \\ & = & \dt \pi \left( \exp(t \Ad_u(v)) u \cdot y_0 \right) \\ & = & \dt \exp(t \Ad_u(v)) \cdot \pi(u \cdot y_0)\\ & = & \dt \exp(t \Ad_u(v)) \cdot x_0\\ & = & p_{\k^\perp}\Ad_u(v)~,\end{aligned}$$ where $p_{\k^\perp}$ denotes the orthogonal projection on $\k^\perp$. Therefore, the form $(i_\xi\psi^*\vol_X)$ at $u\cdot y_0$, whose kernel contains $u_* \k$, induces by projection the form ${\Ad_u}_* \omega_{\k^\perp \cap \h^\perp}$ at $x_0$.
By construction of the form $\omega_{G,H}$, we thus obtain $$(\omega_{G,H})_{x_0} = \int_{K/L} {\Ad_u}_* \omega_{\k^\perp \cap \h^\perp}\ \d \omega_{K/L}(u)~.$$ The corresponding form on the compact dual {#s:CompactDual} ========================================== From now on, we assume that $G$ is a connected semi-simple Lie group with finite center and that $H$ is a reductive subgroup. In this section we investigate the form $\omega_{G,H}^U$ corresponding to $\omega_{G,H}$ on the *compact dual* of $G/K$.\ Write $$\g = \k \oplus \p~,$$ where $\p$ is the orthogonal of $\k$ with respect to the Killing form. Then $\k \oplus i \p$ is a Lie subalgebra of the complexification $\g^\C$ of $\g$, generating a compact Lie group $G_U$ containing $K$, called the *compact dual* of $G$. The compact symmetric space $G_U/K$ is the *compact dual* of the symmetric space $G/K$. By construction, the tangent spaces at the base point $x_0 = K$ in $G/K$ and $G_U/K$ are isomorphic as representations of $K$. This induces an isomorphism between the exterior algebras of invariant forms on $G/K$ and $G_U/K$. If $\alpha$ is a $G$-invariant form on $G/K$, the image of $\alpha$ by this isomorphism will be called the *form corresponding to $\alpha$ on the compact dual* and will be denoted $\alpha^U$. The group $G_U$ contains the compact dual $H_U$ of $H$, and one can define a map $\iota: H_U/L \to G_U/K$. This map may not be injective, but it is a covering of finite degree onto its image, since $L$ is a finite-index subgroup of $H_U\cap K$. We denote by $[H_U/L]$ the fundamental class of $H_U/L$. Let $N$ be a closed oriented manifold of dimension $d$ and $[c]$ a rational homology class of degree $k$ on $N$. Let $$\vee: \HH_k(N,\Q) \times \HH_{d-k}(N,\Q)\to \Q$$ denote the intersection pairing.
The cohomology class $[\alpha] \in \HH^{d-k}(N,\Q)$ is called *Poincaré-dual* to $[c]$ if for any $[c'] \in \HH_{d-k}(N,\Q)$, one has $$\int_{[c']} [\alpha] = [c] \vee [c']~.$$ According to Poincaré’s duality theorem, every rational homology class of a closed oriented manifold has a unique Poincaré-dual cohomology class. \[t:PoincareDual\] The cohomology class of the form $$\frac{1}{\Vol(G_U/H_U)}\omega_{G,H}^U \in \HH^\bullet(G_U/K,\Q)$$ is Poincaré-dual to the homology class $\iota_*[H_U/L]$. Let $x_0$ denote the point $K$ in $G_U/K$. By Lemma \[l:ComputationOmegaGH\], we have $$(\omega_{G,H}^U)_{x_0} = \int_{K/L} \Ad_u^* \omega_{\k^\perp \cap i\h^\perp}\ \d \omega_{K/L}(u)~.$$ Thus, if $\phi$ denotes the projection from $G_U/L$ to $G_U/H_U$ and $\pi$ the projection from $G_U/L$ to $G_U/K$, then one can reproduce the arguments of the previous section word for word and show that $$\int_C \frac{1}{\Vol(G_U/H_U)}\omega_{G,H}^U = \int_{\pi^{-1}(C)} \frac{1}{\Vol(G_U/H_U)} \phi^*\vol_{G_U/H_U}$$ for any oriented submanifold $C$ of $G_U/K$ of dimension $p$. Now, the form $\frac{1}{\Vol(G_U/H_U)}\vol_{G_U/H_U}$ is Poincaré-dual to the homology class of a point in $G_U/H_U$, and $\phi^*\vol_{G_U/H_U}$ is thus dual to the homology class of the fiber $H_U/L \subset G_U/L$ of the map $\phi$. Therefore, $\int_{\pi^{-1}(C)} \frac{1}{\Vol(G_U/H_U)} \phi^*\vol_{G_U/H_U}$ counts the homological intersection number between $H_U/L$ and $\pi^{-1}(C)$ in $G_U/L$. This is equal to $k$ times the homological intersection number between $\iota(H_U/L)$ and $C$ in $G_U/K$, where $k$ denotes the degree of the covering map $\iota : H_U/L \to H_U/H_U\!\cap\! K$. Hence $\int_{\pi^{-1}(C)} \frac{1}{\Vol(G_U/H_U)} \phi^*\vol_{G_U/H_U}$ is equal to $[C] \vee \iota_*[H_U/L]$. The conclusion follows.
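A standard illustration of this duality, independent of the present setting: on the torus $N = \R^2/\Z^2$, the class of the form $\d y$ is Poincaré-dual to the $1$-cycle $c = \{y = 0\}$, since for every $1$-cycle $c'$,

```latex
\int_{[c']} \d y = [c] \vee [c']~,
```

both sides being the signed number of times $c'$ crosses the circle $\{y = 0\}$.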
Cohomology and inclusion of symmetric spaces {#s:InclusionSymSpaces} ============================================ In this section, we go deeper into the cohomology theory of symmetric spaces in order to find conditions under which the form $\omega_{G,H}^U$ vanishes and conditions under which it is a *Chern–Weil form*. We say that $\omega_{G,H}^U$ is a *Chern–Weil form* if its cohomology class is a Chern–Weil characteristic class of the canonical principal $K$-bundle over $G_U/K$ (see Section \[s:Rigidity\] for details). Our aim is to prove the following theorem: \[t:RankCondition\] Let $\rk$ denote the complex rank of a Lie group. - The form $\omega_{G,H}^U$ vanishes when $$\rk(H_U)- \rk(L) > \rk(G_U) - \rk(K)~.$$ - If $\omega_{G,H}^U$ does not vanish, then it is a Chern–Weil form if and only if $$\rk(H_U)- \rk(L) = \rk(G_U) - \rk(K)~.$$ The cohomology of symmetric spaces was described in works of Cartan and Borel in the 1950s [@Cartan50; @Borel53]. This description is summarized in the following theorem: \[t:CohomologySymSpace\] Let $G_U/K$ be a symmetric space of compact type, with $K$ connected.
Then - The cohomology algebra $\HH^\bullet(G_U/K,\Q)$ is isomorphic to a tensor product $$\HH_{even}^\bullet(G_U/K,\Q) \otimes \HH_{odd}^\bullet(G_U/K,\Q)~,$$ - the subalgebra $\HH_{even}^\bullet(G_U/K,\Q)$ is the algebra of Chern–Weil classes of the canonical principal $K$-bundle over $G_U/K$, and is concentrated in even degree, - the subalgebra $\HH_{odd}^\bullet(G_U/K,\Q)$ is isomorphic to $\Lambda^\bullet\left(\Prim(G_U/K,\Q)\right)$ where $\Prim(G_U/K,\Q)$ is a vector subspace of dimension $\rk(G_U) - \rk(K)$ generated by elements of odd degree. The cohomology algebra of a symmetric space thus has the structure of a bi-graded algebra: $$\HH^\bullet(G_U/K,\Q) = \bigoplus_{p,q \geq 0} \HH_{even}^p(G_U/K,\Q) \otimes \HH_{odd}^q(G_U/K,\Q)~.$$ We will say that a cohomology class $\alpha$ has bi-degree $(p,q)$ if it belongs to $\HH_{even}^p(G_U/K,\Q) \otimes \HH_{odd}^q(G_U/K,\Q)$. \[p:PreserveBigrading\] The map $\iota^*: \HH^\bullet(G_U/K,\Q) \to \HH^\bullet(H_U/L,\Q)$ maps $\HH_{even}^p(G_U/K,\Q)$ to $\HH_{even}^p(H_U/L,\Q)$ and $\HH_{odd}^p(G_U/K,\Q)$ to $\HH_{odd}^p(H_U/L,\Q)$, and thus preserves the bi-grading. Moreover, it maps $\Prim(G_U/K,\Q)$ to $\Prim(H_U/L,\Q)$. This proposition is likely to be a straightforward consequence of the proof of Cartan’s theorem. We prove it in the forthcoming paper [@Tholozan8].\ If $G_U/K$ is a symmetric space of compact type, let us denote by $\dimeven(G_U/K)$ and $\dimodd(G_U/K)$ the maximal degree of a nonzero cohomology class in $\HH^\bullet_{even}(G_U/K,\Q)$ and $\HH^\bullet_{odd}(G_U/K,\Q)$, respectively. Since $G_U/K$ is compact and orientable, we obtain by Cartan’s theorem that $$\dimeven(G_U/K) + \dimodd(G_U/K) = \dim(G_U/K)$$ and that $$\HH^{\dimeven(G_U/K)}_{even}(G_U/K,\Q) \otimes \HH^{\dimodd(G_U/K)}_{odd}(G_U/K,\Q) = \HH^{\dim(G_U/K)}(G_U/K,\Q)~.$$ Thus, both $\HH^{\dimeven(G_U/K)}_{even}(G_U/K,\Q)$ and $\HH^{\dimodd(G_U/K)}_{odd}(G_U/K,\Q)$ have dimension $1$.
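Spheres provide a standard illustration of the two extremes of this bi-grading (recall that the complex rank of $\SO(n)$ is $\left\lfloor n/2 \right\rfloor$). For even-dimensional spheres one has $$\S^{2m} = \SO(2m+1)/\SO(2m)~, \qquad \rk(G_U) - \rk(K) = m - m = 0~,$$ so that $\Prim(\S^{2m},\Q) = 0$ and $\HH^\bullet(\S^{2m},\Q) = \HH^\bullet_{even}(\S^{2m},\Q)$ is generated by the Euler class in degree $2m$; here $\dimeven(\S^{2m}) = 2m$ and $\dimodd(\S^{2m}) = 0$. For odd-dimensional spheres one has $$\S^{2m+1} = \SO(2m+2)/\SO(2m+1)~, \qquad \rk(G_U) - \rk(K) = (m+1) - m = 1~,$$ so that $\HH^\bullet(\S^{2m+1},\Q) = \Lambda^\bullet\left(\Prim(\S^{2m+1},\Q)\right)$ with a single primitive generator of degree $2m+1$; here $\dimeven(\S^{2m+1}) = 0$ and $\dimodd(\S^{2m+1}) = 2m+1$.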
\[p:InjectivityOdd\] If $\iota_*[H_U/L]$ does not vanish in $\HH_\bullet (G_U/K, \Q)$, then the homomorphism $$\iota^*: \HH^{\dimeven(H_U/L)}_{even}(G_U/K,\Q) \to \HH^{\dimeven(H_U/L)}_{even}(H_U/L,\Q)$$ is surjective, and the homomorphism $$\iota^*: \Prim(G_U/K,\Q) \to \Prim(H_U/L,\Q)$$ is surjective. If $\iota_*[H_U/L]$ does not vanish in $\HH_\bullet (G_U/K, \Q)$, then, by Poincaré duality, there exists an element $\alpha \in \HH^{\dim(H_U/L)}(G_U/K,\Q)$ such that $\iota^*\alpha \neq 0$. By Cartan’s theorem, we can write $$\alpha = \sum_{k+l = \dim(H_U/L)} \beta_k \otimes \gamma_l~,$$ with $\beta_k \in \HH^k_{even}(G_U/K,\Q)$ and $\gamma_l \in \HH^{l}_{odd}(G_U/K,\Q)$. Since $\iota^*\beta_k = 0$ for $k> \dimeven(H_U/L)$ and $\iota^*\gamma_l = 0$ for $l> \dimodd(H_U/L)$, we get that $$\iota^*\alpha = \iota^*\beta_{\dimeven(H_U/L)} \otimes \iota^*\gamma_{\dimodd(H_U/L)}\neq 0~,$$ which implies that neither $\iota^*\beta_{\dimeven(H_U/L)}$ nor $\iota^*\gamma_{\dimodd(H_U/L)}$ vanishes. Since $\HH^{\dimeven(H_U/L)}_{even}(H_U/L,\Q)$ and $\HH^{\dimodd(H_U/L)}_{odd}(H_U/L,\Q)$ are one-dimensional, we conclude that $$\iota^*: \HH^{\dimeven(H_U/L)}_{even}(G_U/K,\Q) \to \HH^{\dimeven(H_U/L)}_{even}(H_U/L,\Q)$$ and $$\iota^*: \HH^{\dimodd(H_U/L)}_{odd}(G_U/K,\Q) \to \HH^{\dimodd(H_U/L)}_{odd}(H_U/L,\Q)$$ are surjective. Now, by Cartan’s theorem, $\HH^\bullet_{odd}(H_U/L, \Q) = \Lambda^\bullet \Prim(H_U/L,\Q)$. If $\iota^*:\Prim(G_U/K,\Q)\to \Prim(H_U/L,\Q)$ were not surjective, then $\iota^*\left(\HH^\bullet_{odd}(G_U/K,\Q)\right)$ would be contained in $\Lambda^\bullet F$ for a proper subspace $F$ of $\Prim(H_U/L,\Q)$, and it would not contain any form of top degree. Since $$\iota^*: \HH^{\dimodd(H_U/L)}_{odd}(G_U/K,\Q) \to \HH^{\dimodd(H_U/L)}_{odd}(H_U/L,\Q)$$ is surjective, we conclude that $\iota^*:\Prim(G_U/K,\Q)\to \Prim(H_U/L,\Q)$ is surjective. We can now prove Theorem \[t:RankCondition\]. Assume that $\omega_{G,H}^U$ does not vanish.
Then, by Proposition \[t:PoincareDual\], $\iota_*[H_U/L]$ does not vanish in $\HH_\bullet(G_U/K,\Q)$. By Proposition \[p:InjectivityOdd\], the map $\iota^*: \Prim(G_U/K,\Q) \to \Prim(H_U/L,\Q)$ is surjective, which implies that $$\rk(H_U)-\rk(L) = \dim \Prim(H_U/L,\Q) \leq \dim \Prim(G_U/K,\Q) = \rk(G_U) - \rk(K)~.$$ This proves the first point.\ Now, since $\frac{1}{\Vol(G_U/H_U)}\omega_{G,H}^U$ is Poincaré dual to $\iota_*[H_U/L]$, we have $$\int_{H_U/L} \iota^*\alpha = \frac{1}{\Vol(G_U/H_U)}\int_{G_U/K} \alpha \wedge \omega_{G,H}^U$$ for all $\alpha \in \HH^{\dim(H_U/L)}(G_U/K,\Q)$. In particular, for all $(k,l)$ such that $k+l=\dim(H_U/L)$ and for all $\alpha \in \HH^{k+l}(G_U/K,\Q)$ of bi-degree $(k,l)$, we have $\int_{G_U/K} \alpha \wedge \omega_{G,H}^U = 0$ unless $$(k,l) = \left(\dimeven(H_U/L), \dimodd(H_U/L)\right)~.$$ This implies that $\omega_{G,H}^U$ has bi-degree $$\left(\dimeven(G_U/K) - \dimeven(H_U/L), \dimodd(G_U/K) - \dimodd(H_U/L)\right)~.$$ Therefore, $[\omega_{G,H}^U]$ belongs to $\HH^\bullet_{even}(G_U/K,\Q)$ if and only if $$\label{eq:EqualityOddDim} \dimodd(G_U/K) = \dimodd(H_U/L)~.$$ Since $\iota^*: \Prim(G_U/K,\Q) \to \Prim(H_U/L, \Q)$ is surjective, Equality \[eq:EqualityOddDim\] holds if and only if it is also injective, which is equivalent to $$\rk(H_U)-\rk(L) = \rk(G_U) - \rk(K)~.$$ This concludes the proof of Theorem \[t:RankCondition\]. Characteristic classes and rationality of the volume {#s:Rigidity} ==================================================== In this section, we explain why, when $\omega_{G,H}^U$ is a Chern–Weil form, the volume of every compact quotient of $G/H$ is a rational multiple of $\Vol(G_U/H_U)$. This is a classical argument which relies on the fact that, by Proposition \[t:PoincareDual\], the form $\frac{1}{\Vol(G_U/H_U)}\omega_{G,H}^U$ represents an *integral* cohomology class.
The precise result that we will prove is the following: \[t:RationalVolumePrecise\] Assume that we have the equality: $$\rk(G_U) - \rk(K) = \rk(H_U) - \rk(L)~.$$ Then there exists an integer $d$ such that, for any torsion-free discrete subgroup $\Gamma$ of $G$ acting properly discontinuously on $G/H$, the volume $\Vol(\Gamma \backslash G/H)$ is an integral multiple of $\frac{1}{d}\Vol(G_U/H_U)$. Note that, given a normalization of the volume form on $G/H$, there is a canonical way to normalize the volume on $G_U/H_U$ accordingly. Thus the statement of Theorem \[t:RationalVolumePrecise\] does not depend on the choice of such a normalization. Let $BK$ be a classifying space for $K$ and $EK \to BK$ be the associated universal principal $K$-bundle. There exists a map $f: G_U/K \to BK$, unique up to homotopy, such that the principal $K$-bundle $G_U$ is isomorphic to $f^*EK$. The map $f$ induces a homomorphism $$f^*: \HH^\bullet(BK,\R) \to \HH^\bullet (G_U/K, \R)~.$$ By Theorem \[t:CohomologySymSpace\] and by definition of Chern–Weil classes, the image of $f^*$ is the subalgebra $\HH^\bullet_{even}(G_U/K,\R)$. It contains the $\Z$-module $f^* \HH^\bullet(BK,\Z)$ as a lattice. It follows from Proposition \[t:PoincareDual\] that the form $\frac{1}{\Vol(G_U/H_U)}\omega_{G,H}^U$ represents an integral cohomology class. Moreover, we saw in the previous section that, under the condition $\rk(G_U) - \rk(K) = \rk(H_U) - \rk(L)$, this cohomology class belongs to $\HH^\bullet_{even}(G_U/K,\R)$. Therefore, the cohomology class $\frac{1}{\Vol(G_U/H_U)}[\omega_{G,H}^U]$ belongs to the $\Z$-module $\Lambda = \HH^\bullet_{even}(G_U/K,\R) \cap \HH^\bullet(G_U/K,\Z)$. Since we have $$f^* \HH^\bullet(BK,\Z) \subset \Lambda$$ and since $f^* \HH^\bullet(BK,\Z)$ is a lattice in $\HH^\bullet_{even}(G_U/K,\R)$, we obtain that $f^* \HH^\bullet(BK,\Z)$ has finite index in $\Lambda$.
Therefore, there exists an integer $d$ such that $$\frac{d}{\Vol(G_U/H_U)}[\omega_{G,H}^U] \in f^* \HH^\bullet(BK,\Z)~.$$ Let us now denote by $\Sym^\bullet(\k)^K$ the algebra of polynomials on $\k$ invariant under the adjoint action of $K$. Chern–Weil theory provides an isomorphism $$\Phi: \HH^\bullet(BK,\R) \to \Sym^\bullet(\k)^K$$ such that, for any smooth map $f$ from a manifold $M$ to $BK$ and for any cohomology class $\alpha$ in $\HH^\bullet(BK,\R)$, the class $f^*\alpha$ in $\HH^\bullet(M,\R)$ is represented by the differential form $\Phi(\alpha)(F_\nabla)$, where $F_\nabla$ is the curvature of any connection on the principal bundle $f^*EK$. We denote by $\Sym_\Z^\bullet(\k)^K$ the image under $\Phi$ of $\HH^\bullet(BK,\Z)$. Let $\nabla$ and $\nabla^U$ denote respectively the connections on the $K$-principal bundles over $G/K$ and $G_U/K$ given by the distribution orthogonal to the fibers (with respect to the Killing metric). These connections (hence their curvature forms) are respectively $G$- and $G_U$-invariant. By the preceding remarks, there is a polynomial $P \in \Sym_\Z^\bullet(\k)^K$ such that $\frac{d}{\Vol(G_U/H_U)}\omega_{G,H}^U$ and $P(F_{\nabla^U})$ are cohomologous. Since both forms are $G_U$-invariant, we actually have $$\frac{d}{\Vol(G_U/H_U)}\omega_{G,H}^U=P(F_{\nabla^U})~.$$ By duality between the symmetric spaces $G_U/K$ and $G/K$, we then have $$\frac{d}{\Vol(G_U/H_U)}\omega_{G,H} = (-1)^{\deg P}\, P(F_\nabla)~.$$ Let us denote by $\alpha$ the inverse image of $P$ by the Chern–Weil isomorphism $\Phi$.
By Theorem \[t:VolumeCliffordKlein\], we have $$\begin{aligned} d\,\frac{\Vol(\Gamma \backslash G/H)}{\Vol(G_U/H_U)} & = & \left | \int_{[\Gamma]} \frac{d}{\Vol(G_U/H_U)}\omega_{G,H} \right|\\ & = & \left|\int_{[\Gamma]} P(F_\nabla)\right| \\ & = & \left| \int_{[\Gamma]} f^*\alpha\right|~,\end{aligned}$$ where $f: \Gamma\backslash G/K \to BK$ is such that the $K$-principal bundle $\Gamma \backslash G$ over $\Gamma\backslash G/K$ is isomorphic to $f^*EK$. Since $\alpha$ belongs to $\HH^\bullet(BK,\Z)$, we obtain that $\frac{d \Vol(\Gamma \backslash G/H)}{\Vol(G_U/H_U)}$ is an integer. This proves Theorem \[t:RationalVolumePrecise\]. Finally, let us conclude the proof of Theorem \[t:RationalityVolume\]. Recall that the complex rank of $\SO(n)$ is $\left \lfloor \frac{n}{2}\right \rfloor$ and that the complex rank of $\SL(n,\R)$ is $n-1$. It is then a simple computation to verify that the equality $\rk(H_U) - \rk(L) = \rk(G_U) - \rk(K)$ is satisfied in cases $(1)$ and $(2)$. For case $(3)$, it is a well-known fact that $\rk(G_U)=\rk(K)$ when $G_U/K$ is Hermitian (see [@HartnickOtt12 Proposition 2.3]). In that case, any $G_U$-invariant form is a Chern–Weil form. In particular, $\omega_{G,H}^U$ is a Chern–Weil form (which vanishes if $\rk(H_U) - \rk(L) >0$). The case of group spaces {#s:GroupSpaces} =========================== In this section, we specialize the previous results to the case of compact quotients of *group spaces*. \[d:GroupSpace\] A group space is a semi-simple Lie group $H$ provided with the action of $H\times H$ given by $$(g,h)\cdot x = gxh^{-1}$$ for all $(g,h)\in H\times H$ and all $x\in H$. The group space $H$ can also be presented as the quotient $H\times H/\Delta(H)$, where $\Delta(H)$ denotes the diagonal embedding of $H$ in $H\times H$. Group spaces form a large class of pseudo-Riemannian symmetric spaces (the pseudo-Riemannian metric being the Killing metric on $H$) which is interesting to study for several reasons.
First, given a compact Clifford–Klein form $\Gamma \backslash G/H$ of a reductive homogeneous space and a uniform lattice $\Lambda$ in $H$, one can construct the double quotient $$\Gamma \backslash G/\Lambda~,$$ which is a compact Clifford–Klein form of the group space $G$. In order to understand all compact Clifford–Klein forms of reductive homogeneous spaces, it is thus enough (in theory) to understand compact quotients of group spaces. The second motivation for studying group spaces is that, when $H$ has rank one, its compact Clifford–Klein forms are well-understood, thanks to results of Kobayashi [@Kobayashi93; @Kobayashi98], Kassel [@Kassel08], Guéritaud [@GueritaudKassel], Guichard and Wienhard [@GGKW]. Let $\Gamma$ be a uniform lattice in $H$ and $\rho: \Gamma \to H$ a homomorphism. We denote by $\Gamma_\rho$ the graph of $\rho$, i.e. the subgroup of $H\times H$ defined by $$\Gamma_\rho = \{(\gamma, \rho(\gamma)) , \gamma \in \Gamma\}~.$$ The *translation length* of an element $h\in H$ is defined by $$l(h) = \inf_{x\in H/L} d(x,h\cdot x)~,$$ where $d$ is the distance associated to the $H$-invariant symmetric Riemannian metric on $H/L$. We say that the homomorphism $\rho$ is *uniformly contracting* if there exists $\lambda < 1$ such that for any $\gamma \in \Gamma$, $$l(\rho(\gamma)) \leq \lambda l(\gamma)~.$$ \[t:QuotientsSU(d,1)\] Let $H$ be a Lie group of rank $1$. Then every torsion-free discrete subgroup of $H\times H$ acting properly discontinuously and cocompactly on $H$ is equal to $\Gamma_\rho$ for some uniform lattice $\Gamma$ in $H$ and some uniformly contracting homomorphism $\rho:\Gamma \to H$.
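A degenerate but instructive example: if $e_H$ denotes the identity element of $H$, the trivial homomorphism $\rho: \gamma \mapsto e_H$ is uniformly contracting, since $$l(\rho(\gamma)) = l(e_H) = 0 \leq \lambda\, l(\gamma) \qquad \textrm{for all } \gamma \in \Gamma \textrm{ and any } \lambda \in (0,1)~.$$ In this case $\Gamma_\rho = \Gamma \times \{e_H\}$ acts on the group space $H$ by left multiplication, and the quotient $\Gamma_\rho \backslash H = \Gamma \backslash H$ is compact since $\Gamma$ is a uniform lattice in $H$.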
Conversely, Benoist–Kobayashi’s properness criterion [@Benoist96; @Kobayashi96] implies that such a group $\Gamma_\rho$ does act properly discontinuously and cocompactly on $H$.\ The purpose of this section is to express the volume of $\Gamma_\rho \backslash H$ when $H = \SO_0(d,1)$ or $\SU(d,1)$ in terms of classical invariants associated to the representation $\rho$.[^1] In the case of $\SO_0(d,1)$, we will recover the main theorem of [@Tholozan5].\ In order to do so, we first give a general way to compute the form $\omega_{G,H}$ for any group space $H\times H/\Delta(H)$, knowing the algebra of $H$-invariant forms on $H/L$. We thus restrict to the case where $G = H\times H$ acts on $X = H$ by left and right multiplication. To simplify notation, we denote by $\omega_H$ the form $\omega_{H\times H, \Delta(H)}$ constructed in Section \[s:FiberwiseIntegration\] and by $\omega_H^U$ the corresponding form on the compact dual. The forms $\omega_H$ and $\omega_H^U$ are respectively an $H\times H$-invariant form on $H/L \times H/L$ and an $H_U \times H_U$-invariant form on $H_U/L \times H_U/L$. Let $X$ be a compact oriented manifold of dimension $d$. We denote by $\vee$ the homological intersection pairing of $X$ and by $\wedge$ the cohomological product. For $0\leq k \leq d$, let us fix a basis $(e_1^k, \ldots , e_{n_k}^k)$ of the torsion-free part of $\HH_k(X, \Z)$. Let us denote by $({e_1^k}^*, \ldots , {e_{n_k}^k}^*)$ the dual basis for the intersection pairing, i.e.
the basis of the torsion-free part of $\HH_{d-k}(X,\Z)$ characterized by $$e_i^k \vee {e_j^k}^* = \delta_{ij}~.$$ Finally, let us denote by $(\alpha_1^k,\ldots , \alpha_{n_k}^k)$ and $({\alpha_1^k}^*, \ldots , {\alpha_{n_k}^k}^*)$ the bases of $\HH^k(X,\Q)$ and $\HH^{d-k}(X,\Q)$ satisfying respectively $$\int_{e_i^k} \alpha_j^k = \delta_{ij}$$ and $$\int_{{e_i^k}^*} {\alpha_j^k}^* = \delta_{ij}~.$$ Recall that the cohomology ring of $X\times X$ is naturally isomorphic to the tensor product $$\HH^\bullet(X,\Q) \otimes \HH^\bullet(X,\Q)~.$$ The *Lefschetz cohomology class* on $X \times X$ is the cohomology class of degree $d$ defined by $$\beta_{Lef} = \sum_{k=0}^d (-1)^{d-k} \sum_{i = 1}^{n_k} \alpha_i^k \otimes {\alpha_i^k}^*~.$$ The Lefschetz cohomology class on $H_U/L \times H_U/L$ can be represented by a unique $H_U\times H_U$-invariant form that we call the *Lefschetz form*. We also call the corresponding $H\times H$-invariant form on the dual symmetric space $H/L\times H/L$ the *Lefschetz form*.\ The following proposition characterizes the Lefschetz cohomology class and shows in particular that it does not depend on our choice of basis for the homology. \[p:DiagonalDual\] The Lefschetz cohomology class of $X$ is Poincaré-dual to the diagonal embedding of $X$ in $X\times X$. In particular, when integrating the Lefschetz cohomology class on the graph of some map $f:X \to X$, one recovers the Lefschetz trace formula. Hence our choice of terminology.\ Let $\Delta_X$ denote the diagonal embedding of $X$ in $X\times X$. We want to prove that for any $u\in \HH_d(X\times X, \Q)$, the number $\int_u \beta_{Lef}$ equals the homological intersection number between $u$ and $\Delta_X$. Since $$\HH_d(X\times X,\Q) = \bigoplus_{k=0}^d \HH_k(X,\Q) \otimes \HH_{d-k}(X,\Q)~,$$ it is enough to prove it for $u$ of the form $e_i^k \otimes {e_j^k}^*$, for all $0\leq k\leq d$ and all $1\leq i,j\leq n_k$.
By definition of $\beta_{Lef}$, we have $$\int_{e_i^k \otimes {e_j^k}^*} \beta_{Lef} = (-1)^{d-k} \delta_{ij}~.$$ On the other hand, intersections between (cycles representing) $e_i^k \otimes {e_j^k}^*$ and $\Delta_X$ correspond exactly to intersections between $e_i^k$ and ${e_j^k}^*$. Indeed, $e_i^k$ intersects ${e_j^k}^*$ at a point $x\in X$ if and only if $e_i^k \times {e_j^k}^*$ intersects $\Delta_X$ at $(x,x)$. Taking orientations into account, one checks that a positive intersection between $e_i^k$ and ${e_j^k}^*$ gives an intersection of sign $(-1)^{d-k}$ between $e_i^k \otimes {e_j^k}^*$ and $\Delta_X$. We thus obtain $$\left( e_i^k \otimes {e_j^k}^*\right) \vee \Delta_X = (-1)^{d-k} e_i^k \vee {e_j^k}^*= (-1)^{d-k} \delta_{ij}~.$$ By Proposition \[t:PoincareDual\], the form $\frac{1}{\Vol(H_U)}\omega_H^U$ on $G_U/K = H_U/L\times H_U/L$ is Poincaré dual to the diagonal embedding of $H_U/L$. By Proposition \[p:DiagonalDual\], we thus get: The form $\frac{1}{\Vol(H_U)} \omega_H$ is the Lefschetz form on $H/L\times H/L$.\ Let us now apply this corollary to the case where $H$ is $\SO_0(d,1)$ or $\SU(d,1)$. Let $\vol_{\H^d}$ denote the volume form on the hyperbolic space $\H^d$, which is the symmetric space of $\SO_0(d,1)$. If $\Gamma$ is a uniform lattice in $\SO_0(d,1)$ and $\rho: \Gamma \to \SO_0(d,1)$ a homomorphism, we define the *volume* of $\rho$ by $$\Vol(\rho) = \int_{\Gamma \backslash \H^d} f^*\vol_{\H^d}~,$$ where $f: \H^d \to \H^d$ is any smooth $\rho$-equivariant map. Let $\omega$ denote the Kähler form on the complex hyperbolic space $\H^d_\C$, which is the symmetric space of $\SU(d,1)$. We normalize $\omega$ so that the corresponding form on the compact dual symmetric space $\ProjC{d}$ is a generator of $\HH^2(\ProjC{d},\Z)$.
If $\Gamma$ is a uniform lattice in $\SU(d,1)$ and $\rho: \Gamma \to \SU(d,1)$ a homomorphism, we define $$\tau_k(\rho) = \int_{\Gamma \backslash \H^d_\C} f^*\omega^k \wedge \omega^{d-k}~,$$ where $f:\H^d_\C \to \H^d_\C$ is any smooth $\rho$-equivariant map. The number $\tau_1(\rho)$ is often called the *Toledo invariant* of $\rho$, while $\tau_d(\rho)$ is the *volume* of the representation $\rho$. \[t:VolQuotientsSU(d,1)\] - If $\Gamma$ is a uniform lattice in $\SO_0(d,1)$ and $\rho:\Gamma \to \SO_0(d,1)$ a uniformly contracting representation, then $$\Vol \left(\Gamma_\rho \backslash \SO_0(d,1)\right) = \Vol(\SO(d)) \left | \Vol(\Gamma \backslash \H^d) + (-1)^d \Vol(\rho)\right|~.$$ - If $\Gamma$ is a uniform lattice in $\SU(d,1)$ and $\rho:\Gamma \to \SU(d,1)$ is a uniformly contracting representation, then $$\Vol \left(\Gamma_\rho \backslash \SU(d,1)\right) = \Vol(\SU(d+1)) \left|\sum_{k=0}^d \tau_k(\rho)\right|~.$$ The compact symmetric space dual to $\H^d$ is $\S^d$, whose cohomology ring is generated by $\1$ and the fundamental class. We deduce that the Lefschetz form of $\H^d \times \H^d$ is $$\frac{1}{\Vol(\S^d)} \left(\vol_{\H^d} \otimes \1 + (-1)^d \1 \otimes\vol_{\H^d} \right)~.$$ Clearly, Theorem \[t:VolQuotientsSU(d,1)\] is consistent with taking finite-index subgroups. By Selberg’s lemma, we can thus assume that $\Gamma$ is torsion-free. Let $f: \H^d \to \H^d$ be a smooth $\rho$-equivariant map. Then the graph of $f$ is a $\Gamma_\rho$-invariant submanifold of dimension $d$ of $\H^d \times \H^d$ on which $\Gamma_\rho$ acts freely, properly discontinuously and cocompactly.
Let us denote by $\Graph(f)$ its quotient by $\Gamma_\rho$: $$\mathrm{Graph}(f) = \Gamma_\rho \backslash \{(x,f(x)), x\in \H^d\} \subset \Gamma_\rho \backslash \H^d\times \H^d~.$$ Then $\Graph(f)$ represents the homology class $[\Gamma_\rho]$ and by Theorem \[t:VolumeCliffordKlein\], we have $$\begin{aligned} \Vol \left(\Gamma_\rho \backslash \SO_0(d,1)\right) & = & \frac{\Vol(\SO(d+1))}{\Vol(\S^d)} \left |\int_{\Graph(f)} \vol_{\H^d} \otimes \1 + (-1)^d \1 \otimes\vol_{\H^d} \right| \\ & = & \Vol(\SO(d)) \left | \int_{\Gamma \backslash \H^d} \vol_{\H^d} \wedge f^* \1 + (-1)^d \1 \wedge f^*\vol_{\H^d} \right | \\ & = & \Vol(\SO(d)) \left | \Vol(\Gamma \backslash \H^d) + (-1)^d \Vol(\rho) \right |~.\end{aligned}$$ Similarly, the integral cohomology ring of $\ProjC{d}$ is generated by the powers of the symplectic form $\omega^U$. We deduce that the Lefschetz form of $\H^d_\C \times \H^d_\C$ is $$\sum_{k=0}^d \omega^k \otimes \omega^{d-k}~.$$ Let $f:\H^d_\C \to \H^d_\C$ be a smooth $\rho$-equivariant map and define $$\mathrm{Graph}(f) = \Gamma_\rho \backslash \{(x,f(x)), x\in \H^d_\C\} \subset \Gamma_\rho \backslash \H^d_\C\times \H^d_\C~.$$ As in the $\SO_0(d,1)$ case, we have $$\begin{aligned} \Vol \left(\Gamma_\rho \backslash \SU(d,1)\right) & = & \Vol(\SU(d+1)) \left |\int_{\Graph(f)} \sum_{k=0}^d \omega^k \otimes \omega^{d-k} \right| \\ & = & \Vol(\SU(d+1)) \left | \sum_{k=0}^d \int_{\Gamma \backslash \H^d_\C} \omega^k \wedge f^* \omega^{d-k} \right | \\ & = & \Vol(\SU(d+1)) \left | \sum_{k=0}^d \tau_k(\rho) \right |~.\end{aligned}$$ Obstruction to the existence of compact Clifford–Klein forms {#s:NonExistence} ============================================================ In this section, we return to the general case of a reductive homogeneous space $G/H$. Assume that the form $\omega_{G,H}$ (or equivalently, the form $\omega_{G,H}^U$) vanishes. Then Theorem \[t:VolumeCliffordKlein\] implies that the volume of a compact quotient of $G/H$ should be $0$.
Therefore, such a compact quotient simply cannot exist. As a first application of this obstruction, one obtains a proof of Kobayashi’s rank conjecture (Theorem \[t:KobayashiRankConj\]), which follows directly from the first point of Theorem \[t:RankCondition\]: If $\rk(G) - \rk(K) < \rk(H) - \rk(L)$, then $G/H$ does not have compact quotients. Unfortunately, this theorem does not provide any new example of homogeneous spaces without compact quotients. Indeed, Morita independently proved in [@MoritaPreprint] that this theorem is implied by the cohomological obstruction he described in [@Morita15].\ In this section, we give three other ways of proving that the form $\omega_{G,H}$ vanishes, leading to the proof of Theorem \[t:AdvanceKobayashiConj\]. \[t:VanishingForm1\] For the following pairs $(G,H)$, the volume form $\omega_{G,H}$ vanishes and $G/H$ does not admit any compact Clifford–Klein form. - $G= \SO_0(p,q+r)$, $H= \SO_0(p,q)$, $p,q,r>0$, $p$ odd; - $G= \SL(n,\R)$, $H= \SL(m,\R)$, $1<m<n$, $m$ even. Recall that, by Lemma \[l:ComputationOmegaGH\], the form $\omega_{G,H}$ at the point $x_0 = K$ is given by $$(\omega_{G,H})_{x_0} = \int_{K/L} \Ad_u^* \omega_{\k^\perp \cap \h^\perp}\ \d \omega_{K/L}(u)~.$$ In both cases, we exhibit an element $\Omega \in K$ whose action on $\g$ stabilizes $\k^\perp \cap \h^\perp$ and whose induced action on $\k^\perp \cap \h^\perp$ has determinant $-1$. It follows that $$\begin{aligned} \omega_{G,H} & = & \int_{K/L} {\Ad_U}_* \omega_{\k^\perp\cap \h^\perp}\ \d \vol_{K/L}(U) \\ \ & = & \int_{K/L} {\Ad_{U\Omega}}_* \omega_{\k^\perp\cap \h^\perp}\ \d \vol_{K/L}(U)\\ \ & = & \int_{K/L} - {\Ad_U}_* \omega_{\k^\perp\cap \h^\perp}\ \d \vol_{K/L}(U)\\ \ & = & - \omega_{G,H}~,\end{aligned}$$ hence $\omega_{G,H} = 0$. For both cases in Theorem \[t:VanishingForm1\], we now describe $\k^\perp \cap \h^\perp$ as a space of matrices and we give a choice of an element $\Omega$. 
This element $\Omega$ simply multiplies certain coefficients of the matrices in $\k^\perp\cap \h^\perp$ by $-1$ and we leave it to the reader to verify that the induced action on $\k^\perp\cap \h^\perp$ has determinant $-1$. - $G= \SO_0(p,q+r)$, $H= \SO_0(p,q)$, $p,q,r>0$, $p$ odd:\ In this case, $K=\SO(p) \times \SO(q+r)$ and $\k^\perp\cap \h^\perp$ is the space of matrices of the form $$\left( \begin{array}{@{}C{2cm}@{}|@{}C{1.5cm}@{}} \Huge{0} & \begin{array}{@{}C{0pt}@{}C{0.7cm}@{}|@{}C{0.8cm}@{}} \rule{0pt}{2cm} & 0 & \transp{A} \\[-4pt] \end{array} \\ \hline \begin{array}{@{}C{0pt}@{}C{2cm}@{}} \rule{0pt}{0.7cm} & 0 \\[-4pt] \hline \rule{0pt}{0.8cm} & A \\[-4pt] \end{array} & \Huge{0} \\ \end{array} \right)~,$$ with $A \in \M_{r,p}(\R)$. We take $\Omega$ to be the diagonal matrix such that $\Omega_{ii} = -1$ when $i = p+q$ or $p+q+1$ and $\Omega_{ii} = 1$ otherwise.\ - $G= \SL(n,\R)$, $H= \SL(m,\R)$, $m$ even:\ In this case, $K=\SO(n)$ and $\k^\perp\cap \h^\perp$ is the space of matrices of the form $$\left( \begin{array}{@{}C{0pt}@{}C{2cm}@{}|@{}C{1.5cm}@{}} \rule{0pt}{2cm} & \lambda \I_{m} & A \\[-4pt] \hline \rule{0pt}{1.5cm} & \transp{A} & B \\[-4pt] \end{array} \right) ~,$$ with $A \in \M_{m,n-m}(\R)$, $B \in \Sym_{n-m}(\R)$ and $\lambda \in \R$ satisfying $\Tr(B)+ m \lambda = 0$. We take $\Omega$ to be the diagonal matrix such that $\Omega_{ii} = -1$ when $i = m$ or $m+1$ and $\Omega_{ii} = 1$ otherwise. We now turn to another way of proving that $\omega_{G,H}$ vanishes. Recall that $\omega_{G,H}$ vanishes if and only if the corresponding form $\omega_{G,H}^U$ on $G_U/K$ vanishes. By Proposition \[t:PoincareDual\], this happens whenever $\iota_*[H_U/L]$ vanishes in $\HH_\bullet(G_U/K, \Q)$. \[t:VanishingForm2\] If $G$ is the complexification of $H$, then the form $\omega_{G,H}$ vanishes if and only if $\HH^k_{even}(H_U/L,\Q) \neq 0$ for some $k > 0$.
In particular, for the following pairs $(G,H)$, the space $G/H$ has no compact Clifford–Klein form: - $G= \SO(p+q,\C)$, $H = \SO_0(p,q)$, $p, q>1$ or $p=1$ and $q$ even; - $G = \SL(p+q,\C)$, $H = \SU(p,q)$, $p,q>0$; - $G = \Sp(2(p+q),\C)$, $H = \Sp(p,q)$; - $G = \SO(2n,\C)$, $H = \SO^*(2n)$. Since $G$ is the complexification of $H$, we have $H_U = K$. Since $G$ is a complex Lie group, we have $G_U = K\times K$. It follows that $G_U/K$ is the group space $K$ and that $H_U/L = K/L$ is mapped to $K$ by $$\iota: g \mapsto g\ \theta(g)^{-1}~,$$ where $\theta$ is the involution of $H_U$ whose fixed point set is $L$. By Proposition \[t:PoincareDual\], $\omega_{G,H}$ does not vanish if and only if $\iota_*[H_U/L]$ does not vanish in $\HH_\bullet(H_U,\Q)$, which happens if and only if the image of $\iota^*$ contains a non-zero cohomology class of degree $\dim(H_U/L)$. By the work of Cartan [@Cartan50], the cohomology algebra of $H_U$ is generated by bi-invariant forms of odd degree. Moreover, $\iota^*$ maps $\HH^\bullet(H_U,\Q)$ surjectively to $\HH^\bullet_{odd}(H_U/L,\Q)$. Since $\HH^\bullet(H_U/L,\Q) = \HH^\bullet_{odd}(H_U/L,\Q)\otimes \HH^\bullet_{even}(H_U/L,\Q)$, the image of $\iota^*$ contains a form of top degree if and only if $\HH^k_{even}(H_U/L,\Q) = 0$ for all $k > 0$. Let us now prove $(3)$, $(4)$, $(5)$ and $(6)$. For $H= \SU(p,q)$, $\Sp(p,q)$, $\SO^*(2n)$ or $\SO_0(p,q)$ with $p$ or $q$ even, one actually has $\rk(H_U) = \rk(L)$. Therefore the cohomology of $H_U/L$ is concentrated in even degree and the image of the map $\iota^*$ is trivial in positive degrees. In particular, it does not contain a non-zero class of top degree. It remains to treat the case where $H = \SO_0(p,q)$ with $p$ and $q$ odd. Note that $H_U/L$ is the Grassmannian of $p$-planes in $\R^{p+q}$. In that case, $\rk(H_U) - \rk(L) = 1$ and $\Prim(H_U/L,\Q)$ thus has dimension $1$. If $\HH^k_{even}(H_U/L,\Q)$ vanished for all $k > 0$, then the whole cohomology algebra of $H_U/L$ would be the exterior algebra on a single generator of odd degree.
This is well-known to be true if and only if $p$ or $q$ equals $1$. \[t:VanishingForm3\] For the following pairs $(G,H)$, the volume form $\omega_{G,H}$ vanishes and $G/H$ does not admit any compact Clifford–Klein form. - $G = \SL(p+q,\R)$, $H = \SO_0(p,q)$, $p,q>1$; - $G = \SL(p+q, \mathcal{H})$, $H = \Sp(p,q)$, $p,q>1$. (Here $\mathcal{H}$ denotes the field of quaternions.) Again, we prove that $\iota_*[H_U/L]$ vanishes in $\HH_\bullet(G_U/K)$, this time by showing that $H_U/L$ is homotopically trivial in $G_U/K$.\ The compact dual to $\SL(p+q,\R)$ is $\SU(p+q)$. Let us set $V = \R^p\times \{0\}$ and $W= \{0 \} \times \R^q$ in $\C^{p+q}$. Then we can identify $K$ with $\Stab(V\oplus W) \subset \SU(p+q)$, $H_U$ with $\Stab(V \oplus i W)$, and $L$ with $\Stab(V) \cap \Stab(W)$. For $t\in [0,1]$, let $g_t$ be the map in $\U(p+q)$ defined by $$g_t(x) = x \textrm{ if } x\in V~,$$ $$g_t(x) = e^{\frac{it\pi}{2}} x \textrm{ if } x\in W~.$$ The conjugation by $g_t$ preserves $L$ and one can thus define $$\function{\phi_t}{H_U/L}{G_U/K}{hL}{g_t h g_t^{-1} K~.}$$ The conjugation by $g_t$ sends $H_U=\Stab(V \oplus i W)$ to $\Stab(V \oplus i e^{\frac{it\pi}{2}} W)$. In particular, $\phi_0$ is the map $\iota: H_U/L \to G_U/K$, and $\phi_1$ sends $H_U/L$ to a point. Therefore the map $\iota :H_U/L \to G_U/K$ is homotopically trivial, and in particular $\iota_*[H_U/L] = 0$ in $\HH_\bullet(G_U/K)$.\ Case $(8)$ can be treated similarly: set $V = \C^p\times \{0\}$ and $W= \{0 \} \times \C^q$ in $\mathcal{H}^{p+q}$. Then $G_U = \Sp(p+q)$ and one can identify $K$ with $\Stab(V\oplus W)$, $H_U$ with $\Stab(V \oplus j W)$ (where $i,j,k$ denote the three complex structures defining the quaternionic structure of $\mathcal{H}$), and $L$ with $\Stab(V) \cap \Stab(W)$. One obtains the same conclusion as before by conjugating $H_U$ by the linear transformation $g_t$ that is the identity on $V$ and the multiplication by $e^{\frac{t\pi}{2} j}$ on $W$.
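Before turning to the relation with earlier works, let us record the rank computations invoked in the proof of Theorem \[t:VanishingForm2\] (with $L$ a maximal compact subgroup of $H$ in each case): $$\begin{aligned} H = \SU(p,q): && H_U = \SU(p+q)~, && L = \mathrm{S}(\U(p)\times \U(q))~, && \rk(H_U) = \rk(L) = p+q-1~;\\ H = \Sp(p,q): && H_U = \Sp(p+q)~, && L = \Sp(p)\times \Sp(q)~, && \rk(H_U) = \rk(L) = p+q~;\\ H = \SO^*(2n): && H_U = \SO(2n)~, && L = \U(n)~, && \rk(H_U) = \rk(L) = n~;\\ H = \SO_0(p,q): && H_U = \SO(p+q)~, && L = \SO(p)\times \SO(q)~, && \rk(H_U) = \rk(L) = \left\lfloor \tfrac{p+q}{2} \right\rfloor~,\end{aligned}$$ where the last equality holds when $p$ or $q$ is even, since then $\left\lfloor \tfrac{p}{2} \right\rfloor + \left\lfloor \tfrac{q}{2} \right\rfloor = \left\lfloor \tfrac{p+q}{2} \right\rfloor$.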
Relation to earlier works {#ss:EarlierResults} ------------------------- In the past decades, many different works have been devoted to finding various obstructions to the existence of compact Clifford–Klein forms. Let us detail where Theorems \[t:VanishingForm1\], \[t:VanishingForm2\] and \[t:VanishingForm3\] fit in this literature. - Case $(1)$ of Theorem \[t:VanishingForm1\] extends results of Kulkarni [@Kulkarni81], Kobayashi–Ono [@KobayashiOno90] and their recent improvement by Morita [@Morita15], where both $p$ and $q$ are assumed to be odd. Specializing to $r=1$, we obtain in particular that $\H^{p,q} = \SO_0(p,q+1)/\SO_0(p,q)$ does not admit a compact quotient when $p$ is odd. This is an important step toward Kobayashi’s space form conjecture.\ - The case of $\SL(n,\R)/ \SL(m,\R)$ has also been extensively studied. It is conjectured that $\SL(n,\R)/ \SL(m,\R)$ never admits a compact quotient for $1 < m < n$ (see for instance [@KobayashiYoshino05 Conjecture 3.3.10]). Kobayashi proved that such quotients do not exist for $n< \left\lceil \frac{3}{2} m \right\rceil$ [@Kobayashi92] and Labourie, Mozes and Zimmer extended the result to $m\leq n-3$ with completely different methods ([@Zimmer94], [@LMZ95], [@LabourieZimmer95]). On the other hand, Benoist proved that $\SL(2n+1,\R)/\SL(2n,\R)$ does not admit a compact quotient [@Benoist96]. Case $(2)$ of Theorem \[t:VanishingForm1\] recovers Benoist’s result[^2] and also implies that $\SL(2n+2,\R)/\SL(2n,\R)$ does not admit a compact quotient, which was previously known only for $n=1$ [@Shalom00].\ - Theorem \[t:VanishingForm2\] is mostly new. Note that the so-called *Calabi–Markus phenomenon* implies that the symmetric spaces $\SL(n,\C)/\SL(n,\R)$ and $\Sp(2n,\C)/\Sp(2n,\R)$ do not admit compact Clifford–Klein forms. Therefore, the only classical Lie groups $H$ for which $H_\C/H$ might admit a compact Clifford–Klein form are $\SO(p,1)$ with $p$ even and $\SL(n,\mathcal{H})$ (where $\mathcal{H}$ denotes the quaternions).
Interestingly, the homogeneous space $\SO(8,\C)/\SO(7,1)$ is known to admit compact Clifford–Klein forms (see [@KobayashiYoshino05 Corollary 3.3.7]).\ - Theorem \[t:VanishingForm3\] improves a recent result of Morita [@Morita15], where $p$ and $q$ are assumed to be odd. It was first proved by Kobayashi when $p=q$ [@Kobayashi96] and by Benoist when $p=q+1$ [@Benoist96]. More precisely, Benoist proved that every discrete group acting properly discontinuously on $\SL(2p+1,\R)/\SO_0(p,p+1)$ is virtually Abelian (in particular, its action is not cocompact). He also constructed proper actions of a free group of rank $2$ as soon as $p\neq q$ and $p\neq q+1$.\ The proof of Theorem \[t:VanishingForm1\] can be adapted to show the vanishing of $\omega_{G,H}$ in many other cases that we did not include because the non-existence of compact Clifford–Klein forms was already known. We can prove for instance that $\SL(n,\R)/\SL(m,\R) \times \SL(n-m,\R)$ does not have any compact quotient for $0<m<n$, $n$ odd (see [@Benoist96]), that $\SO(n,\C)/\SO(m,\C) \times \SO(n-m,\C)$ does not have any compact quotient for $1<m<n-1$, $n$ odd (see [@Kobayashi92]), or that $\SO(n,\C)/\SO(m,\C)$ does not have any compact quotient for $1<m<n$, $m$ even (see [@Kobayashi96; @Benoist96]).\ Relation to Yosuke Morita’s work {#ss:Morita} -------------------------------- The first version of this article did not contain Sections \[s:CompactDual\], \[s:InclusionSymSpaces\] and \[s:GroupSpaces\]. Section \[s:Rigidity\] stated a theorem of *local rigidity* of the volume and Section \[s:NonExistence\] contained only a refined version of Theorem \[t:VanishingForm1\]. After our preprint appeared on arXiv, Yosuke Morita posted a preprint where he uses a cohomological obstruction to prove the non-existence of compact quotients of certain reductive homogeneous spaces. In particular, he obtained Theorems \[t:VanishingForm1\], \[t:VanishingForm2\] and \[t:VanishingForm3\]. 
This motivated me to find new ways of proving the vanishing of the form $\omega_{G,H}$ and led me to the compact duality argument and Theorems \[t:VanishingForm2\] and \[t:VanishingForm3\], which significantly improved this paper. After discussing with Morita, it seems likely, though not obvious, that our two obstructions are in fact equivalent. We hope to prove this equivalence in a future work. Local foliations of $G/H$ and global foliations of $\Gamma \backslash G/H$ {#s:LocalFibrations} ========================================================================== The results of this paper were driven by the idea that compact Clifford–Klein forms $\Gamma \backslash G/H$ should “look like” $(K/L)$-bundles over a classifying space for $\Gamma$. This was suggested by the following theorem: \[t:FibrationGK\] Let $\Gamma$ be a discrete torsion-free subgroup of $\SO_0(d,1) \times \SO_0(d,1)$ acting properly discontinuously and cocompactly on $\SO_0(d,1)$ (by left and right multiplication). Then $\Gamma$ is isomorphic to the fundamental group of a closed hyperbolic $d$-manifold $B$, and $\Gamma \backslash \SO_0(d,1)$ admits a fibration over $B$ with fibers of the form $$g \SO(d) h^{-1}, \quad g,h \in \SO_0(d,1)~.$$ More generally, we conjecture the following: Let $G/H$ be a reductive homogeneous space (with $G$ and $H$ connected), $L$ a maximal compact subgroup of $H$ and $K$ a maximal compact subgroup of $G$ containing $L$. Let $\Gamma$ be a torsion-free discrete subgroup of $G$ acting properly discontinuously and cocompactly on $G/H$. Then there exists a closed manifold $B$ of dimension $p$ such that - the fundamental group of $B$ is isomorphic to $\Gamma$, - the universal cover of $B$ is contractible, - $\Gamma \backslash G/H$ admits a fibration over $B$ with fibers of the form $g K/L$ for some $g\in G$. 
To support this conjecture, we note that the vanishing of the form $\omega_{G,H}$ (which implies the non-existence of compact Clifford–Klein forms) is actually an obstruction to the existence of a *local* fibration by copies of $K/L$. \[p:NoLocalFoliation\] Let $G/H$ be a reductive homogeneous space (with $G$ and $H$ connected), $L$ a maximal compact subgroup of $H$ and $K$ a maximal compact subgroup of $G$ containing $L$. If the form $\omega_{G,H}$ on $G/K$ vanishes (and in particular for all the pairs $(G,H)$ in Theorem \[t:AdvanceKobayashiConj\]), then no non-empty open domain of $G/H$ admits a foliation with leaves of the form $g K/L$. The non-existence of such local foliations in certain homogeneous spaces may be quite surprising. For instance, if $G= \SO_0(2n-1,2)$ and $H = \SO_0(2n-1,1)$, then $G/H$ is the *anti-de Sitter space* $\AdS_{2n}$ (for which the non-existence of compact Clifford–Klein forms was proven by Kulkarni [@Kulkarni81]). In that case, $K/L$ is a timelike geodesic and we obtain the following corollary: No open domain of the even dimensional anti-de Sitter space can be foliated by complete timelike geodesics. This leads to the following more general question, that may be of independent interest: Let $G/H$ be a reductive homogeneous space, $G'$ a closed subgroup of $G$ and $H'=G'\cap H$. When does $G/H$ admit an open domain with a foliation by leaves of the form $g G'/H'$? Assume that there exists a non-empty domain $U$ in $X=G/H$ with a foliation by leaves $(F_v)_{v\in V}$ of the form $g_v K/L$. Since the stabilizer in $G$ of $K/L \subset G/H$ is exactly $K$, the space of leaves $V$ can be seen as a submanifold of dimension $p$ in $G/K$. Set $U' = \pi^{-1}(V)$, where $\pi$ is the projection from $G/L$ to $G/K$. Then the projection $\psi$ from $G/L$ to $G/H$ induces a diffeomorphism from $U'$ to $U$. 
We thus have $$\int_U \vol_X = \int_{U'} \psi^* \vol_X~.$$ On the other hand, by construction of $\omega_{G,H}$, we have $$\int_{U'} \psi^* \vol_X = \int_V \omega_{G,H}~.$$ Since $U$ is non-empty, its volume is non-zero, hence the form $\omega_{G,H}$ cannot vanish. [^1]: The case where $H$ is another Lie group of rank $1$ (namely $\Sp(d,1)$ or $\mathrm{F}_4$) is not interesting because the representation $\rho$ must be virtually trivial, according to the super-rigidity theorem of Corlette [@Corlette92]. [^2]: Benoist’s result is actually stronger: every discrete group acting properly discontinuously on $\SL(2n+1,\R)/\SL(2n,\R)$ is virtually Abelian.
--- abstract: 'We study the design of interactive clustering algorithms for data sets satisfying natural stability assumptions. Our algorithms start with any initial clustering and only make local changes in each step; both are desirable features in many applications. We show that in this constrained setting one can still design provably efficient algorithms that produce accurate clusterings. We also show that our algorithms perform well on real-world data.' author: - | Pranjal Awasthi pawasthi@cs.cmu.edu\ Department of Computer Science\ Princeton University Maria Florina Balcan ninamf@cs.cmu.edu\ School of Computer Science\ Carnegie Mellon University Konstantin Voevodski kvodski@google.com\ Google, NY, USA bibliography: - 'report.bib' title: Local algorithms for interactive clustering --- Introduction ============ Clustering is usually studied in an unsupervised learning scenario where the goal is to partition the data given pairwise similarity information. Designing provably-good clustering algorithms is challenging because given a similarity function there may be multiple plausible clusterings of the data. Traditional approaches resolve this ambiguity by making assumptions on the data-generation process. For example, there is a large body of work that focuses on clustering data that is generated by a mixture of Gaussians [@AM05; @KSV05; @Sanjoy99; @AK01; @BV08; @KalaiMV10; @MoitraV10; @BelkinS10]. Although this helps define the “right” clustering one should be looking for, real-world data rarely comes from such well-behaved probabilistic models. An alternative approach is to use limited user supervision to help the algorithm reach the desired answer. This approach has been facilitated by the availability of cheap crowd-sourcing tools in recent years. 
In certain applications such as search and document classification, where users are willing to help a clustering algorithm arrive at their own desired answer with a small amount of additional prodding, interactive algorithms are very useful. Hence, the study of interactive clustering algorithms has become an exciting new area of research. In many practical settings we already start with a fairly good clustering computed with semi-automated techniques. For example, consider an online news portal that maintains a large collection of news articles. The news articles are clustered on the “back-end,” and are used to serve several “front-end” applications such as recommendations and article profiles. For such a system, we do not have the freedom to compute arbitrary clusterings and present them to the user, which has been proposed in prior work. But it is still feasible to get limited feedback and *locally* edit the clustering. In particular, we may only want to change the “bad” portion revealed by the feedback without changing the rest of the clustering. Motivated by these observations, in this paper we study the problem of designing local algorithms for interactive clustering. We propose a theoretical interactive model and provide strong experimental evidence supporting the practical applicability of our algorithms. In our model we start with an initial clustering of the data. The algorithm then interacts with the user in stages. In each stage the user provides limited feedback on the current clustering in the form of [*split*]{} and [*merge*]{} requests. The algorithm then makes a [*local*]{} edit to the clustering that is consistent with user feedback. Such edits are aimed at improving the problematic part of the clustering pointed out by the user. The goal of the algorithm is to quickly converge (using as few requests as possible) to a clustering that the user is happy with; we call this clustering the target clustering. 
In our model the user may request a certain cluster to be [*split*]{} if it is overclustered (intersects two or more clusters in the target clustering). The user may also request to [*merge*]{} two given clusters if they are underclustered (both intersect the same target cluster). Note that the user may not tell the algorithm how to perform the split or the merge; such input is infeasible because it requires a manual analysis of all the objects in the corresponding clusters. We also restrict the algorithm to only make [*local*]{} changes at each step, i.e., in response we may change only the cluster assignments of the points in the corresponding clusters. If the user requests to split a cluster $C_i$, we may change only the cluster assignments of points in $C_i$, and if the user requests to merge $C_i$ and $C_j$, we may only reassign the points in $C_i$ and $C_j$. The split and merge requests described above are a natural form of feedback. It is easy for users to spot over/underclustering issues and request the corresponding splits/merges (without having to provide any additional information about how to perform the edit). For our model to be practically applicable, we also need to account for noise in the user requests. In particular, if the user requests a merge, only a fraction or a constant number of the points in the two clusters may belong to the same target cluster. Our model (see Section \[sec:notation\]) allows for such noisy user responses. We study the complexity of algorithms in the above model (the number of edit requests needed to find the target clustering) as a function of the error of the initial clustering. The initial error may be evaluated in terms of [*underclustering*]{} error $\delta_u$ and [*overclustering*]{} error $\delta_o$ (see Section \[sec:notation\]). 
Because the initial error may be fairly small,[^1] we would like to develop algorithms whose complexity depends polynomially on $\delta_u$, $\delta_o$ and only logarithmically on $n$, the number of data points. We show that this is indeed possible given that the target clustering satisfies a natural *stability* property (see Section \[sec:notation\]). We also develop algorithms for the well-known correlation-clustering objective function [@Bansal04], which considers pairs of points that are clustered inconsistently with respect to the target clustering (see Section \[sec:notation\]). As a pre-processing step, our algorithms compute the average-linkage tree of all the points in the data set. Note that if the target clustering $C^{\ast}$ satisfies our *stability* assumption, then the average-linkage tree must be consistent with $C^{\ast}$ (see Section \[sec:eta-merge\]). However, in practice this average-linkage tree is much too large to be directly interpreted by the users. Still, given that the edit requests are somewhat consistent with $C^{\ast}$, we can use this tree to efficiently compute local edits that are consistent with the target clustering. Our analysis then shows that after a limited number of edit requests we must converge to the target clustering. **Our Results**\ In Section \[sec:eta-merge\] we study the $\eta$-merge model. Here we assume that the user may request to split a cluster $C_{i}$ only if $C_{i}$ contains points from several ground-truth clusters. The user may request to merge $C_{i}$ and $C_{j}$ only if at least an $\eta$-fraction of points in each $C_{i}$ and $C_{j}$ are from the same ground-truth cluster. For this model, for $\eta > 0.5$, given an initial clustering with overclustering error $\delta_o$ and underclustering error $\delta_u$, we present an algorithm that requires $\delta_o$ split requests and $2(\delta_u + k) \log_{\frac 1 {1-\eta}} n$ merge requests to find the target clustering, where $n$ is the number of points in the dataset. 
For $\eta > 2/3$, given an initial clustering with correlation-clustering error $\delta_{cc}$, we present an algorithm that requires at most $\delta_{cc}$ edit requests to find the target clustering. In Section \[sec:unrestricted-merge\] we relax the condition on the merges and allow the user to request a merge even if $C_i$ and $C_j$ only have a single point from the same target cluster. We call this the [*unrestricted-merge*]{} model. Here the requirement on the accuracy of the user response is much weaker and we need to make further assumptions on the nature of the requests. More specifically, we assume that each merge request is chosen uniformly at random from the set of feasible merges. Under this assumption we present an algorithm that, with probability at least $1-\epsilon$, requires $\delta_o$ split requests and $O(\log \frac{k}{\epsilon} \cdot \delta_u^2)$ merge requests to find the target clustering. We develop several algorithms for performing the split and merge requests under different assumptions. Each algorithm uses the global average-linkage tree $T_{glob}$ to compute a local clustering edit. Our splitting procedure finds the node in $T_{glob}$ where the corresponding points are first split in two. It is more challenging to develop a correct merge procedure, given that we allow “impure” merges, where one or both clusters have points from another target cluster (other than the one that they both intersect). To perform such merges, in the $\eta$-merge model we develop a procedure to extract the “pure” subsets of the two clusters, which must only contain points from the same target cluster. Our procedure searches for the deepest node in $T_{glob}$ that has enough points from both clusters. In the unrestricted-merge model, we develop another merge procedure that either merges the two clusters or merges them and splits them. 
This algorithm always makes progress if the proposed merge is “impure,” and makes progress on average if it is “pure” (both clusters are subsets of the same target cluster). When the data satisfies stronger assumptions, we present more scalable split and merge algorithms that do not require any global information. These procedures compute the edit by only considering the points in the user request and the similarities between them. In Section \[sec:experiments\] we demonstrate the effectiveness of our algorithms on real data. We show that for the purposes of splitting known over-clusters, the splitting procedure proposed here computes the best splits, when compared to other well-known techniques. We also test the entire proposed framework on newsgroup documents data, which is quite challenging for traditional unsupervised clustering methods [@Telgarsky12; @HellerG05; @Dasgupta08; @Dai10; @Boulis04; @Zhong05]. Still, we find that our algorithms perform fairly well; for larger settings of $\eta$ we are able to find the target clustering after a limited number of edit requests. **Related work**\ Interactive models for clustering studied in previous works [@BalcanB08; @AwasthiZ10] were inspired by an analogous model for learning under feedback [@angluin]. In this model, the algorithm can propose a hypothesis to the user (in this case, a clustering of the data) and get some feedback regarding the correctness of the current hypothesis. As in our model, the feedback considered is split and merge queries. The goal is to design efficient algorithms which use very few queries to the user. A critical limitation in prior work is that the algorithm has the freedom to choose any arbitrary clustering as the starting point and can make arbitrary changes at each step. Hence these algorithms may propose a series of “bad” clusterings to the user to quickly prune the search space and reach the target clustering. 
Our interactive clustering model is in the context of an initial clustering; we are restricted to only making local changes to this clustering to correct the errors pointed out by the user. This model is well-motivated by several applications, including the Google application described in the experimental section. Basu et al. [@Basu04] study the problem of minimizing the $k$-means objective in the presence of limited supervision. This supervision is in the form of pairwise [*must-link*]{} and [*cannot-link*]{} constraints. They propose a variation of Lloyd's method for this problem and show promising experimental results. The split/merge requests that we study are a more natural form of interaction because they capture macroscopic properties of a cluster. Getting pairwise constraints among data points involves much more effort on the part of the user and is unrealistic in many scenarios. The stability property that we consider is a natural generalization of the “stable marriage” property (see Definition \[def:strong-stability\]) that has been studied in a variety of previous works [@BalcanBV08; @Bryant01]. It is the weakest among the stability properties that have been studied recently, such as strict separation and strict threshold separation [@BalcanBV08; @Krishnamurthy11]. This property is known to hold for real-world data. In particular, [@Voevodski11] observed that this property holds for protein sequence data, where similarities are computed with sequence alignment and ground truth clusters correspond to evolutionarily related proteins. Notation and Preliminaries {#sec:notation} ========================== Given a data set $X$ of $n$ points we define $\mathcal{C} = \lbrace C_{1},C_{2}, \ldots C_{k} \rbrace$ to be a $k$-clustering of $X$ where the $C_i$’s represent the individual clusters. 
Given two clusterings $\mathcal{C}$ and $\mathcal{C}'$, we define the distance between a cluster $C_i \in \mathcal{C}$ and the clustering $\mathcal{C}'$ as: $$\dist(C_i,\mathcal{C}') = \vert \{ C'_j \in \mathcal{C}' : C'_j \cap C_i \ne \emptyset \} \vert - 1.$$ This distance is the number of *additional* clusters in $\mathcal{C}'$ that contain points from $C_i$; it evaluates to 0 when all points in $C_i$ are contained in a single cluster in $\mathcal{C}'$. Naturally, we can then define the distance between $\mathcal{C}$ and $\mathcal{C}'$ as: $ \dist(\mathcal{C},\mathcal{C}') = \sum_{C_i \in \mathcal{C}} \dist(C_i,\mathcal{C}'). $ Notice that this notion of clustering distance is asymmetric: $\dist(\mathcal{C},\mathcal{C}') \ne \dist(\mathcal{C}',\mathcal{C})$. Also note that $\dist(\mathcal{C},\mathcal{C}') = 0$ if and only if $\mathcal{C}$ refines $\mathcal{C}'$. Observe that if $\mathcal{C}$ is the ground-truth clustering, and $\mathcal{C}'$ is a proposed clustering, then $\dist(\mathcal{C},\mathcal{C}')$ can be considered an *underclustering error*, and $\dist(\mathcal{C}',\mathcal{C})$ an *overclustering error*. An underclustering error is an instance of several clusters in a proposed clustering containing points from the same ground-truth cluster; this ground-truth cluster is said to be *underclustered*. Conversely, an overclustering error is an instance of points from several ground-truth clusters contained in the same cluster in a proposed clustering; this proposed cluster is said to be *overclustered*. In the following sections we use $\mathcal{C}^{\ast} = \lbrace C^{\ast}_{1},C^{\ast}_{2}, \ldots C^{\ast}_{k} \rbrace$ to refer to the ground-truth clustering, and use $\mathcal{C}$ to refer to a proposed clustering. We use $\delta_{u}$ to refer to the underclustering error of a proposed clustering, and $\delta_{o}$ to refer to the overclustering error. 
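For concreteness, the asymmetric distance just defined can be computed directly from set intersections. A minimal Python sketch (the function name and list-of-sets representation are our own, not the paper's):

```python
def cluster_dist(C, C_prime):
    """dist(C, C') = sum over C_i in C of (#clusters of C' meeting C_i) - 1.

    Clusterings are lists of Python sets over the same point set.
    """
    return sum(sum(1 for Cj in C_prime if Ci & Cj) - 1 for Ci in C)

# Ground truth C* = {1,2,3},{4,5}; proposed clustering C = {1,2},{3},{4,5}.
C_star = [{1, 2, 3}, {4, 5}]
C = [{1, 2}, {3}, {4, 5}]
delta_u = cluster_dist(C_star, C)  # {1,2,3} is spread over two proposed clusters
delta_o = cluster_dist(C, C_star)  # every proposed cluster is "pure"
```

In this toy example the distance is asymmetric exactly as described: the underclustering error is $1$ while the overclustering error is $0$.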
In other words, we have $\delta_{u} = \dist(\mathcal{C}^{\ast},\mathcal{C})$ and $\delta_{o} = \dist(\mathcal{C},\mathcal{C}^{\ast})$. We also use $\delta$ to denote the sum of the two errors: $\delta = \delta_{u} + \delta_{o}$. We call $\delta$ the *under/overclustering error*, and use $\delta(\mathcal{C}, \mathcal{C}^{\ast})$ to refer to the error of $\mathcal{C}$ with respect to $\mathcal{C}^{\ast}$. We also observe that we can define the distance between two clusterings using the *correlation-clustering* objective function. Given a proposed clustering $\mathcal{C}$, and a ground-truth clustering $\mathcal{C}^{\ast}$, we define the correlation-clustering error $\delta_{cc}$ as the number of (ordered) pairs of points that are clustered *inconsistently* with $\mathcal{C}^{\ast}$: $$\delta_{cc} = \vert \lbrace (u,v) \in X \times X : c(u,v) \ne c^{\ast}(u,v) \rbrace \vert,$$ where $c(u,v) = 1$ if $u$ and $v$ are in the same cluster in $\mathcal{C}$, and 0 otherwise; $c^{\ast}(u,v) = 1$ if $u$ and $v$ are in the same cluster in $\mathcal{C}^{\ast}$, and 0 otherwise. Note that we may divide the correlation-clustering error $\delta_{cc}$ into an overclustering component $\delta_{cco}$ and an underclustering component $\delta_{ccu}$: $$\delta_{cco} = \vert \lbrace (u,v) \in X \times X : c(u,v) = 1 \textrm{ and } c^{\ast}(u,v) = 0 \rbrace \vert$$ $$\delta_{ccu} = \vert \lbrace (u,v) \in X \times X : c(u,v) = 0 \textrm{ and } c^{\ast}(u,v) = 1 \rbrace \vert$$ In our formal analysis we model the user as an oracle that provides edit requests. We say that an interactive clustering algorithm is [*local*]{} if in each iteration only the cluster assignments of points involved in the oracle request may be changed. If the oracle requests to split $C_i$, the algorithm may only reassign the points in $C_i$. If the oracle requests to merge $C_i$ and $C_j$, the algorithm may only reassign the points in $C_i \cup C_j$. 
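The correlation-clustering error and its two components can likewise be computed by brute force over ordered pairs; a Python sketch (the function name is ours):

```python
from itertools import product

def correlation_error(C, C_star, X):
    """Count ordered pairs (u,v) clustered inconsistently with C*;
    returns the overclustering and underclustering components (cco, ccu)."""
    def same(clustering, u, v):
        return any(u in Ci and v in Ci for Ci in clustering)

    cco = ccu = 0
    for u, v in product(X, repeat=2):
        c, c_star = same(C, u, v), same(C_star, u, v)
        if c and not c_star:
            cco += 1  # together in C, apart in C*
        elif c_star and not c:
            ccu += 1  # apart in C, together in C*
    return cco, ccu

# C = {1,2},{3,4} versus ground truth C* = {1,2,3},{4}:
cco, ccu = correlation_error([{1, 2}, {3, 4}], [{1, 2, 3}, {4}], [1, 2, 3, 4])
```

Here $\delta_{cco} = 2$ (the ordered pairs $(3,4)$ and $(4,3)$), $\delta_{ccu} = 4$ (the ordered pairs involving point $3$ with $1$ or $2$), so $\delta_{cc} = 6$.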
We next formally define the properties of a clustering that we study in this work. \[def:strong-stability\] Given a clustering $\mathcal{C} = \{C_1, C_2, \cdots C_k\}$ over a domain $X$ and a similarity function $S: X \times X \mapsto \Re$, we say that $\mathcal{C}$ satisfies stability with respect to $S$ if for all $i \ne j$, and for all $A \subset C_i$ and $A' \subseteq C_j$, $S(A, C_i \setminus A) > S(A,A')$, where for any two sets $A,A'$, $S(A,A') = E_{x \in A, y \in A'} S(x,y)$. In our analysis, we assume that the ground-truth clustering satisfies stability, and we have access to the corresponding similarity function. In addition, we also study the following stronger properties of a clustering, which were first introduced in [@BalcanBV08]. Given a clustering $C = \{C_1, C_2, \cdots C_k\}$ over a domain $X$ and a similarity function $S: X \times X \mapsto \Re$, we say that $C$ satisfies strict separation with respect to $S$ if for all $i \ne j$, $x,y \in C_i$ and $z \in C_j$, $S(x,y) > S(x,z)$. Given a clustering $C = \{C_1, C_2, \cdots C_k\}$ over a domain $X$ and a similarity function $S: X \times X \mapsto \Re$, we say that $C$ satisfies strict threshold separation with respect to $S$ if there exists a threshold $t$ such that, for all $i$, $x,y \in C_i$, $S(x,y) > t$, and, for all $i \ne j$, $x \in C_i, y \in C_j$, $S(x,y) \le t$. Clearly, *strict separation* and *strict threshold separation* imply *stability*. In order for our algorithms to make progress, the oracle requests must be somewhat consistent with the target clustering. In the $\eta$-merge model the oracle requests have the following properties. $split(C_i)$: $C_i$ contains points from two or more target clusters. $merge(C_i, C_j)$: At least an $\eta$-fraction of the points in each $C_i$ and $C_j$ belong to the same target cluster. In the unrestricted-merge model the oracle requests have the following properties. $split(C_i)$: $C_i$ contains points from two or more target clusters. 
$merge(C_i, C_j)$: At least $1$ point in each $C_i$ and $C_j$ belongs to the same target cluster. Note that the assumptions about the nature of the split requests are the same in both models. In the $\eta$-merge model, the oracle may request to merge two clusters if both have a *constant fraction* of points from the same target cluster. In the unrestricted-merge model, the oracle may request to merge two clusters if both have *some* points from the same target cluster. Generalized clustering error {#sec:generalized-clustering-error} ---------------------------- We observe that the clustering errors defined in the previous section may be generalized by abstracting their common properties. We define the following properties of a *natural* clustering error, which is any integer-valued error that decreases when we locally improve the proposed clustering. \[def:natural-clustering-error\] We say that a clustering error is *natural* if it satisfies the following properties: - If there exists a cluster $C_{i}$ that contains points from $C^{\ast}_{j}$ and some other ground-truth cluster(s), then splitting this cluster into two clusters $C_{i,1} = C_{i} \cap C^{\ast}_{j}$ (which contains only points from $C^{\ast}_{j}$), and $C_{i,2} = C_{i} - C_{i,1}$ (which contains the other points) must decrease the error. - If there exist two clusters that contain only points from the same target cluster, then merging them into one cluster must decrease the error. - The error is integer-valued. We expect many definitions of clustering error to satisfy the above criteria (especially the first two properties), in addition to other domain-specific criteria. Clearly, the under/overclustering error $\delta = \delta_{u} + \delta_{o}$ and the correlation-clustering error $\delta_{cc}$ are also *natural* clustering errors (Claim \[claim:natural-errors\]). 
As before, for a *natural* clustering error $\gamma$, a proposed clustering $\mathcal{C}$ and the target clustering $\mathcal{C}^{\ast}$, we will use $\gamma(\mathcal{C}, \mathcal{C}^{\ast})$ to denote the magnitude of the error of $\mathcal{C}$ with respect to $\mathcal{C}^{\ast}$. Moreover, it is easy to see that the under/overclustering error defined in the previous section is a lower bound on any *natural* clustering error (Theorem \[thm:generalized-clustering-error\]). \[claim:natural-errors\] The under/overclustering error and the correlation clustering error satisfy Definition \[def:natural-clustering-error\] and hence are natural clustering errors. \[thm:generalized-clustering-error\] For any *natural* clustering error $\gamma$, any proposed clustering $\mathcal{C}$, and any target clustering $\mathcal{C}^{\ast}$, $\gamma(\mathcal{C}, \mathcal{C}^{\ast}) \ge \delta(\mathcal{C}, \mathcal{C}^{\ast})$. Given any proposed clustering $\mathcal{C}$, and any target clustering $\mathcal{C}^{\ast}$, we may transform $\mathcal{C}$ into $\mathcal{C}^{\ast}$ via the following sequence of edits. First, we split all over-clustering instances using the following iterative procedure: while there exists a cluster $C_{i}$ that contains points from $C^{\ast}_{j}$ and some other ground-truth cluster(s), we split it into two clusters $C_{i,1} = C_{i} \cap C^{\ast}_{j}$ and $C_{i,2} = C_{i} - C_{i,1}$. Note that this iterative split procedure will require exactly $\delta_{o}$ split edits, where $\delta_{o}$ is the initial overclustering error. Then, when we are left with only “pure” clusters (each intersects exactly one target cluster), we merge all under-clustering instances using the following iterative procedure: while there exist two clusters $C_{i}$ and $C_{j}$ that contain only points from the same target cluster, merge $C_{i}$ and $C_{j}$. 
Note that this iterative merge procedure will require exactly $\delta_{u}$ merge edits, where $\delta_{u}$ is the initial underclustering error. Let us use $\gamma$ to refer to any *natural* clustering error of $\mathcal{C}$ with respect to $\mathcal{C}^{\ast}$. By the first property of *natural* clustering error, each split must have decreased $\gamma$ by at least one. By the second property, each merge must have decreased $\gamma$ by at least one as well. Given that we performed exactly $\delta = \delta_{o} + \delta_{u}$ edits, it follows that initially $\gamma(\mathcal{C}, \mathcal{C}^{\ast})$ must have been at least $\delta$. For additional discussion about comparing clusterings see [@meila07]. Note that several criteria discussed in [@meila07] satisfy our first two properties (for a similarity measure we may replace “must decrease the error” with “must increase the similarity”). In addition, the Rand and Mirkin criteria discussed in [@meila07] are closely related to the correlation clustering error defined here (all three measures are a function of the number of pairs of points that are clustered incorrectly). The $\eta$-merge model {#sec:eta-merge} ====================== In this section we describe and analyze the algorithms in the $\eta$-merge model. As a pre-processing step for all our algorithms, we first run the hierarchical average-linkage algorithm on all the points in the data set to compute the global average-linkage tree, which we denote by $T_{glob}$. The leaf nodes in this tree contain the individual points, and the root node contains all the points. The tree is computed in a bottom-up fashion: starting with the leaves, in each iteration the two most similar nodes are merged, where the similarity between two nodes $N_{1}$ and $N_{2}$ is the average similarity between points in $N_{1}$ and points in $N_{2}$. We assign a label “impure” to each cluster in the initial clustering; these labels are used by the merge procedure. 
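The pre-processing step described above can be sketched as a naive bottom-up implementation (a quadratic pair scan per merge, so cubic overall; `S` is a symmetric similarity lookup, and the tuple representation of tree nodes is our own choice, not the paper's notation):

```python
def average_linkage(points, S):
    """Build the average-linkage tree bottom-up.

    A node is a tuple (frozenset_of_points, left_child, right_child);
    leaves have None children. S[x][y] is the similarity of points x and y.
    """
    nodes = [(frozenset([p]), None, None) for p in points]
    while len(nodes) > 1:
        best = None
        for a in range(len(nodes)):
            for b in range(a + 1, len(nodes)):
                A, B = nodes[a][0], nodes[b][0]
                # average similarity between the two candidate nodes
                sim = sum(S[x][y] for x in A for y in B) / (len(A) * len(B))
                if best is None or sim > best[0]:
                    best = (sim, a, b)
        _, a, b = best
        merged = (nodes[a][0] | nodes[b][0], nodes[a], nodes[b])
        nodes = [n for i, n in enumerate(nodes) if i not in (a, b)] + [merged]
    return nodes[0]
```

On a toy similarity matrix where points $\{0,1\}$ and $\{2,3\}$ form two tight groups, the root's two children are exactly those groups.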
Given a split or merge request, a local clustering edit is computed from the global tree $T_{glob}$ as described in Figure \[fig:split-average-linkage\] and Figure \[fig:merge-average-linkage-relaxed\]. To implement Step 1 in Figure \[fig:split-average-linkage\], we start at the root of $T_{glob}$ and “follow” the points in $C_i$ down one of the branches until we find a node that splits them. In order to implement Step 2 in Figure \[fig:merge-average-linkage-relaxed\], it suffices to start at the root of $T_{glob}$ and perform a post-order traversal, only considering nodes that have “enough” points from both clusters, and return the first output node. The split procedure is fairly intuitive: if the average-linkage tree is consistent with the target clustering, it suffices to find the node in the tree where the corresponding points are first split in two. It is more challenging to develop a correct merge procedure: note that Step 2 in Figure \[fig:merge-average-linkage-relaxed\] is only correct if $\eta > 0.5$, which ensures that if two nodes in the tree have more than an $\eta$-fraction of the points from $C_{i}$ and $C_{j}$, one must be an ancestor of the other. If the average-linkage tree is consistent with the ground-truth, then clearly the node equivalent to the corresponding target cluster (that $C_{i}$ and $C_{j}$ both intersect) will have enough points from $C_{i}$ and $C_{j}$; therefore the node that we find in Step 2 must be this node or one of its descendants. In addition, because our merge procedure replaces two clusters with three, we require pure/impure labels for the merge requests to terminate: “pure” clusters may only have other points added to them, and retain this label throughout the execution of the algorithm. We now state the performance guarantee for these split and merge algorithms. 
\[thm:strong-stability-relaxed\] Suppose the target clustering satisfies stability, and the initial clustering has overclustering error $\delta_o$ and underclustering error $\delta_u$. In the $\eta$-merge model, for any $\eta > 0.5$, the algorithms in Figure \[fig:split-average-linkage\] and Figure \[fig:merge-average-linkage-relaxed\] require at most $\delta_o$ split requests and $2(\delta_u + k) \log_{\frac 1 {1-\eta}} n$ merge requests to find the target clustering. In order to prove the theorem, we need some preliminary analysis. First, we observe that if the target clustering satisfies stability, then every node of the average-linkage tree must be *laminar* (consistent) with respect to the ground-truth clustering. Informally, each node in a hierarchical clustering tree $T$ is *laminar* (consistent) with respect to the clustering $\mathcal{C}$ if for each cluster $C_{i} \in \mathcal{C}$, the points in $C_{i}$ are first grouped together in $T$ before they are grouped with points from any other cluster $C_{j \ne i}$. We formally state and prove these observations next. A node $N$ is laminar with respect to a clustering $\mathcal{C}$ if for each cluster $C_{i} \in \mathcal{C}$ we have either $N \cap C_{i} = \emptyset$, $N \subseteq C_{i}$, or $C_{i} \subseteq N$. \[lem:laminar-average-linkage\] Suppose the ground-truth clustering $\mathcal{C}^{\ast}$ over a domain $X$ satisfies stability with respect to a similarity function $S$. Let $T$ be the average-linkage tree for $X$ constructed with $S$. Then every node in $T$ is laminar w.r.t. $\mathcal{C}^{\ast}$. The proof of this statement can be found in [@BalcanBV08]. The intuition is that if there is a node in $T$ that is not laminar w.r.t. $\mathcal{C}^{\ast}$, then the average-linkage algorithm, at some step, must have merged $A \subset C^{\ast}_i$ with $B \subset C^{\ast}_j$ for some $i \neq j$. However, this would contradict the stability property for the sets $A$ and $B$.
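The laminarity condition above translates directly into a small check (set-valued clusters are an assumed representation):

```python
def is_laminar(node_points, clustering):
    # A node N is laminar w.r.t. a clustering if every cluster C satisfies
    # N and C disjoint, N a subset of C, or C a subset of N.
    N = set(node_points)
    return all(not (N & C) or N <= C or C <= N for C in clustering)
```

For the clustering $\{\{0,1\},\{2,3\}\}$, the node $\{0,1,2,3\}$ is laminar while the node $\{1,2\}$ is not.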
It follows that the split computed by the algorithm in Figure \[fig:split-average-linkage\] must also be consistent with the target clustering; we call such splits *clean*. \[def:clean-split\] A partition (split) of a cluster $C_{i}$ into clusters $C_{i,1}$ and $C_{i,2}$ is said to be *clean* if $C_{i,1}$ and $C_{i,2}$ are non-empty, and for each ground-truth cluster $C^{\ast}_{j}$ such that $C^{\ast}_{j} \cap C_{i} \ne \emptyset$, either $C^{\ast}_{j} \cap C_{i} = C^{\ast}_{j} \cap C_{i,1}$ or $C^{\ast}_{j} \cap C_{i} = C^{\ast}_{j} \cap C_{i,2}$. We now prove the correctness of the split/merge procedures. \[lem:relaxed-main-lemma\] If the ground-truth clustering satisfies stability and $\eta > 0.5$ then,

- The split procedure in Figure \[fig:split-average-linkage\] always produces a clean split.

- The new cluster added in Step 4 in Figure \[fig:merge-average-linkage-relaxed\] must be “pure”, i.e., it must contain points from a single ground-truth cluster.

**a.** For purposes of contradiction, suppose the returned split is not clean: $C_{i,1}$ and $C_{i,2}$ contain points from the same ground-truth cluster $C^{\ast}_{j}$. It must be the case that $C_{i}$ contains points from several ground-truth clusters, which implies that w.l.o.g. $C_{i,1}$ contains points from some other ground-truth cluster ${C}^{\ast}_{l \ne j}$. This implies that $N_1$ is not laminar w.r.t. $\mathcal{C}^{\ast}$, which contradicts Lemma \[lem:laminar-average-linkage\]. **b.** By our assumption, more than $\frac 1 2 |C_{i}|$ points from $C_{i}$ and more than $\frac 1 2 |C_{j}|$ points from $C_{j}$ are from the same ground-truth cluster $C^{\ast}_{l}$. Clearly, the node $N'$ in $T_{glob}$ that is equivalent to $C^{\ast}_{l}$ (which contains all the points in $C^{\ast}_{l}$ and no other points) must contain *enough* points from $C_{i}$ and $C_{j}$, and only ancestors and descendants of $N'$ may contain more than an $\eta > 1/2$ fraction of points from both clusters.
Therefore, the node $N$ that we find with a depth-first search must be $N'$ or one of its descendants, and will only contain points from $C^{\ast}_{l}$. Using the above lemma, we can prove the bounds on the split and merge requests stated in Theorem \[thm:strong-stability-relaxed\]. We first give a bound on the number of splits. Observe that each split reduces the overclustering error by exactly 1. To see this, suppose we execute Split($C_{1}$), and call the resulting clusters $C_{2}$ and $C_{3}$. Call $\delta_{1}$ the overclustering error before the split, and $\delta_{2}$ the overclustering error after the split. Let’s use $k_{1}$ to refer to the number of ground-truth clusters that intersect $C_{1}$, and define $k_{2}$ and $k_{3}$ similarly. Due to the *clean split* property, no ground-truth cluster can intersect both $C_{2}$ and $C_{3}$, therefore it must be the case that $k_{2} + k_{3} = k_{1}$. Also, clearly $k_{2}, k_{3} > 0$. Therefore we have: $$\begin{aligned} \delta_{2} & = & \delta_{1} - (k_{1} - 1) + (k_{2} - 1) + (k_{3} - 1)\\ & = & \delta_{1} - k_{1} + (k_{2} + k_{3}) - 1\\ & = & \delta_{1} - 1.\end{aligned}$$ Merges cannot increase overclustering error. Therefore the total number of splits may be at most $\delta_{o}$. We next give the arguments about the number of impure and pure merges. We first argue that we cannot have too many “impure” merges before each cluster in $C$ is marked “pure.” Consider the clustering $P = \{C_{i} \cap C^{\ast}_{j} \ \vert \ C_{i} \textrm{ is marked ``impure'' and } C_{i} \cap C^{\ast}_{j} \ne \emptyset\}$. Clearly, at the start $\vert P \vert = \delta_{u} + k$. A merge does not increase the number of clusters in $P$, and the splits do not change $P$ at all (because of the *clean split* property). Moreover, each impure merge (a merge of two impure clusters or a merge of a pure and an impure cluster) *depletes* some $P_{i} \in P$ by moving $\eta \vert P_{i} \vert$ of its points to a pure cluster. 
Clearly, we can then have at most $\log_{1/(1-\eta)} n$ merges depleting each $P_{i}$. Since each impure merge must deplete some $P_{i}$, it must be the case that we can have at most $(\delta_{u} + k) \log_{1/(1-\eta)} n$ impure merges in total. Notice that a pure cluster can only be created by an impure merge, and there can be at most one pure cluster created by each impure merge. Clearly, a pure merge removes exactly one pure cluster. Therefore the number of pure merges may be at most the total number of pure clusters that are created, which is at most the total number of impure merges. Therefore the total number of merges must be at most $2(\delta_{u} + k) \log_{1/(1-\eta)} n$. We can also restate the run-time bound in Theorem \[thm:strong-stability-relaxed\] in terms of any *natural* clustering error $\gamma$. The following corollary follows from Theorem \[thm:strong-stability-relaxed\] and Theorem \[thm:generalized-clustering-error\]. \[corr:strong-stability-relaxed\] Suppose the target clustering satisfies stability, and the initial clustering has clustering error $\gamma$, where $\gamma$ is any *natural* clustering error as defined in Definition \[def:natural-clustering-error\]. In the $\eta$-merge model, for any $\eta > 0.5$, the algorithms in Figure \[fig:split-average-linkage\] and Figure \[fig:merge-average-linkage-relaxed\] require at most $O(\gamma + k) \log_{\frac 1 {1-\eta}} n$ edit requests to find the target clustering.

Algorithms for correlation-clustering error
-------------------------------------------

To bound the number of edit requests with respect to the correlation clustering objective, we must use a different merge procedure, which is described in Figure \[fig:merge-average-linkage-cc\]. Here, instead of creating a new “pure” cluster, we add the corresponding pure points to the larger of the two clusters in the merge. Notice that the new algorithm is much simpler than the merge algorithm for the under/overclustering error.
Using this merge procedure and the split procedure presented earlier gives the following performance guarantee. \[thm:strong-stability-cc-error\] Suppose the target clustering satisfies stability, and the initial clustering has correlation-clustering error of $\delta_{cc}$. In the $\eta$-merge model, for any $\eta > 2/3$, using the split and merge procedures in Figures \[fig:split-average-linkage\] and \[fig:merge-average-linkage-cc\] requires at most $\delta_{cc}$ edit requests to find the target clustering. Consider the contributions of individual points to $\delta_{cco}$ and $\delta_{ccu}$, which are defined as: $$\delta_{cco}(u) = \vert \lbrace v \in X : c(u,v) = 1 \textrm{ and } c^{\ast}(u,v) = 0 \rbrace \vert$$ $$\delta_{ccu}(u) = \vert \lbrace v \in X : c(u,v) = 0 \textrm{ and } c^{\ast}(u,v) = 1 \rbrace \vert$$ We first argue that a split of a cluster $C_{i}$ must reduce $\delta_{cc}$ by at least 1. Given that the split is *clean*, it is easy to verify that the outcome cannot increase $\delta_{ccu}(u)$ for any $u \in C_{i}$. We can also verify that for each $u \in C_{i}$, $\delta_{cco}(u)$ must decrease by at least 1. This completes the argument, given that the correlation-clustering error with respect to all other pairs of points must remain the same. We now argue that if $\eta > 2/3$, each merge of $C_{i}$ and $C_{j}$ must reduce $\delta_{cc}$ by at least 1. Without loss of generality, suppose that $\vert C_{i} \vert \ge \vert C_{j} \vert$, and let us use $P$ to refer to the “pure” subset of $C_{j}$ that is moved to $C_{i}$. We observe that the outcome must remove at least $\delta_{1}$ pairwise correlation-clustering errors, where $\delta_{1}$ satisfies $\delta_{1} \ge 2 \vert P \vert (\eta \vert C_{i} \vert)$.
Similarly, we observe that the outcome may add at most $\delta_{2}$ pairwise correlation-clustering errors, where $\delta_{2}$ satisfies: $$\delta_{2} \le 2 \vert P \vert ((1-\eta) \vert C_{i} \vert) + 2 \vert P \vert ((1-\eta) \vert C_{j} \vert) \le 4 \vert P \vert ((1-\eta) \vert C_{i} \vert).$$ It follows that for $\eta > 2/3$, $\delta_{1}$ must exceed $\delta_{2}$, since $2\eta > 4(1-\eta)$ precisely when $\eta > 2/3$; therefore the sum of the pairwise correlation-clustering errors, and hence the total correlation-clustering error, must decrease. Observe that the bound in Theorem \[thm:strong-stability-cc-error\] is tight: in some instances any *local* algorithm requires at least $\delta_{cc}$ edits to find the target clustering. To verify this, suppose the target clustering is composed of $n$ singleton clusters, and the initial clustering contains $n/2$ clusters of size 2. In this instance, the initial correlation-clustering error is $\delta_{cc} = n/2$, and the oracle must issue at least $n/2$ split requests before we reach the target clustering (no matter how the algorithm reassigns the corresponding points).

Algorithms under stronger assumptions
-------------------------------------

When the data satisfies stronger stability properties we may simplify the presented algorithms and/or obtain better performance guarantees. In particular, if the data satisfies the *strict separation* property from [@BalcanBV08], we may change the split and merge algorithms to use the local average-linkage tree, which is constructed from only the points in the edit request. In addition, if the data satisfies *strict threshold separation*, we may remove the restriction on $\eta$ and use a different merge procedure that is correct for any $\eta > 0$. \[thm:strict-separation\] Suppose the target clustering satisfies strict separation, and the initial clustering has overclustering error $\delta_o$ and underclustering error $\delta_u$.
In the $\eta$-merge model, for any $\eta > 0.5$, the algorithms in Figure \[fig:local-split-average-linkage\] and Figure \[fig:local-merge-average-linkage-relaxed\] require at most $\delta_o$ split requests and $2(\delta_u + k) \log_{\frac 1 {1-\eta}} n$ merge requests to find the target clustering. Let us use $\mathcal{L}^{\ast}$ to refer to the ground-truth clustering of the points in the split/merge request. If the target clustering satisfies strict separation, it is easy to verify that every node in the local average-linkage tree $T_{loc}$ must be laminar (consistent) w.r.t. $\mathcal{L}^{\ast}$. We can then use this observation to prove the equivalent of Lemma \[lem:relaxed-main-lemma\] for the split procedure in Figure \[fig:local-split-average-linkage\] and the merge procedure in Figure \[fig:local-merge-average-linkage-relaxed\]. The analysis in Theorem \[thm:strong-stability-relaxed\] remains unchanged. \[thm:strict-threshold-separation\] Suppose the target clustering satisfies strict threshold separation, and the initial clustering has overclustering error $\delta_o$ and underclustering error $\delta_u$. In the $\eta$-merge model, for any $\eta > 0$, the algorithms in Figure \[fig:local-split-average-linkage\] and Figure \[fig:merge-strict-threshold-separation\] require at most $\delta_o$ split requests and $2(\delta_u + k) \log_{\frac 1 {1-\eta}} n$ merge requests to find the target clustering. If the target clustering satisfies strict threshold separation, we can verify that the split procedure in Figure \[fig:local-split-average-linkage\] and the merge procedure in Figure \[fig:merge-strict-threshold-separation\] are correct for any $\eta >0$. The analysis in Theorem \[thm:strong-stability-relaxed\] remains unchanged. To verify that the split procedure always produces a clean split, again let us use $\mathcal{L}^{\ast}$ to refer to the ground-truth clustering of the points in the split request. 
We can again verify that each node in the local average-linkage tree $T_{loc}$ must be laminar (consistent) w.r.t. $\mathcal{L}^{\ast}$. It follows that the split procedure always produces a clean split. Note that clearly this argument does not depend on the setting of $\eta$. We now verify that the new cluster added by the merge procedure in Figure \[fig:merge-strict-threshold-separation\] must be “pure” (must contain points from a single target cluster). To see this, observe that in the graph $G$ in Figure \[fig:merge-strict-threshold-separation\], all pairs of points from the same target cluster are connected before any pairs of points from different target clusters. It follows that the first component that contains at least an $\eta$-fraction of points from $C_{i}$ and $C_{j}$ must be “pure”. Note that this argument applies for any $\eta > 0$. Note that the merge procedure in Figure \[fig:merge-strict-threshold-separation\] is correct for $\eta \le 0.5$ only if the target clustering satisfies *strict threshold separation*: there is a single threshold $t$ such that for all $i$ and $x,y \in C^{\ast}_i$, $S(x,y) > t$, and, for all $i \ne j$, $x \in C^{\ast}_i, y \in C^{\ast}_j$, $S(x,y) \le t$. When only *strict separation* holds (the threshold for each target cluster may be different), this procedure may first connect points from different target clusters, and for $\eta \le 0.5$ this component may then be large enough to be output. As in Corollary \[corr:strong-stability-relaxed\], we may also restate the run-time bounds in Theorem \[thm:strict-separation\] and Theorem \[thm:strict-threshold-separation\] in terms of any natural clustering error $\gamma$. The following corollaries follow from Theorem \[thm:strict-separation\], Theorem \[thm:strict-threshold-separation\] and Theorem \[thm:generalized-clustering-error\].
\[corr:strict-separation\] Suppose the target clustering satisfies strict separation, and the initial clustering has clustering error $\gamma$, where $\gamma$ is any *natural* clustering error as defined in Definition \[def:natural-clustering-error\]. In the $\eta$-merge model, for any $\eta > 0.5$, the algorithms in Figure \[fig:local-split-average-linkage\] and Figure \[fig:local-merge-average-linkage-relaxed\] require at most $O(\gamma + k) \log_{\frac 1 {1-\eta}} n$ edit requests to find the target clustering. \[corr:strict-threshold-separation\] Suppose the target clustering satisfies strict threshold separation, and the initial clustering has clustering error $\gamma$, where $\gamma$ is any *natural* clustering error as defined in Definition \[def:natural-clustering-error\]. In the $\eta$-merge model, for any $\eta > 0$, the algorithms in Figure \[fig:local-split-average-linkage\] and Figure \[fig:merge-strict-threshold-separation\] require at most $O(\gamma + k) \log_{\frac 1 {1-\eta}} n$ edit requests to find the target clustering.

The unrestricted-merge model {#sec:unrestricted-merge}
============================

In this section we further relax the assumptions about the nature of the oracle requests. As before, the oracle may request to split a cluster if it contains points from two or more target clusters. For merges, the oracle may now request to merge $C_i$ and $C_j$ as long as each of the two clusters contains at least one point from the same ground-truth cluster. We note that this is a minimal set of assumptions for a local algorithm to make progress; otherwise the oracle may always propose irrelevant splits or merges that cannot reduce clustering error. For this model we propose the merge algorithm described in Figure \[fig:merge-average-linkage-unrestricted\]. The split algorithm remains the same as in Figure \[fig:split-average-linkage\].
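For concreteness, the two error measures tracked in this analysis can be computed as follows; the summation form is our reading of the accounting used in the proofs (overclustering error sums $k_{i} - 1$ over the clusters, where $k_{i}$ is the number of ground-truth clusters intersected, and underclustering error is the symmetric quantity over ground-truth clusters):

```python
def overclustering_error(clustering, ground_truth):
    # Sum over clusters of (number of ground-truth clusters intersected - 1).
    return sum(
        sum(1 for Cg in ground_truth if C & Cg) - 1 for C in clustering
    )

def underclustering_error(clustering, ground_truth):
    # Symmetric: sum over ground-truth clusters of (clusters intersected - 1).
    return overclustering_error(ground_truth, clustering)
```

For the ground truth $\{\{0,1\},\{2,3\}\}$, the single cluster $\{0,1,2,3\}$ has overclustering error 1, and the clustering $\{\{0\},\{1\},\{2,3\}\}$ has underclustering error 1.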
To provably find the ground-truth clustering in this setting we require that each merge request must be chosen uniformly at random from the set of feasible merges. This assumption is consistent with the observation in [@AwasthiZ10] that in the unrestricted-merge model with arbitrary request sequences, even very simple cases (e.g., a union of intervals on a line) require a prohibitively large number of requests. We do not make additional assumptions about the nature of the split requests; in each iteration any feasible split may be proposed by the oracle. In this setting our algorithms have the following performance guarantee. \[thm:strict-separation-random\] Suppose the target clustering satisfies stability, and the initial clustering has overclustering error $\delta_o$ and underclustering error $\delta_u$. In the unrestricted-merge model, with probability at least $1-\epsilon$, the algorithms in Figure \[fig:split-average-linkage\] and Figure \[fig:merge-average-linkage-unrestricted\] require $\delta_o$ split requests and $O(\log \frac k {\epsilon} {{\delta}^2_u})$ merge requests to find the target clustering. The above theorem is proved in a series of lemmas. We first state a lemma regarding the correctness of the algorithm in Figure \[fig:merge-average-linkage-unrestricted\]. We argue that if the algorithm merges $C_{i}$ and $C_{j}$, it must be the case that both $C_{i}$ and $C_{j}$ only contain points from the same ground-truth cluster. \[lem:unrestricted-pure-merge\] If the algorithm in Figure \[fig:merge-average-linkage-unrestricted\] merges $C_{i}$ and $C_{j}$ in Step 3, it must be the case that $C_{i} \subset C^{\ast}_{l}$ and $C_{j} \subset C^{\ast}_{l}$ for some ground-truth cluster $C^{\ast}_{l}$. We prove the contrapositive. Suppose $C_{i}$ and $C_{j}$ both contain points from $C^{\ast}_{l}$, and in addition $C_{i} \cup C_{j}$ contains points from some other ground-truth cluster.
Let us define $S_{1} = C^{\ast}_{l} \cap C_{i}$ and $S_{2} = C^{\ast}_{l} \cap C_{j}$. Because the clusters $C'_{i}$, $C'_{j}$ result from a *clean* split, it follows that $S_{1}, S_{2} \subseteq C'_{i}$ or $S_{1}, S_{2} \subseteq C'_{j}$. Without loss of generality, assume $S_{1}, S_{2} \subseteq C'_{i}$. Then clearly $C'_{i} \ne C_{i}$ and $C'_{i} \ne C_{j}$, so $C_{i}$ and $C_{j}$ are not merged. The $\delta_{o}$ bound on the number of split requests follows from the observation that each split reduces the overclustering error by exactly 1 (as before), and the fact that the merge procedure does not increase overclustering error. \[lem:unrestricted-overclustering-error\] The merge algorithm in Figure \[fig:merge-average-linkage-unrestricted\] does not increase overclustering error. Suppose $C_{i}$ and $C_{j}$ are not both “pure” (one or both contain elements from several ground-truth clusters), and hence we obtain two new clusters $C'_{i}$, $C'_{j}$. Let us call $\delta_{1}$ the overclustering error before the merge, and $\delta_{2}$ the overclustering error after the merge. Let’s use $k_{1}$ to refer to the number of ground-truth clusters that intersect $C_{i}$, $k_{2}$ to refer to the number of ground-truth clusters that intersect $C_{j}$, and define $k_{1}'$ and $k_{2}'$ similarly. The new clusters $C'_{i}$ and $C'_{j}$ result from a “clean” split, therefore no ground-truth cluster may intersect both of them. It follows that $k_{1}' + k_{2}' \le k_{1} + k_{2}$. Therefore we now have: $$\begin{aligned} \delta_{2} & = & \delta_{1} - (k_{1} - 1) - (k_{2} - 1) + (k_{1}' - 1) + (k_{2}' - 1)\\ & = & \delta_{1} - (k_{1} + k_{2}) + (k_{1}' + k_{2}') \le \delta_{1}.\end{aligned}$$ If $C_{i}$ and $C_{j}$ are both “pure” (both are subsets of the same ground-truth cluster), then clearly the merge operation has no effect on the overclustering error. The following lemmas bound the number of impure and pure merges. 
Here we call a proposed merge *pure* if both clusters are subsets of the same ground-truth cluster, and *impure* otherwise. \[lem:merge-unrestricted\] The merge algorithm in Figure \[fig:merge-average-linkage-unrestricted\] requires at most $\delta_{u}$ impure merge requests. We argue that the result of each impure merge request must reduce the underclustering error by at least 1. Suppose the oracle requests to merge $C_{i}$ and $C_{j}$, and $C'_{i}$ and $C'_{j}$ are the resulting clusters. Clearly, the local edit has no effect on the underclustering error with respect to target clusters that do not intersect $C_{i}$ or $C_{j}$. In addition, because the new clusters $C'_{i}$ and $C'_{j}$ result from a *clean* split, for target clusters that intersect exactly one of $C_{i}$, $C_{j}$, the underclustering error must stay the same. For target clusters that intersect both $C_{i}$ and $C_{j}$, the underclustering error must decrease by exactly one; the number of such target clusters is at least one. \[lem:merge-prob-unrestricted\] The probability that the algorithm in Figure \[fig:merge-average-linkage-unrestricted\] requires more than $O(\log \frac{k}{\epsilon} {\delta}^2_{u})$ pure merge requests is less than $\epsilon$. We first consider the pure merge requests involving points from some ground-truth cluster $C^{\ast}_{i}$; the total number of pure merge requests (involving any ground-truth cluster) can then be bounded with a union bound. To facilitate our argument, let us assign an identifier to each cluster containing points from $C^{\ast}_{i}$ in the following manner:

1. Maintain a CLUSTER-ID variable, which is initialized to 1.

2. To assign a “new” identifier to a cluster, set its identifier to CLUSTER-ID, and increment CLUSTER-ID.

3. In the initial clustering, assign a *new* identifier to each cluster containing points from $C^{\ast}_{i}$.

4.
When we split a cluster containing points from $C^{\ast}_{i}$, assign its identifier to the newly-formed cluster containing points from $C^{\ast}_{i}$.

5. When we merge two clusters and one or both of them are impure, if one of the clusters contains points from $C^{\ast}_{i}$, assign its identifier to the newly-formed cluster containing points from $C^{\ast}_{i}$. If both clusters contain points from $C^{\ast}_{i}$, assign a *new* identifier to the newly-formed cluster containing points from $C^{\ast}_{i}$.

6. When we merge two clusters $C_{1}$ and $C_{2}$, and both contain only points from $C^{\ast}_{i}$, if the outcome is one new cluster, assign it a *new* identifier. If the outcome is two new clusters, assign them the identifiers of $C_{1}$ and $C_{2}$.

Clearly, when clusters containing points from $C^{\ast}_{i}$ are assigned identifiers in this manner, the maximum value of CLUSTER-ID is bounded by $O(\delta_{i})$, where $\delta_{i}$ denotes the underclustering error of the initial clustering with respect to $C^{\ast}_{i}$: $\delta_{i} = \dist(C^{\ast}_{i}, C)$. To verify this, consider that we assign exactly $\delta_{i}+1$ new identifiers in Step 3, and each time we assign a new identifier in Steps 5 and 6, the underclustering error of the edited clustering with respect to $C^{\ast}_{i}$ decreases by one. We say that a *pure* merge request involving points from $C^{\ast}_{i}$ is *original* if the user has never asked us to merge clusters with the given identifiers; otherwise we say that this merge request is *repeated*. Given that the maximum value of CLUSTER-ID is bounded by $O(\delta_{i})$, the total number of *original* merge requests must be $O(\delta_{i}^{2})$. We now argue that if a merge request is not original, we can lower bound the probability that it will result in the merging of the two clusters.
For a repeated merge request $M_{j} = Merge(C_{1},C_{2})$, let $X_{j}$ be a random variable defined as follows: $$X_{j} = \left\{ \begin{array}{ll} 1 & \textrm{if neither $C_{1}$ nor $C_{2}$ have been involved in}\\ & \textrm{a merge request since the last time a merge of}\\ & \textrm{clusters with these identifiers was proposed.}\\ 0 & \textrm{otherwise.}\\ \end{array} \right.$$ Clearly, when $X_{j} = 1$ it must be the case that $C_{1}$ and $C_{2}$ are merged. We observe that $\textrm{Pr} \lbrack X_{j} = 1 \rbrack > \frac{1}{2 \delta_{i} + 1}$. To verify this, observe that in each step the probability that the user requests to merge $C_{1}$ and $C_{2}$ is $\frac{1}{m}$, and the probability that the user requests to merge $C_{1}$ or $C_{2}$ with some other cluster is less than $\frac{2 \delta_{i}}{m}$, where $m$ is the total number of possible merge requests; we can then bound the probability that the former happens before the latter. We can then use a Chernoff bound to argue that after $t = O(\log \frac{k}{\epsilon} \delta_{i}^{2})$ *repeated* merge requests, the probability that $\sum_{j=1}^{t}X_{j} < \delta_{i}$ (which must be true if we need more *repeated* merge requests) is less than $\epsilon/k$. Therefore, the probability that we need more than $O(\log \frac{k}{\epsilon} \delta_{i}^{2})$ *repeated* merge requests is less than $\epsilon/k$. By the union bound, the probability that we need more than $O(\log \frac{k}{\epsilon} \delta_{i}^{2})$ *repeated* merge requests for *any* ground-truth cluster $C^{\ast}_{i}$ is less than $k \cdot \epsilon/k = \epsilon$. Therefore with probability at least $1 - \epsilon$ for all ground-truth clusters we need $\sum_{i} O(\log \frac{k}{\epsilon} \delta_{i}^{2}) = O (\log \frac{k}{\epsilon} \sum_{i} \delta_{i}^{2}) = O(\log \frac{k}{\epsilon} \delta_{u}^2)$ *repeated* merge requests, where $\delta_{u}$ is the underclustering error of the original clustering.
Similarly, for all ground-truth clusters we need $\sum_{i} O(\delta_{i}^{2}) = O(\delta_{u}^{2})$ *original* merge requests. Adding the two terms together, it follows that with probability at least $1-\epsilon$ we need a total of $O(\log \frac{k}{\epsilon} \delta_{u}^{2})$ pure merge requests. As in the previous section, we also restate the run-time bound in Theorem \[thm:strict-separation-random\] in terms of any *natural* clustering error $\gamma$. The following corollary follows from Theorem \[thm:strict-separation-random\] and Theorem \[thm:generalized-clustering-error\]. \[thm:strict-separation-random-generalized\] Suppose the target clustering satisfies stability, and the initial clustering has clustering error $\gamma$, where $\gamma$ is any *natural* clustering error as defined in Definition \[def:natural-clustering-error\]. In the unrestricted-merge model, with probability at least $1-\epsilon$, the algorithms in Figure \[fig:split-average-linkage\] and Figure \[fig:merge-average-linkage-unrestricted\] require $O(\log \frac k {\epsilon} {{\gamma}^2})$ edit requests to find the target clustering. As in the previous section, if the data satisfies *strict separation*, then instead of the split procedure in Figure \[fig:split-average-linkage\] we can use the procedure in Figure \[fig:local-split-average-linkage\], which uses the local average-linkage tree (constructed from only the points in the user request). We can then obtain the same performance guarantee as in Theorem \[thm:strict-separation-random\] for the algorithms in Figure \[fig:local-split-average-linkage\] and Figure \[fig:merge-average-linkage-unrestricted\].

Experimental Results {#sec:experiments}
====================

We perform two sets of experiments: we first test the proposed split procedure on the clustering of business listings maintained by Google, and also test the proposed framework in its entirety on the much smaller newsgroup documents data set.
Clustering business listings
----------------------------

Google maintains a large collection of data records representing businesses. These records are clustered using a similarity function; each cluster should contain records about the same distinct business; each cluster is summarized and served to users online via various front-end applications. Users report bugs such as “you are displaying the name of one business, but the address of another” (caused by over-clustering), or “a particular business is shown multiple times” (caused by under-clustering). These bugs are routed to operators who examine the contents of the corresponding clusters, and request splits/merges accordingly. The clusters involved in these requests may be quite large and usually contain records about several businesses. Therefore automated tools that can perform the requested edits are very helpful. In particular, here we evaluate the effectiveness of our proposed split procedure in computing correct cluster splits. We consider a binary split correct if the two resulting sub-clusters are “clean” using Definition \[def:clean-split\], and consider the split incorrect otherwise. Note that a clean split is both necessary and sufficient for reducing the under/overclustering error. To compute the splits, we use the algorithm in Figure \[fig:local-split-average-linkage\], which we refer to as *Clean-Split*. This algorithm is easier to implement and run than the algorithm in Figure \[fig:split-average-linkage\] because we do not need to compute the global average-linkage tree. But it is still provably correct under stronger assumptions on the data (see Theorem \[thm:strict-separation\] and Theorem \[thm:strict-threshold-separation\]). For comparison purposes, we use two well-known techniques for computing binary splits: the optimal 2-median clustering (*2-Median*), and a “sweep” of the eigenvector corresponding to the second-smallest eigenvalue of the associated Laplacian matrix.
Let $\{v_{1},\ldots,v_{n}\}$ be the vertices sorted by their eigenvector entries; we compute the partition $\{v_{1},\ldots,v_{i}\}$ and $\{v_{i+1},\ldots,v_{n}\}$ such that its conductance is smallest (*Spectral-Balanced*), and a partition such that the similarity between $v_{i}$ and $v_{i+1}$ is smallest (*Spectral-Gap*).

  Clean-Split   2-Median   Spectral-Gap   Spectral-Balanced
  ------------- ---------- -------------- -------------------
  19            13         12             3

  : Number of correct (clean) splits \[table:split-known-over-clusters\]

We compare the split procedures on 20 over-clusters that were discovered during a clustering-quality evaluation[^2]. The results are presented in Table \[table:split-known-over-clusters\]. We observe that the *Clean-Split* algorithm works best, giving a correct split in 19 out of the 20 cases. The well-known *Spectral-Balanced* technique usually does not give correct splits for this application. The balance constraint usually causes it to put records about the same business on both sides of the partition (especially when none of the “clean” splits is well-balanced), which increases clustering error. As expected, the *Spectral-Gap* technique improves on this limitation (because it does not have a balance constraint), but the result often still increases clustering error. The *2-Median* algorithm performs fairly well, but it may not be the right technique for this problem: the optimal centers may correspond to listings about the same business, and even if they represent distinct businesses, the resulting partition is still sometimes incorrect.
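For small clusters the *2-Median* baseline can be realized by brute force; this sketch is our own (including the similarity-maximizing formulation, used here as a stand-in for the usual distance-based definition): each point is assigned to the more similar of two candidate centers, and the center pair with the highest total similarity wins.

```python
def two_median_split(points, S):
    # Try every pair of centers; assign each point to the more similar center;
    # keep the pair maximizing the total similarity to the assigned centers.
    best = None
    for i, c1 in enumerate(points):
        for c2 in points[i + 1:]:
            part1 = {p for p in points if S[p][c1] >= S[p][c2]}
            part2 = set(points) - part1
            score = sum(S[p][c1] for p in part1) + sum(S[p][c2] for p in part2)
            if best is None or score > best[0]:
                best = (score, part1, part2)
    return best[1], best[2]
```

On a similarity matrix with two tight pairs $\{0,1\}$ and $\{2,3\}$, this returns exactly that partition.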
  Dataset   Clean-Split   2-Median
  --------- ------------- ----------
  1         -14           -14
  2         -5            -5
  3         -11           -11
  4         -117          -117
  5         -42           +90
  6         -4            -4
  7         -12           -30
  8         -27           -27
  9         -6            -6
  10        -6            -6
  11        +6            -8
  12        -10           +14
  13        -6            -6
  14        -12           -22
  15        -6            -6
  16        -10           +14
  17        -11           -27
  18        -10           -10
  19        -11           -5
  20        -10           -10

  : Change in correlation-clustering error \[table:split-known-over-clusters-cc-error\]

In addition to the clean-split criterion, we also evaluate the computed splits using the correlation-clustering (cc) error. Under this criterion, *Clean-Split* and *2-Median* compute the best splits, while the other two algorithms perform significantly worse. The results for *Clean-Split* and *2-Median* are presented in Table \[table:split-known-over-clusters-cc-error\]. Note that a clean split is sufficient to reduce the correlation-clustering error, but it is not necessary. Our experiments illustrate these observations: *Clean-Split* reduces the cc-error in 19 out of 20 cases (exactly those where the resulting split is clean), while *2-Median* is sometimes able to reduce the cc-error even when the resulting split is not clean. Overall, the two algorithms tie in 12 instances; in 4 instances *Clean-Split* reduces the correlation-clustering error more, and in 4 instances *2-Median* does. Also note that *Clean-Split* fails to reduce the cc-error only once, while *2-Median* fails to do so 4 times.
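For concreteness, the correlation-clustering error of a proposed clustering against the ground truth can be measured as the number of disagreeing pairs. The sketch below is our own helper (the paper's exact definition may differ, e.g. in weighting or normalization):

```python
from itertools import combinations

def cc_error(labels, truth):
    """Count pairs of points that are co-clustered in exactly one of the
    two clusterings (the pairwise disagreement distance)."""
    err = 0
    for i, j in combinations(range(len(labels)), 2):
        same_proposed = labels[i] == labels[j]
        same_truth = truth[i] == truth[j]
        err += same_proposed != same_truth
    return err
```

A split reduces this error when it separates more wrongly-joined pairs than it breaks correctly-joined ones, which is why an unclean split can still lower the cc-error.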
Clustering newsgroup documents
------------------------------

In order to test our entire framework (the iterative application of our algorithms), we perform computational experiments on newsgroup documents data.[^3] The objects in these data sets are posts to twenty different online forums (newsgroups). We sample these data to compute 5 data sets of manageable size (containing 276-301 elements), which are labeled A through E in the figures. Each data set contains some documents from every newsgroup. Each post/document is represented by a term frequency-inverse document frequency (tf-idf) vector [@SB88]. We use cosine similarity to compare these vectors, which gives a similarity measure between 0 and 1 (inclusive). We compute an initial clustering by using the following procedure to perturb the ground truth: for each document we keep its ground-truth cluster assignment with probability $0.5$, and otherwise reassign it to one of the other clusters, chosen uniformly at random. In each iteration, we compute the set of all feasible splits and merges: a split of a cluster is feasible if the cluster contains points from 2 or more ground-truth clusters, and a merge is feasible if at least an $\eta$-fraction of the points in each cluster are from the same ground-truth cluster. Then, we choose one of the feasible edits uniformly at random, and ask the algorithm to compute the corresponding edit. We continue this process until we find the ground-truth clustering or we reach 20000 iterations. Note that for the $\eta$-merge model our theoretical analysis is applicable to any edit-request sequence, but in our experiments, for simplicity, we still select a feasible edit uniformly at random. Our initial clusterings have an over-clustering error of about 100, an under-clustering error of about 100, and a correlation-clustering error of about 5000. We notice that for newsgroup documents it is difficult to compute average-linkage trees that are very consistent with the ground truth.
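The perturbation procedure used to generate the initial clusterings can be sketched as follows (a minimal reimplementation; the function name and seeding are our own):

```python
import random

def perturb_ground_truth(truth, k, keep_prob=0.5, seed=0):
    """Build an initial clustering: keep each document's ground-truth
    label with probability keep_prob, otherwise reassign it uniformly at
    random to one of the other k-1 clusters."""
    rng = random.Random(seed)
    labels = []
    for c in truth:
        if rng.random() < keep_prob:
            labels.append(c)
        else:
            labels.append(rng.choice([x for x in range(k) if x != c]))
    return labels
```

With `keep_prob=0.5` roughly half the points are moved, which produces over- and under-clustering errors of the magnitude reported above; `keep_prob=0.95` gives the small-initial-error setting discussed later.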
This observation was also made in other clustering studies, which report that the hierarchical trees constructed from these data have low purity [@Telgarsky12; @HellerG05]. These observations suggest that these data are quite challenging for clustering algorithms. To test how well our algorithms can perform with better data, we prune the data sets by repeatedly finding the outlier in each target cluster and removing it, where the outlier is the point with minimum sum-similarity to the other points in the target cluster. For each data set, we perform experiments with the original (unpruned) data set, a pruned data set with 2 points removed per target cluster, and a pruned data set with 4 points removed per target cluster, which prunes 40 and 80 points, respectively (given that we have 20 target clusters).

### Experiments in the $\eta$-merge model

We first experiment with local clustering algorithms in the $\eta$-restricted merge setting. Here we use the algorithm in Figure \[fig:split-average-linkage\] to perform the splits, and the algorithm in Figure \[fig:merge-average-linkage-relaxed\] to perform the merges. We show the results of running our algorithm on data set A in Figure \[fig:experiments-eta-merge\]. The complete experimental results are in the Appendix. We find that for larger settings of $\eta$, the number of edit requests needed to find the target clustering is small and consistent with our theoretical analysis. The results are better for pruned data sets, where we get very good performance regardless of the setting of $\eta$. The results for the algorithms in Figure \[fig:split-average-linkage\] and Figure \[fig:merge-average-linkage-cc\] (for the correlation-clustering objective) are very favorable as well.

### Experiments in the unrestricted-merge model

We also experiment with algorithms in the unrestricted-merge model.
Here we use the same algorithm to perform the splits, but use the algorithm in Figure \[fig:merge-average-linkage-unrestricted\] to perform the merges. We show the results on data set A in Figure \[fig:experiments-unrestricted-merge\]. The complete experimental results are in the Appendix. We find that for larger settings of $\eta$ our results are better than our theoretical analysis suggests (we only show results for $\eta \ge 0.5$), and performance improves further for pruned data sets. Our investigations show that for unpruned data sets and smaller settings of $\eta$, we are still able to quickly get close to the target clustering, but the algorithms are not able to converge to the target due to inconsistencies in the average-linkage tree. We can address some of these inconsistencies by constructing the tree in a more robust way, which indeed gives improved performance for unpruned data sets.

### Experiments with small initial error

[ We also consider a setting where the initial clustering is already very accurate. In order to simulate this scenario, when we compute the initial clustering, for each document we keep its ground-truth cluster assignment with probability $0.95$, and otherwise reassign it to one of the other clusters, chosen uniformly at random. This procedure usually gives us initial clusterings with over-clustering and under-clustering error between 5 and 20, and correlation-clustering error between 500 and 1000. As expected, in this setting our interactive algorithms perform much better, especially on pruned data sets.
Figure \[fig:small-error-experiments\] displays the results; we can see that in these cases it often takes fewer than one hundred edit requests to find the target clustering in both models.]{}

### Improved performance using a robust average-linkage tree

When we investigate the inconsistencies in the average-linkage trees, we observe that there are “outlier” points attached near the root of the tree, which are incorrectly split off and re-merged by the algorithm without making any progress towards finding the target clustering. We can address these outliers by constructing the average-linkage tree in a more robust way: first find groups (“blobs”) of similar points of some minimum size, compute an average-linkage tree for each group, and then merge these trees using average-linkage. The tree constructed in this fashion may then be used by our algorithms. We tried this approach, using Algorithm 2 from [@BG10] to compute the “blobs”. We find that using the robust average-linkage tree gives better performance for the unpruned data sets, but gives no gains for the pruned data sets. Figure \[fig:tree-construction-experiments\] displays the comparison for the five unpruned data sets. For the pruned data sets, it is likely that the robust tree and the standard tree are very similar, which explains why there is little difference in performance (results not shown).

Discussion
==========

In this work we motivated and studied a new framework and algorithms for interactive clustering. Our framework models practical constraints on the algorithms: we start with an initial clustering that we cannot modify arbitrarily, and we are only allowed to make local edits consistent with user requests. In this setting, we develop several simple, yet effective algorithms under different assumptions about the nature of the edit requests and the structure of the data.
We present a theoretical analysis showing that our algorithms converge to the target clustering after a small number of edit requests, and experimental evidence showing that our algorithms work well in practice. Several directions come out of this work. It would be interesting to relax the condition on $\eta$ in the $\eta$-merge model, and the assumption about the request sequences in the unrestricted-merge model. It is also important to study additional properties of an interactive clustering algorithm. In particular, it is often desirable that the algorithm never increase the error of the current clustering. Our algorithms in Figures \[fig:split-average-linkage\], \[fig:merge-average-linkage-cc\] and \[fig:merge-average-linkage-unrestricted\] have this property, but the algorithm in Figure \[fig:merge-average-linkage-relaxed\] does not.

Complete Experimental Results
=============================

The following figures show the complete experimental results for all the algorithms. Figure \[fig:experimental-results-1-app\] and Figure \[fig:experimental-results-2-app\] give the results in the $\eta$-merge model. Figure \[fig:experimental-results-cc-1-app\] and Figure \[fig:experimental-results-cc-2-app\] give the results in the $\eta$-merge model for the algorithms in Figure \[fig:split-average-linkage\] and Figure \[fig:merge-average-linkage-cc\] (for the correlation-clustering objective). Figure \[fig:experimental-results-unrestricted-1-app\] and Figure \[fig:experimental-results-unrestricted-2-app\] give the results in the unrestricted-merge model.

[^1]: Given two different clusterings into $k$ clusters, $\delta_u$ and $\delta_o$ are at most $k^2$.

[^2]: The data set is available at [voevodski.org/data/businessListingsDatasets/description.html](http://voevodski.org/data/businessListingsDatasets/description.html).

[^3]: http://people.csail.mit.edu/jrennie/20Newsgroups/
---
author:
- 'Jui-Lin Kuo'
- Massimiliano Lattanzi
- Kingman Cheung
- 'José W. F. Valle'
bibliography:
- 'bibliography.bib'
title: Decaying warm dark matter and structure formation
---

Introduction {#Sec:Intro}
============

So far we have failed to identify the nature of what makes up most of the matter present in the Universe, only a small fraction of which is the baryonic material found in stellar objects and the intergalactic medium. The existence of a “dark matter" component on all scales is inferred mainly from the gravitational effect it seems to have on visible matter. No particle of the standard model can play the role of dark matter, hence it must be new physics. For several decades already there has been a consensus that dark matter must be collisionless; yet, to date, its detailed nature remains a mystery [@Bertone2005279]. On the other hand, the discovery of neutrino oscillations [@Kajita:2016cak; @McDonald:2016ixn] indicates the need for nonzero neutrino masses. However, pinning down the detailed properties of neutrinos and the ultimate origin of their mass poses another great challenge for the standard model of particle physics [@Valle:2015pba]. A tantalizing possibility is that cosmological dark matter is deeply related to the generation of neutrino masses [@Lattanzi:2014mia]. For example, dark matter could be a messenger particle associated to the neutrino mass generation [@Ma:2006km; @Hirsch:2013ola; @Merle:2016scw; @Bonilla:2016diq]. Its stability could also reflect a fundamental property of neutrinos, such as their possible Dirac nature [@Chulia:2016ngi]. Or it could follow, for example, as a remnant of the symmetry which accounts for the peculiar pattern of neutrino mixing angles indicated by the oscillation experiments [@Hirsch:2010ru; @Boucenna:2012qb]. In these cases dark matter would be a stable weakly interacting massive particle (WIMP). There are, however, many well-motivated alternatives to WIMP dark matter.
The associated dark matter candidates need not be strictly stable, while still providing a viable cosmology. For example, a decaying gravitino [@Restrepo:2011rj; @Choi:2009ng] provides an attractive scenario for decaying dark matter related to neutrino physics. Here we focus on the possibility that the majoron $J$ plays the role of decaying dark matter. This has a two-fold motivation. Theoretically, the majoron is a very broad concept, emerging as a Nambu-Goldstone boson in any theory where neutrino masses arise from the spontaneous breaking of a continuous global symmetry, such as lepton number [@chikashige:1981ui; @Schechter:1981cv]. On the other hand, as an alternative to the $\Lambda$CDM paradigm, the majoron picture may have the right properties to address some potential drawbacks of the standard scenario, such as the “small scale crisis”, which can be alleviated by the warm nature of the majoron [@Weinberg:2013aya; @Bullock:2017xww]. The majoron is assumed to acquire a mass $m_J$ through gravitational instanton effects that explicitly violate global symmetries [@coleman:1988tj]. The value $m_J$ of the majoron mass cannot be computed by theory. A particularly interesting range for the mass is the keV range. Such a keV majoron was suggested by Berezinsky and Valle (BV) as a viable decaying dark matter candidate [@Berezinsky:1993fm]. On general theoretical grounds, the massive majoron is necessarily unstable, as it couples to neutrinos with a strength proportional to their tiny mass [@Schechter:1981cv]. In order for the majoron to be the dark matter, it must be cosmologically long-lived, i.e. its lifetime $\tau_J$ should be of the order of the age of the Universe $t_0 =13.8\,\mathrm{Gyr}\simeq 4\times10^{17}\,\mathrm{s}$, or larger: $\tau_J \gtrsim t_0$.
In fact, it has been shown that cosmic microwave background (CMB) data place a stronger requirement on the majoron decay rate [@Lattanzi:2007ux], in order to avoid producing too much fluctuation power on the largest CMB scales following the decay of the majoron into neutrinos and the subsequent modifications to the cosmological gravitational potentials. In the framework of a simple one-parameter extension of the standard $\Lambda$CDM model, one finds $\tau_J > 50\,\mathrm{Gyr}$ using WMAP9 data [@Lattanzi:2013uza]. The limit tightens to $\tau_J > 160\,\mathrm{Gyr}$ when Planck 2013 data and linear large-scale structure data from WiggleZ and BOSS are taken into account[^1] [@Audren:2014bca]. Here we adopt the most conservative limit as our reference choice when discussing decaying warm dark matter (DWDM). Indeed, the massive majoron dark matter model has been shown to be consistent with CMB data for interesting choices of the relevant parameters [@Lattanzi:2007ux; @Lattanzi:2013uza; @Audren:2014bca]. If the majoron was in thermal equilibrium with the plasma in the early Universe and decoupled at some later stage, a mass $m_J = \mathcal{O}(\mathrm{keV})$ would produce the right dark matter abundance [@Berezinsky:1993fm]. Moreover, a thermal particle with keV mass would be a warm dark matter (WDM) candidate. For a thermal majoron that decoupled when all the degrees of freedom of the standard model were still excited, measurements of CMB anisotropies yield the following constraints [@Lattanzi:2013uza]: $m_J=(0.158\pm0.007)\,\mathrm{keV}$ (68% C.L.) and $\tau_J>50\,\mathrm{Gyr}$ (95% C.L.). We note that this value of the mass is in tension with constraints coming from observations of the Ly-$\alpha$ forest [@Narayanan:2000tp; @Baur:2015jsy; @Irsic:2017ixq]. There is the alternative possibility that the majoron was never in thermal equilibrium with the other species in the cosmological plasma. For this reason[^2], we will treat the mass of the majoron as a free parameter and consider different values for it in our study.
Since the coupling $g_\nu$ of the majoron to neutrinos is proportional to the neutrino mass [@Schechter:1981cv], the decay $J \to \nu\nu$ can naturally have a very long lifetime on cosmological scales. The CMB constraints for the thermal majoron can be shown to imply $g_\nu < 5\times 10^{-18}$. Other decay channels may be present, depending on the model. For example, in type II seesaw models the majoron can also decay to photons. The effective (one-loop suppressed) coupling to $\gamma$’s can be constrained through X- and $\gamma$-ray observations [@Lattanzi:2013uza; @Bazzocchi:2008fh]. However, since this coupling is rather model dependent, we will disregard the radiative decay channel in what follows. This is, in practice, equivalent to assuming that neutrino masses are generated through the simplest type I seesaw mechanism. In the present paper we examine the effect of decaying warm dark matter on non-linear structure formation, so far unexplored in the literature. The aim of this paper is precisely to fill this gap, and to study the effect of decaying majoron dark matter on structure formation using N-body simulations. In fact, we show that the DWDM majoron expected within the BV framework does indeed yield a viable cosmology, which can differ substantially from that of the standard $\Lambda$CDM paradigm. This happens for two reasons: first, because of the warm nature of the majoron, and second, because it decays. Our paper is organized as follows. In section \[Sec:simulations\] we explain the approach employed in our N-body simulations and demonstrate the convergence of our methodology. In section \[Sec:result\] we describe our results, while in section \[Sec:baryon\] we discuss the possible impact of various baryonic processes. Finally, in section \[sec:conclusion\] we draw our conclusions and summarize our results, commenting on their possible implications.
Additional discussion on structure formation and the WDM mass allowed by Lyman-alpha forest data is given in the appendix.

The simulations {#Sec:simulations}
===============

Methodology {#Subsec:method}
-----------

In order to scrutinize the novel features of the DWDM scenario we perform different cosmological simulations, as listed in Table \[Tab:simulation\]. We consider DM that is either stable or that decays with a lifetime of $50\,\mathrm{Gyr}$, which is the lower limit from the CMB obtained in Ref. [@Lattanzi:2013uza]. We also consider the case of CDM, and two different cases of WDM. This makes a total of six N-body cosmological simulations. To avoid clutter in the following, we use abbreviations for these simulations, as given in Tab. \[Tab:simulation\]. In the CDM simulations, the mass of the DM particle is large enough to suppress free-streaming effects in the initial matter power spectrum; in other words, this is the limit in which the DM temperature-to-mass ratio goes to zero. In the DWDM case, we consider two values of the DM mass, namely $m_J=0.158~\mathrm{keV}$ and $m_J=1.5\,\mathrm{keV}$. The former value, as mentioned in Sec. \[Sec:Intro\], is the one that would give the right relic density for a scalar particle, like the majoron, that decoupled in the early Universe when all the degrees of freedom of the standard model were present. The latter value can be realized if the majoron has a nonthermal distribution, or if it is thermal but its density is diluted by an additional production of entropy after decoupling (both possibilities were described by an effective parameter called $\beta$ in [@Lattanzi:2007ux]). In any case, we will remain agnostic about the production mechanism, and assume a thermal distribution in all our WDM simulations when generating initial conditions (see below).
This is also in view of the fact that, even if the majoron provides a neat particle physics motivation for the DWDM scenario, our results are more general, in the sense that they apply independently of the particular nature of the DM.

  Abbreviation   Initial Conditions   Lifetime             WDM mass
  -------------- -------------------- -------------------- -----------------------
  SCDM           CDM                  $\infty$             N/A
  DCDM           CDM                  $50\,\mathrm{Gyr}$   N/A
  SWDM-M         WDM                  $\infty$             $1.5\,\mathrm{keV}$
  DWDM-M         WDM                  $50\,\mathrm{Gyr}$   $1.5\,\mathrm{keV}$
  SWDM-m         WDM                  $\infty$             $0.158\,\mathrm{keV}$
  DWDM-m         WDM                  $50\,\mathrm{Gyr}$   $0.158\,\mathrm{keV}$

  : Simulation settings \[Tab:simulation\]

The values that we choose for the DM mass are in tension with lower limits obtained from observations of Ly-$\alpha$ flux-power spectra. For example, the recent analysis of Ref. [@Irsic:2017ixq] finds $m>5.3 \,\mathrm{keV}$ at 95% CL for a thermal candidate, from a combined analysis of the XQ-100 and HIRES/MIKE data samples. This limit can be relaxed to $3.5 \,\mathrm{keV}$ by allowing for a non-smooth evolution of the temperature of the intergalactic medium (IGM). We choose to consider smaller values of the mass for two reasons. The first is that the nature of our paper is exploratory, and the main purpose is to study the joint effects of DM decay and free streaming. A small value of the mass allows us to maximize free streaming in order to better highlight the interplay between these two effects, taking into account also the computational resources at our disposal. The second reason is that the interpretation of Ly-$\alpha$ data is somewhat complicated by several factors, for example the aforementioned dependence on the modeling of the IGM thermal history. Indeed, Ref. [@Garzilli:2018jqh] finds that the Ly-$\alpha$ data can be made consistent with models excluded by other analyses.
This, however, does not necessarily imply that thermal DM with the masses considered here can be made consistent with Ly-$\alpha$ observations; a dedicated study would be necessary for that purpose. That said, we have in any case also performed simulations for a “large” DM mass, $m_J = 5.3 \, \mathrm{keV}$. We found no appreciable difference with the CDM case in the range of scales that we are able to probe within our numerical resolution. The results for that case are given in the appendix. A future analysis might consider different values of the mass, using higher-resolution simulations, and also a non-thermal spectrum for the DM. The standard N-body simulation code `Gadget2` [@Springel:2005mi] is adopted to perform the simulations. `Gadget2` follows the evolution of a self-gravitating system of collisionless “particles”, taking into account the expansion of the Universe. These particles are in fact macroscopic objects, each composed of a large number of DM particles. For this reason one usually refers to them as “simulation particles”, as opposed to actual DM particles. In order to implement the effect of decay, we include two modifications in the original `Gadget2` code, following the approach in Refs. [@1987ApJ...321...36S; @Enqvist:2015ara], which addressed the issue of dark matter decaying into dark radiation. Although here we are concerned with dark matter decaying into relativistic neutrinos, the algorithm of the simulation is similar to that of Ref. [@Enqvist:2015ara]. First of all, the mass of each simulation particle is reduced by a small amount at each step of the simulation, in order to account for the effect of DM decay. Specifically, the mass of the simulation particles is altered according to $$M(t) = M(1-R+R\, e^{-t(z)/\tau_J}), \label{eq:Mt}$$ where $M$ is the initial mass of the simulation particles, $R \equiv (\Omega_M - \Omega_b)/\Omega_M$ is the DM fraction of the matter component, and $\Omega_b$ is the baryon contribution.
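The mass update of Eq. \[eq:Mt\] is a one-liner; the following sketch (with the density parameters used in our runs as illustrative defaults) shows the behaviour: only the decaying fraction $R$ of the particle mass is depleted, and the baryon-like fraction $1-R$ survives at late times.

```python
import math

def particle_mass(M0, t, tau_J, Omega_M=0.3, Omega_b=0.04):
    """Simulation-particle mass at cosmic time t for decaying DM,
    M(t) = M0 * (1 - R + R * exp(-t/tau_J)),
    with R = (Omega_M - Omega_b)/Omega_M the DM fraction of the matter.
    t and tau_J must be in the same time unit (e.g. Gyr)."""
    R = (Omega_M - Omega_b) / Omega_M
    return M0 * (1.0 - R + R * math.exp(-t / tau_J))
```

For $\tau_J = 50\,\mathrm{Gyr}$, a particle loses only about a quarter of its DM mass by the present epoch, consistent with the long lifetimes considered here.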
In addition to reducing the simulation-particle mass, we also modify the expansion rate of the Universe in accordance with the energy content at each redshift. Because the dark matter decays into relativistic particles (in the case of the majoron, neutrinos), the expansion history in the DWDM majoron scenario differs from that of the stable DM case. The evolution of the dark matter and decay-product energy densities $\rho_{dm}$ and $\rho_{dp}$ is described by $$\label{Eq:rho_evolution} \begin{gathered} {\dot{\rho}_{dm}}+3\mathcal{H} \rho_{dm} = -\dfrac{a}{\tau_J} \rho_{dm}, \\ {\dot{\rho}_{dp}}+4\mathcal{H} \rho_{dp} = \dfrac{a}{\tau_J} \rho_{dm}, \end{gathered}$$ where $\mathcal{H}$ and $a$ are the conformal Hubble parameter and the scale factor, and the dot represents the derivative with respect to conformal time. Here we assume that the decay products are relativistic, so the pre-factor of the Hubble drag term for this component in Eq. \[Eq:rho\_evolution\] is $4$. On the other hand, $\mathcal{H}$ at each redshift is determined by $$\label{Eq:hubble} \mathcal{H}^2(z) = \dfrac{ 8\pi G}{3} a^2 (\rho_{dm}(z) + \rho_{b}(z) + \rho_{dp}(z) + \rho_\Lambda(z)),$$ where $G$, $\rho_b$, and $\rho_\Lambda$ are the gravitational constant, the baryon energy density, and the energy density of dark energy, respectively. We assume that dark energy is in the form of a cosmological constant. We also neglect the presence of the thermal relic neutrinos produced in the early phases of the cosmological evolution, both at the background and at the perturbation level. Note that $\rho_{b}$ and $\rho_\Lambda$ are unaffected by the energy exchange between the DM and the decay products, hence they evolve as in the standard case (i.e., $\rho_b \propto a ^{-3}$ and $\rho_\Lambda = \mathrm{const}$). Therefore, given the initial values of $\rho_{dm}$ and $\rho_{dp}$, we need to numerically solve Eq. \[Eq:rho\_evolution\] in conjunction with Eq. \[Eq:hubble\]
at each timestep, in order to obtain the precise Hubble parameter describing the expansion of the Universe. For simplicity, in the simulation we neglect the effects of perturbations in the decay products. Indeed, we note that the contribution of the decay products to the energy density is very small, since we consider very long DM lifetimes. Moreover, the decay-produced neutrinos are free-streaming and thus do not cluster, due to their relativistic nature. So the main effect of the decay products is simply to reduce the amount of matter that is able to cluster, and this is fully captured by decreasing the mass of each simulation particle as in Eq. \[eq:Mt\]. We expect this approximation to break down on the largest scales, above the free-streaming length of the decay products, where these are able to cluster. However, this happens around the horizon scale, which is much larger than the largest scales probed by our simulations, which use a box size of $50 h^{-1} {\ensuremath{\,\mathrm{Mpc}}}$. Moreover, the power spectrum on those scales can be reliably computed using linear theory, if necessary. As a result, we expect that additional effects related to perturbations in the decay-produced neutrinos will be small and will not change our results significantly. Note that we do not include baryons in our simulation and thus neglect, among other things, baryonic feedback processes. The reason, again, is that, given the scope of our paper, we want to focus on the interplay between DM decay and free streaming. The inclusion of baryonic effects, through hydrodynamic simulations, would of course be mandatory for a rigorous comparison between the predictions of the “full-fledged” DWDM scenario and the observations. We comment, anyway, on the possible effects of baryonic physics in Sec. \[Sec:baryon\].
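As a sanity check of the background evolution in Eqs. \[Eq:rho\_evolution\] and \[Eq:hubble\], a toy Euler integration in conformal time can be written in a few lines. This is a sketch for building intuition, in units where $H_0=1$ and densities are measured in units of today's critical density; it is not the integrator coupled to `Gadget2`:

```python
import math

def evolve_background(tau_J=math.inf, a_init=0.01, deta=2e-4,
                      Om_dm=0.26, Om_b=0.04, Om_L=0.7):
    """Euler-integrate rho_dm' = -3*Hc*rho_dm - (a/tau)*rho_dm and
    rho_dp' = -4*Hc*rho_dp + (a/tau)*rho_dm in conformal time,
    with Hc = a*sqrt(rho_tot) (8*pi*G/3 = 1 in these units).
    Returns (rho_dm, rho_dp) at a = 1; tau_J is in units of 1/H0."""
    a = a_init
    rho_dm = Om_dm * a**-3
    rho_dp = 0.0
    while a < 1.0:
        rho_b = Om_b * a**-3
        H_conf = a * math.sqrt(rho_dm + rho_b + rho_dp + Om_L)
        decay = (a / tau_J) * rho_dm if math.isfinite(tau_J) else 0.0
        d_dm = -3.0 * H_conf * rho_dm - decay
        d_dp = -4.0 * H_conf * rho_dp + decay
        a_new = a + a * H_conf * deta
        rho_dm += d_dm * deta
        rho_dp += d_dp * deta
        a = a_new
    return rho_dm, rho_dp
```

With a stable component ($\tau_J \to \infty$) the integration recovers $\rho_{dm}(a{=}1)\simeq\Omega_{dm}$ up to the Euler discretization error; switching on a finite lifetime depletes $\rho_{dm}$ and feeds $\rho_{dp}$, as in the DWDM runs.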
Initial Conditions {#Subsec:IC}
------------------

![Left panel: comparison of the initial-condition matter power spectra for $\Lambda$CDM (red solid line), WDM with mass $1.5\,\mathrm{keV}$ (blue solid line) and WDM with mass $0.158\,\mathrm{keV}$ (black solid line). The vertical “Nyquist” band lies above the limit set by the average size of the simulation particles. Right panel: relative difference between the WDM and CDM power spectra, for WDM with mass $1.5\,\mathrm{keV}$ (blue solid line) and $0.158\,\mathrm{keV}$ (black solid line). Here the matter power spectra are obtained from the output of `2LPTic`, hence the effect of finite numerical resolution is already included. The cut-off due to the free-streaming of WDM can be clearly seen.[]{data-label="Fig:ICPkcompare"}](Fig/IC_compare_simulation.pdf "fig:"){width="45.00000%"} ![](Fig/IC_compare_ratio_simulation.pdf "fig:"){width="45.00000%"}

To generate initial conditions for the N-body simulations, one uses linear theory to evolve the primordial perturbations in $k$ space up to some redshift deep in the matter-dominated era, but still early enough for the linear predictions to be valid.
This is the initial redshift, in our case $z=99$, from which the N-body simulations start. Since this initial time is well before the DM decay kicks in, the initial power spectra for the stable and decaying DM cases are the same. We adopt the fitting form of the CDM matter power spectrum $P_\mathrm{CDM}$ given in Ref. [@Eisenstein:1997jh], which is based on linear theory, to compute the initial power spectrum for the CDM initial conditions. In the WDM scenario, we estimate the power spectrum at the initial redshift as $$P_\mathrm{WDM}(k) = T^2_{\mathrm{WDM}}(k)\times P_\mathrm{CDM}(k) \, ,$$ where $T_\mathrm{WDM}(k)$ is the transfer function given in Ref. [@Bode:2000gq] (where it is called $T_\chi$), which accounts for the cut-off in the matter power spectrum due to the free-streaming effect. The initial transfer function for thermal WDM can be written as $$T_{\mathrm{WDM}}(k) = \left(1+(\alpha k)^{2\nu} \right)^{-5/\nu}, \label{eq:wdmtransfer}$$ where $\alpha = 0.048(\Omega_{DM}/0.4)^{0.15} (h/0.65)^{1.3} (\mathrm{keV}/m_\mathrm{DM})^{1.15} (1.5/g)^{0.29}\,{\ensuremath{\,\mathrm{Mpc}}}$ and $\nu =1.2$. Here $\Omega_{DM}$ is the dark matter energy density, $m_\mathrm{DM}\equiv m_J$ is the dark matter mass, and $g$ is the effective number of dark matter degrees of freedom ($g=1$ for the majoron). Note that $\alpha$ is a critical length that determines the cut-off scale in the initial power spectrum. Using $m_J=0.158\,\mathrm{keV}$ or $1.5\,\mathrm{keV}$, $g=1$, and the values listed below for the other parameters, the transfer function reduces the initial fluctuation amplitude to a fraction $1/e$ of the corresponding CDM value at $k\simeq1$ and $17\,h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$, respectively. We can take these values as rough estimates of the free-streaming wavenumber.
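The $1/e$ scales quoted above follow directly from inverting the fit of Eq. \[eq:wdmtransfer\]: $T=1/e$ at $\alpha k = (e^{\nu/5}-1)^{1/(2\nu)}$. A short numerical check, using $\Omega_{DM}=\Omega_m-\Omega_b=0.26$ and $h=0.7$ from our parameter choices (and treating $1/\alpha$ and $k$ in the same length unit, as in the quoted estimates):

```python
import math

def wdm_alpha(m_keV, Omega_DM=0.26, h=0.7, g=1.0):
    """Free-streaming length alpha (in Mpc) from the fit quoted above."""
    return (0.048 * (Omega_DM / 0.4)**0.15 * (h / 0.65)**1.3
            * (1.0 / m_keV)**1.15 * (1.5 / g)**0.29)

def T_wdm(k, alpha, nu=1.2):
    """Thermal-WDM transfer function T(k) = (1 + (alpha*k)^(2nu))^(-5/nu)."""
    return (1.0 + (alpha * k)**(2.0 * nu))**(-5.0 / nu)

def k_one_over_e(alpha, nu=1.2):
    """Wavenumber at which T(k) = 1/e, inverting the fit analytically."""
    return (math.exp(nu / 5.0) - 1.0)**(1.0 / (2.0 * nu)) / alpha
```

For $m_J = 0.158$ and $1.5\,\mathrm{keV}$ this gives $k_{1/e} \approx 1.2$ and $17$, respectively, matching the rough free-streaming wavenumbers quoted in the text.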
In order to generate the initial conditions for the cosmological simulations we used the `2LPTic` code [@Crocce:2006ve], based on second-order Lagrangian perturbation theory. In Fig. \[Fig:ICPkcompare\], we show the initial ($z=99$) power spectra for CDM and for the two WDM models considered here, as given by `2LPTic`; hence numerical limitations are already included [^3]. Note that when we extract initial conditions from the power spectrum, we use the same random seed for each pair of stable/decaying DM simulations. In other words, the two simulations of each pair have exactly the same initial conditions. The simulations start from redshift $z=99$. The input cosmological parameters are: the matter energy density $\Omega_{m}=0.3$, the cosmological constant energy density $\Omega_{\Lambda}=0.7$, the baryon energy density $\Omega_{b}=0.04$, the dimensionless Hubble constant $h=0.7$, the scalar spectral index $n_s=0.96$, and the power spectrum normalization $\sigma_{8}=0.8$. For the WDM simulations, we give the simulation particles thermal velocities at $z=99$, consistently with the initial spectrum. This has, however, a negligible effect on nonlinear structure formation, since thermal velocities have already largely decayed away by $z=99$ due to the expansion of the Universe. We use $512^3$ simulation particles in a cubic box of side $50\,h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$. The mass $M_\mathrm{sim}$ of each simulation particle at the initial time is $M_\mathrm{sim}\simeq7.8\times 10^{7} \,h^{-1} M_{\odot}$. Periodic boundary conditions are employed in order to avoid boundary effects.

Numerical convergence tests {#Subsec:convergence}
---------------------------

In this section we quantify the degree of convergence of our simulations. We do this by considering simulations with different volumes and numbers of particles.
In particular, we change the resolution of the simulations at fixed volume, or change the size of the simulation volume at fixed resolution. This allows us to assess numerical limitations and to define the limits of validity of the results inferred from our simulations. There are two kinds of numerical limitations. The first is sample variance, also known as “cosmic variance”. In a simulation, the source of sample variance is the finite volume of the box and the fact that each simulation only provides a single realization of the underlying statistical distribution of particles. Sample variance prevents us from precisely predicting the density field on large scales [@vanDaalen:2011xb]. We use for our simulations a box size of $50\,h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$, corresponding to a fundamental mode $k \simeq 0.13 \,h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$. The second kind of numerical limitation is due to the discreteness of the simulation particles, i.e. to the fact that we adopt particles to represent a continuous density field. The overall resolution limit of the simulations is set by both the box size and the number of particles, and is described by the Nyquist wavenumber $k_\mathrm{Nyq}$, $$k_\mathrm{Nyq} = \pi (N/V)^{1/3}. \label{eq:nyq}$$ Beyond the Nyquist wavenumber, the accuracy of the power spectrum degrades strongly. For the parameters used in our baseline simulations, $k_\mathrm{Nyq} \simeq 32 \, h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$. The finite resolution of the simulations has two consequences. The first is shot noise: spurious power present on all scales, which arises because we use a finite number of particles to represent a continuous density field. The amplitude of the shot noise is independent of the wavenumber $k$ and depends instead on the number of particles in the simulation, and therefore on the resolution [@Colombi:2008dw; @vanDaalen:2011xb].
The second consequence is a discreteness peak in the power spectrum at twice the Nyquist limit. This small excess of power is a common feature of all N-body simulations. It is however more of a problem for WDM simulations than for CDM ones, since the former have much less power at small scales, making this numerical artifact more evident. This causes the well-known spurious halo issue in standard WDM simulations [@Wang:2007he]. We will discuss this effect in more detail in Sec. \[Subsec:HMF\]. As anticipated above, in order to test the convergence of our simulations, we compare the results from runs with different box sizes and particle resolutions. To this purpose, we perform simulations with $N=128^3,\,256^3,\,512^3$ and $L=V^{1/3}=50,\,100\,h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$. We then compare the resulting power spectra at $z=0$ with that from our baseline run with $L= 50\,h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$ and $N= 512^3$. We concentrate on the DWDM-m case, i.e. the one with the smaller dark matter particle mass. This is the case with the stronger suppression of small-scale power, for which numerical issues at small scales are in principle more relevant. In order to assess the impact of the box size, we do the following. We compute the ratio of the matter power spectra at $z=0$ from the simulations with $\{L,\,N\} = \{50h^{-1}{\ensuremath{\,\mathrm{Mpc}}},\,128^3\}$ and $\{100h^{-1}{\ensuremath{\,\mathrm{Mpc}}},\,256^3\}$. The latter has a different number of particles to ensure that the two simulations have the same resolution ($k_\mathrm{ny} \simeq 8 h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$ in both cases), so that we are isolating the effects of the finite simulation volume. The ratio between the spectra should give us a rough measure of the numerical error associated with a finite volume size of $50h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$, at that resolution.
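The ratios used in this comparison can be formed as follows; a small sketch, where `np.interp` handles the (generally different) $k$-binning of the two runs:

```python
import numpy as np

def spectrum_ratio(k_a, pk_a, k_b, pk_b):
    """Ratio P_a(k)/P_b(k), interpolating run b onto run a's wavenumbers."""
    return pk_a / np.interp(k_a, k_b, pk_b)
```

The same helper applies to the resolution tests and to the decaying/stable comparisons later in the paper.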
We also do the same for the pair of simulations with $\{L,\,N\} = \{50h^{-1}{\ensuremath{\,\mathrm{Mpc}}},\,256^3\}$ and $\{100h^{-1}{\ensuremath{\,\mathrm{Mpc}}},\,512^3\}$ ($k_\mathrm{ny} \simeq 16 h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$) to be confident that our results reliably extrapolate to our reference simulation with $\{L,\,N\} = \{50h^{-1}{\ensuremath{\,\mathrm{Mpc}}},\,512^3\}$ and $k_\mathrm{ny} \simeq 32 h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$. Of course a more direct way would be to perform a simulation with $\{L,\,N\} = \{100h^{-1}{\ensuremath{\,\mathrm{Mpc}}},\,1024^3\}$, but we chose not to follow this path due to our limited computational resources. We show the ratio of the spectra computed in this way in Fig. \[Fig:Pk\_ratio\_50\_100\]. The large-scale power of the simulations does not match, due to cosmic variance. However, we see that for both resolutions the relative difference between $L=50$ and $100 h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$ is $10\%$ or better at wavenumbers above $k\simeq 2 h {\ensuremath{\,\mathrm{Mpc}}}^{-1}$. This makes us confident that the same applies at the resolution of our reference simulation. ![ Ratio of power spectra obtained from simulations with box size $L=50$ and $100 h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$, at fixed resolution. The blue and yellow curves correspond to $k_\mathrm{ny} \simeq 8$ and $16 h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$, respectively. The green band shows a $10\%$ deviation between the spectra.[]{data-label="Fig:Pk_ratio_50_100"}](Fig/Pkz0_ratio_50_100_new.pdf){width="45.00000%"} Then, to study the effect of finite resolution, in the left panel of Fig. \[Fig:Pk\_z0\_compare\_dif\_scale\] we show matter power spectra at $z=0$ from simulations with a $50^3\,h^{-3}{\ensuremath{\,\mathrm{Mpc}}}^3$ box and $N^{1/3}=128,\,256,\,512$. The values of $k_\mathrm{Ny}$ for these runs are $8,\,16$ and $32 \, h\,{\ensuremath{\,\mathrm{Mpc}}}^{-1}$.
The right panel of the same figure shows the corresponding plot for a $100^3\,h^{-3}{\ensuremath{\,\mathrm{Mpc}}}^3$ box size, with $k_\mathrm{Ny}= 4,\,8$ and $16 \, h\,{\ensuremath{\,\mathrm{Mpc}}}^{-1}$. ![The matter power spectrum at $z=0$ for different simulation resolutions with $V=50^3\,h^{-3}{\ensuremath{\,\mathrm{Mpc}}}^3$ (left panel) and $V=100^3\,h^{-3}{\ensuremath{\,\mathrm{Mpc}}}^3$ (right panel). One can see that over almost the entire range of scales the matter power spectra at different resolutions converge, all the way up to the Nyquist limit.[]{data-label="Fig:Pk_z0_compare_dif_scale"}](Fig/Pkz0_dif_res_compare_50Mpc.pdf "fig:"){width="45.00000%"} ![](Fig/Pkz0_dif_res_compare_100Mpc.pdf "fig:"){width="45.00000%"} On large and intermediate scales, the matter power spectra at different resolutions converge fairly well, starting to deviate beyond the Nyquist wavenumber $k_\mathrm{Nyq}$ of the given resolution. In particular, the excess of power above the Nyquist wavenumber is a manifestation of particle shot noise. In order to better highlight this effect, we show in the two panels of Fig. \[Fig:Pk\_z0\_compare\_dif\_scale\_ratio\] the ratios between each of the spectra and a reference spectrum $P_{512}(k)$ for the $N=512^3$ case, evaluated at $z=0$. It can be seen that the simulations agree to within 5% or better below the Nyquist wavenumber. In particular, at $k=k_\mathrm{Ny}/2$ the error at $z=0$ is $5.7\%$ for the $128^3$ particles run and $3.4\%$ for the $256^3$ run, for a box size of $50\,h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$.
The corresponding numbers for the $100\,h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$ box size are $2.5\%$ and $2.6\%$. ![Effect of changing the simulation resolution at fixed box size $L$. The solid curves show the ratio between the matter power spectra at $z=0$ of Fig. \[Fig:Pk\_z0\_compare\_dif\_scale\], obtained with the settings for the particle number indicated in the legend, and the spectrum for $N=512^3$ chosen as reference. The left (right) panel is for $L=50\,(100) \,h^{-1}{\ensuremath{\,\mathrm{Mpc}}}$.[]{data-label="Fig:Pk_z0_compare_dif_scale_ratio"}](Fig/Pkz0_dif_res_compare_50Mpc_ratio.pdf "fig:"){width="45.00000%"} ![](Fig/Pkz0_dif_res_compare_100Mpc_ratio.pdf "fig:"){width="45.00000%"} From the results presented in this section, it is clear that the parameter set $V=50^3\,h^{-3}{\ensuremath{\,\mathrm{Mpc}}}^3$ and $N=512^3$ provides an adequate benchmark choice for our simulations. In particular, we find that our simulations have $\sim 10\%$ accuracy or better in the wavenumber range $(1 - 20)\, h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$.

Simulation Results {#Sec:result}
==================

By comparing the results of our simulations, we can infer the effect of DWDM on structure formation. In the following, we derive our results through detailed analyses of the density field, the matter power spectrum and the halo mass function inferred from our N-body simulations.

Density Field
-------------

![Comparison of the density fields at $z=0$.
The first and second rows correspond to stable and decaying cases (with lifetime fixed at the CMB limit, $50\,\mathrm{Gyr}$ [@Lattanzi:2007ux]), respectively. The first, second and third columns correspond to three different paradigms: $\Lambda$CDM, and WDM with masses $m_J = 1.5\,\mathrm{keV}$ and $m_J = 0.158\,\mathrm{keV}$, from left to right. The horizontal and vertical axes are given in units of $h^{-1}\, {\ensuremath{\,\mathrm{Mpc}}}$ and represent the size of the simulation box. One clearly sees the free-streaming effect of WDM, indicated by the suppression of structure in the density field of the WDM simulations.[]{data-label="Fig:densityfield"}](Fig/densityfield_SCDM_z0.pdf "fig:"){width="31.00000%"} ![](Fig/densityfield_SWDM-M__z0.pdf "fig:"){width="31.00000%"} ![](Fig/densityfield_SWDM-m_z0.pdf "fig:"){width="31.00000%"} ![](Fig/densityfield_DCDM_z0.pdf "fig:"){width="31.00000%"} ![](Fig/densityfield_DWDM-M__z0.pdf "fig:"){width="31.00000%"} ![](Fig/densityfield_DWDM-m_z0.pdf "fig:"){width="31.00000%"} ![](Fig/colorbar.pdf "fig:"){width="50.00000%"} ![Comparison of the relative density fields at $z=0$. The left figure represents the relative density field of SCDM and DCDM, the middle figure the relative density field of SWDM-M and DWDM-M, and the right figure the relative density field of SWDM-m and DWDM-m, respectively. The horizontal and vertical axes are given in the same units as in Fig. \[Fig:densityfield\]. One sees that in most of the regions, the density is larger in the stable case.
However, there are small changes due to subtle features of the decay scenario, see text for explanation.[]{data-label="Fig:rel_densityfield"}](Fig/densityfield_rel_CDM_z0_new.pdf "fig:"){width="31.00000%"} ![](Fig/densityfield_rel_WDM-M__z0_new.pdf "fig:"){width="31.00000%"} ![](Fig/densityfield_rel_WDM-m_z0_new.pdf "fig:"){width="31.00000%"} ![](Fig/colorbar_rel.pdf "fig:"){width="50.00000%"} In Fig. \[Fig:densityfield\], we compare the density fields extracted from the different simulations. The first, second and third columns correspond to three different scenarios: CDM, WDM with mass $m_J = 1.5\,\mathrm{keV}$ and WDM with mass $m_J = 0.158\,\mathrm{keV}$. The density field is calculated from the particle distribution using the triangular shaped cloud scheme and further smoothed with a Gaussian filter. The first and second rows in Fig. \[Fig:densityfield\] correspond to the stable and decaying cases. The density contrast $\delta$ is defined as $$\label{density_contrast} \delta = \dfrac{\rho}{\bar{\rho}} -1,$$ where $\rho$ is the local density and $\bar{\rho}$ is the average density. The color scale we use in Fig. \[Fig:densityfield\] is the logarithm of $\delta+1$, i.e. of the ratio of the local density $\rho$ to the average density $\bar{\rho}$. With the density field and this color scale, one can see how cosmic structures form in the different cosmologies. By comparing the stable $\Lambda$CDM and the stable WDM simulations, one can clearly see the suppression of structure in the SWDM case, due to the associated free-streaming effect. However, the effect of decay is not obvious from a simple visual comparison of the corresponding stable and decaying density fields. The well-known suppression of small-scale structure characteristic of WDM is evident when comparing the different columns in Fig. \[Fig:densityfield\]. We can see that a large portion of the small-scale structure is smoothed out in the WDM simulation with WDM mass $m_J = 0.158\,\mathrm{keV}$, due to the large free-streaming length.
In fact, the free-streaming wavenumber is only one order of magnitude larger than the fundamental mode of the box. On the other hand, the density fields of the $\Lambda$CDM simulations and those of the WDM simulations with WDM mass $m_J = 1.5\,\mathrm{keV}$ look quite similar, because on the scales probed by the simulations free streaming is rather weak in this case. However, a lack of small-scale power in the WDM simulation can still be observed. Note that the small peaks in the density field of the WDM simulations are due to spurious halos from finite resolution effects and numerical fragmentation, as discussed in Sec. \[Subsec:convergence\]. We will also discuss such effects on the halo mass function in Sec. \[Subsec:HMF\]. It is difficult to appreciate the effect of decay by a quick visual comparison of the density fields in Fig. \[Fig:densityfield\]. Thus, in order to better isolate the effect of decay, we turn to the [*relative density field*]{} $\rho_S/\rho_D$ of the stable (S) over the decaying (D) case, shown in Fig. \[Fig:rel\_densityfield\]. In that figure, the color scale refers to $\log_{10}{\rho_S/\rho_D} = \log_{10}[(\delta_S+1)\exp(t_0/\tau_J)] - \log_{10}(\delta_D+1) $. One can see that the decay reduces the density in most regions of the density field, especially near the centers of halos and in the interior of filaments. This follows from the change in the gravitational potential due to the decay, which makes the potential wells shallower. This is reflected in the fact that most regions are (relatively) overdense in the stable case, as indicated by the reddish regions in Fig. \[Fig:rel\_densityfield\]. Note that the changes in the gravitational potential also affect the dynamics of the simulation, causing diffusion of the simulation particles. This makes the final density distribution more diffuse with respect to the stable case.
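Both the density contrast of Eq. (\[density\_contrast\]) and the color scale of Fig. \[Fig:rel\_densityfield\] are straightforward to reproduce. A minimal sketch, with a nearest-grid-point deposit standing in for the triangular-shaped-cloud assignment actually used, and with the age of the universe $t_0 = 13.8\,\mathrm{Gyr}$ as our assumed value (not quoted in the text):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

T0, TAU_J = 13.8, 50.0  # assumed age of the universe; majoron lifetime (Gyr)

def density_contrast(positions, box_size, n_grid=256, smooth_cells=2.0):
    """delta = rho/rho_bar - 1 on a periodic grid, Gaussian-smoothed.

    Nearest-grid-point deposit for brevity; the paper uses the
    triangular shaped cloud scheme."""
    edges = [np.linspace(0.0, box_size, n_grid + 1)] * 3
    rho, _ = np.histogramdd(positions, bins=edges)
    delta = rho / rho.mean() - 1.0
    return gaussian_filter(delta, sigma=smooth_cells, mode="wrap")

def log_density_ratio(delta_s, delta_d, t0=T0, tau=TAU_J):
    """Color scale of the relative density field:
    log10(rho_S/rho_D) = log10[(delta_S+1) exp(t0/tau)] - log10(delta_D+1)."""
    return (np.log10((delta_s + 1.0) * np.exp(t0 / tau))
            - np.log10(delta_d + 1.0))
```

For equal overdensities the ratio reduces to the mean-density offset $\log_{10}e^{t_0/\tau_J}\simeq 0.12$ dex, so blue regions in Fig. \[Fig:rel\_densityfield\] are those where the decaying run is overdense by more than this offset.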
Therefore, we can see that regions near the periphery of halos and filaments are denser in the decaying DM case, and appear as the bluish regions in Fig. \[Fig:rel\_densityfield\].

Matter Power Spectrum
---------------------

![Matter power spectra derived from our simulations, for the standard $\Lambda$CDM$\equiv$SCDM, DCDM, SWDM-M, DWDM-M, SWDM-m and DWDM-m cases, at redshifts $z=0, 1, 2, 3$. The solid lines represent the stable case, while the dashed ones correspond to the decaying case. The different colors are associated with different DM masses, and the pink band represents length scales smaller than the Nyquist limit. One can clearly see the evolution of the matter power spectrum, as well as late-time decay effects. Further details are given in the text. []{data-label="Fig:Pkcompare"}](Fig/Pkz3_compare.pdf "fig:"){width="45.00000%"} ![](Fig/Pkz2_compare.pdf "fig:"){width="45.00000%"} ![](Fig/Pkz1_compare.pdf "fig:"){width="45.00000%"} ![](Fig/Pkz0_compare.pdf "fig:"){width="45.00000%"} ![Left panel: Ratio between the matter power spectra of decaying and stable dark matter, for CDM (red), WDM-M (orange), WDM-m (blue) at $z=0$. Right panel: Same as the left panel, at $z=1$. The effect of the decay is manifest on all scales, but is more evident on small scales, and more evident for WDM than for CDM.[]{data-label="Fig:Pk_ratiocompare"}](Fig/Pkz0_ratio_compare.pdf "fig:"){width="45.00000%"} ![](Fig/Pkz1_ratio_compare.pdf "fig:"){width="45.00000%"} The matter power spectrum of the simulations is calculated using the `ComputePk` code [@2014ascl.soft03015L] with the triangular shaped cloud scheme. Note that, since in the simulations we neglect the decay-produced neutrinos, we only consider the overdensity of the DM and baryons in calculating the matter power spectrum. In Fig.
\[Fig:Pkcompare\], we show the matter power spectrum at $z=\{0,\,1,\,2,\,3\}$ for each of the simulations that we have performed, focusing on the differences between the stable and decaying DM cases. The solid lines represent the matter power spectra from the simulations with stable DM, while the dashed lines correspond to the matter power spectra obtained in the simulations with decaying DM. The dashed vertical line corresponds to the Nyquist wavenumber $k_\mathrm{Nyq}$ defined in Eq. (\[eq:nyq\]), i.e. the scale of the average interparticle distance – the resolution limit of our simulations. For our simulation parameters, $k_\mathrm{Nyq}\simeq 32\,h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$. From Fig. \[Fig:Pkcompare\], one can easily see that the effect of decay becomes manifest at lower redshifts. This is due to the late decay time of the DM candidate[^4]. To quantify the overall effect of decay, we focus on the matter power spectrum at $z=0$. As a reference, the scale of non-linearity at $z=0$ is roughly $0.15\,h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$. By comparing SWDM-m and SWDM-M with the standard $\Lambda$CDM$\equiv$SCDM paradigm, one can see that the matter power spectra on large scales (small $k$) are identical, but differ on small scales, the SWDM spectra being suppressed due to the free-streaming effect of WDM. This effect is very evident for the SWDM-m case, which has the larger free-streaming length. The difference between SWDM-M and SCDM is instead small, and visible only at the largest $k$'s, because the free-streaming length for WDM with $m_J = 1.5\,\mathrm{keV}$ is still quite small, and thus free streaming does not cause much suppression on the scales probed in our simulations. By comparing DWDM-m and DWDM-M with SCDM, one can see that the small-scale suppression due to the free-streaming effect of WDM still exists. Moreover, there is further suppression on all scales caused by the effect of decay.
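The spectra in Fig. \[Fig:Pkcompare\] are produced with `ComputePk`; an equivalent minimal estimator, starting from a gridded overdensity field $\delta$ in a periodic box and shell-averaging $|\delta_k|^2$ between the fundamental mode and the Nyquist wavenumber, might look like the following sketch:

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=8):
    """Shell-averaged P(k) of an overdensity grid in a periodic box."""
    n = delta.shape[0]
    cell = box_size / n
    delta_k = np.fft.rfftn(delta) * cell**3        # FT with volume weight
    power = np.abs(delta_k)**2 / box_size**3       # per-mode power
    kf = 2.0 * np.pi / box_size                    # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2)
    # average |delta_k|^2 in linear shells between k_f and k_Nyq
    bins = np.linspace(kf, np.pi * n / box_size, n_bins + 1)
    idx = np.digitize(kmag.ravel(), bins)
    counts = np.bincount(idx, minlength=n_bins + 2)[1:n_bins + 1]
    sums = np.bincount(idx, weights=power.ravel(),
                       minlength=n_bins + 2)[1:n_bins + 1]
    good = counts > 0
    k_centers = 0.5 * (bins[1:] + bins[:-1])
    return k_centers[good], sums[good] / counts[good]
```

With this normalization, an uncorrelated (white-noise) grid of unit variance returns the flat spectrum $P = V/N_{\rm cells}$, which is also the expected shot-noise level for $N_{\rm cells}$ "particles".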
The presence of the decay, inherent in the BV model [@Berezinsky:1993fm], reduces the matter energy density in the universe; hence the growth factor is reduced, which delays the formation of structure. The decay-induced suppression does not show a strong dependence on scale. This should be contrasted with the free-streaming effect of WDM, which depends strongly on scale through the cut-off in the initial transfer function, related in turn to the mass and temperature of the WDM. A lighter thermal WDM causes a cut-off in the matter power spectrum on a larger scale, as originally envisaged in the BV model. The suppression due to the decay seems however to gradually decrease towards large scales. To better assess this behaviour, we compute the ratio between the decaying and stable power spectra, $P_{decaying}(k)/P_{stable}(k)$, for each of the three pairs of simulations. These are shown, for $z=0$ and $z=1$, in Fig. \[Fig:Pk\_ratiocompare\], which can be compared with Fig. 2 of Ref. [@Enqvist:2015ara], showing the same quantity for the CDM case. First of all, we note that our results for CDM are very consistent with those of Ref. [@Enqvist:2015ara], even considering the slightly different values of the lifetime between their work and the present one. Let us briefly discuss the features of the curves in Fig. \[Fig:Pk\_ratiocompare\]. We first find confirmation that the decay suppresses the spectrum on all scales under consideration, since the curves always lie below unity. On the largest scales shown in the plot, the three ratios converge to a constant common value. That the large-scale behaviour of the curves is common is to be expected, since above the free-streaming length WDM and CDM behave in the same way. On the other hand, the suppression due to the decay is more evident on small scales, in all the cases under consideration. As noted by Enqvist et al.
[@Enqvist:2015ara], this is due to the fact that mode-mode couplings cause differences that are small in the linear regime to be enhanced by the nonlinear evolution. In fact, this effect is less pronounced in the right panel of Fig. \[Fig:Pk\_ratiocompare\], corresponding to $z=1$, when more scales were still in the linear regime than at $z=0$ (shown in the left panel). Another interesting feature that can be noticed in Fig. \[Fig:Pk\_ratiocompare\] is that the nonlinear enhancement of the effect of the decay on small scales is stronger for lighter WDM. There is a distinct drop in the curve, especially evident in the $m_J=0.158\,\mathrm{keV}$ case, at scales right above the free-streaming length. In other words, it seems that the combination of the cutoff in the linear power spectrum due to the WDM thermal velocities and of the nonlinear evolution enhances the effect of the DM decay. To summarize, by comparing the decaying and the stable cases, one can see that the effect of decay is to suppress power on all scales, with the suppression being more severe on the small, nonlinear scales. In contrast, the effect of WDM [*per se*]{} is to suppress the matter power spectrum on small scales, depending on the mass of the WDM candidate. Also, the small-scale suppression due to the decay is more evident for WDM, as one can see by comparing the matter power spectrum of the $\Lambda$CDM simulations with those corresponding to the WDM simulations.

Halo Mass Function {#Subsec:HMF}
------------------

![ Evolution of the halo mass function for the standard $\Lambda$CDM$\equiv$SCDM paradigm (black circles) compared with the simulations corresponding to DCDM (blue triangles), SWDM-m (green squares), and DWDM-m (red diamonds). The dashed lines of corresponding colors in the $z=0$ panel represent our derived halo mass function fits based on the given cosmology and the data points obtained from our simulations.
The data points of our simulations that do not fit well to the theoretical WDM halo mass function are mainly due to spurious halos. By comparing the stable and decaying cases, we can see that the effect of decay is to reduce the number density of halos for all mass scales. However, the effect of the warm DM nature is seen by setting a cut-off mass, which is the mass scale that the halo mass functions of WDM simulations start to deviate from those of CDM simulations. []{data-label="Fig:Halocompare"}](Fig/HMF_z3_compare.pdf "fig:"){width="45.00000%"} ![](Fig/HMF_z2_compare.pdf "fig:"){width="45.00000%"} ![](Fig/HMF_z1_compare.pdf "fig:"){width="45.00000%"} ![](Fig/HMF_z0_compare_fit.pdf "fig:"){width="45.00000%"} The halo mass function is defined as the number density of DM halos per unit logarithmic mass interval. In order to estimate the halo mass function, we need to identify halos, i.e., bound objects, within the large set of particles in our simulations. For this purpose, we make use of the parallel halo finder package $\texttt{AHF}$ [@Knollmann:2009pb] in order to calculate the halo mass function based on our snapshots of simulations.
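Once halo masses are available from the finder, the mass function estimate itself reduces to a histogram in $\ln M$ divided by the simulated comoving volume. A minimal pure-Python sketch (the binning choices here are illustrative, not those of our pipeline):

```python
import math

def halo_mass_function(masses, volume, n_bins=20):
    """Estimate dn/dlnM: number density of halos per unit ln(mass).

    masses : halo masses from the halo finder [Msun/h]
    volume : comoving box volume [h^-3 Mpc^3]
    """
    lo, hi = math.log(min(masses)), math.log(max(masses))
    dlnm = (hi - lo) / n_bins or 1.0   # guard against a single-mass catalogue
    counts = [0] * n_bins
    for m in masses:
        i = min(int((math.log(m) - lo) / dlnm), n_bins - 1)
        counts[i] += 1
    centres = [math.exp(lo + (i + 0.5) * dlnm) for i in range(n_bins)]
    dndlnm = [c / (volume * dlnm) for c in counts]
    return centres, dndlnm
```

Plotting `dndlnm` against `centres` for each snapshot gives curves directly comparable to those in Fig. \[Fig:Halocompare\].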
In $\texttt{AHF}$, the adaptive mesh refinement algorithm is adopted to identify clumps in the density field. Therefore, it can build up the hierarchical structure of the halos and sub-halos found in the snapshots. After iteratively removing the particles not bound by the gravitational potential of the halo and refining the halo edge, the properties of each halo are finally determined by the particles within its virial radius $R_{vir}$. Here $R_{vir}$ is defined as the radius where the density profile of the particles drops below $\Delta_{vir} \rho_c$, where $\Delta_{vir}$ is a constant depending on the cosmology and $\rho_c$ is the critical density of the universe. In Fig. \[Fig:Halocompare\], the black circle, blue triangle, green square, and red diamond represent the halo mass functions obtained from the $\Lambda$CDM$\equiv$SCDM paradigm and the simulations corresponding to DCDM, SWDM-m and DWDM-m, respectively. To avoid cluttering the figures, we do not show the halo mass functions corresponding to DWDM-M and SWDM-M. The dashed lines of corresponding colors in the $z=0$ panel are the halo mass function fits calculated based on the given cosmology and the halo mass functions obtained from our simulations. Note that, as a result of the strong cut-off in the WDM transfer function and the consequent suppression of small-scale power, discreteness effects close to the resolution limit are more important than in CDM simulations. Indeed, as a consequence of finite resolution effects in the simulations, spurious clumps are produced by numerical fragmentation [@Wang:2007he; @Lovell:2013ola; @Leo:2017zff]. Note that the halos produced in early simulations [@Bode:2000gq] were considered to be the result of the “top-down” structure formation scenario of WDM  [@Knebe:2003hs]. However, further studies have demonstrated that this phenomenon depends on the average interparticle distance, i.e.
on the resolution of the simulation, and can hence be regarded as a numerical artifact [@Wang:2007he]. For small halo masses, these spurious clumps will outnumber the genuine halos. In order to identify the spurious clumps, we first calculate the halo mass function of the corresponding cosmology using the code `hmf`, which is the back end of `HMFcalc` [@Murray:2013qza], with the fitting model of Ref. [@Tinker:2008ff]. We then adopt the fitting method of Ref. [@Schneider:2011yu], which provides a precise fit for the halo mass function of WDM cosmologies. The overall halo mass function fit for the WDM scenarios used in this work can be written as $$n(M)= (1+M_{hm}/M)^{-\gamma}\times n_{Tinker}(M),$$ where $M_{hm}$ is defined as the mass scale at which the amplitude of the WDM transfer function is reduced to 1/2, $\gamma$ is a free parameter for fitting the correct shape of the halo mass function and $n_{Tinker}(M)$ is the halo mass function fit of Ref. [@Tinker:2008ff]. $M_{hm}$ is expected to mainly affect the properties of WDM haloes [@Schneider:2011yu]. $M_{hm}$ is related to its corresponding length scale $\lambda_{hm}$ by $$M_{hm} = \dfrac{4\pi}{3} \bar{\rho} \left(\dfrac{\lambda_{hm}}{2}\right)^3,$$ where $\bar{\rho}$ is the average density of the universe. Here we follow Ref. [@Schneider:2011yu] to calculate $M_{hm}$ based on our simulation results and perform the fitting. For the best-fit values, we find $\gamma \approx 0.309$ for SWDM-m and $\gamma \approx 0.345$ for DWDM-m, where the larger $\gamma$ for DWDM-m indicates the existence of further suppression coming from the decay.
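Both fitting formulas above are straightforward to evaluate directly; a small pure-Python sketch (the mean density used in the test values is a round illustrative number, not the value from our simulations):

```python
import math

def half_mode_mass(lambda_hm, rho_bar):
    """M_hm from its half-mode length scale lambda_hm [h^-1 Mpc],
    with rho_bar the average matter density of the universe."""
    return 4.0 * math.pi / 3.0 * rho_bar * (lambda_hm / 2.0) ** 3

def wdm_mass_function(mass, n_tinker, m_hm, gamma):
    """WDM halo mass function: the CDM (Tinker) value times the
    (1 + M_hm/M)^(-gamma) suppression factor."""
    return (1.0 + m_hm / mass) ** (-gamma) * n_tinker
```

At $M = M_{hm}$ the suppression factor is $2^{-\gamma}$, i.e. about 0.81 for $\gamma \approx 0.309$ (SWDM-m) and 0.79 for $\gamma \approx 0.345$ (DWDM-m), so the larger DWDM-m exponent indeed suppresses low-mass halos more strongly.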
For the halo mass function fit for DCDM, we simply introduce a factor $A$ to account for the effect of decay, which is written as $n(M)=A \times n_{Tinker}(M).$ By comparing the halo mass function fits with the halo mass functions from the simulations, we can infer that, for SWDM-m and DWDM-m, the parts of the simulated halo mass functions (green squares and red diamonds) that deviate from the corresponding fits are mainly composed of spurious halos. The genuine halo mass functions of the WDM simulations should exhibit the same trend as the corresponding fits at small masses. The effect of decay and the free-streaming effect of WDM can also be separated thanks to their distinct impact on the halo mass functions. From the difference between the stable and decaying cases, we can infer that the effect of decay on the halo mass function is to reduce the number density of halos at every mass scale. In other words, the decay produces an overall downward shift of the halo mass function. On the other hand, the free-streaming effect of WDM is to set a cut-off halo mass, which is roughly the mass scale at which the WDM halo mass function deviates from the CDM one. We also note that, at large halo masses, it is difficult to assess the differences between the halo mass functions of different cosmologies, due to the variance caused by the scarcity of halos with large masses. Effects of baryonic physics {#Sec:baryon} --------------------------- To close our discussion we now comment on the effects of baryonic physics, so far neglected. Although baryons are themselves biased tracers of the DM gravitational potential, their role in structure formation is distinct from that of DM as a result of their ability to cool down through radiative processes. This makes baryons able to form compact astrophysical objects, such as stars, resulting in a different distribution compared to that of DM.
Thanks to the development of N-body simulation techniques, high-resolution and large-scale hydrodynamics simulations have now become feasible. Many studies have shown that baryonic processes can generate non-trivial effects on astrophysical observables such as the halo density profile, the matter power spectrum and the halo mass function. However, the precise details of baryonic processes remain poorly understood [@Rudd:2007zx; @Stanek:2008am; @Cui:2011; @vanDaalen:2011xb; @Bocquet:2015pva; @Chan:2015tna; @Despali:2016meh]. Baryons affect the halo mass and density profile in several ways. When falling into the potential wells created by the DM, baryons are gravitationally heated and exchange energy with the DM during relaxation. Hence they remain more diffuse and can create core-like density profiles at the center of halos. Later, as they dissipate energy through radiative processes, baryons sink into the center of halos and are finally converted into stars. This steepens the density profile near the center. On the other hand, the presence of supernova (SN) and active galactic nucleus (AGN) feedback reduces the effect of radiative cooling and adiabatic contraction, since baryons are ejected from the centers of the halos. Baryonic physics also affects the matter power spectrum; see for example Ref. [@vanDaalen:2011xb] for a comprehensive review. At intermediate scales ($k\simeq 0.8 - 5 \,h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$) the power spectrum is suppressed because the pressure of baryons smooths the density field. In contrast, the spectrum rises at small scales because radiative cooling allows baryons to cluster on those scales. Concerning the halo mass function, the number density of low-mass halos increases as a result of cooling and star formation. However, AGN feedback can reduce the effect of cooling and hence the abundance of low-mass halos.
In summary, we have shown that in a DDM cosmology there is an overall suppression of the density fluctuations, due to the decay of the DM particle. This is accompanied, in the case of WDM, by the well-known smoothing of the density field and suppression of the small-scale power in the matter power spectrum and halo mass function. Taking into account the full set of baryonic processes, including gravitational heating, radiative cooling, adiabatic contraction, and stellar and AGN feedback, we expect that the impact of baryonic physics will be generally weakened by the decaying and warm nature of DM. In fact, since the gravitational potential is shallower in the DWDM scenario, due to both the decay and the free-streaming of WDM, the ability of halos to accrete baryons into their center will be somewhat limited. Therefore, we expect the star-formation rate to decrease as a result, leading in turn to a lower efficiency of the stellar and AGN feedback. In this sense, we would argue that the baryon component remains in a relatively smooth density distribution containing fewer structures, while still being a biased tracer of the DM. This is only a qualitative assessment of the effects of baryonic physics in a DWDM structure formation scenario. Full-fledged hydrodynamic simulations would be necessary to assess the interplay between baryons and DWDM. Conclusions {#sec:conclusion} =========== In this paper we have examined the cosmology of warm dark matter, both for the stable and the decaying case, paying special attention to how it affects structure formation. We have performed DM-only N-body simulations of the nonlinear evolution and compared the matter power spectrum associated with warm dark matter masses of $1.5\,\mathrm{keV}$ and $0.158\,\mathrm{keV}$ with that expected for the stable cold dark matter $\Lambda$CDM paradigm, taken as our reference model. We have scrutinized the effects associated with the warm nature of dark matter, as well as with the fact that it decays.
We find that the nonlinear evolution couples the two effects, in such a way that the effect of the decay becomes more pronounced below the free-streaming scale of WDM. All of our considerations are general, though we have been strongly motivated by the fact that the DWDM scenario can naturally appear in particle physics. A nice example is provided by the keV majoron DM scenario suggested in the original BV proposal [@Berezinsky:1993fm]. The majoron emerges as a Nambu-Goldstone boson within a broad class of particle theories where neutrino mass generation takes place through the spontaneous breaking of a continuous ungauged lepton number symmetry. The majoron picks up a mass from gravitational effects, which are expected to explicitly break global symmetries. Hence it must necessarily decay to neutrinos, with an amplitude proportional to their tiny mass, which typically gives it cosmologically long lifetimes [@Schechter:1981cv]. As a reference value for the decaying dark matter lifetime we have taken the conservative limit following from the CMB observations obtained in [@Lattanzi:2013uza]. We have modified the standard N-body simulation code so as to include the effect of the decay in addition to the free-streaming effect. Through these simulations we have shown that the DWDM picture suggested in the BV proposal leads to predictions on small scales that differ substantially from those of the standard $\Lambda$CDM paradigm. A dedicated analysis, using better resolution simulations and including baryons, is required to assess whether this could address the potential drawbacks of the $\Lambda$CDM scenario. We have also qualitatively discussed the possible impact on the DWDM scenario when baryonic physics is taken into account. Our results illustrate that observations of large-scale structures in the Universe can in principle be used to constrain the particle physics model underlying the origin of dark matter.
Our results may be extended to constrain the lifetime and mass of the keV dark matter majoron. Acknowledgments {#acknowlegdments .unnumbered} =============== This research was supported by the Spanish grants FPA2017-85216-P (AEI/FEDER, UE), SEV-2014-0398 and PROMETEOII/2018/165 (Generalitat Valenciana), by the Italian INFN through the InDark and Gruppo IV fundings, by ASI through the Grant 2016-24-H.0 (COSMOS) and the ASI/INAF Agreement I/072/09/0 for the Planck LFI Activity of Phase E2, and by the Taiwan MoST grants MOST-105-2112-M-007-028-MY3 and MOST-107-2112-M-007-029-MY3.\ Results for 5.3 keV decaying dark matter ========================================= ![Left panel: The matter power spectrum at $z=0$ for SCDM (black dashed), SWDM (blue solid) and DWDM (red solid) with $m_J = 5.3\,\mathrm{keV}$. Right panel: Ratio between SWDM (blue solid) and DWDM (red solid) with $m_J = 5.3\,\mathrm{keV}$ and SCDM. The ratio between SWDM and SCDM is very close to $1$ on all scales, due to the small free-streaming length of such a heavy WDM particle. []{data-label="Fig:Pk_compare_Lyman"}](Fig/Pkz0_compare_Lyman.pdf "fig:"){width="45.00000%"} ![](Fig/Pkz0_ratio_compare_Lyman.pdf "fig:"){width="45.00000%"} ![The halo mass function at $z=0$ for SCDM (black circle), SWDM (green square) and DWDM (red diamond) with $m_J = 5.3\,\mathrm{keV}$. Like the matter power spectrum, the halo mass function is similar for SCDM and SWDM with $m_J = 5.3\,\mathrm{keV}$, apart from some deviations in the high-mass end related to cosmic variance.
The halo mass function of DWDM with $m_J = 5.3\,\mathrm{keV}$ shows a suppression of the halo number density compared to that of SCDM and SWDM at all mass scales, as discussed in Sec. \[Subsec:HMF\]. []{data-label="Fig:HMF_Lyman"}](Fig/HMF_z0_compare_Lyman.pdf){width="45.00000%"} Although the uncertainty in the evolution of the IGM temperature might cast doubt on the interpretation of Lyman-alpha forest measurements [@Hui:2016ltb; @Zhang:2017chj], we note that recent Lyman-alpha forest observations may set a strong lower limit on the WDM mass. Therefore, for completeness, we also perform simulations using the 95% CL lower limit on the mass of the WDM particle allowed by Lyman-alpha forest data [@Irsic:2017ixq], i.e. $m_{J} \geq 5.3\,\mathrm{keV}$. We keep the lifetime $\tau_J = 50\,\mathrm{Gyr}$ as in the other simulations with decay. In this appendix, we present the results with such a mass for both stable and decaying dark matter. In Fig. \[Fig:Pk\_compare\_Lyman\], we compare the matter power spectra of SWDM and DWDM with $m_J = 5.3\,\mathrm{keV}$ to that of SCDM. We show the individual matter power spectra at $z=0$ in the left panel, and the ratios to the SCDM matter power spectrum in the right panel. Note that the difference between SWDM with $m_J = 5.3\,\mathrm{keV}$ and SCDM is smaller than $1\%$ on all scales. This is associated with the relatively small free-streaming length of such a “large” mass WDM particle. Furthermore, a visual comparison with the red solid curve in the left panel of Fig. \[Fig:Pk\_ratiocompare\] shows that the power suppression due to the decay is in practice the same for WDM with $m_J = 5.3\,\mathrm{keV}$ and DCDM. This is again a consequence of the small free-streaming length of the WDM. Similar considerations apply to the halo mass functions for SCDM, SWDM and DWDM, shown in Fig. \[Fig:HMF\_Lyman\].
The number densities of halos are almost identical for SCDM and SWDM with $m_J = 5.3\,\mathrm{keV}$, except for some deviations in the high-mass end due to cosmic variance. Also note that the large number of spurious halos seen in the light WDM simulations discussed in Sec. \[Subsec:HMF\] disappears for WDM with $m_J = 5.3\,\mathrm{keV}$. Moreover, the decay suppresses the halo mass function of DWDM on all scales. From the analysis of the matter power spectrum and the halo mass function, we conclude that a WDM mass allowed by the Lyman-alpha forest is, at the scales probed by our analysis, indistinguishable from CDM. This holds for both the stable and the decaying case. [^1]: Note, however, that this last limit is obtained assuming a model with primordial tensor modes, motivated at the time by the BICEP2 claim, so the two limits cannot be directly compared. [^2]: In fact, there are no model-independent limits on the majoron mass. [^3]: The sudden change of slope for the WDM-m scenario around $k=3\, h{\ensuremath{\,\mathrm{Mpc}}}^{-1}$ is due to the presence of shot noise, which will be discussed in Sec. \[Subsec:convergence\]. [^4]: In our DWDM picture the late majoron decays simply reflect the tiny neutrino mass [@Schechter:1981cv].
--- abstract: 'The Baryon Acoustic Oscillation (BAO) feature in the power spectrum of galaxies provides a standard ruler to probe the accelerated expansion of the Universe. The current surveys covering a comoving volume sufficient to unveil the BAO scale are limited to redshift $z \lesssim 0.7$. In this paper, we study several galaxy selection schemes aiming at building an emission-line-galaxy (ELG) sample in the redshift range $0.6<z<1.7$, that would be suitable for future BAO studies using the Baryonic Oscillation Spectroscopic Survey (BOSS) spectrograph on the Sloan Digital Sky Survey (SDSS) telescope. We explore two different colour selections using both the SDSS and the Canada-France-Hawaii Telescope Legacy Survey (CFHT-LS) photometry in the *u, g, r*, and *i* bands and evaluate their performance in selecting luminous ELGs. From about 2,000 ELGs, we identified a selection scheme that has a 75 percent redshift measurement efficiency. This result confirms the feasibility of massive ELG surveys using the BOSS spectrograph on the SDSS telescope for a BAO detection at redshift $z\sim1$, in particular the proposed *eBOSS* experiment, which plans to use the SDSS telescope to combine the BAO ruler with redshift space distortions using emission line galaxies and quasars in the redshift range $0.6<z<2.2$.' bibliography: - 'biblio.bib' date: 'Accepted October 2nd 2012 by MNRAS. Received in original form July 17th 2012.' title: Investigating Emission Line Galaxy Surveys with the Sloan Digital Sky Survey Infrastructure --- \[firstpage\] cosmology - large scale structure - galaxy - selection - baryonic acoustic oscillations Introduction {#section:introduction} ============

| redshift range | $\bar{n}(k_{1})$ | $\bar{n}(k_{2})$ | deg$^{-2}$ for $k_{1}$ | deg$^{-2}$ for $k_{2}$ | area req. \[deg$^{2}$\] | $N_{\rm gal}$ \[$10^{3}$\] for $k_{1}$ | for $k_{2}$ |
|---|---|---|---|---|---|---|---|
| $[0.3,0.6]$ | 1.0 | 2.1 | 33 | 71 | 6188 | 204 | 440 |
| $[0.6,0.9]$ | 1.1 | 2.5 | 75 | 162 | 2585 | 194 | 419 |
| $[0.9,1.2]$ | 1.3 | 2.9 | 121 | 261 | 1615 | 195 | 421 |
| $[1.2,1.5]$ | 1.5 | 3.2 | 164 | 354 | 1227 | 201 | 435 |
| $[1.5,1.8]$ | 1.7 | 3.6 | 273 | 589 | 1041 | 284 | 613 |

($\bar{n}$ is given in units of $10^{-4}\,h^{3}\,\mathrm{Mpc}^{-3}$; the area quoted is that required to control the sample variance, i.e. to reach $V_\mathrm{eff}\sim 1\,\mathrm{Gpc}^{3}\,h^{-3}$.)

With the discovery of the acceleration of the expansion of the universe [@1998AJ....116.1009R; @1999ApJ...517..565P], possibly driven by a new form of energy with sufficient negative pressure, recent results have concluded that $\sim96$ percent of the energy density of the universe is in a form not conceived by the Standard Model of particle physics and not interacting with photons, hence dubbed “dark”. Lying at the heart of this discovery is the distance-redshift relation mapped by type Ia supernovae (SnIa) combined with the temperature power spectrum of the cosmic microwave background fluctuations. Since the first detections, there has been a huge increase in the amount of data up to redshift $z\sim 1$ (@1998AJ....116.1009R, @1999ApJ...517..565P, @2007ApJ...666..694W, @2004ApJ...607..665R, @2007ApJ...659...98R, @2009AJ....138.1271D, @Riess2011ApJ...730..119R). The current precision and accuracy required to obtain deeper insight into the cosmological model using SnIa is limited by the systematic errors of this probe; therefore a joint statistical analysis with other probes is mandatory to assess a firm picture of the cosmological model. Corresponding to the size of the well-established sound horizon in the primeval baryon-photon plasma before photon decoupling [@1970ApJ...162..815P], the BAO scale provides a standard ruler allowing for geometric probes of the global metric of the universe.
In the late-time universe it manifests itself as an excess of galaxies, with respect to an unclustered (Poisson) distribution, at the comoving scale $r \sim100 h^{-1} \mathrm{Mpc}$, corresponding to a fundamental wave mode $k\sim 0.063 h \mathrm{Mpc}^{-1}$. The value of this scale at higher redshift is accurately measured by the peaks in the CMB power spectrum ([*e.g.*]{} @2009ApJS..180..330K [@Komatsu_2011]). Galaxy clustering and CMB observations therefore allow for a consistent comparison of the same physical scale at different epochs. The first detections of the ‘local’ BAO [@2005MNRAS.362..505C; @2005ApJ...633..560E] were based on samples at low redshift, $z \leq 0.4$. Further analyses over a larger redshift range ($z>0.5$) and a wider area confirmed the first results, reducing the errors by a factor of 2 [@Percival_2010; @Blake_2011]. Measurements of the BAO feature have thus become an important motivation for large galaxy redshift surveys; the small amplitude of the baryon acoustic peak and the large value of $r_\mathrm{BAO}$ require comoving volumes of order $\sim 1 \mathrm{Gpc}^3 h^{-3}$ and at least $10^5$ galaxies to ensure a robust detection ([*e.g.*]{} @1997PhRvL..79.3806T [@2003ApJ...594..665B]). BAO studies using luminous red galaxies (LRG) are currently being pushed to $z=0.7$ by the Baryonic Oscillation Spectroscopic Survey (BOSS) experiment, part of the Sloan Digital Sky Survey III (SDSS-III) survey [@2011AJ....142...72E]. So far, with a third of the spectroscopic data, the BAO feature has been measured at $z=0.57$ with a $6.7\sigma$ significance [@BOSSDR9BAO2012arXiv1203.6594A]. The final data set, which will be completed by mid-2014, will have a mean galaxy density of about $150$ galaxies per square degree over 10,000 deg$^2$.
Recently, the WiggleZ experiment obtained a significant $\sim 4.9\sigma$ detection of the BAO peak at $z=0.6$, by combining information from three independent galaxy surveys: the SDSS, the 6-degree Field Galaxy Survey (6dFGS) and the WiggleZ Dark Energy Survey [@Blake_2011]. In contrast to SDSS, WiggleZ mapped the less biased, more abundant emission line galaxies [@Drinkwater_2010]. The next generation of cosmological spectroscopic surveys plans to map the high-redshift universe in the redshift range $0.6\leq z\leq2$ using the largest possible volume; see BigBOSS [@bigBOSS_2011], PFS-SuMIRe[^1], and EUCLID[^2]. To achieve this goal, suitable tracers covering this redshift range are needed. Above $z \sim 0.6$ the number density of LRGs decreases while the bulk of the galaxy population is composed of star-forming galaxies [@Abraham_1996; @Ilbert_06]; it is therefore compelling to build a large sample of this type of galaxy, which allows one to cover a large area and hence a large volume. The main challenge for future BAO surveys is to efficiently select targets for which a secure redshift can be measured within a short exposure time. In contrast to continuum-based LRG surveys, the observational strategy of next-generation surveys such as BigBOSS, PFS-SuMIRe, and EUCLID is based on redshift measurements using emission lines, which are a common feature of star-forming galaxies. In this paper we focus on targeting strategies for selecting luminous ELGs at $0.6<z<1.7$ using optical photometry, and we test our strategies using the BOSS spectrograph on the SDSS telescope [@Gunn_2006]. The plan of the paper is as follows. In section \[section:ELGs\_BAO\], we derive the ELG redshift distribution necessary to detect the BAO feature. In section \[section:color\_selection\] we explain how the ELG selection criteria were designed using different photometric catalogs, based on the performance of the BOSS spectrograph.
In section \[section:Measurements\] we compare observed spectra resulting from this selection with simulations and we discuss the efficiency of the proposed selection schemes. In section \[properties\] we discuss the main physical properties of the ELGs. In section \[section:discussion\], we present the redshift distribution of the observed ELGs and how to improve the selection. In appendix \[tble\_appendix\] we display a representative set of the observed spectra. Throughout this study we assume a flat $\Lambda$CDM cosmology characterized by $(\Omega_m, n_s, \sigma_8)=(0.27,0.96,0.81)$. Magnitudes are given in the AB system. Baryon Acoustic Oscillations {#section:ELGs_BAO} ============================ ![image](BAO_needs2.pdf){width="180mm"} Density and geometry requirements --------------------------------- In order to constrain the distance-redshift relation at $z>0.6$ using the BAO, we need a galaxy sample that covers the volume of the universe observable at this redshift. In this section we derive the required mean number density of galaxies, $\bar{n}(z)$, and the area to be covered in order to observe the BAO feature at the one percent level. The statistical errors in the measurement of the power spectrum of galaxies $P(k,z)$, evaluated at redshift $z$ and at scale $k$, arise from sample variance and shot noise [@1986MNRAS.219..785K]. Denoting the latter as $\mathcal{N}(z)=1/\bar{n}(z)$, the minimal requirement for measuring a significant signal is $$\bar{n}(z)P(k, z) = \frac{P(k, z)}{\mathcal{N}(z)} \gtrsim 2. \label{eqn:np_is_1}$$ As the amplitude of the power spectrum decreases with redshift, the required density increases with redshift: [*e.g.*]{}, at $z=0.6$, we need a galaxy density of $\bar{n}=2.1 \times10^{-4}\; h^3\mathrm{Mpc}^{-3}$; at $z=1.5$, $\bar{n}=3.2 \times10^{-4}\; h^3\mathrm{Mpc}^{-3}$.
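These comoving densities map onto projected target densities (galaxies per deg$^2$, as listed in Table \[BAO\_req\]) through the comoving volume per unit solid angle of each redshift shell. A sketch under the flat $\Lambda$CDM cosmology assumed in this paper ($\Omega_m=0.27$), using a simple midpoint integration; this is an illustration, not the survey-design code itself:

```python
import math

C_OVER_H0 = 2997.92   # Hubble distance c/H0 in h^-1 Mpc
OMEGA_M = 0.27        # flat LCDM, as assumed throughout this paper

def comoving_distance(z, n_steps=1000):
    """Line-of-sight comoving distance [h^-1 Mpc] in flat LCDM,
    integrating dz / E(z) with a midpoint rule."""
    dz = z / n_steps
    total = 0.0
    for i in range(n_steps):
        zi = (i + 0.5) * dz
        total += dz / math.sqrt(OMEGA_M * (1 + zi) ** 3 + (1 - OMEGA_M))
    return C_OVER_H0 * total

def projected_density(nbar, z1, z2):
    """Galaxies per deg^2 for a comoving density nbar [h^3 Mpc^-3]
    in the redshift shell z1 < z < z2 (full sky = ~41,253 deg^2)."""
    full_sky_deg2 = 4.0 * math.pi * (180.0 / math.pi) ** 2
    shell = (4.0 * math.pi / 3.0) * (comoving_distance(z2) ** 3
                                     - comoving_distance(z1) ** 3)
    return nbar * shell / full_sky_deg2
```

For instance, `projected_density(2.5e-4, 0.6, 0.9)` comes out near the $\sim 162\ \mathrm{deg}^{-2}$ quoted for $0.6<z<0.9$ in Table \[BAO\_req\].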
The full trend in redshift bins is given in Table \[BAO\_req\] and in Figures \[ELG\_bao\_needs\] a) and b), which show equation (\[eqn:np\_is\_1\]) as a function of redshift for $k\simeq0.063 \;h \; {\rm Mpc}^{-1}$ and $k\simeq0.12\;h \; {\rm Mpc}^{-1}$ (the locations of the first and second harmonics of the BAO peak in the linear power spectrum). In order to minimize the sample variance, we must sample the largest possible volume (a volume of 1 $\mathrm{Gpc}^3 \; h^{-3}$ roughly corresponds to a precision in the BAO scale measurement of 5 percent). To quantify this, we use the effective volume sampled, $V_\mathrm{eff}$, defined as [@Tegmark_1997] $$V_\mathrm{eff}(k)= 4 \pi \int dr \, r^2 \left[ \frac{\bar{n}(r) b^2(z) P(r,k)}{1+\bar{n}(r) b^2(z) P(r,k)} \right]^2 . \label{eff_vol}$$ In this calculation, we assume a linear bias, following the DEEP2 study by @2008ApJ...672..153C, that varies with redshift as $b(z)=b_0 (1+z)$, with $b(z=0.8)=1.3$. The bias could be larger for the more luminous ELGs, which are thought to be the progenitors of massive red galaxies [@cooper2008]. We shall evaluate the bias of ELGs more precisely in a future paper. The corresponding area to be surveyed in order to reach $V_\mathrm{eff}\sim 1\mathrm{Gpc}^3 \; h^{-3}$ is shown in Table \[BAO\_req\], setting redshift bins of width $\Delta z=0.3$ from $z=0.3$ to $z=1.8$. Figure \[ELG\_bao\_needs\] c) shows the behavior of $V_\mathrm{eff}$ as a function of the area for a given redshift slice with $\bar{n}$ given in the third column of Table \[BAO\_req\]. For the redshift range $[0.6,0.9]$ the survey area must be $\gtrsim$2,500 $\mathrm{deg}^2$. For the redshift range $[0.9,1.2]$ the survey area must be $\gtrsim$1,600 $\mathrm{deg}^2$. The observation of $[0.6,1.7]$ with a single galaxy selection thus needs 2,500 $\mathrm{deg}^2$ to sample the BAO at all redshifts.
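Since the integrand of the effective-volume formula saturates at unity once $\bar{n} b^2 P \gg 1$, $V_\mathrm{eff}$ approaches the geometric volume in the sample-variance-limited regime. A sketch for a shell with constant density, bias and power (the `fsky` factor scaling the full-sky formula to a finite survey area is our simplification for illustration):

```python
import math

def elg_bias(z, b0=1.3 / 1.8):
    """Linear ELG bias b(z) = b0 (1 + z), normalised so that b(0.8) = 1.3."""
    return b0 * (1.0 + z)

def v_eff_shell(r1, r2, nbar, bias, power, fsky=1.0):
    """Effective volume [h^-3 Mpc^3] of a comoving shell r1 < r < r2
    [h^-1 Mpc], with nbar [h^3 Mpc^-3] and P [h^-3 Mpc^3] constant in it."""
    w = nbar * bias ** 2 * power
    shell = (4.0 / 3.0) * math.pi * (r2 ** 3 - r1 ** 3) * fsky
    return (w / (1.0 + w)) ** 2 * shell
```

In the opposite, shot-noise-limited regime ($\bar{n} b^2 P \ll 1$) the effective volume falls as the square of $\bar{n} b^2 P$, which is why the required densities of Table \[BAO\_req\] grow with redshift.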
### Reconstruction of the galaxy field {#reconstruction-of-the-galaxy-field .unnumbered} To obtain a high precision on the measurement of the BAO scale, it is necessary to correct the 2-point correlation function for the dominant non-linear effects of clustering. The bulk flows at scales of $20\;h^{-1}\;{\rm Mpc}$ that form large scale structures smear the BAO peak: it is smoothed by the velocity of pairs (at redshift 1 the rms displacement for biased tracers due to bulk flows is $8.5\;h^{-1}\; {\rm Mpc}$ in real space and $17\;h^{-1} \; {\rm Mpc}$ in redshift space) [@2007ApJ...664..675E; @2007ApJ...664..660E]. Reconstruction consists of correcting this smoothing effect. The key quantity for reconstruction on a data sample is the smoothing scale used to reconstruct the velocity field; it should be as close to $5\;h^{-1}\; {\rm Mpc}$ as possible in order to measure the bulk flows without being biased by other non-linear effects that occur on smaller scales. The reconstruction algorithm applied to the SDSS-II Data Release 7 [@Abazajian_2009] LRG sample sharpens the BAO feature and reduces the errors from 3.5 percent to 1.9 percent. This sample has a tracer density of $10^{-4}\; h^3\; {\rm Mpc}^{-3}$ and the optimum smoothing applied is $15\;h^{-1}\; {\rm Mpc}$ [@2012arXiv1202.0090P]. On the SDSS-III/BOSS data in our study (different patches covering 3,275 deg$^2$ out of a total of 10,000 deg$^2$), reconstruction sharpens the BAO peak, allowing a detection at high significance, but does not significantly improve the precision on the distance measurement, due to the gaps in the current survey (see @BOSSDR9BAO2012arXiv1203.6594A). To allow an optimum reconstruction using a smoothing scale three times smaller ($5 \;h^{-1}\; {\rm Mpc}$), a dense and contiguous galaxy survey is necessary: gaps in the survey footprint smaller than 1 Mpc and a sampling density higher than $3 \times 10^{-4}\; h^3 \; {\rm Mpc}^{-3}$.
This setting should reduce the sample variance error on the acoustic scale by a factor of four. Observational requirements -------------------------- A mean galaxy density of $3 \times 10^{-4}\; h^3 \; {\rm Mpc}^{-3}$ can be reached with a projected density of 162 galaxies $\mathrm{deg}^{-2}$ with $0.6<z<0.9$, 261 $\mathrm{deg}^{-2}$ with $0.9<z<1.2$, 354 with $1.2<z<1.5$, and 589 with $1.5<z<1.8$. Considering a simple case where a survey is divided into three depths, the shallow one covering 2,500 deg$^2$ should contain 419,000 galaxies; the medium, 421,000 galaxies over 1,600 deg$^{2}$; and the deep, 435,000 galaxies over 1,200 deg$^{2}$. This represents a survey containing 1,350,000 measured redshifts in the redshift range $[0.6,1.5]$. The challenge is to build a selection function that achieves these projected densities. Given a large ground-based spectroscopic program that measures $1.5 \times10^6$ spectra (corresponding to about 4 years of dark time operations on the SDSS telescope dedicated to ELGs), the challenge is to define a selection criterion that samples galaxies to measure the BAO over the greatest possible redshift range. We define the selection efficiency as the ratio of the number of spectra in the desired redshift range to the number of measured spectra. The example in the previous paragraph needs a selection with an efficiency of $1.35/1.5\sim$ 90 percent. Previous galaxy target selections ---------------------------------- To reach tracer densities $\gtrsim10^{-4}\; h^3\; {\rm Mpc}^{-3}$ at $z>0.6$ with a high efficiency, a simple magnitude cut is not enough. Such a selection would be largely dominated by low-redshift galaxies. The use of colour selections is necessary to narrow the redshift range of the target selection for observations. SDSS-I/II galaxies are selected with visible colours in the red end of the colour distribution of galaxies, resulting in a sample of LRGs rather than ELGs [@2001AJ....122.2267E].
The projected density of LRGs is $\sim120$ deg$^{-2}$ with a peak in the redshift distribution at $z\sim0.35$. With the SDSS-I/II LRG sample, the distance-redshift relation was reconstructed to 2 percent at $z=0.35$. BOSS has currently completed about half of its observation plan. The tracers used by BOSS are, like the SDSS-I/II LRGs, selected in the red end of the colour distribution of galaxies; they are called CMASS (for ‘constant mass’ galaxies), and the selection will be detailed in Padmanabhan et al. (2012, in prep.). The current BAO detection using Data Release 9 (a third of the observation plan) with the CMASS tracers at $z\sim 0.57$ has a $6.7\sigma$ significance (@BOSSDR9BAO2012arXiv1203.6594A). WiggleZ blue galaxies are selected using UV and visible colours: they have a density of 240 galaxies deg$^{-2}$ and a peak in the redshift distribution around $z=0.6$ [@Drinkwater_2010]. The WiggleZ experiment has obtained a $4.9\sigma$ detection of the BAO peak at $z=0.6$ [@Blake_2011]. At their peak density, both of these BAO surveys reach a galaxy density of $3 \times 10^{-4}\; h^3\; {\rm Mpc}^{-3}$, which guarantees a significant detection of the BAO. Galaxy selections beyond $z=0.6$ were already performed by surveys such as the VIMOS-VLT Deep Survey[^3] (VVDS, see @2005Natur.437..519L), DEEP2[^4] (see @Davis_2003) or the VIMOS Public Extragalactic Redshift Survey[^5] (VIPERS, see Guzzo et al. 2012, in preparation), but they are not tuned for a BAO analysis. The DEEP2 survey selected galaxies using BRI photometry in the redshift range $0.75-1.4$ over a few square degrees, with a redshift success rate of 75 percent, using the Keck Observatory. It studied the evolution of galaxy properties and of galaxy clustering compared to low-redshift samples. In particular, its measurements of galaxy clustering to $z=1$ provide strong constraints on the bias of these galaxies [@2008ApJ...672..153C].
The VVDS wide survey measured 20,000 redshifts over 4 deg$^2$, limited to $I_{AB}<22.5$; it studied the properties of the galaxy population to redshift $1.2$ and small-scale clustering around $z=1$. The VIPERS survey maps the large scale distribution of 100,000 galaxies over 24 $\mathrm{deg}^2$ in the redshift range $0.5-1.2$ to study mainly clustering and redshift space distortions. Their colour selection, based on *ugri* bands, is described in more detail in section \[section:discussion\].

Color Selections {#section:color_selection}
================

Our aim is to explore different colour selections that focus on galaxies located at $0.6<z<1.7$ with strong emission lines, so that assigning redshifts to these galaxies is feasible within short exposure times (typically one hour of integration on the 2.5m SDSS telescope). The methodology used here was first explored by @Davis_2003, @Adelberger_2004 and @Drinkwater_2010. @Adelberger_2004 derived different colour selections for faint galaxies (with $23<R<25.5$) at redshifts $1<z<3$ based on the Great Observatories Origins Deep Survey data (GOODS, see @2003ApJ...587...25D). @Drinkwater_2010 selected ELGs using UV photometry from the Medium Imaging Survey of the GAlaxy EVolution EXplorer (MIS-GALEX, see @2005ApJ...619L...1M) combined with SDSS, to obtain a final density of $238$ ELGs per square degree at $0.2< z <0.8$ over $\sim 800$ square degrees. Our motivation is to probe much wider areas than GOODS or GALEX (ultimately a few thousand square degrees) and to concentrate on intrinsically more luminous galaxies (typically with $g<23.5$) with a redshift distribution extending to redshift 1.7. The selection criteria studied in this work are designed for a ground-based survey, and more specifically for the SDSS telescope, a 2.5m telescope located at Apache Point Observatory (New Mexico, USA), which has a [*unique*]{} wide field of view to carry out LSS studies [@Gunn_2006].
The current BOSS spectrographs cover a wavelength range of $3600-10200 \AA$. The spectral resolution, defined as the wavelength divided by the resolution element, varies from $R\sim 1,600$ at $3,600\AA$ to $R\sim 3,000$ at $10,000\AA$ [@2011AJ....142...72E]. The highest redshift detectable with the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission line doublet $(\lambda 3727,\lambda 3729)$ is thus $z_\mathrm{max}=1.7$. To select ELGs in the redshift range $[0.6,1.7]$ we have explored two different selection schemes: first using *u,g,r* photometry, and secondly using *g,r,i* photometry.

Photometric data properties: SDSS, CFHT-LS and COSMOS
-----------------------------------------------------

![The four bands *ugri* and their precision are illustrated: in red for SDSS photometry, in blue for CFHTLS photometry. The *u*-band quality limits the precision of the colour selection on SDSS photometry. Note that the CFHTLS photometric redshift catalog is cut at $i=24$, and the SDSS data is R-selected with $err_R\leq 0.2$.[]{data-label="mag_errors_SDSS_CFHT"}](logmagErrmag2.pdf){width="88mm"}

![image](selection.pdf){width="150mm"}

The photometric SDSS survey, delivered under Data Release 8 (DR8, @SDSS_DR8), covers 14,555 square degrees in the 5 photometric bands *u, g, r, i, z*. It is the largest-volume multi-colour extragalactic photometric survey available today. The 3$\sigma$ magnitude depths are: $u=22.0$, $g=22.2$, $r=22.2$, $i=21.3$; see @1996AJ....111.1748F for the description of the filters and @1998AJ....116.3040G for the characteristics of the camera. The magnitudes we use are corrected for Galactic extinction. The Canada France Hawaii Telescope Legacy Survey[^6] (hereafter CFHTLS) covers $\sim155$ deg$^2$ in the *u,g,r,i,z* bands. The transmission curves of the filters differ slightly[^7] from SDSS. The data and cataloging methods are described in the T0006 release document[^8].
The 3$\sigma$ magnitude depths are: $u=25.3$, $g=25.5$, $r=24.8$, $i=24.5$. The CFHT-LS photometry is ten times (in $r$ and $i$) to thirty times (in $u$) deeper than SDSS DR8; however, the CFHTLS covers a much smaller field of view than SDSS DR8. The magnitudes we use are corrected for Galactic extinction. The CFHT-LS photometric redshift catalogs are presented in @Ilbert_06 and @Coupon_2009; the photometric redshift accuracy is estimated to be $\sigma_z < 0.04 (1+z)$ for $g\leq 22.5$. This photometric redshift catalog is cut at $i=24$, beyond which photometric redshifts are highly unreliable. Fig. \[mag\_errors\_SDSS\_CFHT\] displays the relative depth of the SDSS and CFHT-LS wide surveys in the *u,g,r,i* bands. COSMOS is a deep 2 deg$^2$ survey that has been observed at more than 30 different wavelengths [@2007ApJS..172....1S]. The COSMOS photometric catalog is described in @Capak_2007 and the photometric redshifts in @Ilbert_2009. The COSMOS Mock Catalog (hereafter CMC; see[^9]) is a simulated spectro-photometric catalog based on the COSMOS photometric catalog and its photometric redshift catalog. The magnitudes of an object in any filter can be computed using the photometric redshift best-fit spectral templates (@Jouvel_2009, Zoubian et al. 2012, in preparation). The limiting magnitudes of the CMC in each band are the same as in the real COSMOS catalog (detection at $5\sigma$ in a 3" diameter aperture): $\emph{u}<26.4$, $\emph{g}<27$, $\emph{r}<26.8$, $\emph{i}<26.2$. For magnitudes in the range $14<m<26$ in the *g,r,i* bands from the Subaru telescope and in the *u* band from CFHTLS, the CMC contains about 280,000 galaxies in 2 deg$^2$ to COSMOS depth. The mock catalog also contains a simulated spectrum for each galaxy. These simulated spectra are generated with the templates used to fit the COSMOS photometric redshifts.
Emission lines are empirically added using Kennicutt calibration laws [@Kennicutt_1998; @Ilbert_2009] and have been calibrated using zCOSMOS [@Lilly_2009], as described in Zoubian et al. 2012, in preparation. The strength of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission lines was confirmed using the DEEP2 and VVDS DEEP luminosity functions [@LeFevre2005; @zhu09]. Finally, a host galaxy extinction law is applied to each spectrum. Predicted observed magnitudes take into account the presence of emission lines.

Color selections {#color-selections}
----------------

Based on the COSMOS and CFHT-LS photometric redshifts, we explore two simple colour selection functions using the *ugr* and *gri* bands. Fig. \[selection\_figure\] shows the targets available in the *ugr* and *gri* colour planes. We construct a bright and a faint sample based on the photometric depths of SDSS and CFHT-LS.

### [*ugr*]{} selection

The *ugr* colour selection is defined by $-1<u-r<0.5$ and $-1<g-r<1$, which selects galaxies at $z\geq 0.6$ and ensures that these galaxies are strongly star-forming (the $u-r$ cut). The cut $-1<u-g<0.5$ removes all low-redshift galaxies ($z<0.3$). Finally, the magnitude ranges are $20<g<22.5$ for the bright sample and $g<23.5$ for the faint sample; see Fig. \[selection\_figure\] a) and b).

### [*gri*]{} selection

The bright *gri* colour selection is defined by the range $19<i<21.3$. We select blue galaxies at $z\sim 0.8$ with $0.8<r-i<1.4$ and $-0.2<g-r<1.1$ (Fig. \[selection\_figure\] c). In the faint range $21.3<i<23$, we tilt the selection to select higher redshifts, with $-0.4<g-r<0.4$, $-0.2<r-i<1.2$ and $g-r<r-i$ (Fig. \[selection\_figure\] d).
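The cuts above translate directly into vectorized boolean masks over magnitude arrays. The sketch below uses the cut values quoted in the text; the function names and array interface are our own illustration, not the pipeline actually used for target selection.

```python
import numpy as np

def ugr_mask(u, g, r, bright=True):
    """Boolean mask for the ugr ELG selection, with the cuts quoted in the text."""
    colours = ((-1.0 < u - r) & (u - r < 0.5) &
               (-1.0 < g - r) & (g - r < 1.0) &
               (-1.0 < u - g) & (u - g < 0.5))
    # bright sample: 20 < g < 22.5 ; faint sample: g < 23.5
    mag = (20.0 < g) & (g < 22.5) if bright else (g < 23.5)
    return colours & mag

def gri_mask(g, r, i, bright=True):
    """Boolean mask for the gri ELG selection, with the cuts quoted in the text."""
    if bright:
        return ((19.0 < i) & (i < 21.3) &
                (0.8 < r - i) & (r - i < 1.4) &
                (-0.2 < g - r) & (g - r < 1.1))
    # faint range: tilted cuts selecting higher redshifts
    return ((21.3 < i) & (i < 23.0) &
            (-0.4 < g - r) & (g - r < 0.4) &
            (-0.2 < r - i) & (r - i < 1.2) &
            (g - r < r - i))
```

For example, a blue object with $u=21.8$, $g=21.7$, $r=21.5$ passes the bright *ugr* cuts, while a red object with $u-r=2.5$ does not.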
$\# \mathrm{deg}^{-2}$ $\bar{u}$ $\bar{g}$ $\bar{r}$ $\bar{i}$ $\bar{z}$ $\sigma_z $ $\bar{f_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}}$ $Q^1_{f_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}}$ $Q^3_{f_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}}$ -- ------------------------ ----------- ----------- ----------- ----------- ----------- ------------- --------------------------------------------------------- --------------------------------------------------------- --------------------------------------------------------- ------- ------- b 130.0 21.98 21.87 21.69 - 1.25 0.53 61.74 46.47 88.39 f 1450.8 23.27 23.18 22.98 - 1.19 0.38 16.60 13.06 22.26 b 257.2 - 22.69 21.87 20.93 0.80 0.21 13.85 8.65 22.21 f 2170.5 - 23.34 23.09 22.55 0.93 0.31 10.23 6.83 15.99 b 193.3 21.95 21.8 21.7 - 1.28 0.38 f 1766.8 23.37 23.19 23.07 - 1.29 0.31 b 361.4 - 22.62 21.8 20.82 0.81 0.11 f 3317.5 - 23.34 23.11 22.55 1.03 0.35 b 232.2 21.89 21.76 21.69 - 1.27 0.37 f 1679.1 23.36 23.18 23.06 - 1.28 0.31 b 391.6 - 22.62 21.78 20.8 0.82 0.1 f 3334.2 - 23.34 23.11 22.54 1.03 0.33 b 166.96 21.76 21.77 21.52 - b 204.96 - 22.57 21.75 20.76 Predicted properties of the selected samples -------------------------------------------- The *ugr* colour selection avoids the stellar sequence, but not the quasar sequence. Hence, the contamination of the *ugr* selection by point-source objects is primarily due to quasars; see Fig. \[selection\_figure\] a) and b). The resulting photometric-redshift distribution as derived from the CFHT-LS photometric redshift catalog has a wide span in redshift, covering $0.6<z<2$ as shown in Fig. \[selection\_figure\_bis\]. The distribution is centered at $z=1.3$ for the bright and the faint sample with a scatter of $0.3$ (see Table \[mock\_selections\]). The expected $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ fluxes are computed from the CMC catalog and are shown in Fig. \[selection\_figure\_bis\]. 
For 90 percent of the galaxies in the faint sample, the predicted flux is above $10.6 \times 10^{-17}\mathrm{erg\,cm^{-2}\,s^{-1}}$. The bright sample galaxies show strong emission lines. The *gri* selection avoids both the stellar sequence and the quasar sequence; see Fig. \[selection\_figure\] c) and d). Thus the contamination from point sources should be minimal. Fig. \[selection\_figure\_bis\] shows the photometric redshift distribution of the *gri* selection applied to CFHT photometry. The redshifts are centered at $z=0.8$ for the bright and $1.0$ for the faint sample (see Table \[mock\_selections\]). The expected $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ flux, computed with the CMC catalog, is shown in Fig. \[selection\_figure\_bis\]; the emission lines are weaker than for the *ugr* selection, as expected. The different selections shown in Fig. \[selection\_figure\] and Fig. \[selection\_figure\_bis\] are summarized in Table \[mock\_selections\], which contains the available number densities, mean magnitudes, mean redshifts, and mean $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ fluxes (when available) of the different samples considered. We find lower densities in the CMC than in the CFHT-LS catalog; this is probably due to cosmic variance, as the CMC only covers 2 deg$^2$. The SDSS colour-selected samples are complete for the bright samples ($g<22.5$ and $i<21.3$), but not for the faint samples. The CFHTLS-selected samples are complete for both the bright and faint samples; see Fig. \[cumulative\_samples\], where the total cumulative number counts (solid lines) of the *ugr* and *gri* colour-selected samples are plotted as a function of the $g$ and $i$ bands, respectively. At the bright end of this figure, although both photometric catalogs are complete at the bright limit, we note a discrepancy between the total number of targets selected on CFHT and on SDSS, which implies that the selections on CFHT are denser than on SDSS (difference between the red and blue solid lines).
This is due to the transposition of the colour selection from one photometric system to the other. In fact, we select targets on SDSS with a criterion transposed from CFHT using the calibrations of @Regnault_2009. The transposed criterion is as tight as the original, but as the magnitude errors are larger in the SDSS system, the colour distributions are more spread out; therefore the SDSS selection is slightly less dense than the CFHT selection. Targeting in the bright range is limited by galaxy density: in the best case one can reach 300 targets deg$^{-2}$, and the sample contains point sources (stars and quasars) and low-redshift galaxies. In the faint range, the target density is ten times greater, but the exposure time necessary to assign a reliable redshift is much longer (going one magnitude deeper for a continuum-based redshift roughly corresponds to an exposure five times longer). The stellar, quasar and low-redshift contamination is smaller in the faint range. Fig. \[selection\_figure\_bis\] shows the distributions in redshift and in $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ flux we expect for a given magnitude range and colour criterion within the framework of the CMC simulation. The main trend is that the *ugr* selection identifies strong $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emitters out to $z\sim2$, whereas the *gri* selection peaks at redshift $1$ and extends to $1.4$ with weaker $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emitters.

![image](selection2.pdf){width="180mm"}

We also used a criterion to split targets into compact and extended sources, which is illustrated in Fig. \[cumulative\_samples\]. For CFHT-LS we have used the half-light radius (the $r_2$ value, to be compared with the $r_{2}^{limit}$ value, which defines the maximal size of the PSF at the location of the object considered; see Coupon et al. 2009 and the CFHT-LS T0006 release document) to divide the sample into compact and extended objects.
For SDSS we used the “<span style="font-variant:small-caps;">type</span>” flag, which separates compact (<span style="font-variant:small-caps;">type</span>=6) from extended objects (<span style="font-variant:small-caps;">type</span>=3). For the *ugr* colour selection, the number counts are dominated by compact blue objects (quasars) at $g\leq22.2$; at $g\geq 22.2$ the counts are dominated by extended ELGs. For comparison, we show in Fig. \[cumulative\_samples\] the cumulative counts of the XDQSO catalog from @Bovy_2011, who identified quasars in the SDSS down to $g<21.5$. We note an excellent match with the bright (compact) *ugr* colour-selected objects. For the *gri* colour selection, there is little contamination by compact objects because the colour box overlaps with neither the stellar nor the quasar sequence.

![image](cumul.pdf){width="180mm"}

ELG Observations {#section:Measurements}
================

To test the reliability of both the bright *ugr* ($g<22.5$) and the bright *gri* ($i<21.3$) colour selections, we have conducted a set of dedicated observations as part of the “Emission Line Galaxy SDSS-III/BOSS ancillary program”. The observations were conducted between Autumn 2010 and Spring 2011 using the SDSS telescope with the BOSS spectrograph at Apache Point Observatory. A total of $\sim$2,000 spectra, each observed in four 15-minute exposures, were taken in different fields: in Stripe 82 (using single-epoch SDSS photometry for the colour selection) and in the CFHT-LS W1, W3 and W4 wide fields (using CFHT-LS photometry). This data set was released in the SDSS-III Data Release 9[^10].

Description of SDSS-III/BOSS spectra
------------------------------------

We used the SDSS photometric catalog [@SDSS_DR8] to select 313 objects according to their *ugr* colours, located in Stripe 82, and 899 objects according to their *gri* colours in the CFHT-LS W3 field.
In addition, we used the CFHT-LS photometry to select 878 *ugr* targets in the CFHT-LS W1 field and 391 *gri* targets in the CFHT-LS W3 field for observation. The spectra are available in SDSS Data Release 9, flagged ‘ELG’. All of these spectra were manually inspected to confirm or correct the redshifts produced by two different pipelines (<span style="font-variant:small-caps;">zCode</span> and the modified version of it that we used to fit the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission line doublet). As the BOSS pipeline redshift measurement is designed to fit the LRG continuum, some ELGs with no continuum were assigned wrong redshifts. To classify the observed objects, we have defined seven sub-categories:

### Objects with secure redshifts {#objects-with-secure-redshifts .unnumbered}

- ‘ELG’, emission-line galaxy (redshift determined with multiple emission lines). Usually these spectra have a weak ‘blue’ continuum and lack a ‘red’ continuum. Empirically, using the <span style="font-variant:small-caps;">platefit vimos</span> pipeline output, this class corresponds to a spectrum with more than two emission lines with observed equivalent widths $EW \leq -6 \AA$; see examples in Appendix \[tble\_appendix\].

- ‘RG’, red galaxy with continuum in the red part of its spectrum, allowing a secure redshift measurement through multiple absorption lines ([*e.g.*]{} Ca K&H, Balmer lines) and the $4000\AA$ break. Some of these objects also have weak emission lines (E+A galaxies). Empirically, their spectra have a mean $D_n(4000)$ of $1.3$, where $D_n(4000)$ is the ratio of the continuum level after the break to that before the break. These galaxies typically have $i\sim20$, which is fainter than the CMASS galaxies targeted by BOSS.

- ‘QSO’, quasars, which are identified through multiple broad lines. Examples are given in Fig. \[qsos\].

- Stars.
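The $D_n(4000)$ index used to characterize the ‘RG’ class can be computed directly from a rest-frame spectrum. The text only defines the index as the continuum ratio across the break, so the band limits in the sketch below (mean flux in 4000–4100 Å over mean flux in 3850–3950 Å, the common “narrow” definition) are an assumption, as is the function name.

```python
import numpy as np

def dn4000(wave, flux):
    """Narrow D_n(4000) index: mean flux just redward of the break
    (4000-4100 A) over mean flux just blueward (3850-3950 A), for a
    rest-frame wavelength array [A] and flux array.
    Band limits follow the common 'narrow' convention (assumption)."""
    red = (wave >= 4000.0) & (wave <= 4100.0)
    blue = (wave >= 3850.0) & (wave <= 3950.0)
    return float(np.mean(flux[red]) / np.mean(flux[blue]))
```

A flat spectrum with a 30 percent step at 4000 Å gives $D_n(4000)=1.3$, the mean value quoted above for the ‘RG’ class.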
### Objects with unreliable redshifts {#objects-with-unreliable-redshifts .unnumbered}

- ‘Single emission line’: the spectrum contains only a single emission line, which does not allow a unique redshift determination. For this population, the CFHT T0006 photometric redshifts are compared to the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ redshift (assuming the single emission line is $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$) in Fig. \[sinlgeEmLowContiRedshift\]. The two estimates agree very well: 77.7 percent have $(z_{spec} - z_{phot})/(1+ z_{spec})<0.1$ for the *gri* selection and 62.7 percent for the *ugr* selection. These galaxies with uncertain redshifts tend to have slightly fainter magnitudes, with a mean CFHT *g* magnitude of 22.6 and a scatter of 0.6, whereas the mean for the whole ELG sample is 22.4 with a scatter of 0.4.

- ‘Low continuum’: spectra that show a $4000 \AA$ break too weak for a secure redshift estimate. The agreement between the photometric and spectroscopic redshift estimates is excellent: 84.6 percent agree within 10 percent errors; see Fig. \[sinlgeEmLowContiRedshift\].

- ‘Bad data’: the spectrum is either featureless, extremely noisy, or both.

The detailed physical properties of the ELGs are discussed in section \[properties\] and a number of representative spectra are displayed in Appendix \[tble\_appendix\].

![T0006 CFHT-LS photometric redshifts of single emission line and low continuum galaxies plotted against the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ redshift. A strong correlation is clearly evident. A slight systematic over-estimation of the photometric redshift is visible above redshift 1.2 (these photometric redshifts were calibrated below 1.2).[]{data-label="sinlgeEmLowContiRedshift"}](singleEmLoxConti.pdf){width="88mm"}

Redshift Identification
-----------------------

The results of the observations are summarized by category in Table \[objects\_W3\].
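The photometric-spectroscopic agreement statistic quoted above, the fraction of objects with $|z_{phot}-z_{spec}|/(1+z_{spec})$ below a tolerance, is a one-line estimator. A minimal sketch (the function name and the use of the absolute value are our own assumptions):

```python
import numpy as np

def photoz_agreement(z_spec, z_phot, tol=0.1):
    """Fraction of objects with |z_phot - z_spec| / (1 + z_spec) < tol,
    the agreement statistic used in the text (absolute value assumed)."""
    z_spec = np.asarray(z_spec, dtype=float)
    z_phot = np.asarray(z_phot, dtype=float)
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    return float(np.mean(dz < tol))
```

For instance, with three of four objects inside the 10 percent tolerance, the estimator returns 0.75.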
For the targets selected using SDSS photometry with the *ugr* selection: 32 percent are ELGs at redshift $z>0.6$ (100 spectra), and low-redshift ELGs represent another 32 percent of the observed targets (101 spectra). The other categories are: 65 ‘bad data’ (20 percent), 30 quasars (10 percent), 10 stars (3.5 percent), and 7 red galaxies with $z<0.6$ (2.5 percent). With the *gri* selection, 57 percent of the targets are at $z>0.6$; however, 21 percent of the spectra still fall into the ‘bad data’ class. Using CFHTLS photometry, 46 percent of the targets are ELGs at $z>0.6$ and 14 percent are quasars with the *ugr* selection; with the *gri* selection, 73 percent are galaxies at $z>0.6$, five-sixths of which are ELGs. For both selections, targeting with CFHTLS photometry is more efficient than with SDSS. The complete classification of the observed targets is given in Table \[objects\_W3\]. The redshift distribution of the observed objects is compared to the distributions from the current BOSS and WiggleZ BAO experiments in Fig. \[ELG\_nz\]. The figure shows that the *ugr* and *gri* target selections enable a BAO study at higher redshifts. With a joint selection, we can reach the requirements described in Table \[BAO\_req\] to detect the BAO feature out to redshift 1.

  ---------------------- ------------- --------------- ------------- ---------------
  Type                   *gri* SDSS    *gri* CFHTLS    *ugr* SDSS    *ugr* CFHTLS
  ELG ($z>0.6$)          450 (50%)     240 (61%)       100 (32%)     402 (46%)
  ELG ($z<0.6$)          60 (7%)       3 (1%)          101 (32%)     84 (9%)
  RG ($z>0.6$)           73 (8%)       46 (12%)        0 (0%)        0 (0%)
  RG ($z<0.6$)           30 (3%)       0 (0%)          7 (3%)        0 (0%)
  single emission line   36 (4%)       12 (3%)         0 (0%)        102 (12%)
  low continuum          13 (1%)       1 (0%)          0 (0%)        0 (0%)
  QSO                    8 (1%)        5 (1%)          30 (10%)      126 (14%)
  stars                  44 (5%)       12 (3%)         10 (3%)       6 (1%)
  bad data               185 (21%)     72 (18%)        65 (20%)      158 (18%)
  total                  899 (100%)    391 (100%)      313 (100%)    878 (100%)
  ---------------------- ------------- --------------- ------------- ---------------

![Observed redshift distribution for the *ugr* ELGs (blue) and the *gri* ELGs (black), compared to the distribution of galaxies from BOSS (red) and WiggleZ (green).
Magenta lines represent constant galaxy densities of 1 and 3 $\times10^{-4}\; h^3 \;{\rm Mpc}^{-3}$; these constitute our density goals.[]{data-label="ELG_nz"}](nZobserved.pdf){width="88mm"}

Comparison of measured ELGs with the CMC forecasts
--------------------------------------------------

![image](nicePlot5.pdf){width="180mm"}

To investigate the expected purity of the ELG samples, we created mock catalogs covering redshifts between 0.6 and 1.7. Continuum spectra of ELGs were generated from the COSMOS Mock Catalog and emission lines were added according to the modeling described in @Jouvel_2009. Two simulated galaxy catalogs were built, one for each colour selection function (*ugr* and *gri*). Each synthetic spectrum was degraded with sky and photon noise, as if observed by the BOSS spectrographs, using the <span style="font-variant:small-caps;">specsim1d</span> software. We simulated a set of four exposures of 900 seconds each. The resulting simulated spectra were then analyzed by the <span style="font-variant:small-caps;">zCode</span> pipeline [@2006MNRAS.372..425C] to extract the spectroscopic redshift. As our targets are mainly emission line galaxies, we only use the redshift estimate based on fitting discrete emission line templates in Fourier space over all redshifts. We now address the flux measurement of the emission lines. This exercise was conducted using the <span style="font-variant:small-caps;">Platefit Vimos</span> software developed by @Lamareille_2009. This software is based on the <span style="font-variant:small-caps;">platefit</span> software that was developed to analyze SDSS spectra [@2004ApJ...613..898T; @2004MNRAS.351.1151B]. The <span style="font-variant:small-caps;">platefit vimos</span> version was developed to measure the flux of all emission lines, after removal of the stellar continuum and absorption lines, from lower-resolution and lower signal-to-noise ratio spectra.
The stellar component of each spectrum is fit by a non-negative linear combination of 30 single stellar population templates with different ages (0.005, 0.025, 0.10, 0.29, 0.64, 0.90, 1.4, 2.5, 5 and 11 Gyr) and metallicities (0.2, 1 and 2.5 $Z_\odot$). These templates have been derived using the @2003MNRAS.344.1000B libraries and have been resampled to the velocity dispersion of VVDS spectra. The dust attenuation in the stellar population model is left as a free parameter. Foreground dust attenuation from the Milky Way has been corrected using the @1998ApJ...500..525S maps. After removal of the stellar component, the emission lines are fit as a single nebular spectrum made of a sum of Gaussians at specified wavelengths. All emission lines are set to have the same width, with the exception of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]\lambda3727$ line, which is a doublet of two lines at 3726 and 3729 $\AA$ that appears broadened compared to other single lines. Detected emission lines may also be removed from the original spectrum in order to obtain the observed stellar spectrum and measure indices, as well as emission-line equivalent widths. The underlying continuum is obtained by smoothing the stellar spectrum. Equivalent widths are then measured via direct integration, over a $5\sigma$ bandpass, of the emission-line Gaussian model divided by the underlying continuum. Emission-line fluxes are then measured for each simulated spectrum using the redshift extracted by <span style="font-variant:small-caps;">zCode</span>, and the true redshift for cross-checks. We consider that a redshift has been successfully measured if $\Delta z/(1+z)<0.001$. We believe that this threshold could be lowered to $10^{-4}$ in the future by using a more advanced redshift solver. Using the current pipeline, we can distinguish two regimes. The first regime is the redshift range $z<1.0$, where many emission lines (\[OII\], H$\beta$, \[OIII\]) are present in the SDSS spectrum.
For $g<23.5$, 91 percent of the redshifts are measured successfully. Among the remaining 9 percent, catastrophic failures represent 3.5 percent (the pipeline outputs a redshift between 0 and 1.6 with $\Delta z/(1+z)>0.01$), inaccurate redshifts represent 3.9 percent (the pipeline outputs a redshift between 0 and 1.6 with $0.001<\Delta z/(1+z)<0.01$), and 1.5 percent are not found by the pipeline ($z=-9$ is output). The second regime is the redshift range $1.0\leq z< 1.7$, where the redshift determination hinges on the identification of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublet. For $g<23.5$, 66.8 percent of the redshifts are measured successfully; 19.1 percent are catastrophic failures and 14.1 percent are inaccurate redshifts. Work is ongoing to improve the redshift measurement efficiency at $z>1$. In the second regime, the minimum $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ flux required to compute a reliable redshift depends on the redshift (i.e. the observed wavelength), because of the strong OH sky lines in the spectrum. We infer from the observed spectra that to measure a reliable redshift, we require a $5\sigma$ detection of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ lines, which means a detection (blended or not) of two peaks in the emission line separated by $2(1+z)\,\AA$. The detection significance is defined from the 1d spectrum. In the data, the faintest $5\sigma$ detections are made with a flux of $4\times 10^{-17} \mathrm{erg\,s^{-1}\,cm^{-2}}$, while the brightest $5\sigma$ detections, lying on top of sky lines, need a flux of $2\times 10^{-16} \mathrm{erg\,s^{-1}\,cm^{-2}}$. The simulation shows the same thresholds (see Fig. \[OII\_detection\_limit\]), confirming the detection limit we observe. The bottom plot of Fig. \[OII\_detection\_limit\] raises the issue that the time variation of the sky has a non-negligible impact on the detection limit of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission doublet for redshifts $z>1.1$.
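The redshift-quality classes used in this section (success, inaccurate, catastrophic, not found) follow directly from the $\Delta z/(1+z)$ thresholds quoted above. A minimal sketch, with a function name of our own:

```python
def classify_redshift(z_true, z_pipe):
    """Classify a pipeline redshift against the true redshift using the
    thresholds quoted in the text (z = -9 marks a pipeline failure)."""
    if z_pipe == -9:                                  # no redshift found
        return "not found"
    dz = abs(z_pipe - z_true) / (1.0 + z_true)
    if dz < 0.001:                                    # success threshold
        return "success"
    if dz < 0.01:                                     # inaccurate redshift
        return "inaccurate"
    return "catastrophic"                             # dz/(1+z) > 0.01
```

For example, a pipeline output of 1.0005 for a true redshift of 1.0 counts as a success, while 1.01 is inaccurate and 1.5 is catastrophic.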
However, this ELG sample is too small to address the impact of sky variation: it was observed during ten different nights and the number of ELGs with $z>1.1$ is less than 60. It is thus not possible to derive a robust trend by comparing the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ detections to the sky value of each observation. Handling this issue would require a sample of $\sim$500 redshifts at $1.1<z<1.6$ observed many times over many nights. With such a sample in hand, we could quantify exactly how to optimize the observational strategy.

Physical properties of ELGs {#properties}
===========================

All *ugr* and *gri* ELG spectra were analyzed with two different software packages: PlateFit VIMOS [@Lamareille_2009] and the Portsmouth Spectroscopic Pipeline [@2012arXiv1207.6115T]. In this section we discuss the following physical properties of the observed ELGs: redshift, star formation rate (SFR), stellar mass, metallicity, and classification of the ELG type (Seyfert 2, LINER, SFG, composite). Observations of larger samples of ELGs are planned to estimate how these quantities vary with time and with environment, and also how the clustering depends on these physical quantities. This is key to placing future BAO tracers in the context of galaxy formation history. With the current sample, we draw simple trends using the means and standard deviations of the observed quantities, and we place the ELGs in the galaxy classification of @Lamareille_2010 [@Marocco_2011].

Main Properties
---------------

The main properties of the ELGs are shown in Table \[main\_properties\]. The star formation rate was computed using equation 18 of @Argence_2009. The stellar mass was estimated using the CFHTLS *ugriz* photometry. (The errors on the stellar mass using only SDSS photometry were too large to be meaningful; hence the empty cells in the table.) The metallicity is estimated using the calibration of @Tremonti_2004.
The main trends are:

- The *gri*-selected galaxies of CFHTLS are the most massive in terms of stellar mass.

- The *ugr* selection picks stronger star-forming galaxies than *gri* (due to the *u*-band selection). There is a factor of two variation in the strength of the measured oxygen lines.

- The *ugr* selection picks galaxies with $12 + \log{[OH]} \in [8,9]$, whereas *gri* focuses slightly more on the higher $12 + \log{[OH]} \approx 9$.

- The SFR appears to be independent of the colour selection scheme.

  ----------------------------------------------------------- -------- ---------- -------- ---------- -------- ---------- -------- ----------
                                                               mean     $\sigma$   mean     $\sigma$   mean     $\sigma$   mean     $\sigma$
  EW$_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}$           -14.86   9.01       -16.75   10.13      -50.58   27.24      -30.75   23.04
  Flux$_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}$         16.85    9.65       18.58    10.37      30.36    30.1       24.23    39.27
  EW$_{H_\beta}$                                               -10.28   10.8       -10.72   8.65       -24.27   22.88      -17.18   19.34
  Flux$_{H_\beta}$                                             15.44    8.6        14.63    7.72       12.97    15.16      12.57    23.91
  EW$_{\left[\mathrm{O\textrm{\textsc{iii}}}\right]}$          -10.09   10.98      -11.33   10.76      -65.3    91.56      -16.89   30.49
  Flux$_{\left[\mathrm{O\textrm{\textsc{iii}}}\right]}$        17.74    20.15      17.43    21.59      35.13    53.49      13.39    37.79
  $12 + \log OH$                                               8.94     0.20       8.92     0.19       8.69     0.21       8.69     0.25
  $\log$SFR$_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}$    0.97     0.35       0.92     0.45       0.96     1.24       0.76     0.84
  $\log(M^*/M_\odot)$                                          10.85    0.3        10.23    6.87       9.33     0.80       -        -
  ----------------------------------------------------------- -------- ---------- -------- ---------- -------- ---------- -------- ----------

Classification
--------------

![image](classification.pdf){width="180mm"}

We use a recent classification [@Lamareille_2010; @Marocco_2011] for the ELG sample.
The classification is made using $\log(\left[\mathrm{O\textrm{\textsc{iii}}}\right]/H_\beta)$, $\log(\left[\mathrm{O\textrm{\textsc{ii}}}\right]/H_\beta)$, $D_n(4000)$, and $\log(\max(EW_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]},EW_{\left[\mathrm{Ne\textrm{\textsc{iii}}}\right]}))$. We compare the ELG sample to zCOSMOS, as zCOSMOS has numerous star-forming galaxies in the redshift range we are observing. Fig. \[classification\_ELG\] a) shows that the zCOSMOS and the *ugr* ELG samples are located in three of the five areas delimited by the classification: Seyfert 2 (‘Sy2’), Star Forming Galaxies (‘SFG’), and a third region where both mix (‘Sy2/SFG’). There are a few LINERs and composites in either sample. Fig. \[classification\_ELG\] b) separates the *ugr* galaxies in the ‘Sy2/SFG’ area into ‘SFG’ or ‘Sy2’, and shows that the zCOSMOS galaxies from the ‘Sy2/SFG’ area split into both ‘Sy2’ and ‘SFG’, whereas the *ugr* ELGs in that area are mostly ‘SFG’. The *gri* observed sample is located in the Star Forming Galaxies (‘SFG’) area, whether one considers the selection on CFHT or on SDSS photometry. Overall, the selected ELGs, *ugr* or *gri*, both fall in the ‘SFG’ part of the classification. Discussion {#section:discussion} ========== Redshift identification rates in [*ugr*]{} and [*gri*]{} -------------------------------------------------------- We summarize the redshift measurement efficiency of the *gri* and *ugr* colour-selected galaxies presented in this paper in Tables \[objects\_W3\] and \[redshift\_efficiency\], and we compare the results with those of WiggleZ [@Drinkwater_2010], BOSS and VIPERS (the percentages for VIPERS are based on a preliminary subset including only $\sim$ 20 percent of the survey). The original VIPERS selection flag (J. Coupon and O. Ilbert, private communication) is defined to have colours compatible with an object at $z > 0.5$ if it has ($r-i \geq 0.7$ and $u-g\geq1.4$) or ($r-i \geq 0.5(u-g)$ and $u-g<1.4$) (Guzzo et al.
(2012), in preparation). The efficiencies in Table \[redshift\_efficiency\] show that better photometry, and thus more precise colours, yields a better efficiency in terms of obtaining objects in the targeted redshift range. It also shows that the colour selections proposed in this paper are competitive for building an LSS sample. To determine the photometric precision necessary to maintain the observed efficiencies, we degrade the photometry of the observed ELGs, then reselect them and recompute the efficiencies. Using photometry less precise than the CFHTLS by a factor of 2.5 in the errors (the ratio of the median values of the magnitude errors in bins of 0.1 in magnitude equals 2.5) significantly changes neither the efficiency nor the redshift distribution implied by the colour selection. This change also corresponds to loosening the colour criterion by 0.1 mag. For the *eBOSS* survey, photometry 2.5 times less precise than CFHTLS should therefore be sufficient to maintain a high targeting efficiency (for comparison, SDSS is 10 times less precise than CFHTLS); Fig. \[degradedPhotometry:fig\] shows the smearing of the galaxy positions in the colour-colour plane for degraded photometry.

  -------------- --------------- ----------- ---------
  selection      spectroscopic   object in   quasars
  scheme         redshift        z window

  *gri* SDSS     80              62          1

  *gri* CFHTLS   82              73          1

  *ugr* SDSS     80              32          10

  *ugr* CFHTLS   78              56          13

  WiggleZ        60              35          -

  BOSS           95              95          -

  VIPERS         80              70          -
  -------------- --------------- ----------- ---------

  : Redshift efficiency in percent. The second column ‘spectroscopic redshift’ quantifies the fraction of spectroscopic redshifts obtained with the selection. The third column ‘object in z window’ is the fraction of spectroscopic redshifts that lie in the range the survey is aiming at; it is the efficiency of the target selection. The redshift window for ELG selection is $z>0.6$.[]{data-label="redshift_efficiency"}

![U-R vs.
G-I coloured according to the photometric redshift. On the left, CFHT-LS photometry; on the right, CFHT-LS photometry degraded by a factor of 2.5. This comparison shows how the degradation of the photometry smears the clean separations between galaxy populations in redshift.[]{data-label="degradedPhotometry:fig"}](degradedPhotometry.pdf){width="88mm"} Measurement of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublet, single emission line spectra ------------------------------------------------------------------------------------------------------ For ground-based spectroscopic surveys observing ELGs with $1<z<1.7$, the only emission line remaining in the spectrum to assign the spectroscopic redshift is the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublet. For the redshift to be certain, the doublet must be split ([*i.e.*]{}, we do not want the target to be classified as a ‘single emission line’ ELG). Fig. \[OiiRedshifts:Fig\] shows a subsample of the observed bright *ugr* ELGs where the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublets are well resolved. ![image](OiiResolution.pdf){width="180mm"} We can circumvent the ‘single emission line’ ELG issue (Fig. \[sinlgeEmLowContiRedshift\]) by increasing the resolution of the spectrograph. This modification would enable a better splitting of $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$, and would increase the room available to observe the doublet by rendering sky lines ‘thinner’. The sky acts as an observational window and prevents some narrow redshift ranges from being sampled by the spectrograph; see Fig. \[OII\_detection\_limit\]. Increasing the resolution dilutes the signal, and thus the exposure time has to be increased to properly reconstruct the doublet above the mean sky level. We performed a simulation of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission line fit to quantify by what amount the resolution must be increased to have no ‘single emission line’ ELGs.
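The idea behind such a simulation can be sketched as follows. This toy version (assumed wavelength sampling and noise level, and a moment-matched single-Gaussian model fitted by linear least squares) only illustrates how the one- and two-Gaussian $\chi^2$ separate with resolution; it is not the actual fitting code used for the result quoted below.

```python
import numpy as np

RNG = np.random.default_rng(1)
LAM = np.arange(7430.0, 7480.0, 0.7)    # wavelength grid, BOSS-like sampling (assumed)
C1, C2 = 7452.1, 7457.6                 # [OII] 3726/3729 components at z = 1 (approx.)

def gauss(lam, mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def chi2_ratio(resolution, noise=0.1):
    """chi^2 of a single-Gaussian fit over that of a two-Gaussian fit
    to a simulated [OII] doublet observed at spectral resolution R."""
    sigma = 0.5 * (C1 + C2) / resolution / 2.355        # FWHM = lambda / R
    spec = gauss(LAM, C1, sigma) + gauss(LAM, C2, sigma)
    spec = spec + noise * RNG.standard_normal(LAM.size)
    # amplitudes enter linearly -> ordinary least squares for each model
    blend_sigma = np.hypot(sigma, 0.5 * (C2 - C1))      # moment-matched single line
    one = gauss(LAM, 0.5 * (C1 + C2), blend_sigma)[:, None]
    two = np.column_stack([gauss(LAM, C1, sigma), gauss(LAM, C2, sigma)])
    def chi2(design):
        coef, *_ = np.linalg.lstsq(design, spec, rcond=None)
        return np.sum((spec - design @ coef) ** 2)
    return chi2(one) / chi2(two)

print(chi2_ratio(1000), chi2_ratio(4000))  # near 1 when unresolved, well above 1 when resolved
```

At low resolution the two models fit the blended profile comparably; once the components are resolved, the single-Gaussian fit leaves a large residual, which is the behaviour quantified in the simulation.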
We fit one or two Gaussians on a doublet with a total flux of $10^{-16} \mathrm{erg\,s^{-1}\,cm^{-2}}$ (lowest ‘single emission line’ flux observed) contaminated by a noise of $3 \times 10^{-17} \mathrm{erg\,s^{-1}\,cm^{-2}}$ (typical BOSS dark sky). The $\chi^{2}$ of the two fits are equal at low resolution and become disjoint in favor of the two-Gaussian fit for a resolution above 3000 at $7454.2\AA$ ([*i.e.*]{} $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ at redshift 1). Such an increase in resolution could help assign proper redshifts to ‘single emission line’ ELGs. How and why redshifts went wrong -------------------------------- The difference in redshift measurement efficiency between the SDSS and CFHT-LS colour selections is mainly due to the difference in photometric depth. Using calibrations made by @Regnault_2009, it is possible to translate the colour selection criteria from CFHT-LS magnitudes to SDSS magnitudes. The colour difference can be as large as 1 magnitude, as the SDSS magnitude cut is close to the detection limit of the SDSS survey; see Fig. \[griComparison\], where SDSS *gri* colour-selected galaxies are represented with their CFHTLS magnitudes. ![*gri* selection based on colours from SDSS (black box) represented on CFHTLS magnitudes. The scatter is quite large: about half the targets would not have been selected if we had used CFHTLS photometry. The ‘wanted’ objects are galaxies at $z>0.6$ or quasars; the ‘unwanted’ objects are the rest. []{data-label="griComparison"}](griComparison_both.pdf){width="80mm"} How to improve ELG selection for future surveys ----------------------------------------------- We suggest a few ways to increase the redshift measurement efficiency and reach the requirements set in the second section. For the *ugr* selection: lowering the *u-g* cut to 0.3 diminishes the contamination by low-redshift galaxies. Additional low-redshift galaxies can be removed from the selection through an inspection of the images.
Some of the low-redshift galaxies are quite extended, and one could mistake a high-redshift merger for an extended low-redshift galaxy. Visual inspection reduces the low-redshift share from 9 percent to 4 percent. The compact and extended selection on the CFHT data is very efficient at identifying quasars. There is also room for improving the spectroscopic redshift determination and thus re-classifying ‘single emission line’ galaxies: they represent a 12 percent share, among which 10 percent are at $z>0.6$. It seems reasonable to assume an efficiency improvement from 46 percent ELG($z>0.6$) + 14 percent quasar to 61 percent ELG($z>0.6$) + 14 percent quasar, i.e. a total efficiency of $\sim75$ percent. For the *gri* selection: improving the spectroscopic redshift determination pipeline can gain up to 5 percent efficiency, thus increasing the share of ELG($z>0.6$) from 73 to 78 percent. We have also optimized target selections for BAO sampling density using the four bands *ugri*. We find that the optimum selections have a redshift distribution close to the smooth combination of the *gri* and *ugr* selections discussed here; see Fig. \[ugriSelections\]. Conclusion ========== We present an efficient emission-line galaxy selection that can provide a sample from which one can measure the BAO feature in the 2-point correlation function at $z>0.6$. With the photometry available today, we can plan for a BAO measurement to redshift 1 with the BOSS spectrograph. A representative set of photometric surveys that might be available for target selection in the near future on more than 2,000 square degrees is: - The Kilo Degree Survey (KIDS)[^11] aims at observing 1500 square degrees in the *ugri* bands with $3\sigma$ depths of 24.8, 25.4, 25.2, 24.2 using the VST. - The South Galactic Cap U-band Sky Survey[^12] (SCUSS) aims at a $5 \sigma$ limiting magnitude of $23.0$. - The Dark Energy Survey (DES) aims at observing 5,000 square degrees in *griz* bands with 10 $\sigma$ depths of 24.6, 24.1, 24.3, 23.9.
This survey does not include the *u* band [@Abbott_2005; @2008MNRAS.386.1219B]. - The Large Synoptic Survey Telescope (LSST) [@Ivezic_2008] plans to observe 20,000 square degrees in *ugrizy* bands with 5 $\sigma$ depths of 26.1, 27.4, 27.5, 26.8, 26.1, 24.9. Using such deeper photometric surveys and improved pipelines, it should be possible to probe BAO to redshift $z=1.2$ in the next 6 years, [*e.g.*]{} with the *eBOSS* experiment, and to $z=1.7$ in the next 10 years, [*e.g.*]{} with the PFS-SuMIRE or *BigBOSS* experiments. Acknowledgements {#acknowledgements .unnumbered} ================ Johan Comparat especially thanks Carlo Schimd and Olivier Ilbert for insightful discussions about this observational program and its interpretation. We thank the SDSS-III/BOSS collaboration for granting us this ancillary program. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l’Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The BOSS French Participation Group is supported by Agence Nationale de la Recherche under grant ANR-08-BLAN-0222. ![Photometric redshift distributions obtained using the *ugri* bands. The dashed black lines are the low and high density goals mentioned in Section \[section:ELGs\_BAO\], $\bar{n}=10^{-4}$ and $3 \times10^{-4}\; h^3\mathrm{Mpc}^{-3}$. The dashed red line is the BOSS CMASS sample.
The solid blue line is the distribution enhanced by the *ugri* selection; it has a projected sky density of $\sim340$ deg$^{-2}$. The solid red line is the *gri* selection (projected sky density $\sim350$ deg$^{-2}$). The solid green line is the *ugr* selection (projected sky density $\sim400$ deg$^{-2}$). This shows the possibility of making a selection able to sample $[0.6,1.2]$ for a BAO experiment.[]{data-label="ugriSelections"}](allSelectionsCompared2.pdf){width="88mm"} Table of a subsample of observed galaxies at $z>0.6$ {#tble_appendix} ==================================================== \[gri\_table\] ![image](elgLoz.pdf){width="180mm"} \[Em\_loz\] ![image](elgMez.pdf){width="180mm"} \[Em\_midz\] ![image](elgHiz.pdf){width="180mm"} \[Em\_hiz\] ![image](qso.pdf){width="180mm"} \[qsos\] \[lastpage\] [^1]: http://sumire.ipmu.jp/en/ [^2]: http://sci.esa.int/euclid [^3]: http://cesam.oamp.fr/vvdsproject/ [^4]: http://deep.berkeley.edu/index.html [^5]: http://vipers.inaf.it/project.html [^6]: http://www.cfht.hawaii.edu/Science/CFHLS/ [^7]: http://cadcwww.dao.nrc.ca/megapipe/docs/filters.html [^8]: http://terapix.iap.fr/cplt/T0006-doc.pdf [^9]: http://lamwws.oamp.fr/cosmowiki/RealisticSpectroPhotCat [^10]: http://dr9.sdss3.org/ [^11]: http://kids.strw.leidenuniv.nl/ [^12]: http://batc.bao.ac.cn/Uband/
--- abstract: 'The radiation pressure induced coupling between an optical cavity field and a mechanical oscillator can create entanglement between them. In previous works this entanglement was treated as that of the quantum fluctuations of the cavity and mechanical modes around their classical mean values. Here we provide a fully quantum approach to optomechanical entanglement, which goes beyond the approximation of classical mean motion plus quantum fluctuation, and applies to arbitrary cavity drive. We illustrate the real-time evolution of optomechanical entanglement under drives of arbitrary detuning to show the existence of high, robust and stable entanglement in the blue detuned regime, and highlight the quantum noise effects that can cause entanglement sudden death and revival.' author: - Qing Lin - Bing He - 'R. Ghobadi' - Christoph Simon title: Fully Quantum Approach to Optomechanical Entanglement --- Introduction ============ The study of optomechanical systems (OMS) has undergone rapid development over recent years [@R1; @R2; @R3]. The quantum regime of OMS has been reached in experiments [@ex00; @ex01; @ex010; @ex02; @ex020; @ex03]. Entanglement is a particularly striking quantum feature. The coupling of the cavity field of an OMS to the mechanical oscillator under radiation pressure can lead to their entanglement. This mesoscopic or macroscopic entanglement holds both fundamental interest and potential for applications. Theoretically, an OMS is often approached via the expansion of its fluctuations about the mean values of the cavity and mechanical mode operators, where these mean values are determined by the classical equations of motion. This approximation of replacing a quantum system operator with the sum of a classical value and the accompanying quantum fluctuation has been widely applied to generic nonlinear quantum systems whose Heisenberg-Langevin equations are not analytically solvable [@semi].
Most previous studies of optomechanical entanglement (see, e.g. [@vitali; @pater; @vitali-07; @vitali-08; @h-p-08; @galve; @zou; @abdi; @G]) concern that of the fluctuations around the steady-state solution of the classical Langevin equations under continuous-wave (CW) drive. Some other works have considered the entanglement under periodic [@m-e-09; @m-e-12; @f-g] or pulsed drive [@pulse2]. A common feature of these treatments is that the linearized dynamics of the fluctuations is based on a specific classical mean motion as the background, and the entanglement of the fluctuations can be closely connected to the classical motion of the OMS [@d-e]. However, the classical motion of an OMS can be chaotic [@mulstability], so it is not always possible to quantify this entanglement of fluctuations [@d-e]. Very recently several quantum features of OMS have been studied in considerable detail. This research includes OMS dynamics under single photon drive [@p1; @p2; @p3; @bhe; @p4], control and generation of OMS quantum states [@s2; @s3; @s4; @s5; @s6; @s7], enhancement of OMS nonlinearity for quantum information processing [@n1; @n2; @n3; @n4] and other quantum properties of OMS [@q1; @q2]. These studies consider the quantum states associated with the cavity mode $\hat{a}$ and mechanical mode $\hat{b}$ themselves, as in Fig. 1, instead of those for their fluctuations. Starting from a separable quantum state of the cavity and mechanical mode, the optomechanical coupling can evolve it into an entangled quantum state. This less explored entanglement of the fully quantum OMS, which is independent of the classical motion, is the theme discussed below. Note that this type of entanglement was also recently discussed in a different approach [@et-new], which relies on numerical simulation of an approximate Fokker-Planck equation to find the entanglement signature and other properties. This paper is organized as follows. In Sec.
II, we discuss the dynamics of the OMS in the strong drive and weak coupling regime. In this regime the quantum states of an OMS remain Gaussian. The real-time evolution and quantum noise effect on the entanglement of such Gaussian states are studied with examples in Sec. III. We then present a rather detailed discussion of the difference between the entanglement considered here and that of the cavity and mechanical fluctuations in Sec. IV. The conclusions from our study are given in the final section. dynamics under strong drive and weak coupling ============================================= We consider an OMS driven by a pulsed drive with the central frequency $\omega_0$ and arbitrary frequency distribution $E(\omega-\omega_0)$. Its profile $E(t)e^{i\omega_0 t}$ in the time domain is related to $E(\omega-\omega_0)$ by the Fourier transform. The drive reduces to a CW one when $E(t)$ is constant. Without cavity and mechanical damping, one has the unitary evolution operator $U(t,0)=\exp\{-iH_0t\}\mathcal{T}\exp\{-i\int_0^t d\tau H_S(\tau)\}$ for the OMS, where $H_0=\omega_c \hat{a}^{\dagger}\hat{a}+\omega_m \hat{b}^{\dagger}\hat{b}$ ($\hbar \equiv 1$) describes the cavity and mechanical oscillation with their frequencies $\omega_c$ and $\omega_m$, respectively, and $$\begin{aligned} \vspace{-0.3cm} H_S(t)&=&-\sqrt{2}g\{\cos(\omega_m t) \hat{x}_m+\sin(\omega_m t)\hat{p}_m\}\hat{a}^\dagger \hat{a} \nonumber \\ &+&iE(t)(\hat{a}^\dagger e^{i\Delta_0 t}-\hat{a}e^{-i\Delta_0 t}) \label{HS}\end{aligned}$$ inside the time-ordered exponential is the system Hamiltonian in the interaction picture with respect to $H_0$, which is obtained by the transformation $H_S(t)=e^{iH_0t}H(t)e^{-iH_0t}$ on the Hamiltonian $H(t)=-g(\hat{b}+ \hat{b}^{\dagger})\hat{a}^{\dagger}\hat{a}+iE(t)(\hat{a}^{\dagger}e^{-i\omega_0t} -\hat{a}e^{i\omega_0t})$ of the OMS.
In the above equation, $g$ is the optomechanical coupling constant, and $\Delta_0=\omega_c-\omega_0$ is the detuning of the drive’s central frequency from the cavity frequency. The dimensionless mechanical coordinate and momentum operators are defined as $\hat{x}_{m}=(\hat{b}+\hat{b}^\dagger)/\sqrt{2}$ and $\hat{p}_{m}=-i(\hat{b}-\hat{b}^\dagger)/\sqrt{2}$, respectively. The cavity (mechanical) damping at the rate $\kappa$ ($\gamma_m$) can be described in terms of a linear coupling between the cavity (mechanical) mode and the stochastic Langevin noise operator $\hat{\xi}_c$ ($\hat{\xi}_m$) of the reservoir [@book]: $$\begin{aligned} H_D(t)=i\big(\sqrt{\kappa}\hat{a}^\dagger \hat{\xi}_c(t) +\sqrt{\gamma_m}\hat{b}^\dagger \hat{\xi}_m(t)\big )+H.c. \label{HD}\end{aligned}$$ The associated noises are assumed to be white, satisfying $\langle \hat{\xi}_l(t) \hat{\xi}^{\dagger}_l(\tau) \rangle_R=(n_{l}+1)\delta(t-\tau)$ ($l=c,m$) with the respective thermal-equilibrium quanta numbers $n_{l}$. This approximation is valid for the mechanical reservoir given the quality factor $\omega_m/\gamma_m\gg 1$ [@vitali]. Because the system-reservoir coupling in (\[HD\]) takes its form in the interaction picture with respect to the total self-oscillation Hamiltonian of both system and reservoir, it should be added into the time-ordered exponential $\mathcal{T} \exp\{-i\int_0^t d\tau H_S(\tau)\}$ in the interaction picture to construct the evolution operator $U_S(t,0)=\mathcal{T}e^{-i\int_0^t d\tau \big(H_S(\tau)+H_D(\tau)\big)}$ for the combination of the OMS and its associated reservoirs (its momentary action $U_S(t+dt,t)$ gives the exact Langevin equation and master equation of the OMS) [@book]. The development of the entanglement between the cavity and mechanical mode is closely connected to the dynamical evolution of these modes.
Their evolution under $U_{S}(t,0)$ involves three non-commutative processes: cavity drive, optomechanical coupling and dissipation, so it is impossible to solve the system dynamics directly from this joint evolution operator. Our method to reduce this complexity is to factorize the operator into factors corresponding to relatively tractable processes [@bhe]. Here we apply the technique to find a factorization that is suitable for studying the dynamically evolving Gaussian states. Our factorization is obtained as $$U_{S}(t,0)=U_{E}(t,0)U_{OM}(t,0)U_{K}(t,0)U_{D}(t,0),$$where $U_{D}(t,0)=\mathcal {T}\exp \{-i\int_{0}^{t}d\tau H_{D}(\tau )\}$ (see Appendix A for details). The effective Hamiltonian in the first operator $U_{E}(t,0)=\mathcal{T}\exp \{-i\int_{0}^{t}d\tau \tilde{H}_{E}(\tau )\}$ for the pure cavity drive process takes the form $$\tilde{H}_{E}(\tau )=iE(\tau )e^{i\Delta _{0}\tau }\hat{A}^{\dagger }(t,\tau )+H.c,$$with $\hat{A}(t,\tau )=e^{-\frac{\kappa }{2}(t-\tau )}\hat{a}+\hat{n}_{c}(t,\tau )$ being the sum of the decayed cavity mode operator and the colored cavity noise operator $\hat{n}_{c}(t,\tau )=\sqrt{\kappa }\int_{\tau }^{t}d\tau ^{\prime }e^{-\kappa (\tau ^{\prime }-\tau )/2}\hat{\xi}_{c}(\tau ^{\prime })$. The third evolution operator is $U_{K}(t,0)=\mathcal{T}\exp \{ig\int_{0}^{t}d\tau \hat{K}_{m}(t,\tau )\hat{A}^{\dagger }\hat{A}(t,\tau )\}$, where $\hat{K}_{m}(t,\tau )=\cos (\omega _{m}\tau )\hat{X}_{m}(t,\tau )+\sin (\omega _{m}\tau )\hat{P}_{m}(t,\tau )$ is a linear combination of the mechanical operators $\hat{X}_{m}(t,\tau )=\hat{B}(t,\tau )+\hat{B}^{\dagger }(t,\tau )$ and $\hat{P}_{m}(t,\tau )=-i\hat{B}(t,\tau )+i\hat{B}^{\dagger }(t,\tau )$ from $\hat{B}(t,\tau )=e^{-\frac{\gamma _{m}}{2}(t-\tau )}\hat{b}+\hat{n}_{m}(t,\tau )$ and $\hat{n}_{m}(t,\tau )=\sqrt{\gamma _{m}}\int_{\tau }^{t}d\tau ^{\prime }e^{-\gamma _{m}(\tau ^{\prime }-\tau )/2}\hat{\xi}_{m}(\tau ^{\prime })$.
To the first order of the optomechanical coupling constant $g$, the effective Hamiltonian in the process $U_{OM}(t,0)=\mathcal{T}\exp \{-i\int_{0}^{t}d\tau \tilde{H}_{OM}(\tau )\}$ of optomechanical coupling is $$\begin{aligned} \tilde{H}_{OM}(\tau ) &=&g\hat{K}_{m}(t,\tau )\big(\hat{A}^{\dagger }(t,\tau )D(\tau )+\hat{A}(t,\tau )D^{\ast }(\tau ) \nonumber \\ &+&|D(\tau )|^{2}\big), \label{LOM}\end{aligned}$$where $$\begin{aligned} D(\tau ) &=&e^{-\frac{\kappa }{2}(t-\tau )}\int_{0}^{\tau }dt^{\prime }E(t^{\prime })e^{i\Delta _{0}t^{\prime }}e^{-\frac{\kappa }{2}(t-t^{\prime })} \nonumber \\ &+&\int_{0}^{\tau }dt^{\prime }[\hat{n}_{c}(t,t^{\prime }),\hat{n}_{c}^{\dagger }(t,\tau )]E(t^{\prime })e^{i\Delta _{0}t^{\prime }}.\end{aligned}$$ Under the effective Hamiltonian in (\[LOM\]), the cavity and mechanical modes evolve according to the following differential equations: $$\begin{aligned} &&-i\frac{d\hat{a}}{d\tau }=ge^{-(\kappa +\gamma _{m})(t-\tau )/2}D(\tau )(e^{-i\omega _{m}\tau }\hat{b}+e^{i\omega _{m}\tau }\hat{b}^{\dagger }) \nonumber \\ &+&ge^{-\kappa (t-\tau )/2}D(\tau )\cos (\omega _{m}\tau )(\hat{n}_{m}(t,\tau )+\hat{n}_{m}^{\dagger }(t,\tau )) \nonumber \\ &+&ge^{-\kappa (t-\tau )/2}D(\tau )\sin (\omega _{m}\tau )(i\hat{n}_{m}(t,\tau )-i\hat{n}_{m}^{\dagger }(t,\tau )), \nonumber \\ &&-i\frac{d\hat{b}}{d\tau }=ge^{-(\kappa +\gamma _{m})(t-\tau )/2}e^{i\omega _{m}\tau }(D^{\ast }(\tau )\hat{a}+D(\tau )\hat{a}^{\dagger }) \nonumber \\ &+&ge^{i\omega _{m}\tau -\frac{\gamma _{m}}{2}(t-\tau )}\big(\hat{n}_{c}(t,\tau )D^{\ast }(\tau )+\hat{n}_{c}^{\dagger }(t,\tau )D(\tau )\big) \nonumber \\ &+&ge^{i\omega _{m}\tau -\frac{\gamma _{m}}{2}(t-\tau )}|D(\tau )|^{2}.
\label{om}\end{aligned}$$The $\hat{a}$ ($\hat{b}$) terms on the right side of (\[om\]) are due to the beam-splitter (BS) action in the quadratic Hamiltonian (\[LOM\]), and the $\hat{a}^{\dagger }$ ($\hat{b}^{\dagger }$) terms reflect the coexisting squeezing (SQ) action. Next we start with an initial OMS state $\rho (0)$ in thermal equilibrium with the environment, i.e. $\rho (0)$ is a Gaussian state given by the product of a cavity vacuum state and a finite-temperature mechanical thermal state. This initial state becomes entangled under the optomechanical coupling. Its evolution can be studied by acting successively with each factor in the factorized form $U_{E}(t,0)U_{OM}(t,0)U_{K}(t,0)U_{D}(t,0)$ of the joint evolution operator $U_{S}(t,0)$ on the total initial state $\chi (0)=\rho (0)R(0)$, in which $R(0)$ denotes the reservoir state in thermal equilibrium with $\rho (0)$. One has $U_{D}(t,0)\chi (0)U_{D}^{\dagger }(t,0)=\chi (0)$ since, under thermal equilibrium, the system-reservoir coupling in (\[HD\]) does not change the state $\rho (0)$ (see Appendix B for details), and $U_{K}(t,0)$ also keeps $\chi (0)$ invariant because $\hat{A}(t,\tau )|0\rangle _{C}=0$ for the combined initial vacuum state $|0\rangle _{C}$ of the cavity and its zero-temperature reservoir.
Thus the expectation values of system operators $\hat{O}(t)$ reduce to the following trace over system and reservoir degrees of freedom (see Appendix B for details): $$\begin{aligned} \langle \hat{O}(t)\rangle &=&\mbox{Tr}_{S\otimes R}\big\{U_{OM}^{\dagger }(t,0)U_{E}^{\dagger }(t,0)\hat{O}U_{E}(t,0)U_{OM}(t,0) \nonumber \\ &\times &\chi (0)\big\}.\vspace{-0.3cm} \label{main}\end{aligned}$$In the weak coupling regime, where the Hamiltonian of $U_{OM}(t,0)$ takes the form in (\[LOM\]), the evolving OMS remains in a Gaussian state, because the state $\mbox{Tr}_R\{U_E(t,0)U_{OM}(t,0)\chi(0)U^\dagger_{OM}(t,0)U_E^\dagger(t,0)\}$ of the evolved OMS is determined only by the quadratic Hamiltonian in $U_{OM}(t,0)$ and the displacement Hamiltonian in $U_{E}(t,0)$. OMS entanglement ================ Entanglement evolution under CW drive ------------------------------------- The entanglement of the evolved Gaussian states can be quantified by the logarithmic negativity $E_{\mathcal{N}}$ [@adesso]. One should consider the correlation matrix (CM) with the elements $$\hat{V}_{ij}(t)=1/2\langle \hat{u}_{i}\hat{u}_{j}+\hat{u}_{j}\hat{u}_{i}\rangle -\langle \hat{u}_{i}\rangle \langle \hat{u}_{j}\rangle ,$$where $\hat{\vec{u}}=(\hat{x}_{c}(t),\hat{p}_{c}(t),\hat{x}_{m}(t),\hat{p}_{m}(t))^{T}$, for the calculation of $E_{\mathcal{N}}$ (see Appendix C for details). Each entry of the CM can be calculated following (\[main\]) with $\hat{O}=\hat{u}_{i}\hat{u}_{j}+\hat{u}_{j}\hat{u}_{i}$, etc. We first illustrate the real-time evolution of OMS entanglement under CW drives of different detunings. The first example we present in Fig. 2 is the entanglement evolution under blue detuned CW drives. The entanglement values measured by $E_{\mathcal{N}}$ become stable with time and, at the SQ resonant point $\Delta_0=-\omega_m$, the steady entanglement reaches the maximum.
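For a two-mode Gaussian state in this quadrature convention (vacuum variance $1/2$), $E_{\mathcal{N}}$ follows from the smallest symplectic eigenvalue $\tilde{\nu}_-$ of the partially transposed state, $E_{\mathcal{N}}=\max(0,-\ln 2\tilde{\nu}_-)$ [@adesso]. A minimal numerical sketch of this standard formula:

```python
import numpy as np

def log_neg(V):
    """Logarithmic negativity E_N of a two-mode Gaussian state from its 4x4
    correlation matrix V (convention: vacuum variance 1/2 per quadrature)."""
    A, B, C = V[:2, :2], V[2:, 2:], V[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    disc = max(delta ** 2 - 4.0 * np.linalg.det(V), 0.0)   # clip tiny round-off
    nu_minus = np.sqrt((delta - np.sqrt(disc)) / 2.0)      # smallest symplectic eigenvalue
    return max(0.0, -np.log(2.0 * nu_minus))

# sanity check: a two-mode squeezed vacuum with squeezing r has E_N = 2r
r = 0.5
c, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
V = np.block([[c * np.eye(2), s * np.diag([1.0, -1.0])],
              [s * np.diag([1.0, -1.0]), c * np.eye(2)]])
print(log_neg(V))   # ~= 2r = 1.0
```

The same function applies to any CM built from the moments in (\[main\]); the two-mode squeezed vacuum here is only a self-contained test case.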
Unlike the stationary entanglement between the fluctuations $\delta\hat{a}$ and $\delta\hat{b}$ under a SQ resonant drive, which is upper bounded by $E_{\mathcal{N}}=\ln 2\approx 0.693$ due to the limitation of the classical steady state conditions (see Eq. (\[condi\]) below) [@vitali-08], the evolved entanglement between the cavity mode $\hat{a}$ and mechanical mode $\hat{b}$ themselves can be well beyond this limit (see Fig. 2(b)). Compared with the blue detuned regime, the entanglement in the red detuned regime shown in Fig. 3 is lower. This reflects the difference between the BS action and the SQ action in creating optomechanical entanglement. Quantum noise effect -------------------- The exact degree of entanglement is determined by two competing factors: the direct BS and SQ action on the initial quantum state $\rho(0)$ of the OMS, and the noise drives, which depend on the drive detuning and intensity. Given a CW drive, the noise drive terms in (\[om\]) are magnified by functions with the modulus $|D(\tau)|=E/\sqrt{0.25\kappa^2+\Delta_0^2}$, indicating a more significant effect at small detuning $\Delta_0$ or with stronger drive intensity $E$. In what follows, we illustrate the noise effect as a function of time and of different system parameters. First, the entanglement for some values of detuning in Figs. 2 and 3 dies at a finite time. The phenomenon of entanglement being killed by noise in this way is known as entanglement sudden death (ESD) [@ESD; @ESD2]. The system evolution according to (\[om\]) provides a model in which the ESD of continuous variable states is caused by the colored noises ($\hat{n}_c(t,\tau)$, $\hat{n}_m(t,\tau)$ and their conjugates on the right side of (\[om\])) rather than by the white noises in many other examples (see the references in [@ESD2]). In this situation the noise effect can be so significant that this type of ESD happens even though the optomechanical coupling exists at all times.
Interestingly, the entanglement under some drives, e.g. $\Delta_0=1.5\omega_m$ in Fig. 3(b), can also revive from time to time during the evolution. Fig. 4 shows the magnitude of the noise correction to optomechanical entanglement in the system parameter space. Given the same drive intensities, the relations between the entanglement and the drive detuning after sufficiently long interaction times are shown in Figs. 4(a)-4(b). With the increase of cavity drive intensity, the entanglement in a more extended detuning range around the BS resonant point $\Delta_0=\omega_m$ is eliminated by the quantum noises. The overall tendency of the entanglement change with the drive intensity for various drive detuning values is described in Figs. 4(c)-4(d). The plots in these figures show a competition between the effective coupling $gD(t)$ and the noise drives \[see the respective terms in (\[om\])\] in affecting the degree of entanglement. The entanglement reaches its maximum at a certain drive intensity $E$ determined by the system parameters, instead of monotonically increasing with $E$, which enhances the effective optomechanical coupling. Despite the existence of the noises, the entanglement in the blue detuned regime can be high. The SQ generated entanglement is also rather robust against temperature; see the comparison of Figs. 4(a) and 4(b). Entanglement evolution under pulsed drive ----------------------------------------- Finally, in Fig. 5, we provide an example of entanglement evolution for an OMS driven by a pulse. Pulsed optomechanics is a newly developed research field [@pulse2; @Vanner1; @Vanner2]. Here we use Gaussian pulses with the width $\omega _{m}$. Due to the contribution from a spectrum of frequencies, the entanglement for drives of different central frequency detuning evolves similarly. Another noticeable feature is that, given the same system parameters, the entanglement generated under a pulse can last even longer than that under CW drives.
This can be explained by the contribution from the frequency components outside the regime in which the noise effect quickly destroys the corresponding entanglement. Entanglement under a pulsed drive was also discussed in the approach based on a classical mean motion background [@pulse2]. As in the CW cases, our results, which are independent of any classical background, follow from dynamics involving a significantly different quantum noise effect.

Difference from Entanglement of Fluctuations
============================================

We are concerned with the regime of strong drive ($E/\kappa \gg 1$) and weak optomechanical coupling ($g/\kappa \ll 1$) in the study of Gaussian state entanglement for an OMS. Starting from our initial OMS quantum state (the cavity in a vacuum state and the mechanical oscillator in a thermal state), the entanglement of the evolved quantum state develops as the optomechanical coupling in Fig. 1 sets in, with the cavity field being built up by the external drive $E(t)$. Meanwhile, the generated entanglement is also weakened or even destroyed by the noise drives. The entanglement in the same regime was well studied in the fluctuation expansion approach [@vitali; @pater; @vitali-07; @vitali-08; @h-p-08; @galve; @zou; @abdi; @G].
The steady entanglement of the cavity and mechanical fluctuations is based on the classical steady state of the OMS, the existence of which is determined by the Routh-Hurwitz criterion [@RH] in terms of the following inequalities [@vitali]: $$\begin{aligned} s_{1}&=&2\gamma _{m}\kappa \{[\kappa ^{2}+\left( \omega _{m}-\Delta \right) ^{2}][\kappa ^{2}+\left( \omega _{m}+\Delta \right) ^{2}]+\gamma _{m}[(\gamma _{m}+2\kappa ) \left( \kappa ^{2}+\Delta ^{2}\right) +2\kappa \omega _{m}^{2}]\}+\Delta \omega _{m}G^{2}(\gamma _{m}+2\kappa )^{2}>0,\nonumber\\ s_{2}&=&\omega _{m}\left( \kappa ^{2}+\Delta ^{2}\right) -G^{2}\Delta >0, \label{condi}\end{aligned}$$ with $G=\sqrt{2}g\alpha _{s}$ and $\Delta =\Delta _{0}-g^{2}|\alpha _{s}|^{2}/\omega _{m}$ expressed in terms of the cavity field amplitude $\alpha _{s}=E/(\kappa +i\Delta )$, the stationary solution of the Langevin equation. The workable regime for the fluctuation expansion approach is depicted with these conditions in Fig. 6. Both approaches work in the red detuned regime. Fig. 7, however, shows that even in this common regime the entanglement between the fluctuations can be very different from the OMS entanglement discussed in this paper. As we mentioned at the beginning, the fluctuation expansion approach approximates the OMS operators by the sum of their mean values, which follow classical dynamics without noise drives, and the fluctuations, which evolve according to quantum mechanics. The system operators in the system-reservoir coupling of (\[HD\]) are then replaced by their fluctuations, so that only the delta-function correlated Langevin noises $\hat{\xi}_{c}$ and $\hat{\xi}_{m}$, independent of the cavity drive detuning and intensity, are relevant to the linearized dynamics of the fluctuations and their entanglement.
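The two inequalities of Eq. (\[condi\]) can be packaged into a small stability check; the sketch below uses our own function name and takes $G$ and $\Delta$ as inputs rather than solving for $\alpha_s$ self-consistently:

```python
def steady_state_stable(kappa, gamma_m, omega_m, Delta, G):
    """Routh-Hurwitz conditions s1 > 0 and s2 > 0 of Eq. (condi) for the
    existence of a stable classical steady state (sketch, not the paper's code)."""
    s1 = (2 * gamma_m * kappa * (
              (kappa**2 + (omega_m - Delta)**2) * (kappa**2 + (omega_m + Delta)**2)
              + gamma_m * ((gamma_m + 2 * kappa) * (kappa**2 + Delta**2)
                           + 2 * kappa * omega_m**2))
          + Delta * omega_m * G**2 * (gamma_m + 2 * kappa)**2)
    s2 = omega_m * (kappa**2 + Delta**2) - G**2 * Delta
    return s1 > 0 and s2 > 0

# Weak effective coupling on the red detuned side (Delta > 0): stable
assert steady_state_stable(kappa=1.0, gamma_m=0.01, omega_m=1.0, Delta=1.0, G=0.1)
# Strong coupling violates s2 > 0: the classical steady state ceases to exist
assert not steady_state_stable(kappa=1.0, gamma_m=0.01, omega_m=1.0, Delta=1.0, G=2.0)
```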
Instead, in our fully quantum approach, the linearized dynamics for the system operators in the weak coupling regime involves the magnified noise drives due to the cubic term $-g(\hat{b}+\hat{b}^{\dagger })\hat{a}^{\dagger }\hat{a}$ of the original OMS Hamiltonian; see Eq. (\[om\]). The difference between the quantum noise effects can be tested experimentally by measuring the cavity fluctuation amplitude. As illustrated in Fig. 8, the cavity fluctuations found in the different approaches deviate drastically as the drive intensity increases. This phenomenon also indicates that the quantum states are distinct due to the different linearized dynamics. The OMS quantum states considered here and those of the fluctuations around the classical steady states can be seen to differ by comparing their CMs, which are in one-to-one correspondence with the respective Gaussian states. The entanglement of the evolved states of the fully quantum OMS can thus differ significantly from that previously considered in the fluctuation expansion approach.

Conclusion
==========

In conclusion, we have studied the dynamically generated entanglement of quantum OMSs that are initially in thermal equilibrium with their environment. The significance of our research is twofold. First, one sees that high and robust entanglement for fully quantum OMSs can be generated with a blue detuned drive. In contrast, the previous fluctuation expansion approximation, which works under the classical steady state condition, specifies an upper bound for the entanglement in the blue detuned regime and only focuses on the steady entanglement under red detuned drives. This finding, involving the different implementations, is important to related experimental studies on OMSs entering the quantum regime.
Second, our fully quantum dynamical approach shows that the noise effect on a quantum OMS drastically differs from that affecting the cavity and mechanical fluctuations considered in the previous approach, though the system dynamics is linearized for weak optomechanical coupling in both approaches. In the regime where the magnified noise effect is significant, complicated evolution patterns such as entanglement sudden death and revival exist for the macroscopic entanglement considered here. Such non-trivial quantum noise effects can also exist in other quantum nonlinear systems.

B.H. thanks M. Hillery for helpful conversations. This work was supported by AITF and NSERC. Q. L. acknowledges the support by NSFC (No. 11005040), NCETFJ (No. 2012FJ-NCET-ZR04), PPYMTSTRHU (No. ZQN-PY113) and CSC.

Factorization of Joint System-Reservoir Evolution Operator
==========================================================

Our discussion is based on the following two factorizations of a unitary evolution operator $U(t,0)=\mathcal{T}\exp \{-i\int_0^t d\tau \big(H_1(\tau) +H_2 (\tau)\big)\}$ involving two processes described by $H_1(t)$ and $H_2(t)$, respectively: $$\begin{aligned} \mathcal{T}e^{-i\int_0^t d\tau (H_1(\tau) +H_2 (\tau))} = \mathcal{T}e^{-i\int_0^t d\tau H_1(\tau) }~\mathcal{T}e^{-i\int_0^t d\tau V^{\dagger}_1(\tau,0)H_2(\tau)V_1(\tau,0)}, \label{a}\end{aligned}$$ and $$\begin{aligned} \mathcal{T}e^{-i\int_0^t d\tau (H_1(\tau) +H_2 (\tau))} =\mathcal{T}e^{-i\int_0^t d\tau V_2(t,\tau )H_1(\tau)V^{\dagger}_2(t,\tau) }~\mathcal{T}e^{-i\int_0^t d\tau H_2(\tau)}, \label{b}\end{aligned}$$ where $V_k(t,\tau)=\mathcal{T}\exp \{-i\int_\tau^t d\tau' H_k(\tau')\}$ for $k=1,2$. The operator $U(t,0)$ is the solution to the differential equation $d U/dt=-i\big(H_1(t)+H_2 (t)\big)U(t)$, while $V_1(t,0)=\mathcal{T}\exp \{-i\int_0^t d\tau H_1(\tau) \}$ is the solution to $d V_1/dt=-i H_1(t)V_1(t)$.
The initial condition for these differential equations is $U(0,0)=V_1(0,0)=I$, the identity operator. Differentiating $W(t,0)=V_1^{\dagger}(t,0)U(t,0)$ with respect to $t$ gives $$\begin{aligned} \frac{d W}{dt}&=&-V_1^{\dagger}\frac{d V_1}{dt}V_1^{\dagger}U+V_1^{\dagger}\frac{d U}{dt}=iV_1^{\dagger}H_1V_1V_1^{\dagger}U-iV_1^{\dagger}(H_1+H_2)U=-iV_1^{\dagger}H_2 V_1 W.\end{aligned}$$ The solution to this differential equation is $W(t,0)=\mathcal{T}\exp \{-i\int_0^t d\tau V_1^{\dagger}(\tau,0)H_2(\tau)V_1(\tau,0)\}$, thus proving the factorization in (\[a\]). By exchanging $H_1(t)$ and $H_2(t)$ in (\[a\]), one has the factorization of the operator $U(t,0)$ as $$V_2(t,0)~\mathcal{T}e^{-i\int_0^t d\tau V^{\dagger}_2(\tau,0)H_1(\tau)V_2(\tau,0)}=V_2(t,0)~\mathcal{T}e^{-i\int_0^t d\tau V^{\dagger}_2(\tau,0)H_1(\tau)V_2(\tau,0)}V^\dagger_2(t,0)V_2(t,0).$$ Because $V_2(t,0)$ is a unitary operator, one can rewrite the right side of the above as $\mathcal{T}e^{-i\int_0^t d\tau V_2(t,\tau)H_1(\tau)V^\dagger_2(t,\tau)}V_2(t,0)$, giving the form in Eq. (\[b\]). Here we have used the relation $V_2(t,0)V^{\dagger}_2(\tau,0)=V_2(t,\tau)$. We first apply Eq. (\[b\]) to factorize $U_D(t,0)=\mathcal {T}\exp\{-i\int_{0}^t d\tau H_D(\tau)\}$ out of the system-reservoir evolution operator $U_S(t,0)=\mathcal{T}\exp\{-i\int_0^t d\tau \big(H_S(\tau)+H_D(\tau)\big)\}$, where $H_S(\tau)$ and $H_D(\tau)$ are given in Eqs. (1) and (2) of the main text, respectively. In this way one has $$\begin{aligned} U_S(t,0)=\mathcal{T}\exp\{-i\int_0^t d\tau U_D(t,\tau)H_S(\tau)U_D^\dagger(t,\tau)\} ~\mathcal {T}\exp\{-i\int_{0}^t d\tau H_D(\tau)\}.
\label{one}\end{aligned}$$ The cavity mode operator $\hat{a}$ in $H_S(\tau)$ is transformed to $$\begin{aligned} U_D(t,\tau)\hat{a}U_D^\dagger(t,\tau)=e^{-\frac{\kappa}{2}(t-\tau)}\hat{a}+\hat{n}_c(t,\tau)\equiv \hat{A}(t,\tau)\end{aligned}$$ in $U_D(t,\tau)H_S(\tau)U_D^\dagger(t,\tau)$, and the mechanical mode operator is transformed to $$\begin{aligned} U_D(t,\tau)\hat{b}U_D^\dagger(t,\tau)=e^{-\frac{\gamma_m}{2}(t-\tau)}\hat{b}+\hat{n}_m(t,\tau)\equiv\hat{B}(t,\tau),\end{aligned}$$ where $\hat{n}_c(t,\tau)=\sqrt{\kappa}\int_{\tau}^t d\tau'e^{-\kappa(\tau'-\tau)/2}\hat{\xi}_c(\tau')$ and $\hat{n}_m(t,\tau)=\sqrt{\gamma_m}\int_{\tau}^t d\tau'e^{-\gamma_m(\tau'-\tau)/2}\hat{\xi}_m(\tau')$ [@bhe]. The transformed operators satisfy the equal-time commutation relations $[\hat{A}(t,\tau), \hat{A}^\dagger(t,\tau)]=[\hat{B}(t,\tau),\hat{B}^\dagger(t,\tau)]=1$. Then the Hamiltonian in the first time-ordered exponential of (\[one\]) becomes $$\begin{aligned} U_D(t,\tau)H_S(\tau)U_D^\dagger(t,\tau)=\big(iE(t)\hat{A}^\dagger(t,\tau) e^{i\Delta_0 t}-iE^\ast(t)\hat{A}(t,\tau)e^{-i\Delta_0 t}\big)-g\hat{K}_m(t,\tau)\hat{A}^\dagger\hat{A}(t,\tau), \label{ts}\end{aligned}$$ where $$\hat{K}_m(t,\tau)=\cos(\omega_m \tau) \big(\hat{B}(t,\tau)+\hat{B}^\dagger(t,\tau)\big)+\sin(\omega_m \tau)\big(-i\hat{B}(t,\tau)+i\hat{B}^\dagger(t,\tau)\big).$$ By using (\[a\]) we factorize out the drive Hamiltonian in (\[ts\]) as follows: $$\begin{aligned} &&\mathcal{T}\exp\{-i\int_0^t d\tau U_D(t,\tau)H_S(\tau)U_D^\dagger(t,\tau)\}\nonumber\\ &=& \mathcal{T}\exp\big\{-i\int_0^t d\tau\big(iE(t)\hat{A}^\dagger(t,\tau) e^{i\Delta_0 t}-iE^\ast(t)\hat{A}(t,\tau)e^{-i\Delta_0 t}\big)\big\}~\mathcal{T}\exp\big\{ig\int_0^t d\tau U_E^\dagger(\tau,0)\hat{K}_m(t,\tau)\hat{A}^\dagger\hat{A}(t,\tau)U_E(\tau,0)\big\}\nonumber\\ &=&\mathcal{T}\exp\big\{-i\int_0^t d\tau\big(iE(t)\hat{A}^\dagger(t,\tau) e^{i\Delta_0 t}-iE^\ast(t)\hat{A}(t,\tau)e^{-i\Delta_0 t}\big)\big\}\nonumber\\ &\times & \mathcal{T}\exp\big\{ig\int_0^t
d\tau \hat{K}_m(t,\tau)\big(\hat{A}^\dagger(t,\tau)+D^\ast(\tau)\big)\big(\hat{A}(t,\tau)+D(\tau)\big)\big\}, \label{two}\end{aligned}$$ where $U_{E}(\tau,0)=\mathcal{T}\exp\{\int_0^\tau dt' E(t')e^{i\Delta_0 t'}\hat{A}^\dagger(t,t')-H.c.\}$. In (\[two\]) the effect of $U_{E}(\tau,0)$ on the cavity operator $\hat{A}(t,\tau)$ is the displacement $$\begin{aligned} U_E^\dagger(\tau,0)\hat{A}(t,\tau)U_E(\tau,0)&=&\hat{A}(t,\tau)+e^{-\frac{\kappa}{2}(t-\tau)}\int_0^\tau dt' E(t')e^{i\Delta_0 t'}e^{-\frac{\kappa}{2}(t-t')} +\int_0^\tau dt' \Gamma_c(t',\tau)E(t')e^{i\Delta_0 t'}\nonumber\\ &\equiv & \hat{A}(t,\tau)+D(\tau), \label{displace}\end{aligned}$$ where $$\Gamma_c(t',\tau)=[\hat{n}_c(t,t'),\hat{n}_c^{\dagger}(t,\tau)]=e^{-\kappa(\tau-t')/2}-e^{-\kappa(t-\tau)/2} e^{-\kappa(t-t')/2}.$$ The next step is to factorize the second time-ordered exponential in (\[two\]) as follows: $$\begin{aligned} &&\mathcal{T}\exp\big\{ig\int_0^t d\tau \hat{K}_m(t,\tau)\big(\hat{A}^\dagger(t,\tau)+D^\ast(\tau)\big)\big(\hat{A}(t,\tau)+D(\tau)\big)\big\}\nonumber\\ &=&\mathcal{T}\exp\big\{ig\int_0^t d\tau\, U_K(t,\tau)\hat{K}_m(t,\tau)\big(\hat{A}^{\dagger}(t,\tau)D(\tau)+\hat{A}(t,\tau)D^{\ast} (\tau )+|D(\tau)|^2\big)U_K^\dagger(t,\tau)\big\}\nonumber\\ &\times &\mathcal{T}\exp\{ig\int_0^t d\tau \hat{K}_m(t,\tau)\hat{A}^{\dagger}\hat{A}(t,\tau)\}, \label{three}\end{aligned}$$ where $U_K(t,\tau)=\mathcal{T}\exp\{ig\int_\tau^t dt' \hat{K}_m(t,t')\hat{A}^{\dagger}\hat{A}(t,t')\}$.
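The factorization (\[a\]) used above is easy to verify numerically for a small, time-independent pair $H_1$, $H_2$, where the time-ordered exponential on the right reduces to an ordered product of short-time factors (a sketch with random $2\times 2$ Hermitian matrices, not tied to the specific OMS Hamiltonians):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(n):
    # Random Hermitian matrix
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H1, H2 = rand_herm(2), rand_herm(2)
t, steps = 1.0, 2000
dt = t / steps

# Left side of Eq. (a) for constant Hamiltonians: U(t,0) = exp(-i (H1+H2) t)
U = expm(-1j * (H1 + H2) * t)

# Right side: V1(t,0) * Texp{-i ∫ V1†(τ) H2 V1(τ) dτ}, the time-ordered
# exponential approximated as an ordered product (later times to the left)
W = np.eye(2, dtype=complex)
for k in range(steps):
    tau = (k + 0.5) * dt
    V1 = expm(-1j * H1 * tau)
    W = expm(-1j * (V1.conj().T @ H2 @ V1) * dt) @ W
V1t = expm(-1j * H1 * t)

assert np.allclose(U, V1t @ W, atol=1e-4)
```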
To first order in $g$, the operation $U_K(t,\tau)$ in the first time-ordered exponential of the above equation transforms the mechanical operator as $$\begin{aligned} U_K(t,\tau)\hat{K}_m(t,\tau)U_K^\dagger(t,\tau)&=&\hat{K}_m(t,\tau)-2g \int_\tau^t d\tau' e^{-\gamma_m(\tau'-\tau)/2} \sin \omega_m(\tau-\tau')\hat{A}^{\dagger}(t,\tau')\hat{A}(t,\tau')+\cdots\nonumber\\\end{aligned}$$ and the cavity operator $\hat{A}(t,\tau)$ as $$\begin{aligned} &&U_K(t,\tau)\hat{A}(t,\tau)U_K^\dagger (t,\tau) =\hat{A}(t,\tau)-ig\int_\tau^t dt' e^{-\kappa(t'-\tau)/2}\hat{K}_m(t,t')\hat{A}(t,t')+\cdots\end{aligned}$$ The neglected higher order terms in the above expansions are successively suppressed by a small factor of order $g(t-\tau)$, because all terms in the expansions contain only the operators $\hat{K}_m$ and $\hat{A}$, which are not magnified by the drive intensity $E(t)$. The dominant first order contribution therefore leads to the effective optomechanical coupling Hamiltonian $$\begin{aligned} \tilde{H}_{OM}(\tau)=-g \hat{K}_m(t,\tau) \big(D^\ast(\tau)\hat{A}(t,\tau)+D(\tau) \hat{A}^\dagger(t,\tau)+| D(\tau)|^2\big)\end{aligned}$$ for the first time-ordered exponential in (\[three\]), whose evolution operator is defined as $U_{OM}(t,0)=\mathcal{T}\exp\{-i\int_0^t d\tau \tilde{H}_{OM}(\tau)\}$. Now we have exactly factorized the joint evolution operator as $$\begin{aligned} U_S(t,0)=U_E(t,0)U_{OM}(t,0)U_K(t,0)U_D(t,0).
\label{fac}\end{aligned}$$

Expectation Value of System Operators
=====================================

We apply the factorization of the joint evolution operator in (\[fac\]) to find the expectation value of a system operator $\hat{O}$: $$\begin{aligned} \mbox{Tr}_S\{\hat{O}\rho(t)\}&=&\mbox{Tr}_S\big\{\hat{O}~\mbox{Tr}_R\{U_E(t,0)U_{OM}(t,0)U_K(t,0)U_D(t,0)\rho(0)R(0)U^\dagger_D(t,0)U^\dagger_K(t,0)U^\dagger_{OM}(t,0)U^\dagger_E(t,0)\}\big\}\nonumber\\ &=&\mbox{Tr}_{S\otimes R}\big\{U^\dagger_{OM}(t,0)U^\dagger_{E}(t,0)\hat{O}U_{E}(t,0)U_{OM}(t,0)\big(U_K(t,0)U_D(t,0)\rho(0)R(0)U^\dagger_D(t,0)U^\dagger_K(t,0)\big)\big\}. \label{exp}\end{aligned}$$ The action $U_K(t,0)U_D(t,0)\rho(0)R(0)U^\dagger_D(t,0)U^\dagger_K(t,0)$ is on the product of the initial system state $$\begin{aligned} \rho(0)=|0\rangle_c\langle 0|\otimes \sum_{n=0}^\infty \frac{n_{m}^n}{(1+n_{m})^{n+1}}|n\rangle_m\langle n|\equiv |0\rangle_c\langle 0|\otimes \rho_m \label{input}\end{aligned}$$ and the associated reservoir state $R(0)$, in thermal equilibrium with $\rho(0)$, where $n_m$ is the thermal phonon number at temperature $T$. We first look at $U_D(t,0)\chi(0)U^\dagger_D(t,0)$, where $\chi(0)=\rho(0)R(0)$ and $$U_D(t,0)=\mathcal{T}\exp\big\{\int_0^td\tau\big(\sqrt{\gamma_m}\hat{b}^\dagger \hat{\xi}_m(\tau)-\sqrt{\gamma_m}\hat{b} \hat{\xi}^\dagger_m(\tau)\big)\big\} ~\mathcal{T}\exp\big\{\int_0^td\tau\big(\sqrt{\kappa}\hat{a}^\dagger \hat{\xi}_c(\tau)-\sqrt{\kappa}\hat{a} \hat{\xi}^\dagger_c(\tau)\big)\big\}.$$ The second operator, describing the coupling of the cavity to its vacuum reservoir, does not change $\chi(0)$ because $$\begin{aligned} \big(\sqrt{\kappa}\hat{a}^\dagger \hat{\xi}_c(\tau)-\sqrt{\kappa}\hat{a} \hat{\xi}^\dagger_c(\tau)\big)|0\rangle_C=0 \label{vc}\end{aligned}$$ for the product state $|0\rangle_C$ of the cavity vacuum and its associated vacuum reservoir.
If the action of the first operator, which couples the mechanical mode to its reservoir, changed the joint initial state $\chi(0)$, the system state $$\tilde{\rho}(t)=\mbox{Tr}_R \big\{\mathcal{T}e^{\int_0^td\tau\{\sqrt{\gamma_m}\hat{b}^\dagger \hat{\xi}_m(\tau)-\sqrt{\gamma_m}\hat{b} \hat{\xi}^\dagger_m(\tau)\}} \chi(0)\mathcal{T}e^{-\int_0^td\tau \{\sqrt{\gamma_m}\hat{b}^\dagger \hat{\xi}_m(\tau)-\sqrt{\gamma_m}\hat{b} \hat{\xi}^\dagger_m(\tau)\}}\big\}$$ evolved under this coupling would differ from $\rho(0)$. The system quantum state $\tilde{\rho}(t)$ is the solution to the master equation $$\begin{aligned} \dot{\tilde{\rho}}&=&\gamma_m(n_{th}+1)\big\{\hat{b}\tilde{\rho}(t)\hat{b}^{\dagger}-\frac{1}{2}\tilde{\rho}(t)\hat{b}^{\dagger}\hat{b}-\frac{1}{2}\hat{b}^{\dagger}\hat{b} \tilde{\rho}(t)\big\} +\gamma_m n_{th}\big\{\hat{b}^\dagger\tilde{\rho}(t)\hat{b}-\frac{1}{2}\tilde{\rho}(t)\hat{b}\hat{b}^\dagger-\frac{1}{2}\hat{b}\hat{b}^\dagger \tilde{\rho}(t) \big\} \label{master}\end{aligned}$$ in Lindblad form [@book]. The initial state for the above master equation is $\tilde{\rho}(0)=\rho_m$, and $n_{th}$ is the thermal quantum number of the reservoir. Here we allow for possible non-equilibrium between system and reservoir, so that $n_{th}$ could differ from $n_m$ (in the main text we only consider the situation of thermal equilibrium). This master equation can be exactly solved by the super-operator technique [@a-m] as $$\begin{aligned} \tilde{\rho}(t)=\sum_{n=0}^\infty\frac{\big(n_{th}+(n_m-n_{th})e^{-\gamma_m t}\big)^n}{\big(1+n_{th}+(n_m-n_{th})e^{-\gamma_m t}\big)^{n+1}}|n\rangle_m\langle n|.\end{aligned}$$ If the system and reservoir are in thermal equilibrium, i.e. $n_{th}=n_m$, the above state remains $\rho_m$ at all times. Under this condition, therefore, the operation $U_{D}(t,0)$ leaves the joint initial state $\chi(0)$ invariant. Moreover, similar to (\[vc\]), one has $U_K(t,0)\chi(0)U_K^\dagger(t,0)=\chi(0)$.
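The structure of this solution is easy to check numerically; the sketch below uses our own function names and takes the mean phonon number to relax exponentially at the rate $\gamma_m$ implied by the Lindblad form above:

```python
import math

# Populations of the evolved state are thermal, with a mean phonon number
# relaxing from the initial system value n_m to the reservoir value n_th
# (illustrative sketch, not the paper's code).
def thermal_population(n, nbar):
    return nbar**n / (1 + nbar)**(n + 1)

def evolved_mean(n_m, n_th, gamma_m, t):
    return n_th + (n_m - n_th) * math.exp(-gamma_m * t)

# Thermal equilibrium (n_th = n_m): the mean, and hence the state, stays fixed
assert evolved_mean(2.0, 2.0, 0.1, 5.0) == 2.0

# Otherwise the populations remain normalized and relax to the reservoir value
nbar = evolved_mean(3.0, 0.5, 0.1, 100.0)
assert abs(nbar - 0.5) < 1e-3
assert abs(sum(thermal_population(n, nbar) for n in range(200)) - 1.0) < 1e-9
```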
Thus the system operator expectation value in (\[exp\]) reduces to the form in (7) of the main text.

Calculation of Entanglement Measured by Logarithmic Negativity
==============================================================

The entanglement of bipartite Gaussian states is quantified via the correlation matrix $$\begin{aligned} \hat{V}= \left( \begin{array} [c]{cc} \hat{A} & \hat{C} \\ \hat{C}^T & \hat{B} \end{array} \right), \label{corr-matrix}\end{aligned}$$ with the elements $\hat{V}_{ij}(t)=0.5\langle \hat{u}_i\hat{u}_j+\hat{u}_j\hat{u}_i\rangle-\langle \hat{u}_i\rangle\langle \hat{u}_j\rangle$, where $\hat{\vec{u}}=(\hat{x}_c(t),\hat{p}_c(t),\hat{x}_m(t),\hat{p}_m(t))^T$. The logarithmic negativity as a measure of the entanglement is given by [@v-w; @adesso] $$\begin{aligned} E_{\cal N}=\mbox{max}[0, -\ln 2\eta^{-}],\end{aligned}$$ where $$\begin{aligned} \eta^{-}=\frac{1}{\sqrt{2}}\sqrt{\Sigma-\sqrt{\Sigma^2-4\,\mbox{det}\hat{V}}}\end{aligned}$$ and $$\begin{aligned} \Sigma=\mbox{det}\hat{A}+\mbox{det}\hat{B}-2\mbox{det}\hat{C}.\end{aligned}$$ $U_E(t,0)$ in (\[exp\]) does not contribute to the correlation matrix elements.
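A direct implementation of this measure (a minimal sketch with our own function name, using the vacuum-variance-$1/2$ convention implied by the definition of $\hat V$ and the standard $4\,\mathrm{det}\hat V$ under the inner square root):

```python
import math
import numpy as np

def log_negativity(V):
    """E_N = max[0, -ln(2*eta_minus)] for a two-mode Gaussian state with
    4x4 correlation matrix V = [[A, C], [C^T, B]] (vacuum variance 1/2)."""
    A, B, C = V[:2, :2], V[2:, 2:], V[:2, 2:]
    Sigma = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    eta_minus = math.sqrt((Sigma - math.sqrt(Sigma**2 - 4 * np.linalg.det(V))) / 2)
    return max(0.0, -math.log(2 * eta_minus))

# Two-mode squeezed vacuum with squeezing r: known result E_N = 2r
r = 0.6
c, s = math.cosh(2 * r) / 2, math.sinh(2 * r) / 2
V = np.block([[c * np.eye(2), s * np.diag([1, -1])],
              [s * np.diag([1, -1]), c * np.eye(2)]])
assert abs(log_negativity(V) - 2 * r) < 1e-8

# Uncorrelated vacuum: no entanglement
assert log_negativity(np.eye(4) / 2) == 0.0
```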
Given the quadratic Hamiltonian $H_{OM}$ in (6) of the main text, the operation $U_{OM}$ transforms the vector $(\hat{x}_c(t),\hat{p}_c(t),\hat{x}_m(t),\hat{p}_m(t))^T$ in terms of the following linear differential equation: $$\begin{aligned} \frac{d}{d\tau}\left( \begin{array} [c]{c}\hat{x}_c\\ \hat{p}_c\\ \hat{x}_m\\ \hat{p}_m \end{array} \right)&=&\left( \begin{array} [c]{cccc} 0 & 0 & l_3(t,\tau) & l_4 (t,\tau) \\ 0 & 0 & l_1(t,\tau) & l_2(t,\tau) \\ -l_2 (t,\tau) & l_4(t,\tau) & 0 & 0\\ l_1(t,\tau) &-l_3(t,\tau) & 0 & 0 \end{array} \right)\left( \begin{array} [c]{c}\hat{x}_c \\ \hat{p}_c\\ \hat{x}_m \\ \hat{p}_m \end{array} \right)+\left( \begin{array} [c]{c}\hat{f}_1 \\ \hat{f}_2\\ \hat{f}_3 \\ \hat{f}_4 \end{array} \right)\nonumber\\ &\equiv & \frac{d}{d\tau}\hat{\vec{v}}=\hat{M}(t,\tau)\hat{\vec{v}}+\hat{\vec{f}}(t,\tau), \label{VCM}\end{aligned}$$ where $$\begin{aligned} l_1(t,\tau)&=&g e^{-\kappa (t-\tau)/2-\gamma_m(t-\tau)/2} \big(D(\tau)+D^\ast(\tau)\big)\cos(\omega_m\tau),\nonumber\\ l_2(t,\tau)&=&g e^{-\kappa (t-\tau)/2-\gamma_m(t-\tau)/2} \big(D(\tau)+D^\ast(\tau)\big)\sin(\omega_m\tau),\nonumber\\ l_3(t,\tau)&=&ig e^{-\kappa (t-\tau)/2-\gamma_m(t-\tau)/2}\big(D(\tau)-D^\ast(\tau)\big) \cos(\omega_m\tau),\nonumber\\ l_4(t,\tau)&=&ig e^{-\kappa (t-\tau)/2-\gamma_m(t-\tau)/2} \big(D(\tau)-D^\ast(\tau)\big) \sin(\omega_m\tau),\end{aligned}$$ and $$\begin{aligned} \hat{f}_1(t,\tau)&=&\frac{i}{\sqrt{2}}g~ e^{-\kappa (t-\tau)/2}\big(D(\tau)-D^\ast(\tau)\big)\big\{\cos(\omega_m\tau)\big(\hat{n}_m(t,\tau)+\hat{n}^\dagger_m(t,\tau)\big)-\sin(\omega_m\tau)\big(i\hat{n}_m(t,\tau)-i\hat{n}^\dagger_m(t,\tau)\big)\big\},\nonumber\\ \hat{f}_2(t,\tau)&=& \frac{1}{\sqrt{2}}ge^{-\kappa (t-\tau)/2}\big(D(\tau)+D^\ast(\tau)\big)\big\{\cos(\omega_m\tau)\big(\hat{n}_m(t,\tau)+\hat{n}^\dagger_m(t,\tau)\big)-\sin(\omega_m\tau)\big(i\hat{n}_m(t,\tau)-i\hat{n}^\dagger_m(t,\tau)\big)\big\},\nonumber\\ 
\hat{f}_3(t,\tau)&=&-g\big(\hat{n}_c(t,\tau)D^{\ast}(\tau)+\hat{n}^\dagger_c(t,\tau)D(\tau)+|D(\tau)|^2\big) e^{-\gamma_m(t-\tau)/2}\sin(\omega_m\tau),\nonumber\\ \hat{f}_4(t,\tau)&=&~g\big(\hat{n}_c(t,\tau)D^{\ast}(\tau)+\hat{n}^\dagger_c(t,\tau)D(\tau)+|D(\tau)|^2\big)e^{-\gamma_m(t-\tau)/2}\cos(\omega_m\tau). \label{noise}\end{aligned}$$ In the above, the terms containing $\hat{n}_c$, $\hat{n}_m$ and their conjugates contribute to the correlation matrix (\[corr-matrix\]), while the pure drive terms proportional to $|D(\tau)|^2$ do not contribute to $\hat{V}$; they do, however, affect the system mean motion $\langle \hat{v}_i(t)\rangle$. The solution to (\[VCM\]) is $$\begin{aligned} \hat{\vec{v}}(t)=\mathcal{T}e^{\int_0^t d\tau \hat{M}(t,\tau)}\hat{\vec{v}}(0)+\mathcal{T}e^{\int_0^t d\tau \hat{M}(t,\tau)}\int_0^t d\tau (\mathcal{T}e^{\int_0^\tau d\tau' \hat{M}(t,\tau')})^{-1}\hat{\vec{f}}(t,\tau). \label{sol}\end{aligned}$$ In the general situation the time-ordered exponentials in the solution (\[sol\]) must be expanded in an infinite series (Magnus expansion [@expansion]) for numerical calculations. Given a cavity drive with profile $|E(t)|\leq C$ ($C$ a constant) such that the function $D(t)$ defined in (\[displace\]) is bounded, the decay factor $e^{-(\kappa+\gamma_m)(t-\tau)/2}$ dominates the behavior of the matrix $\hat{M}(t,\tau)$, so one has the approximate commutator $[\hat{M}(t,\tau_1),\hat{M}(t,\tau_2)]\approx 0$ in the regimes of concern, where $gE(t)$ is not too large.
Then the time-ordered exponentials in the above solution can be replaced by ordinary exponentials to obtain a closed form of the solution to the differential equation (\[VCM\]): $$\begin{aligned} \hat{\vec{v}}(t)&\approx &e^{\int_0^t d\tau \hat{M}(t,\tau)}\hat{\vec{v}}(0)+ \int_0^t e^{\int_\tau^t d\tau' \hat{M}(t,\tau')}\hat{\vec{f}}(t,\tau)d\tau\nonumber\\ &=& \Big(\cosh\big(\sqrt{m(t,0)}\big)\hat{I}+\frac{\sinh\big(\sqrt{m(t,0)}\big)}{\sqrt{m(t,0)}}\hat{K}(t,0)\Big)\hat{\vec{v}}(0)\nonumber\\ &+& \int_0^t d\tau\Big(\cosh\big(\sqrt{m(t,\tau)}\big)\hat{I}+\frac{\sinh\big(\sqrt{m(t,\tau)}\big)}{\sqrt{m(t,\tau)}}\hat{K}(t,\tau)\Big)\hat{\vec{f}}(t,\tau). \label{result}\end{aligned}$$ Here we have defined $\hat{K}(t,\tau)=\int_\tau^t d\tau' \hat{M}(t,\tau')$, and the function $m(t,\tau)$ from the relation $\hat{K}^2(t,\tau)=m(t,\tau)\hat{I}$ is $$\begin{aligned} m(t,\tau)=\frac{1}{4}|\int_\tau^t d\tau'\big(l_1(t,\tau')+il_2(t,\tau')-il_3(t,\tau')+l_4(t,\tau')\big)|^2-\frac{1}{4}|\int_\tau^t d\tau'\big(l_1(t,\tau')-il_2(t,\tau')-il_3(t,\tau')-l_4(t,\tau')\big)|^2.\nonumber\\\end{aligned}$$ With arbitrary system parameters, the first term in (\[sol\]), from the initial value $\hat{\vec{v}}(0)$ of the system operators, contributes one part of the correlation matrix, $\hat{V}_1(t)$, where the average in the calculation of the matrix elements is taken with respect to the initial system state $\rho(0)$. This reflects the dependence of the system quantum state at time $t$ on the initial state. Meanwhile, the second term, due to the noise drives, leads to another part of the correlation matrix, $\hat{V}_2(t)$, where the average is over the reservoir state $R(0)$. Summing the two matrices gives the total correlation matrix $\hat{V}(t)=\hat{V}_1(t)+\hat{V}_2(t)$. For comparison with the entanglement evolution found in the main text, we give an example of entanglement evolution solely determined by the matrix $\hat{V}_1(t)$ in Fig. C-1.
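The closed form in the second equality relies only on $\hat K^2(t,\tau)=m\,\hat I$ and can be verified against a generic matrix exponential; the block structure below mimics $\hat M(t,\tau)$, but the entries are arbitrary (a numerical sketch, not the paper's code):

```python
import numpy as np
from scipy.linalg import expm

# Any K with K^2 = m*I admits the closed form of Eq. (result):
# exp(K) = cosh(sqrt(m)) I + (sinh(sqrt(m))/sqrt(m)) K
def closed_form_exp(K):
    m = (K @ K)[0, 0]                     # K^2 = m * I
    assert np.allclose(K @ K, m * np.eye(K.shape[0]))
    sm = np.sqrt(complex(m))              # complex sqrt handles m < 0 (-> cos, sin)
    return np.cosh(sm) * np.eye(K.shape[0]) + (np.sinh(sm) / sm) * K

# Example with the off-diagonal block structure of M(t, tau):
a, b = 0.3, -0.7
K = np.block([[np.zeros((2, 2)), a * np.eye(2)],
              [b * np.eye(2), np.zeros((2, 2))]])   # K^2 = a*b * I
assert np.allclose(closed_form_exp(K), expm(K))
```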
In the absence of the quantum noise effect, the entanglement measured by the logarithmic negativity tends to a stable value with time, and the phenomenon of entanglement sudden death seen in Figs. 1 and 2 of the main text does not occur. [99]{} G. J. Milburn and M. J. Woolley, Acta Physica Slovaca 61, 483 (2011). Y. Chen, J. Phys. B: At. Mol. Opt. Phys. 46, 104001 (2013). M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, arXiv:1303.0733. A. H. Safavi-Naeini, *et al.*, Phys. Rev. Lett. 108, 033602 (2012). N. Brahms, *et al.*, Phys. Rev. Lett. 108, 133601 (2012). E. Verhagen, S. Delglise, S. Weis, A. Schliesser, and T. J. Kippenberg, Nature 482, 63 (2012). T. P. Purdy, R. W. Peterson, and C. A. Regal, Science 339, 801 (2013). T. A. Palomaki, J. W. Harlow, J. D. Teufel, R. W. Simmonds, and K. W. Lehnert, Nature 495, 210 (2013). T. P. Purdy, P.-L. Yu, R. W. Peterson, N. S. Kampel, and C. A. Regal, Phys. Rev. X 3, 031012 (2013). M. Hillery, Acta Physica Slovaca 59, 1 (2009). D. Vitali, *et al.*, 98, 030405 (2007). M. Paternostro, *et al.*, Phys. Rev. Lett. 99, 250401 (2007). D. Vitali, P. Tombesi, M. J. Woolley, A. C. Doherty, and G. J. Milburn, Phys. Rev. A 76, 042336 (2007). C. Genes, A. Mari, P. Tombesi, and D. Vitali, 78, 032316 (2008). M. J. Hartmann and M. B. Plenio, Phys. Rev. Lett. 101, 200503 (2008). F. Galve, L. A. Pachon, and D. Zueco, Phys. Rev. Lett. 105, 180501 (2010). C.-L. Zou, X.-B. Zou, F.-W. Sun, Z.-F. Han, and G.-C. Guo, Phys. Rev. A 84, 032317 (2011). M. Abdi, S. Barzanjeh, P. Tombesi, and D. Vitali, Phys. Rev. A 84, 032325 (2011). R. Ghobadi, A. R. Bahrampour, and C. Simon, 84, 033846 (2011). A. Mari and J. Eisert, 103, 213603 (2009). A. Mari and J. Eisert, New J. Phys. 14, 075014 (2012). A. Farace and V. Giovannetti, 86, 013820 (2012). S. G. Hofer, W. Wieczorek, M. Aspelmeyer, and K. Hammerer, Phys. Rev. A 84, 052327 (2011). G. Wang, L. Huang, Y.-C. Lai, and C. Grebogi, 112, 110406 (2014). F. Marquardt, J. G. E. Harris, and S. M. Girvin, 96, 103901 (2006). P.
Rabl, Phys. Rev. Lett. 107, 063601 (2011). A. Nunnenkamp, K. B[ø]{}rkje, and S. M. Girvin, Phys. Rev. Lett. 107, 063602 (2011). J.-Q. Liao, H. K. Cheung, and C. K. Law, Phys. Rev. A 85, 025803 (2012). B. He, Phys. Rev. A 85, 063820 (2012). J.-Q. Liao and F. Nori, arXiv:1304.6612. B. Pepper, R. Ghobadi, E. Jeffrey, C. Simon, and D. Bouwmeester, 109, 023601 (2012). S. Basiri-Esfahani, U. Akram, and G. J. Milburn, New J. Phys. 14, 085017 (2012). X.-W. Xu, Y.-J. Li, and Y.-x. Liu, Phys. Rev. A 87, 025803 (2013). X.-W. Xu, H. Wang, J. Zhang, and Y.-x. Liu, Phys. Rev. A 88, 063819 (2013). X.-X. Ren, H.-K. Li, M.-Y. Yan, Y.-C. Liu, Y.-F. Xiao, and Q. Gong, Phys. Rev. A 87, 033807 (2013). U. Akram, W. P. Bowen, and G. J. Milburn, New J. Phys. 15, 093007 (2013). K. Stannigel, *et al*, Phys. Rev. Lett. 109, 013603 (2012). M. Ludwig, A. H. Safavi-Naeini, O. Painter, and F. Marquardt, Phys. Rev. Lett. 109, 063601 (2012). X.-Y. Lü, W.-M. Zhang, S. Ashhab, Y. Wu, and F. Nori, Scientific Reports 3, 2943 (2013). J.-Q. Liao, K. Jacobs, F. Nori, and R. W. Simmonds, New. J. Phys. 16, 072001 (2014). M. Ludwig, B. Kubala, and F. Marquardt, New J. Phys. 10, 095013 (2008). J. Qian, A. A. Clerk, K. Hammerer, and F. Marquardt, Phys. Rev. Lett. 109, 253601 (2012). S. Kiesewetter, Q. Y. He, P. D. Drummond, and M. D. Reid, arXiv: 1312.6474. C. W. Gardiner and P. Zoller, *Quantum Noise* (Springer-Verlag, Berlin, 2000). G. Adesso, A. Serafini, and F. Illuminati, Phys. Rev. A 70, 022318 (2004). T. Yu and J. H. Eberly, Phys. Rev. Lett. 93, 140404 (2004). T. Yu and J. H. Eberly, Science 323, 598 (2009). M. R. Vanner, et al, Proc. Natl. Acad. Sci. USA 108, 16182 (2011). M. R. Vanner, J. Hofer, G. D. Cole, and M. Aspelmeyer, Nat. Comm. 4, 2295 (2013). I. S. Gradshteyn and I. M. Ryzhik, *Table of Integrals, Series and Products* (Academic Press, Orlando, 2000). L. M. Arèvalo-Aguilar and H. Moya-Cessa, J. Opt. B: Quantum Semiclass. Opt. 10, 671 (1998). G. Vidal and R. F. Werner, Phys. Rev. 
A 65, 032314 (2002). W. Magnus, Commun. Pure Appl. Math. 7, 649 (1954).
--- bibliography: - 'main.bib' title: Semantic Photo Manipulation with a Generative Image Prior ---
--- abstract: 'We investigate a model for colloidal network formation using Brownian Dynamics computer simulations. Hysteretic springs establish transient bonds between particles with a repulsive core. If a bonded pair is separated beyond a cutoff distance, the spring vanishes and reappears only if the two particles contact each other. We present results for the bond lifetime distribution and investigate the properties of the van Hove dynamical two-body correlation function. The model displays a crossover from fluid-like dynamics, via transient network formation, to arrested quasi-static network behavior.' author: - Philip Krinninger - Andrea Fortini - Matthias Schmidt date: 'November 27, 2015, revised: March 10, 2016' title: Minimal Model for Dynamic Bonding in Colloidal Transient Networks ---

Introduction
============

Network structures are ubiquitous in nature. They influence the properties of many soft matter systems, such as gels [@REF0], suspensions [@Puertes; @Nature] or entangled polymers [@Green]. At larger length scales, spatial [@Blair] and force networks [@Snoeijer; @Utter] occur in granular matter. In living systems, neuronal circuits can be regarded as networks; the neurones can be identified as nodes, and the synapses serve as links [@Zucker]. Many of these examples constitute networks with a static structure, i.e., the position of the nodes, which form the backbone of the network, is fixed in space. Only a few of the links between nodes break or form over time. However, there are also [*transient*]{} networks, where the position of the nodes changes in time. Hence, the general shape of the network changes. In polymer science the concept of transient networks is well-known [@REF1; @REF2; @REF3; @REF4; @REF5] and is used to explain, e.g., the presence of the rubber plateau in rheological experiments [@deGennes]. Theoretical approaches for transient networks have been developed by [*Tanaka et al*]{} [@REF1].
In their work, the sticky end-groups of monodisperse polymers form the links. Transient networks in colloidal systems [@REF6; @REF7; @REF8] have been studied in experiments and by numerical simulation. For example, colloidal membranes in a magnetic field show effects such as the growth of short chains, cross linking and network formation, induced by many-body polarization interactions between the particles [@Osterman]. A very recent study was aimed at the dynamics of the transient colloidal network itself [@Maier]. In this work, the authors show the influence of the mesh size of the network in the initial state on the mesh dynamics, and give an explanation of the shrinking and growing process of the meshes based on the competition of first-order long-range collective dipolar interactions and short-range second-order dipolar pair correlations. Dipolar colloidal systems are one of the primary realizations of transient networks. In recent years, progress in the theoretical description of dipolar colloidal gels has been made, supported by extensive molecular dynamics computer simulations [@Referee1; @Referee2; @Referee3]. These simulation studies on colloidal dumbbells show the crossover from a transient percolated network to a dynamically arrested state as a result of cooling, caused by the rapid increase of the lifetime of the bonds between different dumbbells at low temperature. Simulation studies of the influence of solid content on the structure of forming networks of colloidal particles, e.g. the fractal dimension and the bond angle distribution, have been performed [@Hutter]. Patchy colloids [@REF9; @REF10; @REF11] possess bonding sites on their surface that develop strong short-ranged attractive interactions [@dani_patchy]. The dependence of the network growth on the opening angle of the patches of three-patched colloids has been investigated by [*Dias et al*]{} very recently [@Dias].
They found different regimes of network formation, leading to networks with different structures and sizes. A systematic study of the transition from a fluid to a network in binary mixtures of patchy colloids with varying functionality [@dani_network] has shown the importance of network formation processes for the understanding of transient networks. Transient networks are an intermediate state between a fluid suspension and a fully developed, static, percolated network. In this article, we present a minimal model for transient network formation in colloidal systems. The model is based on a hysteretic process that describes the formation and annihilation of bonds between colloidal particles with repulsive cores. The bonds form the links of the network, while the particles represent the nodes. The bonds are treated as (linear) springs, inspired by the well-established bead-and-spring model of polymer physics [@Strobl]. Additionally, the bonding of a pair of particles is based on a hysteretic mechanism: the spring is formed when the surfaces of the two particles touch, and vanishes when the two particles separate beyond a critical distance, $r_c$. A similar model, the minimal capillary model, was proposed for wet granular particles [@Krinninger; @Herminghaus]. There, collisions are treated with dissipative dynamics in molecular dynamics simulations, and the interaction between the grains due to capillary bridges is modeled by a constant force. We perform Brownian Dynamics (BD) computer simulations of the minimal model in order to study the deviation of static and dynamic properties from those of a simple suspension of repulsive particles. Moreover, we investigate the network formation properties of the model, from a fluid to a transient network and from a transient to a static network. The paper is organized as follows. In Sec.
\[chap:model-method\] we introduce the model and the simulation technique, as well as the van Hove dynamic correlation function, which we use as a means of characterizing the system. In Sec. \[chap:results\] we present our results. First, we study statistical properties of the bonding in Sec. \[sec:bonds\]. In particular, we are interested in the lifetime of bonds from formation to annihilation and the corresponding probability distribution. We then focus on static properties, namely the percolation transition and the fractal dimension of percolating clusters of colloids, in Sec. \[sec:perc\]. In Sec. \[sec:hove\], we give an overview of the detailed studies of the van Hove function as a function of the density $\rho$, the bond strength $k$, and the correlation time. We investigate the change of correlation as the bond strength is increased, up to the point where the system is no longer fluid. This crossover manifests itself in a non-Gaussian shape of the self part of the van Hove function and is discussed in detail in Sec. \[sec:ngauss\]. In Sec. \[sec:conclusion\] we conclude and give an outlook to possible future work within the framework of the proposed model. Model and Method {#chap:model-method} ================ We consider a three-dimensional system of $N$ interacting, spherical Brownian particles with spatial coordinates ${\mathbf{r} }_i$, $i= 1 \dotsc N$. We neglect hydrodynamic interactions and describe the dynamics with the overdamped Langevin equation $$\begin{aligned} \label{full_langevin} \dot{{\mathbf{r} }}_i = \gamma^{-1}{\mathbf{F} }_i + \boldsymbol \xi_i(t),\end{aligned}$$ where $\gamma$ is the friction coefficient. The deterministic force on particle $i$ is generated from the total potential energy $U_N$ according to ${\mathbf{F} }_i\!=\!-\nabla_i U_N$, where $\nabla_i$ denotes the derivative with respect to ${\mathbf{r} }_i$.
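The overdamped Langevin equation above is typically integrated with a simple Euler-Maruyama step. The following sketch is our own illustration, not the authors' code; it assumes Gaussian noise with $\langle \boldsymbol\xi_i \boldsymbol\xi_j \rangle = 2D_0 \mathds{1}\delta_{ij}\delta(t-t')$ and $D_0 = k_BT/\gamma$, consistent with the definitions given next:

```python
import numpy as np

def bd_step(r, forces, dt, gamma=1.0, kBT=2.0, rng=None):
    """One Euler-Maruyama step of the overdamped Langevin equation.

    r      : (N, 3) array of particle positions
    forces : (N, 3) array of deterministic forces F_i = -grad_i U_N
    dt     : time step
    The random displacement has zero mean and variance 2 D_0 dt per
    Cartesian component, with D_0 = kBT / gamma.
    """
    if rng is None:
        rng = np.random.default_rng()
    D0 = kBT / gamma
    noise = rng.normal(0.0, np.sqrt(2.0 * D0 * dt), size=r.shape)
    return r + forces * dt / gamma + noise
```

For force-free particles this scheme reproduces free diffusion with mean-squared displacement $6 D_0 t$, which provides a quick consistency check of the noise amplitude.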
The stochastic random force $\gamma \boldsymbol \xi_i(t)$ is Gaussian distributed with zero mean and autocorrelation $\langle \boldsymbol \xi_i(t) \boldsymbol \xi_j(t')\rangle=2D_0\mathds{1}\delta_{ij}\delta(t-t')$, where $D_0$ is the Stokes-Einstein diffusion coefficient, $\mathds{1}$ denotes the $3\times 3$ unit matrix, $\delta_{ij}$ is the Kronecker delta and $\delta(\cdot)$ indicates the Dirac distribution. The interaction potential $U_N$ is a sum of pairwise particle-particle interactions tailor-made for network formation. It combines a repulsive interaction $U_{\rm REP}$ with a harmonic potential $U_{\rm S}$ for the links between the particles: $$\begin{aligned} U_N= \frac{1}{2}\sum_{i=1}^N \sum_{j=1 \atop j\neq i}^N (U_{\rm REP}(r_{ij})+\nu_{ij} U_{\rm S}(r_{ij})),\end{aligned}$$ where $r_{ij}=|{\mathbf{r} }_i - {\mathbf{r} }_j|$ and $\nu_{ij}=0,1$ is a bonding degree of freedom that determines whether particles $i$ and $j$ interact at time $t$ via a spring ($\nu_{ij}=1$) or not ($\nu_{ij}=0$). The linking, and hence the value of $\nu_{ij}$, is history dependent, as illustrated in Fig. \[fig:model\]: When the surfaces of two particles $i$ and $j$ touch, they become bonded by a spring ($\nu_{ij}=1$). When the particles separate beyond a critical distance $r_c$, the bond vanishes ($\nu_{ij}=0$). For the repulsive core we use $U_{\rm REP}=\epsilon( \sigma / r_{ij})^{12}$, where $\epsilon$ is the unit of energy and $\sigma$ is the particle diameter. $U_{\rm REP}$ is cut off and shifted at $r_{\rm cut} /\sigma =1.01$ to avoid discontinuities in the interaction potential. The harmonic potential is $U_{\rm S}(r_{ij})=\frac{k}{2}(r_{ij}-\sigma)^2$, where $k$ is the stiffness of the spring, determining the bond strength. Here the equilibrium distance of the spring is chosen to be the core size of the repulsive interaction, $\sigma$. ![Sketch of the forming and vanishing of a bond between two particles.
The equilibrium distance of the spring, $r_0$, is the contact distance of the particles. The arrows indicate the direction of the motion of the particle. (a) No interaction because the particles are too far apart from each other. (b) The distance of the particles is smaller than $r_c$, but there is still no interaction because no previous contact between the particles has occurred. (c) After the contact, the bond is formed and remains as long as the distance between the particles is smaller than $r_c$. (d) The spring vanishes because the particles are too far apart from each other.\[fig:model\]](Fig1.pdf){width="8cm"} We carry out Brownian dynamics (BD) simulations with a fixed time step of $\delta t/\tau_B=8\times 10^{-5}$, with the Brownian time $\tau_B=\sigma^2/D_0$. The fundamental units of the system are $\sigma$, $\gamma$ and $\epsilon$. All simulations are performed at a reduced temperature of $k_B T/\epsilon=2$, where $k_B$ is the Boltzmann constant, and at a fixed critical distance of the hysteretic spring of $r_c/\sigma=1.5$. The particles are placed in a cubic, periodic box with side length $L=(N/\rho)^{1/3}$, where $\rho=N/V$, with $V$ being the volume of the simulation cube. We investigate the properties of the system as a function of the density $\rho$ and the strength of the hysteretic links, $k$. We carried out simulations with densities $\rho \sigma^3 = 0.1$ to 0.5 in steps of 0.05, as well as $\rho \sigma^3=0.6$, and bond strengths of $k\sigma^2/\epsilon=0$, 10, 20, 40, 70. Furthermore, for $k\sigma^2/\epsilon=40$ and 70 the densities $\rho \sigma^3=0.01$ and 0.05 were considered. van Hove Correlation function ----------------------------- We characterize the dynamical correlations using the van Hove function $G({\mathbf{r} }, t)$ [@vhove:vhove; @hansen:mcdonald]. It characterizes the spatial and temporal distribution of pairs of particles, as is relevant for fluid states.
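The history-dependent linking rule of the model can be summarized in a few lines of code. The following sketch is our own illustration (periodic boundary conditions and neighbor-list efficiency are omitted); it updates the bond variables $\nu_{ij}$ according to the two rules above:

```python
import numpy as np

def update_bonds(r, bonded, sigma=1.0, r_c=1.5):
    """Update the hysteretic bond matrix nu_ij.

    A bond appears when two particle surfaces touch (r_ij <= sigma)
    and vanishes only once the pair separates beyond r_c; pairs at
    intermediate distances keep their previous bonding state.
    bonded is a symmetric (N, N) boolean array.
    """
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)      # no self-bonds
    bonded = bonded.copy()
    bonded[d <= sigma] = True        # contact: spring forms
    bonded[d > r_c] = False          # separation: spring vanishes
    return bonded
```

The hysteresis lies in the middle window $\sigma < r_{ij} \le r_c$: whether a pair there is bonded depends on whether it has touched since it was last separated beyond $r_c$.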
$G({\mathbf{r} }, t)\rm d {\mathbf{r} }$ can be interpreted as the number of particles $j$ in a volume element $\rm d {\mathbf{r} }$ at position ${\mathbf{r} }$ under the condition that there was a particle $i$ at the origin at time $t=0$. $G({\mathbf{r} }, t)$ is related to the intermediate scattering function $F(k,t)$, which is measurable in x-ray or neutron scattering experiments, via spatial Fourier transform, and to the dynamic structure factor $S(k,\omega)$ via spatial and temporal Fourier transform. Further motivation for considering $G({\mathbf{r} },t)$ stems from recent theoretical progress in formulating an exact generalization of the Ornstein-Zernike relation to nonequilibrium situations [@NOZ1; @NOZ2]. Here dynamical correlation functions are related to functional derivatives of a generating (free power dissipation) functional [@PFT]. An alternative theoretical description rests on the dynamical test particle limit [@thevanhove; @ajarcher], which was recently treated within the power functional approach [@TPL]. The van Hove function is defined as [@vhove:vhove; @hansen:mcdonald] $$G({\mathbf{r} },t) = \frac{1}{N} \left\langle \sum_{i=1}^N \sum_{j=1}^N \delta ({\mathbf{r} } + {\mathbf{r} }_j(0) - {\mathbf{r} }_i (t)) \right\rangle ~, \label{eq:vanhove}$$ where $\langle \cdot \rangle$ indicates the ensemble average, $\delta ( \cdot )$ is the (three-dimensional) Dirac delta function. It is possible to split $G({\mathbf{r} } ,t)$ into a self and a distinct part. In the first case the double sum is restricted to $i=j$ and $G_{\rm self}({\mathbf{r} }, t)$ describes the average motion of a particle that was at the origin at the initial time. The distinct part, $G_{\rm dist}({\mathbf{r} }, t)$, where $i \neq j$, represents the remaining $N-1$ particles, considering that any arbitrary particle $j$ was located at ${\mathbf{r} }_j=0$ at $t=0$. 
Therefore $$\begin{split} G ( {\mathbf{r} }, t) =& \frac{1}{N} \left\langle \sum_{i=1}^N \delta ( {\mathbf{r} } + {\mathbf{r} }_i(0) - {\mathbf{r} }_i(t) ) \right\rangle \\ &+ \frac{1}{N} \left\langle \sum_{i,j=1 \atop i\neq j}^N \delta ( {\mathbf{r} } + {\mathbf{r} }_j (0) - {\mathbf{r} }_i(t)) \right\rangle \\ \equiv& G_{\rm self}({\mathbf{r} }, t) + G_{\rm dist}({\mathbf{r} }, t)~. \end{split} \label{eq:vanhove_self_distinct}$$ Hence, the self part describes the dynamics of a single tagged particle, while $G_{\rm dist}$ represents the remaining $N-1$ particles. Therefore the normalization of the self and distinct parts is $$\begin{aligned} \label{eq:vanhove_norm1} \int \text d {\mathbf{r} }~ G_{\rm self}({\mathbf{r} },t)&=1~, \\ \int \text d {\mathbf{r} }~ G_{\rm dist}({\mathbf{r} }, t) &= N-1~. \label{eq:vanhove_norm2}\end{aligned}$$ The initial ($t=0$) behavior of $G({\mathbf{r} }, t)$ is given by $$\begin{split} G( {\mathbf{r} }, 0) =& \delta ({\mathbf{r} }) + \frac{1}{N} \left\langle \sum_{i,j=1 \atop i\neq j}^N \delta ({\mathbf{r} } + {\mathbf{r} }_j(0) - {\mathbf{r} }_i (0))\right\rangle \\ =& \delta ({\mathbf{r} }) + \rho g({\mathbf{r} }) ~, \end{split} \label{eq:vanhove_t-0}$$ where $g({\mathbf{r} })$ is the pair correlation function. Hence $G_{\rm self}({\mathbf{r} },0) = \delta ({\mathbf{r} })$ and $G_{\rm dist}({\mathbf{r} },0) = \rho g({\mathbf{r} })$. As time passes, the $\delta$-function broadens into a bell-shaped curve, and the peaks of $G_{\rm dist}$ decrease and disappear. For $t\to \infty$ the correlation vanishes and $G({\mathbf{r} }, t)$ becomes a constant, where $G_{\rm self}({\mathbf{r} }, t \to \infty) = 0$, and $G_{\rm dist}({\mathbf{r} }, t\to \infty) = \rho$. One important property for a homogeneous bulk fluid is that the van Hove function only depends on the distance $r=|{\mathbf{r} }|$, because of the isotropy: $G(r,t) = G_{\rm self}(r,t) + G_{\rm dist}(r, t)$.
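In a simulation, $G_{\rm self}$ is estimated by histogramming the particle displacements over spherical shells, respecting the normalization of Eq. (\[eq:vanhove\_norm1\]). A minimal sketch (function and variable names are our own):

```python
import numpy as np

def g_self(r0, rt, bins):
    """Shell-histogram estimate of G_self(r, t) from configurations
    r0 (at time 0) and rt (at time t), each of shape (N, 3).
    Normalized so that its integral over all space equals 1."""
    disp = np.linalg.norm(rt - r0, axis=1)        # displacement magnitudes
    counts, edges = np.histogram(disp, bins=bins)
    shells = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, counts / (len(r0) * shells)     # counts per shell volume
```

Dividing by the shell volumes converts the radial count histogram into a density in three-dimensional space, so that $\int \mathrm{d}{\mathbf r}\, G_{\rm self} = 1$ holds by construction.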
A single free particle in Brownian dynamics performs a random walk, i.e. free diffusion with the diffusion coefficient $D_0$. In this situation the self part of the van Hove function is given by the solution of the diffusion equation [@hansen:mcdonald; @thevanhove]: $$\begin{aligned} \frac{\partial}{\partial t}G_{\rm self}(r,t) = D_0 \nabla ^2 G_{\rm self}(r,t)~, \label{eq:diff}\end{aligned}$$ whose solution is $$\begin{aligned} G_{\rm self}(r,t) = (4\pi D_0 t)^{-3/2} \exp\left( -\frac{r^2}{4 D_0 t} \right)~. \label{eq:gaussian}\end{aligned}$$ For the many-body system this expression is exact for $\rho \to 0$, as the interactions between the particles can be neglected in this limit. In systems with finite density, Eq. (\[eq:gaussian\]) is an approximation in which $D_0$ becomes an effective diffusion coefficient, which is a function of density, $D(\rho)$. Increasing the interaction between the particles further, i.e. by strong bonding in the current work, can lead to the shape of $G_{\rm self}(r,t)$ deviating from a Gaussian. The deviation can be quantified (in three dimensions) by the non-Gaussian parameter $$\begin{aligned} \alpha_2 (t)=\frac{3 \langle r^4(t) \rangle}{5 \langle r^2(t) \rangle ^2}-1~, \label{eq:ngauss}\end{aligned}$$ where $\langle r ^{\mu}(t) \rangle = \int \text d {\mathbf{r} } r^{\mu} G_{\rm self}(r, t)$ is the $\mu$-th spatial moment of $G_{\rm self}(r,t)$ [@ngauss:kob; @ngauss:rahman]. For a strictly Gaussian $G_{\rm self}$, $\alpha_2=0$. Mean First Passage Time ----------------------- A simple theoretical description of the bond lifetime is given by the mean first passage time $\tau$ for a particle in an external potential. In the framework of the Kramers problem in one dimension it is possible to calculate $\tau$ from the adjoint Smoluchowski equation [@Zwanzig]. In this approach the motion of a single Brownian particle in an external potential is considered. The purpose is to calculate the mean time it takes the particle to escape the potential, i.e.
when it reaches a certain end point. The starting position of the particle, $x$, lies between a reflective barrier, located at the point $a$, and the end point $b$, with $a<x<b$. With these assumptions one can calculate the mean first passage time in one dimension as a function of the starting position $x$ [@Zwanzig]. In order to adapt this theory to our model, we consider a pair of bonded particles in three dimensions. One particle serves as the origin of the coordinate system and the other particle escapes the harmonic potential generated by the bond between the colloids. Therefore we choose for the external potential $U=U_S$. Furthermore, we generalize the calculation of $\tau$ to three dimensions, starting with the three-dimensional adjoint Smoluchowski equation $$\begin{aligned} D \exp\left( \frac{U({\mathbf{r} })}{k_BT} \right) \nabla \cdot \left[ \exp\left( -\frac{U({\mathbf{r} })}{k_BT} \right) \nabla \tau({\mathbf{r} }_0) \right] = -1 ~,\end{aligned}$$ where ${\mathbf{r} }_0$ is the starting point of the particle in the harmonic potential. Because the total interaction potential only depends on the distance between the particles, it can be written as $$\begin{aligned} D \exp\left( \frac{U(r)}{k_BT} \right) \frac{1}{r^2} \frac{\partial}{\partial r} \left[ r^2 \exp\left( -\frac{U(r)}{k_BT} \right) \frac{\partial \tau(r_0)}{\partial r} \right] = -1 ~.\end{aligned}$$ Integrating twice leads to the mean first passage time $\tau$: $$\begin{aligned} \tau(r_0) = \frac{1}{D} \int_{r_0}^{r_b} \text d y \frac{1}{y^2} \exp\left( \frac{U(y)}{k_BT} \right) \int_{r_a}^y \text d z z^2 \exp\left( -\frac{U(z)}{k_BT} \right) ~ , \label{eq:meanfirstpassagetime_3d}\end{aligned}$$ where $r_0$ is the starting position, $r_a$ the position of the reflecting barrier, and $r_b$ the end position.
In the current work the values for $r_0$, $r_a$ and $r_b$ are $r_0/\sigma=1$, $r_a/\sigma =1$ and $r_b/\sigma=1.5$, and for $D$ we choose $D=2D_0$, as the origin is given by a diffusively moving particle, see e.g. Ref. [@0295-5075-102-2-28011]. Hence, Eq. (\[eq:meanfirstpassagetime\_3d\]) is only exact if there is a single pair of bonded particles, i.e. in the limit $\rho \to 0$. At finite densities, collisions with surrounding particles modify the bond lifetime. Results {#chap:results} ======= Bond statistics {#sec:bonds} --------------- ![Bond lifetime statistics: (a) Histogram of the number of bonds that break over time, $N_{BB}$, for parameters $\rho \sigma^3 = 0.4$ and $k \sigma^2 /\epsilon =10$. The black line is a fit to an exponentially decaying function. (b) Fit parameter $\tau_{\rm life}$ as a function of density, for different bond strengths: $k\sigma^2/\epsilon=0$ (black solid line), 10 (red dashed line), 20 (green dashed-dotted line), 40 (blue dashed-dashed-dotted line) and 70 (purple dotted line).\[fig:bonds\]](Fig2.pdf){width="8cm"} We start by investigating the properties of the dynamic bond formation process. We consider the time scale on which a spring is active, i.e. how much time passes between the formation and the annihilation of a given bond. We study this process by systematically varying the mean density and the bond strength. In Fig. \[fig:bonds\](a), we present a histogram of the bond lifetime for the parameters $\rho \sigma^3 = 0.4$ and $k \sigma^2 /\epsilon=10$, where $N_{BB}(t)$ denotes the number of bonds that break after having existed for a time $t$. The black curve is a fit to the function $N_{BB}(t)=N_0 \exp(-t/\tau_{\rm life})$, where $\tau_{\rm life}$ is the average lifetime of the bond. Results for $\tau_{\rm life}$ for further parameters are shown in Fig. \[fig:bonds\](b). We observe that either increasing $\rho$ or $k$ leads to an increase of the lifetime.
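The theoretical reference values follow from Eq. (\[eq:meanfirstpassagetime\_3d\]), which can be evaluated by nested trapezoidal quadrature. The sketch below is our own illustration; the absolute numbers depend on the unit conventions adopted for $D$ and $k_BT$, so it reproduces the qualitative trend of Table \[tab:passagetimes\] (longer escape times for stiffer springs) rather than necessarily its exact entries:

```python
import numpy as np

def mfpt(k, sigma=1.0, kBT=2.0, D=2.0, r_a=1.0, r_b=1.5, n=4000):
    """Nested trapezoidal evaluation of the mean first passage time
    for the harmonic bond U(r) = (k/2)(r - sigma)^2, starting from
    contact, r0 = r_a."""
    y = np.linspace(r_a, r_b, n)
    dy = np.diff(y)
    boltz = np.exp(-0.5 * k * (y - sigma) ** 2 / kBT)   # e^{-U/kBT}
    f = y**2 * boltz                                    # inner integrand
    inner = np.concatenate(
        ([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dy)))
    outer = inner / (y**2 * boltz)                      # e^{+U/kBT}/y^2 * inner
    return np.sum(0.5 * (outer[1:] + outer[:-1]) * dy) / D
```

For $k=0$ the double integral can be done analytically, $\tau = \frac{1}{3D}\left[\frac{y^2}{2} + \frac{1}{y}\right]_{r_a}^{r_b}$, which provides a convenient check of the quadrature.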
A harder spring (increasing $k$) leads to a stronger attraction between the bonded pairs, which makes it harder for the particles to separate beyond the critical distance $r_c$, resulting in an increased bond lifetime. The increase in density causes an increase in the number of collisions, and therefore makes it less likely for a particle to separate from its bonded partner, increasing the lifetime. The results for the mean first passage time, $\tau$, and for $\tau_{\rm life}$ are summarized in Table \[tab:passagetimes\]. Comparing these values with the simulation results, we find some discrepancies, which are entirely expected. First, the calculation Eq. (\[eq:meanfirstpassagetime\_3d\]) neglects the repulsive core interaction; this is a small error, as the cut-off length is chosen rather short compared to the maximal possible spring length. Second, Eq. (\[eq:meanfirstpassagetime\_3d\]) is only exact for $\rho \to 0$. Third, there is statistical error. Especially for $k \sigma^2 /\epsilon=40$ and 70, the particles get very sticky, and bond breaking becomes rare, making the statistical error the dominant one in these systems. For $k \sigma^2 /\epsilon = 0$ and $k \sigma^2 /\epsilon = 10$ the agreement of $\tau_{\rm life}$ at $\rho \sigma^3 = 0.1$ with the calculated mean first passage times is quite good. But as the density increases, the discrepancy between the theoretically predicted values and those sampled from simulated data increases, as expected. For $k \sigma^2 /\epsilon = 20$ in the low density regime ($\rho \sigma^3 =0.1$) the deviation from the theory is higher than in the previous cases. In the case of $k \sigma^2 /\epsilon = 40$ the comparison between calculation and simulation is only reasonable at low densities, where we find $\tau_{\rm life}/\tau_B=3.589$ for $\rho \sigma^3 = 0.01$ and $\tau_{\rm life}/\tau_B=3.786$ for $\rho \sigma^3 = 0.05$.
For increasing density the differences between calculation and simulation increase further. As mentioned above, a comparison for $k \sigma^2 /\epsilon = 70$ is hardly possible, and the differences between the values are large even in the low density case, where the simulations give $\tau_{\rm life}/\tau_B=822.596$ for $\rho \sigma^3 = 0.01$ and $\tau_{\rm life}/\tau_B=519.481$ for $\rho \sigma^3 = 0.05$. Despite quantitative discrepancies with the simulations, the theory captures the correct trend of increasing relaxation times for increasing density and bond strength.

| $k \sigma^2/\epsilon$ | 0 | 10 | 20 | 40 | 70 |
|---|---|---|---|---|---|
| $\tau/\tau_B$ from Eq. (\[eq:meanfirstpassagetime\_3d\]) | 0.388 | 0.472 | 0.588 | 1.000 | 2.764 |
| $\tau_{\rm life} /\tau_B$ for $\rho \sigma^3 =0.1$ | 0.174 | 0.273 | 0.482 | 4.389 | 872.476 |
| $\tau_{\rm life} /\tau_B$ for $\rho \sigma^3 =0.6$ | 0.273 | 0.483 | 0.831 | 16.839 | 908.690 |

: Mean first passage times $\tau$ for different bond strengths $k$ as calculated from Eq. (\[eq:meanfirstpassagetime\_3d\]), together with simulation results for $\rho \sigma^3 =0.1$ and 0.6.\[tab:passagetimes\]

Percolation and fractal dimension {#sec:perc} --------------------------------- We further investigate the structural properties of the system by examining the percolation transition and the fractal dimension of percolated systems. We are interested in the critical density $\rho_c$ above which 50% of the particles in the system belong to one cluster [@REF12], and especially in the dependence of $\rho_c$ on the hysteretic bond strength $k$. A cluster is an ensemble of particles that are connected such that any particle in the cluster can be reached from any other particle by following a path of bonds. Fig.
\[fig:perc\] (a) shows the results for $P_L$, which is the ratio $N_{\rm CL}/N$, with $N_{\rm CL}$ being the number of particles in the biggest cluster and $N$ the total number of particles, as a function of density. The colors indicate different bond strengths. Clearly the percolation threshold $\rho_c$ decreases as $k$ is increased. The reason is the magnitude of the attractive pair interaction, which increases with $k$; a particle bonded to a cluster by a strong hysteretic spring is less likely to break away from it than in a system with smaller $k$. This suggests that strong bonding supports both increased cluster growth and increased stability of the cluster over time: strongly interacting particles form percolating clusters that are stable. The snapshots in Fig. \[fig:perc\] (b) and (c) show the system at $t /\tau_B = 80$. The colors indicate different clusters, where brown is the largest cluster and white particles do not belong to any cluster. In (b) the system with the parameters $k \sigma^2 /\epsilon = 10$ and $\rho \sigma^3 = 0.3$ is not percolated, i.e. the largest cluster does not contain 50% of the particles. In (c) the system is percolated. Almost all particles belong to the percolating cluster for $k \sigma^2 /\epsilon = 40$ and $\rho \sigma^3 = 0.3$, where the particles are sticky. The snapshots reveal voids in the cluster, and therefore suggest a fractal dimension of the percolating cluster of $d_f < 3$. In order to characterize the fractal structure, we calculate the cumulative sum of the radial distribution function $g(r)=G_{\rm dist}(r,t=0)$, $$\begin{aligned} n(r) = 4 \pi \rho \int_0^r r'^2 g(r') dr'~.\end{aligned}$$ It can be shown that $n(r)$ is related to the distance by a power law above a certain decay length, $$\begin{aligned} n(r) \propto r^{d_f},\end{aligned}$$ with $d_f$ being the fractal dimension [@vicsek].
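The two steps just described, accumulating $n(r)$ from $g(r)$ and extracting $d_f$ from the power law, can be sketched as follows (our own illustration; the fit window is an assumption that has to be chosen per system, guided by the decay length):

```python
import numpy as np

def fractal_dimension(r, g, rho, fit_range=(3.0, 5.0)):
    """Cumulative sum n(r) of the radial distribution function g(r),
    followed by a power-law fit n(r) ~ r^{d_f} in log-log space.
    Returns the fitted exponent d_f."""
    integrand = 4.0 * np.pi * rho * r**2 * g
    n = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    mask = (r >= fit_range[0]) & (r <= fit_range[1]) & (n > 0)
    d_f, _ = np.polyfit(np.log(r[mask]), np.log(n[mask]), 1)
    return d_f
```

For a homogeneous system ($g = 1$) this recovers $d_f = 3$, while a synthetic correlation $g(r) \propto r^{d_f - 3}$ reproduces the prescribed fractal exponent.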
The result is shown in Fig. \[fig:perc\] (e), while Fig. \[fig:perc\] (d) shows the corresponding result for $g(r)$. The black curve, with $\rho \sigma^3=0.5$ and $k \sigma^2 /\epsilon = 10$, shows a percolated system where the fractal dimension is $d_f=3$. In the double-log plot of $n(r)$ this manifests itself as a straight line with slope 3. The red curve represents a percolated system with $\rho \sigma^3=0.2$ and $k \sigma^2 /\epsilon = 70$. The percolating cluster has a fractal dimension of $d_f=2.31$, which is the slope of the red curve in Fig. \[fig:perc\] (e) where it starts to asymptotically approach the black line, around $3\lesssim r /\sigma \lesssim5$. Percolating clusters can only be found for systems with strong bonding, i.e. $k \sigma^2 / \epsilon=40$ and 70. With decreasing density, $d_f$ decreases. These values of the fractal dimension are consistent with fractal dimensions found in other colloidal systems [@Poon:1995ts; @REF0; @fortini:pickering] at intermediate densities and interaction strengths [^1]. The relative error of $d_f$ is rather large and can be estimated at 15%. The reason is that it is not always clear how to estimate the decay length from the graphical representation. Another error source is the fitting of a line to the relevant part of $n(r)$. ![Percolation transition: (a) Probability that a particle belongs to the largest cluster, $P_L$, as a function of the particle density. The different colors represent different strengths of the hysteretic spring. (b) Simulation snapshot of a non-percolated system with $\rho \sigma^3=0.3$ and $k\sigma^2/\epsilon=10$. The largest cluster is colored in brown while white particles are not part of any cluster. (c) Snapshot with $\rho \sigma^3=0.3$ and $k\sigma^2/\epsilon=40$, where the percolating cluster shows a fractal dimension $<$ 3. The coloring is similar to (b).
(d) Radial distribution function for parameters $\rho \sigma^3 =0.2$ and $k \sigma^2 /\epsilon =70$ (red dashed curve), and $\rho \sigma^3 =0.5$ and $k \sigma^2 /\epsilon =10$ (black solid curve), both representing percolated systems. (e) Cumulative sum, $n(r)$, of (d) in double-log representation.\[fig:perc\]](Fig3.pdf){width="8cm"} van Hove Correlation function {#sec:hove} ----------------------------- In Fig. \[fig:hove\_02\] we show the results for the van Hove function for the density $\rho \sigma^3 = 0.2$. The left column shows the self part $G_{\rm self}$ in semi-logarithmic representation, while the right column shows $G_{\rm dist}$ on a linear scale. The different colors and line styles indicate the different correlation times: black solid is $t/\tau_B =0.08$, red dashed is $t/\tau_B =0.8$ and green dotted is $t/\tau_B =8$. In Fig. \[fig:hove\_02\], the first row shows $k \sigma^2 /\epsilon = 0$ (panels (a) and (b)), the second row $k \sigma^2 /\epsilon = 10$, the third row $k \sigma^2 /\epsilon = 20$, the fourth row $k \sigma^2 /\epsilon = 40$, and the last row $k \sigma^2 /\epsilon = 70$. For increasing $k$ we observe an increase of the maximum height of the self part $G_{\rm self}$, as well as a decrease of its width (faster decay of the self part of the correlation function). The reason is that at high $k$ the particles are more strongly bonded and have a reduced mobility. Up to $k \sigma^2 /\epsilon = 20$ the shape of $G_{\rm self}$ is still Gaussian, as expected for fluid systems [@hansen:mcdonald]. If $k$ is increased to $k \sigma^2 /\epsilon = 40$ and beyond, the shape of the self part changes. This indicates the transition from fluid to network behavior. The deviation is quantified in more detail in Sec. \[sec:ngauss\]. We point out that deviations of the self part of the van Hove function from Gaussian behavior correspond to the presence of the $\alpha$ and $\beta$ relaxation processes in the self intermediate scattering function [@thevanhove].
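The non-Gaussian parameter of Eq. (\[eq:ngauss\]) used for this quantification in Sec. \[sec:ngauss\] can be estimated directly from the particle displacements; a minimal sketch (our own illustration):

```python
import numpy as np

def alpha2(r0, rt):
    """Non-Gaussian parameter, Eq. (ngauss), estimated from the
    displacements between configurations at times 0 and t."""
    d2 = np.sum((rt - r0) ** 2, axis=1)   # squared displacement per particle
    return 3.0 * np.mean(d2**2) / (5.0 * np.mean(d2) ** 2) - 1.0
```

For purely Gaussian displacements (free diffusion) the moments satisfy $\langle r^4 \rangle = \frac{5}{3}\langle r^2 \rangle^2$ in three dimensions, so $\alpha_2 = 0$; a mixture of arrested and mobile particles, as in the strongly bonded systems, yields $\alpha_2 > 0$.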
![The van Hove correlation function for $\rho \sigma^3=0.2$: The left column shows the self part of the van Hove function, the right column shows the distinct part, at times $t/\tau_B = 0.08$ (black solid line), $t/\tau_B=0.8$ (red dashed line) and $t/\tau_B=8$ (green dotted line). (a), (b) $k\sigma^2/\epsilon=0$, (c), (d) $k\sigma^2/\epsilon=10$, (e), (f) $k\sigma^2/\epsilon=20$, (g), (h) $k\sigma^2/\epsilon=40$, (i), (j) $k\sigma^2/\epsilon=70$.\[fig:hove\_02\]](Fig4.pdf){width="8cm"} After the transition, the maximum of $G_{\rm self}$ increases by about two orders of magnitude, and a fast decay of the correlation function, compared to fluid systems, indicates the presence of highly immobile particles in this region. Increasing the density has a similar effect on $G_{\rm self}$ as increasing $k$, though the reasons are different. The increase in $\rho$ leads to an increase of the number of collisions between particles, which also reduces their mobility. In comparable work, where only the density of a hard-sphere suspension is increased [@thevanhove], no crossover from a Gaussian shape was found, suggesting that the reduction of mobility due to the hysteretic bonding drives this effect. The distinct part of the van Hove function shows an increase of the height of the first peak for $t /\tau_B=0.08$ as $k$ is increased, i.e. the probability of finding a particle in the first correlation shell grows with $k$. For $t /\tau_B = 0.8$ the peaks start to disappear, and for $t /\tau_B=8$, $G_{\rm dist}$ is constant in the fluid regime. For $k \sigma^2 /\epsilon \geq 40$, $G_{\rm dist}$ shows many oscillations, which decrease in amplitude but do not vanish completely over time. This indicates a shell-like local structure which is moderately stable over time, representing an arrested system. The dependence on density is comparable to that on $k$, but plays a lesser role. The reasons are similar to the ones given above.
However, the transition to an arrested system is not observed when only the density is increased. Figure \[fig:hove\_03\] shows the results for $G(r,t)$ for $\rho \sigma^3 =0.3$. The configuration of the panels and the color code are the same as in Fig. \[fig:hove\_02\]. The observations are comparable to those for $\rho \sigma^3 =0.2$, and again we find a deviation from a Gaussian shape of $G_{\rm self}$ for $k \sigma^2 /\epsilon =40$. This indicates that the transition from a fluid to a transient network occurs around $k \sigma^2 /\epsilon \simeq 40$. This conclusion is supported by Fig. \[fig:hove\_04\], where the van Hove function for $\rho \sigma^3 = 0.4$ is shown. The configuration of the panels is the same as in Fig. \[fig:hove\_03\]. As in the previous case, the deviation from a Gaussian in $G_{\rm self}$ and the arrested oscillations in $G_{\rm dist}$ are observed. These results show that the model allows the tuning of the system’s behavior from that of a fluid to that of a static network. The crossover is characterized by a transient network behavior. The system shows fluid-like dynamics with a reduced mobility of the particles when bonds of finite strength are present. ![Same as Fig. \[fig:hove\_02\], but for $\rho \sigma^3=0.3$.\[fig:hove\_03\]](Fig5.pdf){width="8cm"} ![Same as Fig. \[fig:hove\_03\], but for $\rho \sigma^3=0.4$.\[fig:hove\_04\]](Fig6.pdf){width="8cm"} Non-Gaussian parameter {#sec:ngauss} ---------------------- We next quantify the deviation of the self part of the van Hove function from the Gaussian behavior of a fluid by means of the parameter $\alpha_2$, defined in Eq. (\[eq:ngauss\]). The results for $\alpha_2$ are shown in Fig. \[fig:alpha2\] as a function of density and for different correlation times $t/\tau_B$ = 0.8, 3.2, 6.4, and 8. The color and symbol code refers to the bond strength. We observe that $\alpha_2<0.1$ for all systems with $k \sigma^2 /\epsilon < 40$, regardless of density and time.
Hence these systems can be regarded as fluid within the statistical error. For $k \sigma^2 /\epsilon=40$ the agreement of $G_{\rm self}$ with a Gaussian is quite good at $\rho \sigma^3 = 0.1$ for all times, but deteriorates rapidly as $\rho$ is increased, up to $\rho \sigma^3 \approx 0.3$. Above this density the increase of $\alpha_2$ slows down, and for $t /\tau_B =0.8$, $\alpha_2$ almost saturates. The same behavior occurs for $k \sigma^2 /\epsilon=70$, but the non-Gaussian parameter always has a higher value than for $k \sigma^2 /\epsilon=40$. The saturation can be explained by the rate of collisions in the system, as a collision results in strong bonding for $k \sigma^2 /\epsilon=40$ and 70. In systems with $\rho \sigma^3 < 0.3$ collision events are rarer than in denser systems, so that more particles remain mobile because they diffuse freely between collision events. At fixed correlation time, the second moment of $G_{\rm self}$, which determines its width, decreases as $k$ and $\rho$ are increased (see e.g. Fig. \[fig:hove\_04\]). The fourth moment decreases too (see also Fig. \[fig:hove\_04\]), but less rapidly relative to the square of the second moment, so that in total $\alpha_2$ increases. Therefore $\alpha_2$ quantifies the immobility of the particles compared to a fluid. We observe that $\alpha_2$ increases gradually when $\rho$ is increased. By varying $k$ and $\rho$, we can tune the system in such a way that the deviation of $G_{\rm self}$ from a Gaussian, as measured by $\alpha_2$, covers the dynamics of the system from fluid to fully static. ![Parameter $\alpha_2$ as a function of density: The panels show different correlation times, while the colors and symbol styles refer to different bond strengths. In (a) the correlation time is $\tau/\tau_B=0.8$, in (b) 3.2, in (c) 6.4 and in (d) 8.
The color and symbol code is the same for all times, with black crosses for $k \sigma^2 / \epsilon = 0$, red stars for $k \sigma^2 / \epsilon = 10$, green crosses for 20, blue diamonds for 40 and purple triangles for $k \sigma^2 / \epsilon = 70$. \[fig:alpha2\]](Fig7.pdf){width="8cm"} Conclusion {#sec:conclusion} ========== In conclusion, we have shown that the proposed model of hysteretic bond formation displays a variety of properties that are consistent with network formation. Depending on the strength of the bonding springs, we observe a crossover from transient network formation to an arrested quasi-static network. We have used the two-body time-dependent (van Hove) correlation function to characterize the dynamic structure. A clear crossover from fluid behavior at low spring constants to an arrested liquid-like structure at high spring constants and high densities is observed. This manifests itself in a clearly non-Gaussian shape of the self part and an increased correlation length and time in the distinct part for high spring constants. Moreover, the crossover is quantified by the non-Gaussian parameter $\alpha_2$, which allows a more precise study of the crossover in the parameter range. Furthermore, we have found that the mobility of the particles can be tuned in the fluid regime by varying the bond strength and the density. Our model can describe loose transient networks, where the rate of bonding and annihilation of bonds is high, as well as a strongly interacting network, in which new bonds last over a long period of time. This is supported by our statistical analysis of the bond forming and vanishing process. In future work, it would be very interesting to complement our simulation work by a theoretical approach that would describe network formation in fluids.
One possible candidate for such a theory is the recent power functional approach, where the dynamics of a Brownian many-body system is obtained from a variational principle on the one-body level [@PFT]. Generalizing this approach in order to include the hysteretic bond formation process constitutes an interesting research task for the future. The results presented in this paper pave the way for the analysis of transient network formation with varying hysteretic behavior. The model allows one to change the critical parameters of the hysteretic interaction and evaluate the effects on network formation. Hence, one is able to identify the signature behavior of hysteretic systems. Known examples of such systems, such as those governed by capillary forces, could also provide experimental confirmation. Discovering non-obvious hysteretic behavior could be of importance for characterizing the network formation behavior of polymers or polymer particles, such as those used in the paint and coating industries [@Keddie:2010ta]. We thank Joseph M. Brader and Thomas M. Fischer for helpful discussions. PK acknowledges the Elitenetzwerk Bayern (ENB) for partial support.
F. J. Maier and T. M. Fischer, Soft Matter [**12**]{}, 614 (2016).

[^1]: In the limit of low densities and large interaction strengths the system will reach a fractal dimension $d_f \simeq 1.7$, typical of systems formed from diffusion-limited cluster aggregation (DLCA).
--- abstract: 'We construct Bridgeland stability conditions on the derived category of smooth quasi-projective Deligne–Mumford surfaces whose coarse moduli spaces have ADE singularities. This unifies the construction for smooth surfaces and Bridgeland’s work on Kleinian singularities. The construction hinges on an orbifold version of the Bogomolov–Gieseker inequality for slope semistable sheaves on the stack, and makes use of the Toën–Hirzebruch–Riemann–Roch theorem.' address: - | BL: Department of Mathematics\ University of Utah\ Salt Lake City, UT 84102, USA - 'FR: Department of Mathematics, Rutgers University, Piscataway, NJ 08854, USA' author: - Bronson Lim - Franco Rota bibliography: - './bibliography.bib' title: Characteristic classes and stability conditions for projective Kleinian orbisurfaces --- Introduction ============ Preliminaries ============= Kleinian orbisurfaces --------------------- An *orbisurface* is a smooth and proper Deligne–Mumford surface such that the stacky locus has codimension 2. \[def:orbisurface\] For any orbisurface $\sS$, and geometric point $s\in \sS$, there is an étale local chart near $s$: $$j_s\colon [U/\mathrm{st}(s)]\to \sS$$ where $U\subset\mathbb{A}^2$ is open and $\mathrm{st}(s)$ is the stabilizer group of $s$ acting through $\mathrm{GL}_2$. The mapping $j_s$ induces a closed embedding $$j_s\colon [\ast/\mathrm{st}(s)]\to \sS$$ called the *residual gerbe* at $s$. We denote by $BG$ the quotient stack $[\ast/G]$. An orbisurface is *Kleinian* if for each $s\in \sS$, the stabilizer group acts through $\mathrm{SL}_2$. A Kleinian orbisurface is an $A_{N-1}$-orbisurface if its non-trivial stabilizer groups are cyclic of order $N$.
If $S$ is a surface with Kleinian singularities, then there is a Kleinian orbisurface $\sS$ with a mapping $\pi\colon\sS\to S$ such that the induced morphism $$\sS\setminus\pi^{-1}(\mathrm{Sing}(S))\to S\setminus\mathrm{Sing}(S)$$ is an isomorphism and which is universal among all dominant, codimension-preserving maps to $S$ [@FMN10]. The stack $\sS$ is called the *canonical stack* associated to the surface $S$. A line bundle on $\sS$ is *ample* if it is the pullback of an ample line bundle on the coarse space $S$. An orbisurface is *projective* if the coarse moduli space is projective. The weighted projective plane $\mathbb{P}_{1,1,N}$ has canonical stack the stacky weighted projective plane $$\textbf{P}_{1,1,N} = [(\mathbb{C}^3_{1,1,N}\setminus\{0\})/\mathbb{C}^\ast]$$ where the subscript indicates the weights of the $\mathbb{C}^\ast$-action. That is, $\lambda\in\mathbb{C}^\ast$ acts by $\lambda(x,y,z) = (\lambda x,\lambda y,\lambda^Nz)$. There is a unique stacky point where $x$ and $y$ are zero, with residual gerbe $B\mu_N$. Thus the stacky weighted projective plane is a projective $A_{N-1}$-orbisurface. The local model for a surface with an $A_{N-1}$ singularity is the hypersurface $$S=\{x^2+y^2+z^N=0\}$$ in $\mathbb{C}^3$. The canonical stack is the $A_{N-1}$-orbisurface $$\sS = [\mathbb{C}^2/\mu_N]$$ where $\lambda\in\mu_N$ acts via $ \lambda(u,v) = (\lambda u,\lambda^{-1}v)$. Although we are primarily interested in the case where there is a unique stacky point, the following example should be kept in mind. Let $A$ be an Abelian surface and let $\mu_2=\langle -1\rangle$ act on $A$ via negation, i.e. $-1\cdot a = -a$ for all $a\in A$. There are sixteen fixed points of this action. Thus the quotient stack $[A/\mu_2]$ is an $A_1$-orbisurface with sixteen residual gerbes of type $B\mu_2$.
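For the reader's convenience, we record the standard invariant-theory computation (a classical fact, spelled out here for completeness) identifying the coarse space of the local model $[\mathbb{C}^2/\mu_N]$ above with the $A_{N-1}$ hypersurface $S$. The $\mu_N$-invariant polynomials are generated by $u^N$, $v^N$ and $uv$, so that $$\mathbb{C}[u,v]^{\mu_N} = \mathbb{C}[u^N, v^N, uv] \cong \mathbb{C}[x,y,z]/(xy - z^N), \qquad x=u^N,\; y=v^N,\; z=uv.$$ The linear change of coordinates $x\mapsto x+iy$, $y\mapsto x-iy$, followed by rescaling $z$ by an $N$-th root of $-1$, identifies the hypersurface $\{xy=z^N\}$ with $\{x^2+y^2+z^N=0\}$.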
The derived McKay correspondence {#sec_McKay_corr} --------------------------------- A Kleinian orbisurface $\sS$ can be interpreted as a *stacky resolution of singularities* of its coarse moduli space $S$. The derived McKay correspondence [@BKR01] exhibits an equivalence $\Phi$ between the derived category $D(\sS)$ and that of the minimal resolution $f\colon \tilde{S}\to S$ of $S$. Let $\tilde{\sC}$ be the abelian subcategory of $\Coh(\tilde{S})$ consisting of sheaves $E$ such that $\dR f_*(E)=0$, and define a torsion pair: $$\begin{aligned} \tilde{\sT}_0&\coloneqq\set{T\in\Coh(\tilde{S})\st \dR^1f_*(T)=0};\\ \tilde{\sF}_0&\coloneqq\set{F\in\Coh(\tilde{S})\st f_*(F)=0, \Hom(\tilde\sC,F)=0}.\end{aligned}$$ The heart of the bounded $t$-structure on $D(\tilde{S})$ obtained by tilting $\Coh(\tilde{S})$ along the pair above is denoted $\zPer(\tilde{S}/S)$; its objects are called *perverse sheaves*. The reader is referred to [@Bri02_flops] and [@VdB04] for the details of this construction. The derived McKay correspondence $\Phi$ satisfies: $$\Phi(\Coh(\sS))\simeq \pair{\tilde{\sF}_0[1],\tilde{\sT}_0} = \zPer(\tilde{S}/S).$$ More explicitly, suppose $S$ has a unique singular point $p$. Let $\sS$ be the associated canonical stack and, abusing notation, $p$ the lift of the point $p$ to $\sS$. Denote by $C$ the fundamental cycle of $\tilde{S}\to S$, and by $C_i$ its irreducible components. Then we have $$\begin{aligned} \Phi(\sO_\sS) &=\sO_{\tilde{S}};\\ \Phi(\sO_p) &= \omega_{C}[1];\\ \Phi(\sO_p\otimes \rho_i) &= \sO_{C_{i}}(-1), \quad i=1,...,M.\end{aligned}$$ We fix a quasi-inverse $\Phi^{-1}$ of $\Phi$ and write $\sF_0\coloneqq \Phi^{-1}(\tilde{\sF}_0[1])$ and $\sT_0\coloneqq \Phi^{-1}(\tilde{\sT}_0)$, so that $$\Coh(\sS)=\pair{\sF_0,\sT_0}.$$ Moreover, the category $\sC$ of sheaves $E$ on $\sS$ such that $\dR\pi_*(E)=0$ satisfies $\sC=\Phi^{-1}\tilde\sC$, and is generated by the sheaves $\sO_p\otimes\rho_i$, $i\neq 0$.
We finish this section by recalling a definition which will be useful later. \[def\_cluster\_constellation\] Let $W$ be a quasi-projective variety, on which a finite group $G$ acts. A $G$-*constellation* on $W$ is a $G$-equivariant sheaf $E$ on $W$ with finite support such that $H^0(E)$ is isomorphic to the regular representation of $G$, as a $G$-representation. A $G$-*cluster* is the structure sheaf $\sO_Z$ of a subscheme $Z\subset W$ which is also a $G$-constellation. If $G$ is a finite subgroup of $\mathrm{GL}_2$ acting on $\mathbb{C}^2$, then the space of $G$-clusters, denoted $G$-Hilb$(\C^2)$, is the minimal resolution of $\C^2/G$ [@BKR01]. The skyscraper sheaves of points in the exceptional locus correspond under $\Phi$ to clusters supported at the origin in $\C^2$. Characteristic classes {#sec_Chern_classes} ---------------------- From now on, we assume that $\sS$ is a projective Kleinian orbisurface with a unique stacky point $p\in \sS$ and residual gerbe $BG=[\ast/G]$. Let $\iota\colon BG\hookrightarrow \sS$ denote the corresponding closed substack. We use Vistoli’s intersection theory in what follows [@Vistoli]. In particular, Chern classes and Todd classes are defined, as well as a degree map. The Hodge index theorem still holds, i.e. the intersection form on $\mathrm{NS}(\sS)\otimes\mathbb{R}$ is of signature $(1,r-1)$, where $r$ is the rank of $\mathrm{NS}(\sS)$: Suppose $H$ is an ample Cartier divisor on $\sS$. If $D\not\equiv 0$ is a divisor such that $D\cdot H=0$ then $D^2<0$. \[thm:hodge-index\] For $E$ a sheaf, and $H$ an ample divisor class on $S$, the slope of $E$ with respect to $H$ is $$\mu(E) = \frac{H\cdot \ch 1(E)}{\mathrm{rk}(E)},$$ with the convention that $\mu(E)=+\infty$ if $\mathrm{rk}(E)=0$. We say that $E$ is $\mu$-(semi)stable if for all non-zero proper subsheaves $E'\subset E$ one has $\mu(E')<(\leq) \mu(E)$.
Define also the discriminant of $E$ by $$\Delta(E) =(\ch 1(E))^2 - 2\mathrm{rk}(E)\ch 2(E).$$ If $E$ is a $\mu$-semistable sheaf, then $\Delta(E)\geq 0$, or equivalently $$\label{eq:bg-inequality} \ch 2(E)\leq \frac{(\ch 1(E))^2}{2\,\mathrm{rk}(E)}.$$ \[thm:bg-inequality\] The results above only involve a part of the Grothendieck group of $\sS$. In fact, the $K$-theory of $BG$ is free Abelian, generated by the irreducible representations $\{ \rho_i \,|\,i=0,...,M \}$ of $G$. For any perfect complex of sheaves $E$ on $\sS$, we have $$[{\dL}\iota^\ast E] = \sum_{i=0}^{M} a_i\rho_i.$$ \[def\_orbifold\_Chern\_char\] Given a perfect complex $E\in \sD(\sS)$, we define the *orbifold Chern character* $$\ch{orb}(E)=\left(\ch{}(E), a_0,...,a_{M}\right).$$ The Toën–Hirzebruch–Riemann–Roch theorem ---------------------------------------- We use a version of the Hirzebruch-Riemann-Roch theorem for smooth projective Deligne-Mumford stacks due to Toën [@Toe99]. The formula is analogous to the usual Hirzebruch-Riemann-Roch theorem, but it contains a correction term. For the convenience of the reader, we give a brief description of the formula, following [@Tse10 Appendix A]. Let $I\sS$ denote the inertia stack of $\sS$, and define a map $$\rho\colon K(I\sS) \to K(I\sS)\otimes \Q(\mu_\infty)$$ as follows: if $E$ is a bundle on $I\sS$ decomposing as a sum $\bigoplus\limits_{\zeta} E^{(\zeta)}$ of eigenbundles with eigenvalue $\zeta$, let $$\rho(E)=\sum\limits_{\zeta} \zeta E^{(\zeta)}.$$ One then defines the weighted Chern character as the composition $$\widetilde{\ch{}}\colon K(\sS)\xrightarrow{\sigma^*} K(I\sS) \xrightarrow{\rho} K(I\sS) \xrightarrow{\ch{}} H^*(I\sS)$$ where $\sigma\colon I\sS \to \sS$ is the projection and $\ch{}$ is the usual Chern character. The weighted Todd class $\widetilde{\mathrm{Td}}_\sS$ is defined in a similar way [@Tse10 Def. A.0.5].
Then we have the following. Let $E$ be a perfect complex of sheaves on $\sS$; then $$\chi(E) = \int\limits_{I\sS}\widetilde{\ch{}}(E).\widetilde{\mathrm{Td}}_\sS = \int\limits_{\sS}\widetilde{\ch{}}(E).\widetilde{\mathrm{Td}}_\sS + \delta(E)$$ where $\delta(E)\coloneqq \int\limits_{I\sS\setminus \sS}\widetilde{\ch{}}(E).\widetilde{\mathrm{Td}}_\sS$ is the aforementioned correction term. \[thm:tgrr\] Our short-term goal is now to investigate the term $\delta(E)$ in the case of a Kleinian orbisurface, by computing the weighted Chern characters of $[\dL\iota^*E]$. The inertia stack of $\sS$ is $$I\sS = \sS \sqcup (IBG \setminus BG)$$ where $$IBG \setminus BG = \bigsqcup\limits_{(g)\neq (1)}BC_G(g).$$ The degree of the Todd class on $IBG\setminus BG$ is given by the formula $$\int\limits_{IBG\setminus BG} \widetilde{\mathrm{Td}}_\sS = \sum\limits_{(g)\neq (1)}\frac{1}{\abs{C_G(g)}} \cdot \frac{1}{2-\xi_g - \xi_g^{-1}}$$ where $\xi_g$ and $\xi_g^{-1}$ are the eigenvalues of the action of $g$ on the tangent space $T_p\sS$ of the stacky point on $\sS$. This number is computed in [@CT19] to be $$\delta(\sO_\sS) = \frac{1}{12}\left(\chi_{top}(C_{red})-\frac{1}{\abs{G}}\right),$$ where $C$ is the fundamental cycle of the minimal resolution (see Section \[sec\_McKay\_corr\]). The fiber of a sheaf $E$ at $p$ decomposes as $[\dL\iota^*E]=\sum_{i=0}^M a_i \rho_i$, where the sum runs over all irreducible representations $\rho_i$ of $G$. On $BC_G(g)$, the element $g$ acts on $\rho_i$ with eigenvalues $\zeta_i^{(l)}$, to which correspond eigenspaces $\rho_i^{(l)}$. Therefore, $\dL\iota^*E$ decomposes on $BC_G(g)$ into weighted eigenbundles as $\sum\limits_{i=0}^M \sum\limits_{l=1}^{r_i} a_i \zeta_i^{(l)} \rho_i^{(l)}$.
Then, the weighted Chern character of $\dL\iota^*E_{|BC_G(g)}$ is given by $$\label{eq_weighted_Chern_char} \widetilde{\ch{}}(\dL\iota^*E_{|BC_G(g)}) = \sum\limits_{i=0}^M \sum_{l=1}^{r_i} a_i \zeta_i^{(l)} = \sum_{i=0}^M a_i \chi_i(g),$$ where $\chi_i\coloneqq \chi_{\rho_i}= \Tr \circ \rho_i$ is the character of the representation $\rho_i$. Our main interest lies in the following computation: \[lem\_delta\_of\_skyscrapers\] Let $\rho$ be an irreducible representation of $G$ of dimension $r$. Then the second Chern character of $\sO_p\otimes \rho$ is $\frac{r}{N}$, and $$\delta(\sO_p\otimes \rho)=\begin{cases} 1-\frac{1}{N} \qquad \text{ if }\rho = \mathbbm 1;\\ -\frac{r}{N} \qquad \text{ if }\rho\neq \mathbbm 1. \end{cases}$$ This is a local computation and so we can assume $\sS = [U/G]$ where $U$ is an open subset of $\mathbb{A}^2$. In this case, we have the equivariant Koszul complex (write $V$ to denote $T_p\sS$ as a representation of $G$) $$0\to \sO_U\otimes \Lambda^2V\cong\sO_U \to \sO_U \otimes V \to \sO_U \to \sO_p \to 0,$$ which resolves $\sO_p$.
Hence, $$[\dL\iota^*(\sO_p\otimes \rho)]=(2\cdot \mathbbm 1-V)\otimes \rho.$$ By Theorem \[thm:tgrr\] and multiplicativity of characters, the correction term is $$\begin{aligned} \delta(\sO_p\otimes \rho) &= \sum\limits_{(g)\neq (I)} \frac{1}{\abs{C_G(g)}}\cdot\frac{\widetilde{\ch{}}(\dL\iota^*(\sO_p\otimes \rho)_{|BC_G(g)})}{2-\xi_g - \xi_g^{-1}} \\ &= \sum\limits_{(g)\neq (I)} \frac{1}{\abs{C_G(g)}}\cdot\frac{(2\chi_{\mathbbm 1}(g) - \chi_V(g))\chi_\rho(g)}{2-\chi_V(g)}\\ &= \sum\limits_{(g)\neq (I)} \frac{\chi_\rho(g)}{\abs{C_G(g)}}. \\ \end{aligned}$$ Denote by $N_g$ the cardinality of the conjugacy class of $g\in G$, and write the orthogonality relation between characters: $$\delta_{\mathbbm 1\rho}= \frac 1N\sum_{g\in G}\chi_\rho(g)\overline{\chi_{\mathbbm 1}(g)}= \frac{1}{N}\sum_{(g)}N_g\chi_\rho(g)= \frac{1}{N}\sum_{(g)}\frac{N}{\abs{C_G(g)}}\chi_\rho(g)= \sum_{(g)}\frac{\chi_\rho(g)}{\abs{C_G(g)}},$$ where $\delta_{\mathbbm 1\rho}=1$ if $\rho=\mathbbm 1$ and 0 otherwise. The summand corresponding to $(g)=(I)$ is $\frac{\chi_\rho(I)}{N}=\frac rN$. Isolating it, one obtains $$\sum\limits_{(g)\neq (I)} \frac{\chi_\rho(g)}{\abs{C_G(g)}} =\begin{cases} 1-\frac{1}{N} \qquad \text{ if }\rho = \mathbbm 1;\\ -\frac{r}{N} \qquad \text{ if }\rho\neq \mathbbm 1. \end{cases}$$ Since $\chi(\sS,\sO_p\otimes\rho)=\chi(S,\pi_*(\sO_p\otimes\rho))=\delta_{\mathbbm 1\rho}$, the statement about second Chern characters follows.
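For the cyclic case $G=\mu_N$ (the $A_{N-1}$-orbisurfaces above), every irreducible representation $\rho_k$ is one-dimensional with character $\chi_k(g)=\zeta^{kg}$, and every centralizer is all of $G$, so the correction term reduces to $\sum_{g\neq 1}\chi_k(g)/N$. The character-sum identity used in the proof can then be checked numerically; the following sketch is our own illustration, not part of the paper:

```python
import cmath

def delta_term(N, k):
    """Sum over the non-identity elements of mu_N of chi_k(g) / |C_G(g)|.

    For the cyclic group mu_N every centralizer is the whole group, so
    |C_G(g)| = N, and chi_k(g) = zeta^(k g) with zeta = exp(2*pi*i/N).
    """
    zeta = cmath.exp(2j * cmath.pi / N)
    return sum(zeta ** (k * g) / N for g in range(1, N))

# Matches the lemma: 1 - 1/N for the trivial character (k = 0),
# and -1/N (= -r/N with r = 1) for every non-trivial character.
print(delta_term(5, 0), delta_term(5, 2))
```

Since $\sum_{g\in G}\chi_k(g)=N\delta_{k,0}$, removing the identity term reproduces exactly the case distinction in the lemma.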
--- abstract: 'A class of finite GUTs in curved spacetime is considered in connection with the cosmological inflation scenario. It is confirmed that the use of the running scalar-gravitational coupling constant in these models helps in realizing a successful chaotic inflation. The analyses are made for some different sets of the models.' --- HUPD-9716\ Sept 13, 1997\ [**Finite Grand Unified Theories and Inflation**]{} [S. Mukaigawa, T. Muta and S. D. Odintsov]{}\ Introduction ============ It is now commonly assumed that an inflationary stage was present in the early universe (see Ref. 1 for a review). Among the various models in the inflationary scenario, the chaotic inflation model seems to be the most successful and promising.$^{1}$ In the chaotic inflation model, however, we need the fine-tuning of some coupling constants, such as the scalar-gravitational coupling constant$^{2}$ $\xi$. The scalar-gravitational term associated with this coupling constant is required in any quantum field theory in curved space-time in order to guarantee the multiplicative renormalizability of the theory$^{3}$ (see Ref. 4 for a general review). Applying the renormalization group argument we find that the coupling constant $\xi$ starts running.$^{3,4,5}$ The behavior of the running coupling constant in strong gravitational fields has been investigated for various models in Refs. 3 and 5 (see Ref. 4 for a review). In a recent paper$^{6}$ an interesting observation has been made on an implication of the running coupling constant in realizing the inflation scenario. The authors of Ref. 6 report that the use of the running scalar-gravitational coupling constant in a specific field theory$^{7}$ helps in constructing a successful model of chaotic inflation.
The behaviour of the running $\xi$ in their model is typical of $\lambda\varphi^{4}$-theory,$^{3}$ so that $$\begin{aligned} \xi(t) &=& \frac{1}{6} + (\xi-\frac{1}{6})(1-a^{2}\lambda t)^{\alpha}, \nonumber \\ \lambda(t) &=& \frac{\lambda}{1-a^2 \lambda t}, \label{eq1}\end{aligned}$$ where $a^2$ and $\alpha$ are suitable constants, and $t$ is the RG parameter, $ t=\frac{1}{2} \ln \frac{\varphi^2}{\mu^2}$, with $\mu$ the renormalization scale. Depending on the sign of the exponent $\alpha$ it is possible to have asymptotic conformal invariance or not.$^{3,4,5}$ In fact $\xi(t)\rightarrow 1/6$ (for $\alpha<0$) as $t \rightarrow \infty$ (the infrared limit), which is the case of Ref. 6. For $\alpha > 0$, $|\xi(t)| \rightarrow \infty$ as $t \rightarrow \infty$.$^{3,4,5}$ In Ref. 6 one of the simplest supersymmetric models, i.e. the Wess-Zumino model, was considered, and the effective potential in the flat direction of the model was studied in order to restrict oneself to the effect of the quadratic terms of the model. Thus the model exhibits the typical behavior mentioned above. There is another type of supersymmetric model, the so-called finite GUTs, in which the behaviour of $\xi(t)$ is qualitatively different from the one we have seen in Eq. (\[eq1\]). The purpose of this note is to examine the possibility of chaotic inflation in the finite GUTs (including some possible finite non-supersymmetric theories). Finite GUTs =========== Let us consider typical (supersymmetric or non-supersymmetric) finite GUTs in curved spacetime. In curved space-time the theory is not completely finite,$^{8}$ owing to the appearance of a divergence in the vacuum energy (in the external gravitational field sector).
Nevertheless, the matter sector remains unaffected.$^{8}$ The behavior of the coupling constants in the matter sector is given as follows:$^{8}$ $$\begin{aligned} &&g^{2}(t)=g^2, \;\;\; h(t)^{2}=k_{1}g^{2}, \;\;\; f(t)=k_{2}g^2, \nonumber \\ && \xi(t) = \frac{1}{6}+ ( \xi - \frac{1}{6})\exp \left[ c g^{2} t \right], \label{eq2}\end{aligned}$$ where $g(t)$, $h(t)$ and $f(t)$ are running coupling constants, $g^{2} \ll 1$, and $k_{1}, k_{2}$ and $c$ are certain numerical constants determined by the group structure of the theory. Note that $c$ may be positive, negative or zero depending on the nature of the theory. Let us consider a concrete example. The $SU(2)$ gauge theory with $SU(N)$ global invariance (the Lagrangian is written in the book previously mentioned;$^{4}$ see Eq. (3.130) therein) includes gauge fields, Weyl spinors and scalars in the adjoint representation of $SU(2)$. With respect to the global group $SU(N)$ the spinors and scalars belong to the fundamental representation and the six-dimensional antisymmetric representation, respectively. In the case of flat space-time the theory was introduced in Ref. 9. The theory has two regimes. In the first regime it is the $N=4$ extended supersymmetric gauge theory, which is finite to all orders of perturbation theory in flat spacetime. The direct calculation yields$^{8}$ $c={3}/{(2\pi^2)}$. In the second regime the theory corresponds to a one-loop finite non-supersymmetric theory in flat spacetime and $c \approx {27}/{(4\pi)^2}$. There are some other finite theories where $c$ could be negative or zero.$^{8}$ Consider the scalar sector of the finite theory taken in the flat direction of the effective potential, where the interaction terms do not contribute.
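The exponential running in Eq. (\[eq2\]) can be made concrete with a minimal numerical sketch (ours, not part of the original references). It evaluates $\xi(t)$ with $t=\frac{1}{2}\ln(\varphi^2/\mu^2)$, using the initial condition $\xi(\mu)=0$ adopted later in the text, together with the saddle-point value $\varphi_{\mbox{cr}}=M_{pl}/\sqrt{-8\pi\xi}$ of Eq. (\[eq8\]), which exists wherever $\xi<0$; we work in units of $M_{pl}$:

```python
import math

M_PL = 1.0  # Planck mass; everything measured in units of M_pl

def xi_running(phi, mu, c_g2, xi_mu=0.0):
    """Running coupling xi(t) of Eq. (2), with t = (1/2) ln(phi^2 / mu^2)."""
    t = math.log(phi / mu)  # equals (1/2) ln(phi^2/mu^2) for phi > 0
    return 1.0 / 6.0 + (xi_mu - 1.0 / 6.0) * math.exp(c_g2 * t)

def phi_cr(phi, mu, c_g2):
    """Saddle point phi_cr = M_pl / sqrt(-8 pi xi); defined only for xi < 0."""
    xi = xi_running(phi, mu, c_g2)
    return M_PL / math.sqrt(-8.0 * math.pi * xi) if xi < 0 else None

# With mu = 50 M_pl and c g^2 = -1e-3, xi runs negative for phi < mu,
# and at phi = 5 M_pl the saddle point already lies above phi (cf. Fig. 1).
print(xi_running(5.0, 50.0, -1e-3), phi_cr(5.0, 50.0, -1e-3))
```

With these parameters $\varphi_{\mbox{cr}}-\varphi$ is positive at $\varphi=5M_{pl}$, in line with the behavior reported below for $|c|g^2\sim 10^{-3}$.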
The renormalization-group-improved effective action in curved spacetime,$^{4,10}$ coupled with the classical Einstein gravity, is given by $$S= - \frac{1}{2} \int d^{4}x \sqrt{-g} \left[ \frac{M_{pl}^2}{8\pi} R +Z(t) \partial_{\mu}\varphi(t) \partial^{\mu}\varphi(t) +m^{2}(t)\varphi^{2}(t) -\xi(t) R \varphi^{2}(t) \right],$$ where $t=\frac{1}{2} \ln{ \frac{\varphi^{2}}{\mu^{2}} }$. The variation of the running mass as a function of $t$ is considered to be small, i.e. $m^{2}(t) \simeq m^{2}$. We choose the gauge in which the anomalous dimension of the scalar field vanishes (i.e. the renormalization of the scalar field is finite); then $$Z(t) = 1, \;\;\; \varphi(t) = \varphi. \label{eq4}$$ Under this circumstance the effective action reads $$S=-\frac{1}{2} \int d^{4}x \sqrt{-g} \left[ \frac{M_{pl}^2}{8\pi} R + \partial_{\mu}\varphi \partial^{\mu}\varphi + m^{2} \varphi^{2} -\xi(t) R \varphi^{2} \right]. \label{eq5}$$ It should be noted here that in our present model it is not necessary to incorporate the effect of the running $\varphi$, due to the property (\[eq4\]). In the model employed in Ref. 6, Eq. (\[eq4\]) does not hold, so that the running of $\varphi$ plays an important role in deriving the cosmological predictions; this effect, however, is not fully taken into account there. Field equations for the theory characterized by Eq.
(\[eq5\]) are given by $$\begin{aligned} && \left( \Box - m^{2} + \xi(\varphi)R + \frac{1}{2}\frac{d\xi}{d\varphi}(\varphi) R \right) \varphi \;\; = \; 0, \nonumber \\ && R_{\mu\nu} -\frac{1}{2}R g_{\mu\nu} \;\; = \;\; \frac{8\pi}{M_{pl}^2} T_{\mu\nu}.\end{aligned}$$ Rewriting these equations in the Friedmann-Robertson-Walker Universe with scale factor $a$ we obtain$^{11}$ $$\begin{aligned} && \ddot{\varphi}+3H\dot{\varphi}+m^{2}\varphi + \left[ \xi (\varphi) \varphi + \frac{1}{2} \frac{ d \xi }{ d \varphi } (\varphi) \right] \times \nonumber \\ && \left[ \{ 6 \xi (\varphi) -1 + 12 \frac{ d \xi }{ d \varphi } (\varphi) \varphi + 6 \frac{ d^{2} \xi }{ d \varphi^{2} } (\varphi) \varphi^{2} \} \right. \dot{ \varphi }^{2} + 2m^{2}\varphi^{2} \\ && \left. + \{ 6 \xi (\varphi) \varphi + 3 \frac{d\xi}{d\varphi}(\varphi)\varphi^{2} \} \{ \ddot{\varphi} + 3H\dot{\varphi} \} \right] \left[ \frac{M_{pl}^2}{8\pi} - \xi(\varphi)\varphi^{2} \right]^{-1} = 0, \nonumber\end{aligned}$$ where $H={\dot{a}}/{a}$. Chaotic inflation ================= As has been established in Ref. 11, there are two saddle points of Eq. (7) for negative $\xi$. They are given by $$\varphi_{\mbox{cr}} = \pm \frac{ M_{pl} }{ \sqrt{-8\pi\xi} }, \;\;\; \dot{\varphi}=0. \label{eq8}$$ It is discussed in Ref. 6 (and in the preceding works cited there) that the initial conditions for the inflaton $\varphi$ required in order to have a successful chaotic inflation as well as a sufficient period of inflation are $$-\frac{M_{pl}}{\sqrt{8\pi |\xi|}} < \varphi < \frac{M_{pl}}{\sqrt{8\pi |\xi|}}, \label{eq9}$$ and $$|\varphi| \geq 5 M_{pl}. \label{eq10}$$ Actually, for negative as well as positive $\xi$ we have two qualitatively different situations, summarized by the conditions (\[eq9\]) and (\[eq10\]). Our purpose now is to examine the initial conditions (\[eq9\]) and (\[eq10\]) for finite GUTs with running $\xi(t)$ given by Eq. (\[eq2\]). It should be noted that in Eq.
(\[eq2\]) $\xi = \xi(\mu)$ and $g^2 = g^2(\mu)$ are the initial values of the running coupling constants $\xi(t)$ and $g^2(t)$, respectively, at a certain RG scale $\mu$. We start with a very small initial value of $\xi(t)$, and so we may practically set $\xi(\mu)=0$. Then the total variation of $\xi(t)$ comes purely from the running effect. We start with the minimal theory at scale $\mu$. We wish to plot $\varphi_{\mbox{cr}}-\varphi$ as a function of $\varphi$. The relation between $\varphi_{\mbox{cr}}-\varphi$ and $\varphi$ is easily obtained by using Eq. (\[eq2\]) and Eq. (\[eq8\]) with $ t=\frac{1}{2} \ln \frac{\varphi^2}{\mu^2}$. In Figs. 1, 2, 3 and 4 the behavior of $\varphi_{\mbox{cr}}-\varphi$ is shown in four typical cases with ($\mu=50M_{pl}, c<0$), ($\mu=50M_{pl}, c>0$), ($\mu=2M_{pl}, c<0$) and ($\mu=2M_{pl}, c>0$), respectively. (Note here that the behavior of $\varphi_{\mbox{cr}}-\varphi$ as a function of $\varphi$ is symmetric around $\varphi=0$, and so we need not consider the case $\varphi<0$.)\ Figs. 1, 2, 3 and 4\ Let us examine whether we can have any region where $\varphi_{\mbox{cr}}-\varphi$ is positive (i.e. $\varphi_{\mbox{cr}}>\varphi$) for small $\varphi$, starting with $\varphi\ge 5M_{pl}$. We clearly see that in all four cases $\varphi_{\mbox{cr}}-\varphi$ becomes positive if $|c|g^2\sim 10^{-3}$. It is important to note that chaotic inflation is realized independently of the sign of $c$. Thus we conclude that for a wide class of the finite GUTs we have successful chaotic inflation. Conclusions =========== Working within the framework of the finite GUTs, we examined the mechanism$^{6}$ that may lead to successful chaotic inflation. We find that in a wide class of the finite GUTs chaotic inflation is realized as long as the gauge coupling constant is kept sufficiently small. In this sense the finite GUTs are worth further investigation in connection with the early universe scenario.
In this regard it is very interesting to note that the finite GUTs in curved spacetime are one of the possible candidates for a solution to the cosmological constant problem (see the second reference in Ref. 10), due to the exponential running of the effective cosmological constant. These favorable properties of the theories under discussion indicate that the cosmological applications of the finite GUTs (in particular the N=4 super Yang-Mills theory) should be considered more seriously. [**References**]{} 1. A. Linde, [*Particle Physics and Inflationary Cosmology*]{}, Harwood Academic Publishers, 1990. 2. T. Futamase, T. Rothman and R. Matzner, Phys. Rev. (1989) 405; T. Futamase and K. Maeda, Phys. Rev. (1989) 405. 3. I. L. Buchbinder and S. D. Odintsov, Izw. Vuzov. Fizika (Sov. Phys. J.) N12 (1983) 108; Yad. Fiz. (Sov. J. Nucl. Phys.) (1984) 1338; Lett. Nuovo Cim. (1985) 379. 4. I. L. Buchbinder, S. D. Odintsov and I. L. Shapiro, [*Effective Action in Quantum Gravity*]{}, IOP Publishing, Bristol and Philadelphia, 1992. 5. T. Muta and S. D. Odintsov, Mod. Phys. Lett. A6 (1991) 3641. 6. T. Futamase and M. Tanaka, preprint OCHA-PP-95, 1997; hep-ph/9704303. 7. M. Tanaka, hep-th/9701063. 8. I. L. Buchbinder, S. D. Odintsov and I. M. Lichtzier, Class. Quant. Grav. (1989) 605. 9. M. Böhm and A. Denner, Nucl. Phys. (1987) 206. 10. I. L. Buchbinder and S. D. Odintsov, Class. Quant. Grav. (1985) 721; E. Elizalde and S. D. Odintsov, Phys. Lett. B333 (1994) 331. 11. L. Amendola, M. Litterio and F. Occhionero, Int. J. Mod. Phys. (1990) 3861; A. Barroso, J. Casasayas, P. Crawford, P. Moniz and A. Nunes, Phys. Lett. (1992) 264.
[Four figures omitted (gnuplot LaTeX picture output; only the labels are recoverable): curves plotted against $\varphi/M_{pl}$, for $\mu/M_{pl} = 50$ with $c < 0$ and with $c > 0$, and for $\mu/M_{pl} = 2$ with $c < 0$ and with $c > 0$; in each panel the three curves correspond to $|c|g^2 = 10^{-3}$, $10^{-2}$ and $10^{-1}$.]
--- abstract: 'Using a simple transfer matrix approach we have derived very long series expansions for the perimeter generating function of punctured staircase polygons (staircase polygons with a single internal staircase hole). We find that all the terms in the generating function can be reproduced from a linear Fuchsian differential equation of order 8. We perform an analysis of the properties of the differential equation.' address: | ARC Centre of Excellence for Mathematics and Statistics of Complex Systems,\ Department of Mathematics and Statistics, The University of Melbourne, Victoria 3010, Australia author: - 'Anthony J. Guttmann and Iwan Jensen' title: The perimeter generating function of punctured staircase polygons --- Introduction ============ A well-known long standing problem in combinatorics and statistical mechanics is to find the generating function for self-avoiding polygons (or walks) on a two-dimensional lattice, enumerated by perimeter. Recently, we have gained a greater understanding of the difficulty of this problem, as Rechnitzer [@AR03a] has proved that the (anisotropic) generating function for square lattice self-avoiding polygons is not differentiably finite [@RPS80a], as had been conjectured earlier on numerical grounds [@Guttmann2001]. That is to say, it cannot be expressed as the solution of an ordinary differential equation with polynomial coefficients. There are many simplifications of this problem that are solvable [@BM96a], but all the simpler models impose an effective directedness or equivalent constraint that reduces the problem, in essence, to a one-dimensional problem. ![\[fig:poly\] Examples of the types of polygons studied in this paper. ](polygons.eps) A staircase polygon can be viewed as the intersection of two directed walks starting at the origin, moving only to the right or up and terminating once the walks join at a vertex. 
It is well-known that the generating function for staircase polygons is $$P(x) = \frac{1-2x-\sqrt{1-4x}}{2} \propto (1-\mu x)^{2-\alpha},$$ where the connective constant $\mu=4$ and the critical exponent $\alpha=3/2$. Punctured staircase polygons [@GJWE00] are staircase polygons with internal holes which are also staircase polygons (the polygons are mutually- as well as self-avoiding). In [@GJWE00] it was proved that the connective constant $\mu$ of $k$-punctured polygons (polygons with $k$ holes) is the same as the connective constant of unpunctured polygons. Numerical evidence clearly indicated that the critical exponent $\alpha$ increased by $3/2$ per puncture. The closely related model of punctured discs was considered in [@JvRW90]. Punctured discs are counted by area and in this case it was proved that the critical exponent increases by 1 per puncture. Here we will study only the case with a [*single*]{} hole (see figure \[fig:poly\]), and we will refer to these objects as punctured staircase polygons. The perimeter length of staircase polygons is even and thus the total perimeter (the outer perimeter plus the perimeter of the hole) is also even. We denote by $p_n$ the number of punctured staircase polygons of perimeter $2n$. The results of [@GJWE00] imply that the half-perimeter generating function has a simple pole at $x=x_c=1/\mu=1/4$, though the analysis in [@GJWE00] clearly indicated that the critical behaviour is more complicated than a simple algebraic singularity. Recently we found that the perimeter generating function of three-choice polygons can be expressed as the solution of an 8th order linear ODE [@GJ06a]. 
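As a quick sanity check, the expansion of this closed form can be generated with the generalised binomial theorem and compared against the Catalan numbers $C_{n-1}$, which are well known to count staircase polygons of half-perimeter $n$. The sketch below is purely illustrative (the function names are ours, not from any enumeration code):

```python
from fractions import Fraction

def sqrt_coeff(n):
    """[x^n] sqrt(1 - 4x), via the generalised binomial theorem."""
    b = Fraction(1)
    for k in range(n):          # build binom(1/2, n) term by term
        b *= Fraction(1, 2) - k
        b /= k + 1
    return b * (-4) ** n

def staircase_coeff(n):
    """[x^n] (1 - 2x - sqrt(1 - 4x)) / 2, the half-perimeter series P(x)."""
    t = -sqrt_coeff(n) / 2
    if n == 0:
        t += Fraction(1, 2)     # the constant 1/2
    if n == 1:
        t -= 1                  # the -2x/2 term
    return t

# Half-perimeter-n staircase polygons should be counted by C_{n-1}.
print([int(staircase_coeff(n)) for n in range(2, 9)])
# -> [1, 2, 5, 14, 42, 132, 429]
```

The square-root term is what produces the exponent $2-\alpha = 1/2$ at $x = 1/4$, consistent with $\mu = 4$ and $\alpha = 3/2$ quoted above.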
Similarly, in this paper we report on work which has led to an exact Fuchsian linear differential equation of order 8 apparently satisfied by the perimeter generating function, ${\ensuremath{\mathcal{P}}}(x) = \sum_{n\geq 0} p_nx^n$, for punctured staircase polygons (that is, ${\ensuremath{\mathcal{P}}}(x)$ is one of the solutions of the ODE, expanded around the origin). The first few terms in the generating function are $${\ensuremath{\mathcal{P}}}(x) = x^8 + 12x^9+94x^{10}+604x^{11}+3463x^{12}+\cdots.$$ Our analysis of the ODE shows that the dominant singular behaviour is $${\ensuremath{\mathcal{P}}}(x) \sim \frac{A(x)}{(1-4x)} + \frac{B(x) + C(x) \log(1-4x)}{\sqrt{1-4x}}+D(x) (1+4x)^{13/2}.$$ So in the notation used above, the generating function has a dominant singularity at $x=x_c=1/\mu$ with exponent $\alpha=3$. This result confirms exactly the conjecture for the critical exponent [@GJWE00] in the case of a single puncture, and the quite complicated corrections at the critical point explain why the analysis in [@GJWE00], based on a relatively short series, was so difficult. It is also possible to express the generating function ${\ensuremath{\mathcal{P}}}(x)$ as a sum of $4 \times 4$ Gessel-Viennot determinants [@GV89]. This is clear from figure \[fig:gv\], where the enumeration of the lattice paths between the dotted lines is just the classical problem of 4 vicious walkers, and these must be joined to two vicious walkers to the left, and to two vicious walkers to the right. Then one must sum over different possible geometries. However, the fact that the generating function is so expressible implies that it is differentiably finite [@Lipshitz89]. ![ \[fig:gv\] The decomposition of a punctured staircase polygon into a sequence of 2-4-2 vicious walkers, each expressible as a Gessel-Viennot determinant](punctured_GV.eps) Unfortunately we cannot readily bound the size of the underlying ODE, otherwise we could use this observation to provide a proof of our results.
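For the unpunctured model this determinant representation is easy to make concrete: a staircase polygon with a $w \times h$ bounding box is a pair of non-intersecting directed paths, counted by a $2 \times 2$ Gessel-Viennot determinant, and summing over bounding boxes recovers the Catalan count. The sketch below is our own illustration of this smaller case (the $4 \times 4$ determinants needed for punctured polygons work analogously but are not reproduced here):

```python
from math import comb

def b(n, k):
    """Binomial coefficient, taken to vanish outside 0 <= k <= n."""
    return comb(n, k) if 0 <= k <= n else 0

def staircase_in_box(w, h):
    """Staircase polygons with a w x h bounding box: the 2x2
    Gessel-Viennot determinant for two non-intersecting paths."""
    n = w + h - 2
    return b(n, w - 1) ** 2 - b(n, w) * b(n, w - 2)

# Summing over boxes with w + h = n recovers the Catalan number C_{n-1}.
catalan = [comb(2 * m, m) // (m + 1) for m in range(10)]
for n in range(2, 11):
    assert sum(staircase_in_box(w, n - w) for w in range(1, n)) == catalan[n - 1]
print("2x2 Gessel-Viennot check passed")
```

Per bounding box the determinant reproduces the Narayana numbers, whose row sums are the Catalan numbers.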
As it is, we originally generated the counts of punctured staircase polygons up to perimeter 502 (251 coefficients), and found what we believe to be the underlying ODE experimentally from the first 195 coefficients. The ODE then correctly predicts the next 56 coefficients. While the possibility that the underlying ODE is not the correct one is extraordinarily small, our procedure still does not constitute a proof, of course. We have since extended the count beyond perimeter 800 and still all coefficients are predicted by our ODE.

Computer enumeration \[sec:enum\]
=================================

The algorithm we use to count the number of punctured staircase polygons is a modified version of the algorithm of Conway [@CGD97] for the enumeration of imperfect staircase polygons. The two problems are very similar and consequently there are only minor differences between the algorithms. A detailed description of the algorithm we used to count imperfect staircase polygons can be found in [@GJ06a]. The algorithm is based on transfer matrix techniques. This entails bisecting the polygons by a line (as illustrated in figure \[fig:poly\]) and enumerating the number of polygons by moving the line ‘forward’ one step at a time. Punctured staircase polygons start out as ordinary staircase polygons and the line bisects the polygon at two edges. Then at some vertex two additional directed walks (sharing the same starting point) are inserted between the two original walks. The line will thus intersect these polygon configurations at four edges (see figure \[fig:poly\]). The only difference between the algorithm in [@GJ06a] and the one used for this paper is in how the four directed walks intersected by the line are connected in order to produce a valid polygon. To produce a punctured staircase polygon we first connect the two innermost walks and then the two outermost walks are connected.
Imperfect staircase polygons, on the other hand, are produced by connecting the first walk with the second walk and the third walk with the fourth walk. The updating rules used to count imperfect staircase polygons are given in [@GJ06a] and are easily amended to count punctured staircase polygons, bearing in mind the different ‘closing’ criteria outlined above. We calculated the number of punctured staircase polygons up to perimeter 502. The integer coefficients become very large, so the calculation was performed using modular arithmetic [@KnuthACPv2]. This involves performing the calculation [*modulo*]{} various prime numbers $p_i$ and then reconstructing the full integer coefficients at the end. We used primes of the form $p_i=2^{30}-r_i$, where $r_i$ are small positive integers, less than $1000$, chosen so that $p_i$ is prime, and $p_i \ne p_j$ unless $i = j$. Seventeen primes were needed to represent the coefficients correctly. The calculation for each prime used about 200 MB of memory and about 8 minutes of CPU time on a 2.8 GHz Xeon processor. Naturally we could have carried the calculation much further (and we have since done this), but as we shall demonstrate in the next section this number of coefficients more than sufficed to identify an exact differential equation satisfied by ${\ensuremath{\mathcal{P}}}(x)$.

The Fuchsian differential equations \[sec:fde\]
===============================================

In recent papers Zenine [*et al*]{} [@ZBHM04a; @ZBHM05a; @ZBHM05b] obtained the linear differential equations whose solutions give the 3- and 4-particle contributions $\chi^{(3)}$ and $\chi^{(4)}$ to the Ising model susceptibility. In [@GJ06a] we used their method to find a linear differential equation for three-choice polygons, and in this paper we extend this work further to find a linear differential equation which has as a solution the generating function ${\ensuremath{\mathcal{P}}}(x)$ for punctured staircase polygons. We briefly outline the method here.
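As an aside before the ODE search, the modular-arithmetic bookkeeping described in the previous section is easily sketched. The snippet below is illustrative only: the "coefficient" is a stand-in integer, and the primes $2^{30}-r_i$ are found by a naive search rather than taken from the actual computation:

```python
from math import comb, prod

def is_prime(n):
    """Trial division; fast enough for candidates just below 2^30."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Distinct primes of the form 2^30 - r, as in the text.
primes, r = [], 1
while len(primes) < 4:
    if is_prime(2 ** 30 - r):
        primes.append(2 ** 30 - r)
    r += 1

def crt(residues, moduli):
    """Chinese Remainder Theorem: the unique x mod prod(moduli)."""
    M = prod(moduli)
    x = 0
    for res, m in zip(residues, moduli):
        Mi = M // m
        x += res * Mi * pow(Mi, -1, m)   # pow(., -1, m) is the inverse mod m
    return x % M

coeff = comb(100, 50)                    # stand-in for a large series coefficient
residues = [coeff % p for p in primes]   # what each modular run would store
assert crt(residues, primes) == coeff
```

Four primes suffice here because $\prod p_i \approx 2^{120}$ exceeds the stand-in coefficient; the actual series coefficients required 17 primes.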
Starting from a (long) series expansion for the function ${\ensuremath{\mathcal{P}}}(x)$ we look for a linear differential equation of order $m$ of the form $$\label{eq:de} \sum_{k=0}^m P_k(x) \frac{\rmd^k}{\rmd x^k}{\ensuremath{\mathcal{P}}}(x) = 0,$$ such that ${\ensuremath{\mathcal{P}}}(x)$ is a solution to this homogeneous linear differential equation, where the $P_k(x)$ are polynomials. In order to make it as simple as possible we start by searching for a Fuchsian [@Ince] equation. Such equations have only regular singular points. There are several reasons for searching for a Fuchsian equation, rather than a more general D-finite equation. Computationally the Fuchsian assumption simplifies the search for a solution. From the general theory of Fuchsian [@Ince] equations it follows that the degree of $P_k(x)$ is at most $n-m+k$, where $n$ is the degree of $P_m(x)$. To simplify matters further (reduce the degrees of the unknown polynomials) it is advantageous to explicitly assume that the origin and $x=x_c=1/4$ are regular singular points and set $P_k(x)=Q_k(x)S(x)^k$, where $S(x)=x(1-4x)$. Thus when searching for a solution of Fuchsian type there are only two parameters, namely the order $m$ of the differential equation and the degree $q_m$ of the polynomial $Q_m(x)$. One may also argue, less precisely, that for “sensible” combinatorial models one would expect Fuchsian equations, as irregular singular points are characterised by explosive, super-exponential behaviour. Such behaviour is not normally characteristic of combinatorial problems arising from statistical mechanics. The point at infinity may be an exception to this somewhat imprecise observation. We then search systematically for solutions by varying $m$ and $q_m$. In this way we first found a solution with $m=10$ and $q_m=11$, which required the determination of $L=195$ unknown coefficients.
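The same search can be run on a toy scale. The sketch below (entirely illustrative: the bounds, names and target series are ours) fits a homogeneous linear ODE of order 2 with degree-2 polynomial coefficients to the Catalan generating function by exact linear algebra, then verifies the fitted equation against coefficients that were not used in the fit, mirroring the consistency check used in the text:

```python
from fractions import Fraction
from math import comb

# Toy target series: the Catalan numbers (coefficients of the staircase
# polygon analogue of P(x)).
N = 40
c = [Fraction(comb(2 * n, n), n + 1) for n in range(N)]

def entry(n, k, j):
    """[x^n] of x^j * d^k/dx^k C(x), as an exact rational."""
    m = n - j
    if m < 0 or m + k >= N:
        return Fraction(0)
    val = c[m + k]
    for t in range(m + 1, m + k + 1):   # falling factorial (m+k)!/m!
        val *= t
    return val

ORDER, DEG, ROWS = 2, 2, 20
cols = [(k, j) for k in range(ORDER + 1) for j in range(DEG + 1)]
A = [[entry(n, k, j) for k, j in cols] for n in range(ROWS)]

def null_vector(M, ncols):
    """One nonzero rational null vector, by Gauss-Jordan elimination."""
    M = [row[:] for row in M]
    pivots, r = {}, 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[r])]
        pivots[col], r = r, r + 1
    free = next(cl for cl in range(ncols) if cl not in pivots)
    v = [Fraction(0)] * ncols
    v[free] = Fraction(1)
    for col, row in pivots.items():
        v[col] = -M[row][free]
    return v

v = null_vector(A, len(cols))
# The fitted ODE must also annihilate coefficients *not* used in the fit.
held_out = [sum(vi * entry(n, k, j) for vi, (k, j) in zip(v, cols))
            for n in range(ROWS, 30)]
print(all(h == 0 for h in held_out))  # -> True
```

Exactly as in the text, passing this held-out check gives strong (though not conclusive) evidence that the fitted equation is the right one.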
We have 251 terms in the half-perimeter series and thus have 56 additional terms with which to check the correctness of our solution. Having found this solution we then turned the ODE into a recurrence relation and used this to generate more series terms in order to search for a lower-order Fuchsian equation. The lowest-order equation we found was of eighth order, with $q_m=27$, which requires the determination of $L=294$ unknown coefficients. Thus from our original 251-term series we could not have found this $8^{th}$ order solution, since we did not have enough terms to determine all the unknown coefficients in the ODE. This raises the question of whether there is an ODE of order lower than 8 that generates the coefficients. The short answer is no. Further study of our differential operator revealed that it can be factorised. In fact we found a factorisation into three first-order linear operators, a second-order operator and a third-order operator. The generating function is a solution of the $8^{th}$ order operator, not of any of the smaller factors. The (half)-perimeter generating function ${\ensuremath{\mathcal{P}}}(x)$ for punctured staircase polygons is a solution to the linear differential equation of order 8 $$\sum_{k=0}^8 P_k(x) \frac{\rmd^k}{\rmd x^k}{\ensuremath{\mathcal{P}}}(x) = 0 \label{eq:PPfde}$$ with $$\begin{aligned} P_8(x)=x^4(1-4x)^8(1 + 4x)(1 + 4x^2)(1 + x + 7x^2)Q_8(x), \nonumber \\ P_7(x)=x^3(1-4x)^7 Q_7(x), \;\;\;\;\;\; P_6(x)=2x^2 (1-4x)^6 Q_6(x), \nonumber \\ P_5(x)=6x(1-4x)^5 Q_5(x), \,\;\;\;\;\; P_4(x)=120(1-4x)^4 Q_4(x), \label{eq:PPpol} \\ P_3(x)=120(1-4x)^3 Q_3(x), \;\;\;\; P_2(x)=720(1-4x)^2 Q_2(x), \nonumber \\ P_1(x)=720(1-4x) Q_1(x), \;\;\;\;\; P_0(x)=2880 Q_0(x), \nonumber\end{aligned}$$ where $Q_8(x)$, $Q_7(x)$, $\ldots$, $Q_0(x)$, are polynomials of degree 22, 28, 29, 30, 31, 31, 31, 31, and 31, respectively. The polynomials are listed in \[app:PPpol\].
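Before turning to the singularity structure of this equation, it may help to see the method of Frobenius in miniature. The sketch below is our own toy example: it computes the indicial polynomial of the second-order equation $x(1-4x)y'' + (2-10x)y' - 2y = 0$ (which the Catalan generating function satisfies) and confirms the exponents $\{-1, 0\}$ at $x=0$ and $\{0, 1/2\}$ at $x=1/4$:

```python
from fractions import Fraction

# ODE sum_k P_k(x) y^(k) = 0, with polynomials in ascending powers of x:
# here x(1-4x)y'' + (2-10x)y' - 2y = 0, a tiny analogue of (eq:PPfde).
P = {2: [0, 1, -4], 1: [2, -10], 0: [-2]}

def taylor_shift(poly, x0):
    """Coefficients of p(x + x0), by a Horner-style shift."""
    out = []
    for coeff in reversed(poly):
        new = [Fraction(0)] * (len(out) + 1)
        for i, a in enumerate(out):       # multiply out by (X + x0)
            new[i + 1] += a
            new[i] += x0 * a
        new[0] += Fraction(coeff)
        out = new
    return out

def indicial_poly(P, x0):
    """Indicial polynomial I(rho) at a regular singular point x0."""
    lead = {}                             # k -> (vanishing order, leading coeff)
    for k, poly in P.items():
        s = taylor_shift(poly, Fraction(x0))
        d = next((i for i, a in enumerate(s) if a != 0), None)
        if d is not None:
            lead[k] = (d, s[d])
    nu = min(d - k for k, (d, _) in lead.items())
    ind = [Fraction(0)]
    for k, (d, a) in lead.items():
        if d - k != nu:
            continue                      # only leading terms contribute
        ff = [Fraction(1)]                # falling factorial rho(rho-1)...(rho-k+1)
        for t in range(k):
            nxt = [Fraction(0)] * (len(ff) + 1)
            for i, b in enumerate(ff):    # multiply by (rho - t)
                nxt[i + 1] += b
                nxt[i] -= t * b
            ff = nxt
        size = max(len(ind), len(ff))
        ind = [(ind[i] if i < len(ind) else 0) + a * (ff[i] if i < len(ff) else 0)
               for i in range(size)]
    return ind

def ev(poly, rho):
    return sum(c * Fraction(rho) ** i for i, c in enumerate(poly))

I0 = indicial_poly(P, 0)
Iq = indicial_poly(P, Fraction(1, 4))
print(ev(I0, 0) == 0 and ev(I0, -1) == 0)              # exponents {-1, 0} at x=0
print(ev(Iq, 0) == 0 and ev(Iq, Fraction(1, 2)) == 0)  # exponents {0, 1/2} at x=1/4
```

The exponent $1/2$ at $x=1/4$ is the square-root branch of the staircase generating function, the toy counterpart of the half-integer exponents of the order-8 equation.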
The singular points of the differential equation are given by the roots of $P_8(x)$. One can easily check that all the singularities (including $x=\infty$) are [*regular singular points*]{}, so equation (\[eq:PPfde\]) is indeed of the Fuchsian type. It is thus possible, using the method of Frobenius, to obtain from the indicial equation the critical exponents at the singular points. These are listed in Table \[tab:PPexp\].

Singularity & Exponents\
$x=0$ & $-1, \, 0, \, 0, \, 0, \, 1, \, 2, \, 3, \, 8$\
$x=1/4$ & $-1, \, -1/2, \, -1/2, \, 1/2, \, 1, \, 3/2, \, 2, \, 3$\
$x=-1/4$ & $0, \, 1, \, 2, \, 3, \, 4, \, 5, \, 6, \, 13/2$\
$x=\pm\, \rmi/2$ & $0, \, 1, \, 2, \, 3, \, 4, \, 5, \, 6, \, 13/2$\
$1+x+7x^2=0$ & $0, \, 1, \, 2, \, 2, \, 3, \, 4, \, 5, \, 6$\
$1/x=0$ & $-2, \, -3/2, \, -1, \, -1, \, -1/2, \, 1/2, \, 3/2, \, 5/2$\
$Q_8(x)=0$ & $0, \, 1, \, 2, \, 3, \, 4, \, 5, \, 6, \, 8$\

We shall now consider the local solutions to the differential equation around each singularity. Recall that in general it is known [@ForsythV4; @Ince] that if the indicial equation yields $k$ critical exponents which differ by an integer, then the local solutions [*may*]{} contain logarithmic terms up to $\log^{k-1}$. However, for the Fuchsian equation (\[eq:PPfde\]) [*only*]{} multiple roots of the indicial equation give rise to logarithmic terms in the local solution around a given singularity, so that a root of multiplicity $k$ gives rise to logarithmic terms up to $\log^{k-1}$. In particular this means that near any of the 22 roots of $Q_8(x)$ the local solutions have no logarithmic terms, and the solutions are thus [*analytic*]{} since all the exponents are non-negative integers. The roots of $Q_8$ are thus [*apparent singularities*]{} [@ForsythV4; @Ince] of the Fuchsian equation (\[eq:PPfde\]). We briefly mention that, as in our earlier study [@GJ06a], we can find a solution of order 14 of the same form as (\[eq:PPfde\]) but with $Q_{14}(x)$ being just a constant.
So at this order none of the roots of $Q_8(x)$ appear. Clearly any real singularity of the system cannot be made to vanish, and so we conclude that the 22 roots of $Q_8$ must indeed be apparent singularities. Assuming that only repeated roots give rise to $\log$-terms, and thus that a sequence of non-negative integers gives rise to [*analytic*]{} terms, then near the physical critical point $x=x_c=1/4=1/\mu$ we expect the singular behaviour $$\label{eq:xc} {\ensuremath{\mathcal{P}}}(x) \sim \frac{A(x)}{(1-4x)} + \frac{B(x) + C(x) \log(1-4x)}{\sqrt{1-4x}},$$ where $A(x)$, $B(x)$ and $C(x)$ are analytic in a neighbourhood of $x_c$. Note that the terms associated with the exponents $1/2$ and $3/2$ become part of the analytic correction to the $(1-4x)^{-1/2}$ term. Near the singularity on the negative $x$-axis, $x=x_-=-1/4$, we expect the singular behaviour $$\label{eq:xm} {\ensuremath{\mathcal{P}}}(x) \sim D(x) (1+4x)^{13/2},$$ where again $D(x)$ is analytic near $x_-$. We expect similar behaviour near the pair of singularities $x=\pm \rmi/2$, and finally at the roots of $1+x+7x^2$ we expect the behaviour $E(x)(1+x+7x^2)^2\log (1+x+7x^2)$. We can simplify the $8^{th}$ order differential operator found above. We first found three very simple solutions of the ODE, each corresponding to a first-order differential operator, $$F_1(x)=1-4x,$$ $$F_2(x)=\frac{1-4x-4x^3}{1-4x},$$ and $$F_3(x)=\frac{9-34x+14x^2}{\sqrt{1-4x}}.$$ The existence of these three linearly independent solutions implies that we can find three first-order operators, which we denote by $L_i^{(1)}$, with $i = 1, 2, 3$, such that the original $8^{th}$ order differential operator can be written as $L^{(8)}=L^{(5)}L_1^{(1)}L_2^{(1)}L_3^{(1)}$, where $L^{(5)}$ is a fifth-order differential operator. We further found that this latter operator is decomposable as $L^{(5)}=L^{(3)}L^{(2)}$.
This then allows one, in principle, to write down the form of the $8 \times 8$ matrix representing the differential Galois group of $L^{(8)}$, in an appropriate global solution basis. To determine the asymptotics one would need to calculate non-local connection matrices between solutions at different points. This is a huge task for such a large differential operator. Instead, we have developed a numerical technique that avoids all these difficulties, which we describe below. To standardise our asymptotic analysis, we assume that the critical point is at 1. The growth constant of punctured staircase polygons is 4, so we normalise by considering the new series with coefficients $r_n$, defined by $r_n = p_{n+8}/4^n.$ Thus the generating function we study is ${\ensuremath{\mathcal{R}}}(y) = \sum_{n\geq 0} r_ny^n = 1 + 3y + 5.875y^2 + \cdots$. Using the recurrence relations for $p_n$ (derived from the ODE) it is easy and fast to generate many more terms $r_n$. We generated the first 100000 terms and saved them as floats with 500 digit accuracy (this calculation took less than 15 minutes). From equations (\[eq:xc\]) and (\[eq:xm\]) it follows that the asymptotic form of the coefficients is $$\label{eq:asymp} [y^n]{\ensuremath{\mathcal{R}}}(y) = r_n = \sum_{i \ge 0} \left( \frac{\tilde{a}_i}{n^i}\!+ \!\frac{\tilde{b}_i\log{n} \!+\! \tilde{c}_i}{n^{i+1/2}} \!+\! (-1)^n\left( \frac{\tilde{d}_i}{n^{15/2+i}} \right)\! \right ) + {\rm O}(\lambda^{-n}).$$ Any contributions from the other singularities are exponentially suppressed since their norm (in the scaled variable $y=4x$) exceeds 1. Estimates for the amplitudes were obtained by fitting $r_n$ to the form given above using an increasing number of amplitudes. ‘Experimentally’ we find we need about the same total number of terms at $x_c$ and $-x_c$. So in the fits we used the terms with amplitudes $\tilde{a}_i$, $\tilde{b}_i$, and $\tilde{c}_i$, $i=0,\ldots,K$ and $\tilde{d}_i$, $i=0,\ldots,3K$.
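The dictionary behind (\[eq:asymp\]) — a singular factor $(1-y)^{-\alpha}$ contributes terms of order $n^{\alpha-1}$ to $r_n$ — can be checked numerically on a model series. The snippet below (our own toy example, not the polygon series itself) verifies that the coefficients of $(1-y)^{-1/2}$ behave as $1/\sqrt{\pi n}$, which is the mechanism producing the $n^{-1/2}$ family of terms above.

```python
import math

# Coefficients of the model series (1-y)^(-1/2) satisfy the recurrence
# c_0 = 1, c_n = c_{n-1} * (n - 1/2) / n.  Singularity analysis predicts
# c_n ~ 1/sqrt(pi*n); we check the ratio at large n.
def coeff_sqrt_singularity(n):
    c = 1.0
    for m in range(1, n + 1):
        c *= (m - 0.5) / m
    return c

n = 100000
ratio = coeff_sqrt_singularity(n) * math.sqrt(math.pi * n)
print(ratio)  # tends to 1 as n grows; the correction is of order 1/n
```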
Going only to $K$ with the $\tilde{d}_i$ amplitudes results in much poorer convergence, and going beyond $3K$ leads to no improvement. For a given $K$ we thus have to estimate $6K+4$ unknown amplitudes. So we use the last $6K+4$ terms $r_n$, with $n$ ranging from 100000 down to $100000-6K-3$, and solve the resulting system of $6K+4$ linear equations. We can also add extra terms to the asymptotic form and check what happens to the amplitudes of the new terms. If these amplitudes are very small it is highly likely that the terms are not truly present (if the calculation could be done exactly these amplitudes would be zero). Doing this we found that all the amplitudes $\tilde{a}_i$ appear to be zero except that $\tilde{a}_0=1024$; e.g., with $K=20$ we find that the estimates for the amplitudes satisfy $\tilde{a}_1<10^{-70}$, $\tilde{a}_2<10^{-60}$, $\tilde{a}_3<10^{-50}$, etc. So in all likelihood the amplitudes $\tilde{a}_i=0$ for $i>0$. This then leads us to the refined asymptotic form $$\label{eq:asymptrue} \fl [y^n]{\ensuremath{\mathcal{R}}}(y) = r_n = 1024\left( 1+ \frac{1}{\sqrt{n}}\sum_{i \ge 0} \left( \frac{b_i\log{n} + c_i}{n^{i}} + (-1)^n\left( \frac{d_i}{n^{7+i}} \right) \right ) \right ) + {\rm O}(\lambda^{-n}).$$ In fits to this form we then used the terms with amplitudes $b_i$ and $c_i$, $i=0,\ldots,K$, and $d_i$, $i=0,\ldots,2K$. For a given $K$ we thus have to estimate $4K+3$ unknown amplitudes. We find that the amplitude estimates are fairly accurate up to around index $2K/3$. We observed this by doing the calculation with $K=30$ and $K=40$ and then looking at the difference in the amplitude estimates. For $b_0$ and $c_0$ the difference is less than $10^{-120}$, while for $d_0$ the difference is less than $10^{-116}$. Each time we increase the amplitude index by 1 we lose around six significant digits in accuracy. With $i=18$ the differences are respectively around $10^{-14}$ and $10^{-11}$.
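The fitting step itself is just exact linear algebra in high-precision arithmetic. The sketch below is a deliberately tiny version of it: we plant three amplitudes in the truncated model $r_n = a_0 + (b_0\log n + c_0)/\sqrt{n}$ and recover them from the last three terms of the series by solving a $3\times 3$ linear system. The planted values and the truncation to a single correction order are our own illustrative choices; the rows of the system are nearly parallel at $n\approx 10^5$, which is why high-precision arithmetic is essential, just as in the text.

```python
from decimal import Decimal, getcontext

getcontext().prec = 80  # mimic the high-precision arithmetic used in the text

# Planted (illustrative) amplitudes, not values from the paper.
a0, b0, c0 = Decimal(1024), Decimal("-1.1"), Decimal("1.55")

def basis(n):
    n = Decimal(n)
    s = n.sqrt()
    return [Decimal(1), n.ln() / s, Decimal(1) / s]

def model(n):
    u = basis(n)
    return a0 * u[0] + b0 * u[1] + c0 * u[2]

ns = [100000, 99999, 99998]
A = [basis(n) for n in ns]
rhs = [model(n) for n in ns]

# Plain Gaussian elimination with partial pivoting, in Decimal arithmetic.
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, 3):
        f = A[r][col] / A[col][col]
        for c in range(col, 3):
            A[r][c] -= f * A[col][c]
        rhs[r] -= f * rhs[col]
x = [Decimal(0)] * 3
for r in (2, 1, 0):
    s = rhs[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))
    x[r] = s / A[r][r]

print(x)  # recovers a0, b0, c0 to many digits despite the ill-conditioning
```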
From our very long series it is possible to obtain accurate numerical estimates of many of the amplitudes $b_i$, $c_i$, and $d_i$, with a precision of more than 100 digits for the dominant amplitudes, shrinking to around 10 digits for the case when $i = 18$ (actually we could probably have pushed this further but there would be little point). In this way we found that $b_0 = -\frac{6\sqrt{3}}{\pi^{3/2}}$, $b_1=\frac{305}{4\sqrt{3}\pi^{3/2}}$, $b_2=\frac{86123}{192\sqrt{3}\pi^{3/2}}$, $c_0 = 1.55210340048879105374\ldots$ and $d_0 = \frac{48}{\pi^{3/2}}$, $d_1 = -\frac{2610}{\pi^{3/2}}$, $d_2 = \frac{640815}{8\pi^{3/2}}$, $d_3 = -\frac{116785575}{64\pi^{3/2}}$, $d_4 = \frac{70325480841}{2048\pi^{3/2}}$, though we have not been able to identify $c_0$. These amplitudes are known to an accuracy of at least 100 digits. The excellent convergence is solid evidence (though naturally not a proof) that the assumptions leading to equation (\[eq:asymp\]) are correct. Further evidence was obtained as follows: we have already argued that the terms of the form $1/n^i$, $i>0$, are absent. We found similar results if we added terms like $\log{n}/n^i$, $\log^2{n}/n^{i/2}$ or additional $\log{n}$ terms at $y=-1$. So this fitting procedure provides convincing evidence that the asymptotic form (\[eq:asymptrue\]), and thus the assumptions leading to this formula, are correct.

Conclusion and Outlook
======================

We have developed an improved algorithm for enumerating punctured staircase polygons. The extended series, coupled with a search program that assumes the solution is a [*Fuchsian*]{} ODE, enabled us to discover the underlying ODE, which was of $10^{th}$ order. We did this without using 56 of the coefficients that we had generated; these unused coefficients therefore provided a check on the solution. This leads us to believe that we have found the correct ODE, as it reproduces the known, unused coefficients.
Subsequently we have extended this check to more than 200 unused coefficients. Further refinement allowed us to find an $8^{th}$ order ODE. A numerical technique we have developed specifically for such problems then allowed us to find accurate numerical estimates for the amplitudes of the first several terms in the asymptotic form of the coefficients, and then to conjecture their exact values. We have also initiated an investigation of the [*area*]{} generating function. We expect this to involve $q$-series, and thus far our investigations lead us only to believe that the area generating function $A(q)$ is of the form $$A(q) = (G(q) + H(q)\sqrt{1 - q/\eta})/[J_0(1,1,q)^2],$$ where $J_0(x,y,q)$ is a $q$-generalisation of the Bessel function, and occurs, for example, in the solution of the problem of staircase polygons enumerated by perimeter [@BM96a]. Here $q=\eta$ is the first zero of $J_0(1,1,q)$, and $G$ and $H$ are regular in the neighbourhood of $q = \eta.$ The coefficients thus behave asymptotically as $$a_n = [q^n]A(q) \sim \mathrm{const.}\; \eta^{-n}n.$$ In a subsequent publication we propose to investigate the area generating function more fully, and hopefully obtain more insight into the properties of the ODE we have found for the perimeter generating function. Furthermore, in work with C. Richard [@RJG06], we have conjectured the scaling function for punctured polygons with an arbitrary number of punctures. We briefly review the properties of the two-variable area-perimeter generating function for staircase polygons. Of special interest is the point $(x_c,1)$ where two lines of singularities meet.
The behaviour of the singular part of the generating function about $(x_c,1)$ is expected to take the special form $$P(x,q) \sim P^{(reg)}(x,q) + (1-q)^\theta F((x_c-x)(1-q)^{-\phi}), \qquad (x,q) \nearrow (x_c,1),$$ where $F(s)$ is a [*scaling function*]{} of combined argument $s=(x_c-x)(1-q)^{-\phi}$, commonly assumed to be regular at the origin, and $\theta = 1/3$ and $\phi = 2/3$ are [*critical exponents*]{}. For staircase polygons, we have $$F(s) = \frac{1}{8}\frac{\rmd}{\rmd s}\log\mbox{Ai} \left( (4\sqrt{2})^\frac{2}{3} s \right).$$ In [@RJG06] we assumed that the limit distribution by area of staircase polygons with $r$ punctures (of arbitrary size) is that of staircase polygons with $r$ holes of unit area. From this and knowledge of $F(s)$ we then obtained [*exact*]{} predictions for $r$-punctured staircase polygons by taking the $r$-th derivative with respect to $q$ of $P(x,q)$. We then study the area-moment generating function, $P_k (x) = \sum_{m,n} n^k p_{m,n}x^m$, where $p_{m,n}$ is the number of polygons with perimeter $m$ and area $n$. In particular we find that the leading amplitudes $A^{(r)}_{k}$ of the perimeter generating function of the $k$-th area-moment are given by $$A^{(r)}_{k}=\frac{(k+r)!\, x_c^r f_{k+r}}{r!\, x_c^{\gamma_{k+r}}\Gamma(\gamma_{k+r})}.$$ Here $f_{k+r}$ are amplitudes occurring in the asymptotic expansion of $P(x,q)$ (these are known exactly for punctured staircase polygons), while $\gamma_{k+r}=3(k+r)/2-1/2$ are the critical exponents of the $k$-th area-moment of $r$-punctured polygons. These predictions have been confirmed for once-punctured staircase polygons to a very high level of accuracy for moments up to $k=10$. The numerical analysis of the area-moments relied crucially on our knowledge of the singularity structure of the perimeter generating function as detailed in this paper.
E-mail or WWW retrieval of series {#e-mail-or-www-retrieval-of-series .unnumbered} ================================= The series for the generating functions studied in this paper can be obtained via e-mail by sending a request to I.Jensen@ms.unimelb.edu.au or via the world wide web on the URL http://www.ms.unimelb.edu.au/ iwan/ by following the instructions. We would like to thank N Zenine and J-M Maillard for access to their Mathematica routines for identifying differential equations and useful advice about their use. We gratefully acknowledge financial support from the Australian Research Council. \[app:PPpol\] Polynomials $Q_n(x)$ for punctured staircase polygons =================================================================== $$\begin{aligned} \fl Q_8(x) &=& -90720 + 1255590 x - 9538200 x^2 + 20394828 x^3 - 79106610 x^4 \\ \fl && + 1223958687 x^5 - 2943232056 x^6 + 17470357067 x^7 - 189472079743 x^8 \\ \fl && + 579172715823 x^9 - 1746461498616 x^{10} + 8400325324610 x^{11} \\ \fl && - 1591154327260 x^{12} - 111431714394808 x^{13} + 315517552430480 x^{14} \\ \fl && - 106489387477312 x^{15} - 938487878760384 x^{16} + 1628517397980288 x^{17} \\ \fl && - 2394531569420032 x^{18} + 2966185168205312 x^{19} \\ \fl && - 170238270849024 x^{20} - 699187344629760 x^{21} + 295462090506240 x^{22}\end{aligned}$$ $$\begin{aligned} \fl Q_7(x) &=& -1360800 + 23565780 x - 167569290 x^2 + 478254996 x^3 + 641052858 x^4 \\ \fl && + 14810951034 x^5 - 47034372339 x^6 - 166933659974 x^7 - 2552936187594 x^8 \\ \fl && + 6447911404224 x^9 + 14253364474478 x^{10} + 86598771199392 x^{11} \\ \fl && + 362131239586500 x^{12} - 3860712252484892 x^{13} + 8993313236994576 x^{14} \\ \fl && - 31235880957264960 x^{15} + 46429326957124912 x^{16} \\ \fl && + 155905775680790304 x^{17} - 807736441103822976 x^{18} \\ \fl && + 1835072857042276096 x^{19} - 1278888252797142528 x^{20} \\ \fl && - 293981468599460352 x^{21} + 14541716059525437440 x^{22} \\ \fl && - 26481815895022608384 x^{23} + 
22483566008412450816 x^{24} \\ \fl && - 35911819535956066304 x^{25} + 3639680241277796352 x^{26} \\ \fl && + 7495959535363031040 x^{27} - 3507725938490081280 x^{28}\end{aligned}$$ $$\begin{aligned} \fl Q_6(x) &=& -1723680 + 69281730 x - 787195710 x^2 + 4886678970 x^3 - 10726639974 x^4 \\ \fl && + 11830409583 x^5 - 401281487235 x^6 + 343905413598 x^7 \\ \fl && + 1858137414650 x^8 + 44092692217413 x^9 - 36740412036168 x^{10} \\ \fl && - 135298590380414 x^{11} - 1279093006602396 x^{12} - 10004750418032976 x^{13} \\ \fl && + 61536871579988144 x^{14} -216281351081049504 x^{15} \\ \fl && + 1050287576547538488 x^{16} - 1795967175346626976 x^{17} \\ \fl && - 2572736181692580960 x^{18} + 18017037664470796032 x^{19} \\ \fl && - 45232775265352713472 x^{20} + 48709527110201501184 x^{21} \\ \fl && + 4770083118869915136 x^{22} - 322327838255331590144 x^{23} \\ \fl && + 541571044899035842560 x^{24} - 511926023257614434304 x^{25} \\ \fl && + 716375351150156644352 x^{26} - 69659801950830723072 x^{27} \\ \fl && - 136551990333116252160 x^{28} + 60094625512245166080 x^{29}\end{aligned}$$ $$\begin{aligned} \fl Q_5(x) &=& 1965600 - 6539400 x - 358033410 x^2 + 4831433820 x^3 - 30915098190 x^4 \\ \fl && + 60211846008 x^5 - 201764518161 x^6 + 2531858233470 x^7 \\ \fl && + 1380416576424 x^8 - 20212314275250 x^9 - 61506470769366 x^{10} \\ \fl && - 477804842150324 x^{11} + 608746761166938 x^{12} + 483723642457152 x^{13} \\ \fl && + 60127368616743592 x^{14} - 185780400624937008 x^{15} \\ \fl && + 1165835175099337288 x^{16} - 7175943616536571776 x^{17} \\ \fl && + 13745698284061066112 x^{18} + 4948349174336379840 x^{19} \\ \fl && - 89453290124304769024 x^{20} + 270104157697832561664 x^{21} \\ \fl && - 356324521463829808128 x^{22} - 41862184650482117632 x^{23} \\ \fl && + 1845216328946812827648 x^{24} - 2906213125616330383360 x^{25} \\ \fl && + 2943265956913569742848 x^{26} - 3723507915329643413504 x^{27} \\ \fl && + 405249143061461336064 x^{28} + 618215144006850969600 x^{29} \\ \fl && - 
261821958729561538560 x^{30}\end{aligned}$$ $$\begin{aligned} \fl Q_4(x) &=& 241920 - 8017380 x + 88351704 x^2 - 590355612 x^3 + 2409400818 x^4 \\ \fl && - 8457027588 x^5 + 71232186468 x^6 - 288557341128 x^7 \\ \fl && + 524905454055 x^8 - 5046532132734 x^9 + 28114089314043 x^{10} \\ \fl && - 164508486596467 x^{11} + 869331744354740 x^{12} - 2401501341116904 x^{13} \\ \fl && + 12275987679372578 x^{14} - 50846889626226508 x^{15} \\ \fl && + 46258831828476364 x^{16} - 147764159295056304 x^{17} \\ \fl && + 1375769527659995736 x^{18} - 2625251094439093408 x^{19} \\ \fl && - 765792895039661984 x^{20} + 22951686058011476032 x^{21} \\ \fl && - 85054223999548283904 x^{22} + 126294091912315062016 x^{23} \\ \fl && + 19381267403906712064 x^{24} - 566287434634380073984 x^{25} \\ \fl && + 849895463062111623168 x^{26} - 892557255237919469568 x^{27} \\ \fl && + 1043719341871898804224 x^{28} - 142670999896790335488 x^{29} \\ \fl && - 140350544778022354944 x^{30} + 59234239904690995200 x^{31}\end{aligned}$$ $$\begin{aligned} \fl Q_3(x) &=& -4596480 + 112443660 x - 1327020156 x^2 + 11580963786 x^3 - 76436209584x^4 \\ \fl && + 426159579924 x^5 - 2350462539072 x^6 + 11395385983233 x^7 \\ \fl && - 44136036344190 x^8 + 145288111685523 x^9 - 559910802106640 x^{10} \\ \fl && + 3013037795053530 x^{11} - 13499762948930634x^{12} \\ \fl && + 50096716464628528 x^{13} - 217987216302493908 x^{14} \\ \fl && + 853439326193439492 x^{15} - 2363497210984795232 x^{16} \\ \fl && + 5096223845046539304 x^{17} - 8508469151526998016 x^{18} \\ \fl && + 9581930085552894304 x^{19} - 10241374665198721536 x^{20} \\ \fl && - 12641088914996048640 x^{21} + 118651673978481267200 x^{22} \\ \fl && - 208768950136609496064 x^{23} - 15400291418459486208 x^{24} \\ \fl && + 814317146169694152704 x^{25} -1202858442211165741056 x^{26} \\ \fl && + 1271933402411862171648 x^{27} - 1406355411740766470144 x^{28} \\ \fl && + 251165051564655771648 x^{29} + 137326949251639934976 x^{30} \\ \fl && - 61285928661166325760 
x^{31}\end{aligned}$$ $$\begin{aligned} \fl Q_2(x) &=& 1209600 - 10784340 x + 25225200 x^2 - 192390408 x^3 + 1497608946 x^4 \\ \fl && - 3085618896 x^5 + 55270573062 x^6 - 674664767886 x^7 + 1891951243653 x^8 \\ \fl && + 6937954472784 x^9 - 19443421819978 x^{10} - 252270853719194 x^{11} \\ \fl && + 1421753108033868 x^{12} - 2280488850916676 x^{13} - 1040351739238056x^{14} \\ \fl && - 1519080794794788 x^{15} + 54144924827952720 x^{16} \\ \fl && - 143110935850986376 x^{17} - 63031554528921744 x^{18} \\ \fl && + 1125126938486807936 x^{19} - 2675665192031509504 x^{20} \\ \fl && + 3361130538055156224 x^{21} - 2669659667713374208 x^{22} \\ \fl && + 1996890960732463104 x^{23} - 4866848788151009280 x^{24} \\ \fl && + 3555378162093901824 x^{25} + 3193922372633202688 x^{26} \\ \fl && - 2642707373157531648 x^{27} + 2132642211038560256 x^{28} \\ \fl && - 3311881541411143680 x^{29} + 1596569887904366592 x^{30} \\ \fl && - 264734033093591040 x^{31}\end{aligned}$$ $$\begin{aligned} \fl Q_1(x) &=& -725760 + 19969740 x - 254689092 x^2 + 2329185726 x^3 - 17948325636x^4 \\ \fl && + 118028863386 x^5 - 679983561900 x^6 + 3637871524611 x^7 \\ \fl && - 17150360490738 x^8 + 62088405193554 x^9 -183555964459890 x^{10} \\ \fl && + 747009873725220 x^{11} - 4106684548673028 x^{12} + 18540613780587884 x^{13} \\ \fl && - 67936944600058776 x^{14} + 247341581626824360 x^{15} \\ \fl && - 939866071520217104 x^{16} + 3216462341735279616 x^{17} \\ \fl && - 8789133587934808704 x^{18} + 17976423995943224576 x^{19} \\ \fl && - 26625353996773725696 x^{20} + 29354499014436664320 x^{21} \\ \fl && - 26197184327864145920 x^{22} + 20118012206750361600 x^{23} \\ \fl && - 11595016904008224768 x^{24} - 12803308242930466816 x^{25} \\ \fl && + 49275320633035751424 x^{26} - 49679788190366564352 x^{27} \\ \fl && + 31169615491025600512 x^{28} - 29010025645678264320 x^{29} \\ \fl && + 12772559103234932736 x^{30} - 2117872264748728320 x^{31}\end{aligned}$$ $$\begin{aligned} Q_0(x) = Q_1(x)\end{aligned}$$ References 
{#references .unnumbered}
==========

Rechnitzer A 2003 Haruspicy and anisotropic generating functions [*Adv. Appl. Math.*]{} [**30**]{} 228–257

Stanley R P 1980 Differentiably finite power series [*Eur. J. Comb.*]{} [**1**]{} 175–188

Guttmann A J and Conway A R 2001 Square lattice self-avoiding walks and polygons [*Ann. Comb.*]{} [**5**]{} 319–345

Bousquet-Mélou M 1996 A method for the enumeration of various classes of column-convex polygons [*Disc. Math.*]{} [**154**]{} 1–25

Guttmann A J, Jensen I, Wong L H and Enting I G 2000 Punctured polygons and polyominoes on the square lattice [*J. Phys. A: Math. Gen.*]{} [**33**]{} 1735–1764

Janse van Rensburg E J and Whittington S G 1990 Punctured discs on the square lattice [*J. Phys. A: Math. Gen.*]{} [**23**]{} 1287–1294

Guttmann A J and Jensen I 2006 Fuchsian differential equation for the perimeter generating function of three-choice polygons [*Séminaire Lotharingien de Combinatoire*]{} [**54**]{} B54c. Preprint: math.CO/0506317

Gessel I and Viennot X G 1989 Determinants, paths and plane partitions. [*Preprint at http://people.brandeis.edu/ gessel/*]{}

Lipshitz L 1989 D-finite power series [*J. Algebra*]{} [**122**]{} 353–373

Conway A R, Guttmann A J and Delest M 1997 The number of three-choice polygons [*Mathl. Comput. Modelling*]{} [**26**]{} 51–58

Knuth D E 1969 [*Seminumerical Algorithms. The Art of Computer Programming, Vol 2.*]{} (Reading, Mass: Addison Wesley)

Zenine N, Boukraa S, Hassani S and Maillard J M 2004 The Fuchsian differential equation of the square lattice Ising model $\chi^{(3)}$ susceptibility [*J. Phys. A: Math. Gen.*]{} [**37**]{} 9651–9668

Zenine N, Boukraa S, Hassani S and Maillard J M 2005 Square lattice Ising model susceptibility: series expansion method and differential equation for $\chi^{(3)}$ [*J. Phys. A: Math. Gen.*]{} [**38**]{} 1875–1899

Zenine N, Boukraa S, Hassani S and Maillard J M 2005 Ising model susceptibility: the Fuchsian differential equation for $\chi^{(4)}$ and its factorization properties [*J. Phys. A: Math. Gen.*]{} [**38**]{} 4149–4173

Ince E L 1927 [*Ordinary differential equations*]{} (London: Longmans, Green and Co. Ltd.)

Forsyth A R 1902 [*Part III. Ordinary linear equations*]{} vol. IV of [*Theory of differential equations*]{} (Cambridge: Cambridge University Press)

Richard C, Jensen I and Guttmann A J 2006 Scaling function for punctured staircase and self-avoiding polygons [*in preparation*]{}
--- abstract: 'We introduce the [*tree evaluation problem*]{}, show that it is in [$\mathbf{LogDCFL}$]{}(and hence in [**P**]{}), and study its branching program complexity in the hope of eventually proving a superlogarithmic space lower bound. The input to the problem is a rooted, balanced $d$-ary tree of height $h$, whose internal nodes are labeled with $d$-ary functions on $[k]=\{1,\ldots,k\}$, and whose leaves are labeled with elements of $[k]$. Each node obtains a value in $[k]$ equal to its $d$-ary function applied to the values of its $d$ children. The output is the value of the root. We show that the standard black pebbling algorithm applied to the binary tree of height $h$ yields a deterministic $k$-way branching program with $O(k^h)$ states solving this problem, and we prove that this upper bound is tight for $h=2$ and $h=3$. We introduce a simple semantic restriction called [*thrifty*]{} on $k$-way branching programs solving tree evaluation problems and show that the same state bound of $\Theta(k^h)$ is tight for all $h\ge 2$ for deterministic thrifty programs. We introduce fractional pebbling for trees and show that this yields nondeterministic thrifty programs with $\Theta(k^{h/2+1})$ states solving the Boolean problem “determine whether the root has value 1”, and prove that this bound is tight for $h=2,3,4$. We also prove that this same bound is tight for unrestricted nondeterministic $k$-way branching programs solving the Boolean problem for $h=2,3$.' author: - Stephen Cook - Pierre McKenzie - Dustin Wehr - Mark Braverman - Rahul Santhanam bibliography: - 'paper.bib' title: Pebbles and Branching Programs for Tree Evaluation --- Introduction ============ Below is a nondecreasing sequence of standard complexity classes between [$\mathbf{AC}^0(6)$]{} and the polynomial hierarchy. 
$$\label{classes} {\ensuremath{\mathbf{AC}^0(6)}}\subseteq {\ensuremath{\mathbf{NC}^1}}\subseteq {\ensuremath{\mathbf{L}}}\subseteq {\ensuremath{\mathbf{NL}}}\subseteq {\ensuremath{\mathbf{LogCFL}}}\subseteq {\ensuremath{\mathbf{AC}^1}}\subseteq {\ensuremath{\mathbf{NC}^2}}\subseteq {\ensuremath{\mathbf{P}}}\subseteq {\ensuremath{\mathbf{NP}}}\subseteq {\ensuremath{\mathbf{PH}}}$$ A problem in [$\mathbf{AC}^0(6)$]{} is given by a uniform family of polynomial-size, bounded-depth circuits with unbounded fan-in Boolean and mod 6 gates. As far as we know, an [$\mathbf{AC}^0(6)$]{} circuit cannot determine whether a majority of its input bits are ones, and yet we cannot provably separate [$\mathbf{AC}^0(6)$]{} from any of the other classes in the sequence. This embarrassing state of affairs motivates this paper (as well as much of the lower bound work in complexity theory). We propose a candidate for separating [$\mathbf{NL}$]{} from [$\mathbf{LogCFL}$]{}. The *Tree Evaluation problem* [$FT_d(h,k)$]{} is defined as follows. The input to [$FT_d(h,k)$]{} is a balanced $d$-ary tree of height $h$, denoted $T^h_d$ (see Fig. \[sample\]). Attached to each internal node $i$ of the tree is some explicit function $f_i: [k]^d\rightarrow [k]$ specified as $k^d$ integers in $[k]=\{1,\ldots,k\}$. Attached to each leaf is a number in $[k]$. Each internal tree node takes a value in $[k]$ obtained by applying its attached function to the values of its children. The function problem [$FT_d(h,k)$]{} is to compute the value of the root, and the Boolean problem [$BT_d(h,k)$]{} is to determine whether this value is $1$. ![A height 3 binary tree $T_2^3$ with nodes numbered heap style.[]{data-label="sample"}](h3_labeled_tree.eps) It is not hard to show that a deterministic logspace-bounded polytime auxiliary pushdown automaton decides [$BT_d(h,k)$]{}, where $d$, $h$ and $k$ are input parameters.
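The definition above is easy to make concrete. The following sketch (our own illustration, with a made-up instance) evaluates a heap-numbered binary tree as in the figure: the root is node 1, the children of node $i$ are nodes $2i$ and $2i+1$, leaves carry values in $[k]$, and each internal node carries an explicit function $[k]^2\rightarrow[k]$ stored as a table.

```python
# Direct evaluator for the function problem FT_2^h(k) on a heap-numbered tree.
def evaluate(node, funcs, leaves, num_nodes):
    if 2 * node > num_nodes:          # node has no children, i.e. it is a leaf
        return leaves[node]
    left = evaluate(2 * node, funcs, leaves, num_nodes)
    right = evaluate(2 * node + 1, funcs, leaves, num_nodes)
    return funcs[node][(left, right)]

k = 3
# Height-3 binary tree T_2^3, nodes 1..7 as in the figure; illustrative instance.
leaves = {4: 1, 5: 3, 6: 2, 7: 2}
# Every internal node gets the same table: addition shifted to land in [k].
add_mod = {(x, y): (x + y - 1) % k + 1
           for x in range(1, k + 1) for y in range(1, k + 1)}
funcs = {1: add_mod, 2: add_mod, 3: add_mod}

root_value = evaluate(1, funcs, leaves, 7)
print(root_value)        # FT_2^3(k) output; here 2
print(root_value == 1)   # BT_2^3(k) output: is the root value 1?
```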
This implies by [@su78] that [$BT_d(h,k)$]{} belongs to the class [$\mathbf{LogDCFL}$]{} of languages logspace reducible to a deterministic context-free language. The latter class lies between [$\mathbf{L}$]{} and [$\mathbf{LogCFL}$]{}, but its relationship with [$\mathbf{NL}$]{} is unknown (see [@ma07] for a recent survey). We conjecture that [$BT_d(h,k)$]{} does not lie in [$\mathbf{NL}$]{}. A proof would separate [$\mathbf{NL}$]{} and [$\mathbf{LogCFL}$]{}, and hence (by (\[classes\])) separate [$\mathbf{NC}^1$]{} and [$\mathbf{NC}^2$]{}. Thus we are interested in proving superlogarithmic space upper and lower bounds (for fixed degree $d\ge 2$) for [$BT_d(h,k)$]{} and [$FT_d(h,k)$]{}. Notice that for each constant $k=k_0\ge 2$, $BT_d(h,k_0)$ is an easy generalization of the Boolean formula value problem for balanced formulas, and hence it is in [$\mathbf{NC}^1$]{} and [$\mathbf{L}$]{}. Thus it is important that $k$ be an unbounded input parameter. We use branching programs (BPs) as a nonuniform model of Turing machine space: A lower bound of $s(n)$ on the number of BP states implies a lower bound of $\Theta(\log s(n))$ on Turing machine space, but to prove the converse we would need to supply the machine with an advice string for each input length. Thus BP state lower bounds are stronger than TM space lower bounds, but we do not know how to take advantage of the uniformity of TMs to get the supposedly easier lower bounds on TM space. In this paper all of our lower bounds are nonuniform and all of our upper bounds are uniform. In the context of branching programs we think of $d$ and $h$ as fixed, and we are interested in how the number of states required grows with $k$. To indicate this point of view we write the function problem [$FT_d(h,k)$]{} as [$FT_d^h(k)$]{} and the Boolean problem [$BT_d(h,k)$]{} as [$BT_d^h(k)$]{}.
For this it turns out that $k$-way BPs are a convenient model, since an input for [$BT_d^h(k)$]{} or [$FT_d^h(k)$]{} is naturally presented as a tuple of elements in $[k]$. Each nonfinal state in a $k$-way BP queries a specific element of the tuple, and branches $k$ possible ways according to the $k$ possible answers. It is natural to assume that the inputs to Turing machines are binary strings, so 2-way BPs are a closer model of TM space than are $k$-way BPs for $k>2$. But every 2-way BP is easily converted to a $k$-way BP with the same number of states, and every $k$-way BP can be converted to a 2-way BP with an increase of only a factor of $k$ in the number of states, so for the purpose of separating [$\mathbf{L}$]{} and [$\mathbf{P}$]{} we may as well use $k$-way BPs. Of course the number of states required by a $k$-way BP to solve the Boolean problem [$BT_d^h(k)$]{} is at most the number required to solve the function problem [$FT_d^h(k)$]{}. In the other direction it is easy to see (Lemma \[l:FvsB\]) that [$FT_d^h(k)$]{} requires at most a factor of $k$ more states than [$BT_d^h(k)$]{}. From the point of view of separating [$\mathbf{L}$]{} and [$\mathbf{P}$]{} a factor of $k$ is not important. Nevertheless it is interesting to compare the two numbers, and in some cases (Corollary \[c:HtThree\]) we can prove tight bounds for both: For deterministic BPs solving height 3 trees they differ by a factor of $\log k$ rather than $k$. The best (i.e. fewest states) algorithms that we know for deterministic $k$-way BPs solving [$FT_d^h(k)$]{} come from black pebbling algorithms for trees: If $p$ pebbles suffice to pebble the tree $T^h_d$ then $O(k^p)$ states suffice for a BP to solve [$FT_d^h(k)$]{} (Theorem \[t:pebSim\]). This upper bound on states is tight (up to a constant factor) for trees of height $h=2$ or $h=3$ (Corollary \[c:HtThree\]), and we suspect that it may be tight for trees of any height.
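The pebbling behind this upper bound is easy to exercise. The sketch below (our own illustration) runs the standard black pebbling strategy on the binary tree $T^h_2$ — pebble one child subtree, hold one pebble on its root while pebbling the other, then advance to the parent — and records the maximum number of pebbles ever on the tree; the maximum is $h$, matching the $O(k^h)$ state count for binary trees. We assume pebbling rules that include the slide move (moving a child's pebble up to its parent once all children are pebbled).

```python
def pebble(node, h, on_tree, trace):
    """Black-pebble `node`, the root of a subtree with h levels, recording
    the number of pebbles on the tree after every move."""
    if h == 1:
        on_tree.add(node)                 # place a pebble on a leaf
        trace.append(len(on_tree))
        return
    pebble(2 * node, h - 1, on_tree, trace)       # pebble the left child
    pebble(2 * node + 1, h - 1, on_tree, trace)   # pebble the right child
    # both children pebbled: slide one pebble to `node`, discard the other
    on_tree.discard(2 * node)
    on_tree.discard(2 * node + 1)
    on_tree.add(node)
    trace.append(len(on_tree))

def black_pebbling_number(h):
    on_tree, trace = set(), []
    pebble(1, h, on_tree, trace)          # root is node 1, heap numbering
    return max(trace)

print([black_pebbling_number(h) for h in range(1, 6)])  # [1, 2, 3, 4, 5]
```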
There is a well-known generalization of black pebbling called black-white pebbling which naturally simulates nondeterministic algorithms. Indeed if $p$ pebbles suffice to black-white pebble $T^h_d$ then $O(k^p)$ states suffice for a nondeterministic BP to solve [$BT_d^h(k)$]{}. However the best lower bound we can obtain for nondeterministic BPs solving [$BT_2^3(k)$]{} (see Figure \[sample\]) is $\Omega(k^{2.5})$, whereas it takes 3 pebbles to black-white pebble the tree $T^3_2$. This led us to rethink the upper bound, and we discovered that there is indeed a nondeterministic BP with $O(k^{2.5})$ states which solves [$BT_2^3(k)$]{}. The algorithm comes from a black-white pebbling of $T^3_2$ using only 2.5 pebbles: It places a half-black pebble on node 2, a black pebble on node 3, and adds a half-white pebble on node 2, allowing the root to be black-pebbled (see Figure \[f:bin\_h3\_fract\_ub\]). This led us to the idea of fractional pebbling in general, a natural generalization of black-white pebbling. A fractional pebble configuration on a tree assigns two nonnegative real numbers $b(i)$ and $w(i)$, totalling at most 1, to each node $i$ in the tree, with appropriate rules for removing and adding pebbles. The idea is to minimize the maximum total pebble weight on the tree during a pebbling procedure which starts and ends with no pebbles and has a black pebble on the root at some point. It turns out that nondeterministic BPs nicely implement fractional pebbling procedures: If $p$ pebbles suffice to fractionally pebble $T^h_d$ then $O(k^p)$ states suffice for a nondeterministic BP to solve [$BT_d^h(k)$]{}. After much work we have not been able to improve upon this $O(k^p)$ upper bound for any $d,h\ge 2$. We prove it is optimal for trees of height 3 (Corollary \[c:HtThree\]).
We can prove that for fixed degree $d$ the number of pebbles required to pebble (in any sense) the tree $T^h_d$ grows as $\Theta(h)$, so the $p$ in the above best-known upper bounds of $O(k^p)$ states grows as $\Theta(h)$. This and the following fact motivate further study of the complexity of [$FT_d^h(k)$]{}. \[f:unbounded\] A lower bound of $\Omega(k^{r(h)})$ for [*any*]{} unbounded function $r(h)$ on the number of states required to solve [$FT_d^h(k)$]{} implies that ${\ensuremath{\mathbf{L}}}\ne {\ensuremath{\mathbf{LogCFL}}}$ (Theorem \[t:logDCFL\] and Corollary \[c:thegoal\]). Proving tight bounds on the number of pebbles required to fractionally pebble a tree turns out to be much more difficult than for the case of whole black-white pebbling. However we can prove good upper and lower bounds. For binary trees of any height $h$ we prove an upper bound of $h/2 + 1$ and a lower bound of $h/2-1$ (the upper bound is optimal for $h\le 4$). These bounds can be generalized to $d$-ary trees (Theorem \[t:daryFract\]). We introduce a natural semantic restriction on BPs which solve [$BT_d^h(k)$]{} or [$FT_d^h(k)$]{}: A $k$-way BP is [*thrifty*]{} if it only queries the function $f_i(x_1,\ldots,x_d)$ associated with a node $i$ when $(x_1,\ldots,x_d)$ are the correct values of the children of the node. It is not hard to see that the deterministic BP algorithms that implement black pebbling are thrifty. With some effort we were able to prove a converse (for binary trees): If $p$ is the minimum number of pebbles required to black-pebble $T^h_2$ then every deterministic thrifty BP solving $BT^h_2(k)$ (or $FT^h_2(k)$) requires $\Omega(k^p)$ states. Thus any deterministic BP solving these problems with fewer states must query internal nodes $f_i(x,y)$ where $(x,y)$ are not the values of the children of node $i$.
For the decision problem $BT^h_2(k)$ there is indeed a nonthrifty deterministic BP improving on the bound by a factor of $\log k$ (Theorem \[t:BPUpper\] (\[e:dBUpper\])), and this is tight for $h=3$ (Corollary \[c:HtThree\]). But we have not been able to improve on thrifty BPs for solving any function problem [$FT_d^h(k)$]{}. The nondeterministic BPs that implement fractional pebbling are indeed thrifty. However here the converse is far from clear: there is nothing in the definition of [*thrifty*]{} that hints at fractional pebbling. We have been able to prove that thrifty BPs cannot beat fractional pebbling for binary trees of height $h=4$ or less, but for general trees this is open. It is not hard to see that for black pebbling, fractional pebbles do not help. This may explain why we have been able to prove tight bounds for deterministic thrifty BPs for all binary trees, but only for trees of height 4 or less for nondeterministic thrifty BPs. We pose the following as another interesting open question: \[thriftyH\] [**Thrifty Hypothesis:**]{} Thrifty BPs are optimal among $k$-way BPs solving [$FT_d^h(k)$]{}. Proving this for deterministic BPs would show ${\ensuremath{\mathbf{L}}}\ne {\ensuremath{\mathbf{LogDCFL}}}$, and for nondeterministic BPs would show ${\ensuremath{\mathbf{NL}}}\ne{\ensuremath{\mathbf{LogCFL}}}$. Disproving this would provide interesting new space-efficient algorithms and might point the way to new approaches for proving lower bounds. The lower bounds mentioned above for unrestricted branching programs when the tree heights are small are obtained in two ways: First using the Neciporuk method [@ne66], and second using a method that analyzes the state sequences of the BP computations. 
Using the state sequence method we have not yet beaten the $\Omega(n^2)$ deterministic branching program size barrier (neglecting log factors) inherent to the Neciporuk method for Boolean problems, but we can prove lower bounds for function problems which cannot be matched by the Neciporuk method (Theorems \[t:rootfunction\], \[t:lasttheorem\], \[t:childLB\], \[t:beatittwice\]). For nondeterministic branching programs with states of unbounded outdegree, we show that both methods yield a lower bound of $\Omega(n^{3/2})$ states (neglecting logs) for the decision problem $BT_2^3$, and this improves on the earlier $\Omega(n^{3/2})$ bound obtained for the number of edges [@pu87; @ra91] in such BPs. Summary of Contributions ------------------------ - We introduce a family of computation problems [$FT_d^h(k)$]{} and [$BT_d^h(k)$]{}, $d,h \ge 2$, which we propose as good candidates for separating [$\mathbf{L}$]{} and [$\mathbf{NL}$]{} from apparently larger complexity classes in (\[classes\]). Our goal is to prove space lower bounds for these problems by proving state lower bounds for $k$-way branching programs which solve them. For $h=3$ we can prove tight bounds for each $d\ge 2$ on the number of states required by $k$-way BPs to solve them, namely (from Corollary \[c:HtThree\]) $$\begin{aligned} & \Theta(k^{(3/2)d - 1/2}) \mbox{ for nondeterministic BPs solving $BT^3_d(k)$}\\ & \Theta(k^{2d-1}/\log k) \mbox{ for deterministic BPs solving $BT^3_d(k)$}\\ & \Theta(k^{2d-1}) \mbox{ for deterministic BPs solving $FT^3_d(k)$}\end{aligned}$$ - We introduce a simple and natural restriction called [*thrifty*]{} on BPs solving [$FT_d^h(k)$]{} and [$BT_d^h(k)$]{}. The best known upper bounds for deterministic BPs solving [$FT_d^h(k)$]{} and for nondeterministic BPs solving [$BT_d^h(k)$]{} are realized by thrifty BPs.
Proving even much weaker lower bounds than these upper bounds for unrestricted BPs would separate [$\mathbf{L}$]{} from [$\mathbf{LogCFL}$]{} (see Fact \[f:unbounded\] above). We prove that for binary trees deterministic thrifty BPs cannot do better than implement black pebbling (this is far from obvious). - We formulate the [**Thrifty Hypothesis**]{} (see above). Either a proof or a disproof would have interesting consequences. - We introduce [*fractional pebbling*]{} as a natural generalization of black-white pebbling for simulating nondeterministic space bounded computations. We prove almost tight lower bounds for fractionally pebbling binary trees (Theorem \[t:daryFract\]). The best known upper bounds for nondeterministic BPs solving [$FT_d^h(k)$]{} come from fractional pebbling, and these can be implemented by thrifty BPs. An interesting open question is to prove that nondeterministic thrifty BPs cannot do better than implement fractional pebbling. (We prove this for $h=2,3,4$.) - We use a “state sequence” method for proving size lower bounds for branching programs solving [$FT_d^h(k)$]{} and [$BT_d^h(k)$]{}, and show that it improves on the Neciporuk method for certain function problems. The next major step is to prove good lower bounds for trees of height $h=4$. If we can prove the above Thrifty Hypothesis for deterministic BPs solving the function problem (and hence the decision problem) for trees of height 4, then we would beat the $\Omega(n^2)$ limitation mentioned above on Neciporuk’s method. See Section \[s:conclu\] (Conclusion) for this argument, and a comment about the nondeterministic case. Relation to previous work ------------------------- Taitslin [@ta05] proposed a problem similar to [$BT_2^h(k)$]{} in which the functions attached to internal nodes are specific quasigroups, in an unsuccessful attempt to prove ${\ensuremath{\mathbf{NL}}}\ne{\ensuremath{\mathbf{P}}}$.
Gal, Koucky and McKenzie [@gakomc08] proved exponential lower bounds on the size of restricted $n$-way branching programs solving versions of the problem GEN. Like our problems [$BT_d^h(k)$]{} and [$FT_d^h(k)$]{}, the best known upper bounds for solving GEN come from pebbling algorithms. As a concrete approach to separating [$\mathbf{NC}^1$]{} from [$\mathbf{NC}^2$]{}, Karchmer, Raz and Wigderson [@karawi95] suggested proving that the circuit depth required to compose a Boolean function with itself $h$ times grows appreciably with $h$. They proposed the *universal composition relation* conjecture, stating that an abstraction of the composition problem requires high communication complexity, as an intermediate goal to validate their approach. This conjecture was later proved in two ways, first [@edimrusg01] using innovative information-theoretic machinery and then [@hawi93] using a clever new complexity measure that generalizes the subadditivity property implicit in Neciporuk’s lower bound method [@ne66]. Proving the conjecture thus cleared the road for the approach, yet no sufficiently strong unrestricted circuit lower bounds could be proved using it so far. Edmonds, Impagliazzo, Rudich and Sgall [@edimrusg01] noted that the approach would in fact separate [$\mathbf{NC}^1$]{} from [$\mathbf{AC}^1$]{}. They also coined the name *Iterated Multiplexor* for the most general computational problem considered in [@karawi95], namely composing in a tree-like fashion a set of explicitly presented Boolean functions, one per tree node. Our problem [$FT_d^h(k)$]{} can be considered as a generalization of the Iterated Multiplexor problem in which the functions map $[k]^d$ to $[k]$ instead of $\{0,1\}^d$ to $\{0,1\}$. This generalization allows us to focus on getting lower bounds as a function of $k$ when the tree is fixed. 
For time-restricted branching programs, Borodin, Razborov and Smolensky [@borasm93] exhibited a family of Boolean functions that require exponential size to be computed by nondeterministic syntactic read-$k$ times BPs. Later Beame, Saks, Sun, and Vee [@BSSV03] exhibited such functions that require exponential size to be computed by randomized BPs whose computation time is limited to $o(n\sqrt{\log n/\log\log n})$, where $n$ is the input length. However all these functions can be computed by polynomial size BPs when time is unrestricted. In the present paper we consider branching programs with no time restriction such as read-$k$ times. However the smallest size deterministic BPs known to us that solve [$FT_d^h(k)$]{} implement the black pebbling algorithm, and these BPs happen to be (syntactic) read-once. Organization ------------ The paper is organized as follows. Section \[s:preliminaries\] defines the main notions used in this paper, including branching programs and pebbling. Section \[s:Connecting\] relates pebbling and branching programs to Turing machine space, noting in particular that a $k$-way BP size lower bound of $\Omega(k^{\mbox{\scriptsize function}(h)})$ for [$BT_d^h(k)$]{} would show ${\ensuremath{\mathbf{L}}}\neq {\ensuremath{\mathbf{LogCFL}}}$. Section \[s:PebBounds\] proves upper and lower bounds on the number of pebbles required to black, black-white and fractionally pebble the tree $T^h_d$. These pebbling bounds are exploited in Section \[s:PBbounds\] to prove upper bounds on the size of branching programs. BP lower bounds are obtained using the Neciporuk method in Subsection \[s:NecLB\]. Alternative proofs to some of these lower bounds using the “state sequence method” are given in Subsection \[s:beating\]. An example of a function problem for which the state sequence method beats the Neciporuk method is given in Theorems \[t:rootfunction\] and \[t:childLB\]. Subsection \[s:thriftyLB\] contains bounds for thrifty branching programs. 
Preliminaries {#s:preliminaries} ============= We assume some familiarity with complexity theory, such as can be found in [@go08]. We write $[k]$ for $\{1,2,\ldots,k\}$. For $d,h\ge 2$ we use $T_d^h$ to denote the balanced $d$-ary tree of height $h$. [**Warning:**]{} Here the [*height*]{} of a tree is the number of levels in the tree, as opposed to the distance from root to leaf. Thus $T^2_2$ has just 3 nodes. We number the nodes of $T_d^h$ as suggested by the heap data structure. Thus the root is node 1, and in general the children of node $i$ are (when $d=2$) nodes $2i,2i+1$ (see Figure \[sample\]). \[d:treeEval\] Given: The tree $T_d^h$ with each non-leaf node $i$ independently labeled with a function $f_i: [k]^d\rightarrow [k]$ and each leaf node independently labeled with an element from $[k]$, where $d,h,k\geq 2$. *Function evaluation problem* [$FT_d^h(k)$]{}: Compute the value $v_1\in[k]$ of the root $1$ of $T_d^h$, where in general $v_i=a$ if $i$ is a leaf labeled $a$ and $v_i=f_i(v_{j_1},\ldots,v_{j_d})$ if the children of $i$ are $j_1,\ldots,j_d$. *Boolean problem* [$BT_d^h(k)$]{}: Decide whether $v_1=1$. Branching programs ------------------ A family of branching programs serves as a nonuniform model of a Turing machine. For each input size $n$ there is a BP $B_n$ in the family which models the machine on inputs of size $n$. The states (or nodes) of $B_n$ correspond to the possible configurations of the machine for inputs of size $n$. Thus if the machine computes in space $s(n)$ then $B_n$ has $2^{O(s(n))}$ states. Many variants of the branching program model have been studied (see in particular the survey by Razborov [@ra91] and the book by Ingo Wegener [@we00]). Our definition below is inspired by Wegener [@we00 p. 239], by the $k$-way branching program of Borodin and Cook [@boco82] and by its nondeterministic variant [@borasm93; @gakomc08].
We depart from the latter however in two ways: nondeterministic branching program labels are attached to states rather than edges (because we think of branching program states as Turing machine configurations) and cycles in branching programs are allowed (because our lower bounds apply to this more powerful model). A *nondeterministic $k$-way branching program* $B$ computing a total function $g:[k]^m\rightarrow R$, where $R$ is a finite set, is a directed rooted multi-graph whose nodes are called [*states*]{}. Every edge has a label from $[k]$. Every state has a label from $[m]$, except $|R|$ [*final*]{} sink states consecutively labelled with the elements from $R$. An input $(x_1,\ldots,x_m)\in [k]^m$ activates, for each $1\leq j\leq m$, every edge labelled $x_j$ out of every state labelled $j$. A [*computation*]{} on input $\vec{x}=(x_1,\ldots,x_m)\in [k]^m$ is a directed path consisting of edges activated by $\vec{x}$ which begins with the unique start state (the root), and either it is infinite, or it ends in the final state labelled $g(x_1,\ldots,x_m)$, or it ends in a nonfinal state labelled $j$ with no outedge labelled $x_j$ (in which case we say the computation [*aborts*]{}). At least one such computation must end in a final state. The *size* of $B$ is its number of states. $B$ is *deterministic $k$-way* if every non-final state has precisely $k$ outedges labelled $1,\ldots,k$. $B$ is *binary* if $k=2$. We say that $B$ solves a decision problem (relation) if it computes the characteristic function of the relation. A $k$-way branching program computing the function [$FT_d^h(k)$]{} requires $k^d$ $k$-ary arguments for each internal node $i$ of $T^h_d$ in order to specify the function $f_i$, together with one $k$-ary argument for each leaf. Thus in the notation of Definition \[d:treeEval\], [$FT_d^h(k)$]{}$: [k]^m \rightarrow R$ where $R=[k]$ and $m=\frac{d^{h-1}-1}{d-1}\cdot k^d + d^{h-1}$. Also [$BT_d^h(k)$]{}$: [k]^m \rightarrow \{0,1\}$. 
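To make the model concrete, here is a minimal simulator (our own illustration; the program `prog` below is a toy example, not from the paper) for a *deterministic* $k$-way branching program as just defined. Each non-final state is represented as a pair `(j, edges)`: it queries input position $j$ and follows the outedge labelled with the answer; final states carry their output label.

```python
# Minimal simulator for a deterministic k-way branching program.
# A program maps each state name either to (j, edges), where j is the
# queried input position (1-based) and edges[a] names the successor
# state on answer a, or to ('final', r) with output label r.

def run_bp(program, start, x):
    """Run the BP on input x, a tuple over [k]; return the output label."""
    state = program[start]
    while state[0] != 'final':
        j, edges = state
        state = program[edges[x[j - 1]]]   # follow the edge activated by x_j
    return state[1]

# Toy 2-way BP computing g(x1, x2) = x1: one query state, two final states.
prog = {
    'q0': (1, {1: 'f1', 2: 'f2'}),
    'f1': ('final', 1),
    'f2': ('final', 2),
}
```

The size of this toy program is 3 states, and it is deterministic since its query state has exactly $k=2$ outedges labelled $1,\ldots,k$.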
For fixed $d,h$ we are interested in how the number of states required for a $k$-way branching program to compute [$FT_d^h(k)$]{} and [$BT_d^h(k)$]{}grows with $k$. We define [$\mathsf{\#detFstates}^h_d(k)$]{} (resp. [$\mathsf{\#ndetFstates}^h_d(k)$]{}) to be the minimum number of states required for a deterministic (resp. nondeterministic) $k$-way branching program to solve [$FT_d^h(k)$]{}. Similarly we define [$\mathsf{\#detBstates}^h_d(k)$]{} and [$\mathsf{\#ndetBstates}^h_d(k)$]{}to be the number of states for solving [$BT_d^h(k)$]{}. The next lemma shows that the function problem is not much harder to solve than the Boolean problem. \[l:FvsB\] $$\begin{aligned} {\ensuremath{\mathsf{\#detBstates}^h_d(k)}}\le {\ensuremath{\mathsf{\#detFstates}^h_d(k)}}\le k \cdot {\ensuremath{\mathsf{\#detBstates}^h_d(k)}}\\ {\ensuremath{\mathsf{\#ndetBstates}^h_d(k)}}\le {\ensuremath{\mathsf{\#ndetFstates}^h_d(k)}}\le k \cdot {\ensuremath{\mathsf{\#ndetBstates}^h_d(k)}}\end{aligned}$$ The left inequalities are obvious. For the others, we can construct a branching program solving the function problem from a sequence of $k$ programs solving Boolean problems, where the $i$th program determines whether the value of the root node is $i$. Next we introduce thrifty programs, a restricted form of $k$-way branching programs for solving tree evaluation problems. Thrifty programs efficiently simulate pebbling algorithms, and implement the best known upper bounds for [$\mathsf{\#ndetBstates}^h_d(k)$]{} and [$\mathsf{\#detFstates}^h_d(k)$]{}, and are within a factor of $\log k$ of the best known upper bounds for [$\mathsf{\#detBstates}^h_d(k)$]{}. In Section \[s:PBbounds\] we prove tight lower bounds for deterministic thrifty programs which solve [$BT_d^h(k)$]{} and [$FT_d^h(k)$]{}. 
\[d:thrifty\] A deterministic $k$-way branching program which solves [$FT_d^h(k)$]{}or [$BT_d^h(k)$]{} is [*thrifty*]{} if during the computation on any input every query $f_i(\vec{x})$ to an internal node $i$ of $T^h_d$ satisfies the condition that $\vec{x}$ is the tuple of correct values for the children of node $i$. A nondeterministic such program is [*thrifty*]{} if for every input every computation which ends in a final state satisfies the above restriction on queries. Note that the restriction in the above definition is semantic, rather than syntactic. It somewhat resembles the semantic restriction used to define incremental branching programs in [@gakomc08]. However we are able to prove strong lower bounds using our semantic restriction, but in [@gakomc08] a syntactic restriction was needed to prove lower bounds. One function is enough {#s:oneFunction} ---------------------- The theorem in this section is not used in the sequel. It turns out that the complexities of [$FT_d^h(k)$]{} and [$BT_d^h(k)$]{} are not much different if we require all functions assigned to internal nodes to be the same.[^1] To denote this restricted version of the problems we replace $F$ by $\hat{F}$ and $B$ by $\hat{B}$. Thus $\hat{F}T_d^h(k)$ is the function problem for $T^h_d$ when all node functions are the same, and $\hat{B}T_d^h(k)$ is the corresponding Boolean problem. To specify an instance of one of these new problems we need only give one copy of the table for the common node function $\hat{f}$, together with the values for the leaves. \[t:single\] Let $N = (d^h-1)/(d-1)$ be the number of nodes in the tree $T^h_d$. Any $Nk$-way branching program $\hat{B}$ solving $\hat{F}T_d^h(Nk)$ (resp. $\hat{B}T_d^h(Nk)$) can be transformed to a $k$-way branching program $B$ solving [$FT_d^h(k)$]{} (resp. [$BT_d^h(k)$]{}), where $B$ has no more states than $\hat{B}$ and $B$ is deterministic iff $\hat{B}$ is deterministic. 
Also for each $d\ge 2$ the decision problem $BT_d(h,k)$ is log space reducible to $\hat{B}T_d(h,k)$ (where $h,k$ are input parameters). Given an instance $I$ of [$FT_d^h(k)$]{} (or [$BT_d^h(k)$]{}) we can find a corresponding instance $\hat{I}$ of $\hat{F}T_d^h(Nk)$ (or $\hat{B}T_d^h(Nk)$) by coding the set of all functions $f_i$ associated with internal nodes $i$ in $I$ by a single function $\hat{f}$ associated with each node of $\hat{I}$. Here we represent each element of $[Nk]$ by a pair $\langle i,x\rangle$, where $i\in [N]$ represents a node in $T^h_d$ and $x\in [k]$. We want to satisfy the following Claim: [**Claim:**]{} If a node $i$ has a value $x$ in $I$ then node $i$ has value $\langle i,x\rangle$ in $\hat{I}$. Thus if $i$ is a leaf node, then we define the leaf value for node $i$ in $\hat{I}$ to be $\langle i,x\rangle$, where $x$ is the value of leaf $i$ in $I$. We define the common internal node function $\hat{f}$ as follows. If nodes $i_1,\ldots,i_d$ are the children of node $j$ in $T^h_d$, then $$\label{e:fhat} \hat{f}(\langle i_1,x_1\rangle, \ldots,\langle i_d,x_d\rangle)= \langle j,f_j(x_1,\ldots,x_d)\rangle$$ The value of $\hat{f}$ is irrelevant (make it $\langle 1,1\rangle$) if nodes $i_1,\ldots,i_d$ are not the children of $j$. An easy induction on the height of a node $i$ shows that the above [**Claim**]{} is satisfied. Note that the value $x$ of the root node $1$ in $I$ is easily determined by the value $\langle 1,x\rangle$ of the root in $\hat{I}$. We specify that the pair $\langle 1,1\rangle$ has value 1 in $[N]$, so $I$ is a YES instance of the decision problem [$BT_d^h(k)$]{} iff $\hat{I}$ is a YES instance of $\hat{B}T_d^h(Nk)$. To complete the proof of the last sentence in the theorem we note that the number of bits needed to specify $I$ is $\Theta(Nk^d\log k)$, and the number of bits to specify $\hat{I}$ is dominated by the number to specify $\hat{f}$, which is $O((Nk)^d\log(Nk))$. 
Thus the transformation from $I$ to $\hat{I}$ is length-bounded by a polynomial in the length of its argument, and it is not hard to see that it can be carried out in log space. Now we prove the first part of the theorem. Given an $Nk$-way BP $\hat{B}$ solving $\hat{B}T_d^h(Nk)$ (resp. $\hat{F}T_d^h(Nk)$) we can find a corresponding $k$-way BP $B$ solving [$BT_d^h(k)$]{} (resp. [$FT_d^h(k)$]{}) as follows. The idea is that on input instance $I$, $B$ acts like $\hat{B}$ on input $\hat{I}$. Thus for each state $\hat{q}$ in $\hat{B}$ that queries a leaf node $i$, the corresponding state $q$ in $B$ queries $i$, and for each possible answer $x\in [k]$, $B$ has an outedge labelled $x$ corresponding to the edge from $\hat{q}$ labelled $\langle i,x\rangle$. If $\hat{q}$ queries $\hat{f}$ at arguments as in (\[e:fhat\]) (where $i_1,\ldots,i_d$ are the children of node $j$) then $q$ queries $f_j(x_1,\ldots,x_d)$ and for each $x\in [k]$, $q$ has an outedge labelled $x$ corresponding to the edge from $\hat{q}$ labelled $\langle j,x\rangle$. If $i_1,\ldots,i_d$ are not the children of $j$, then the node $q$ is not necessary in $B$, since the answer to the query is always the default $\langle 1,1\rangle$. In case $\hat{B}$ is solving the function problem $\hat{F}T_d^h(Nk)$ then each output state labelled $\langle 1,x\rangle$ is relabelled $x$ in $B$ (recall that the root of $T^h_d$ is number 1). Any output state $q$ labelled $\langle i,x\rangle$ where $i>1$ will never be reached in $B$ (since the value of the root node of $\hat{I}$ always has the form $\langle 1,x\rangle$) so $q$ can be deleted. For any edge in $\hat{B}$ leading to $q$ the corresponding edge in $B$ can lead anywhere. One goal of this paper is to motivate trying to show $BT_d(h,k) \notin {\ensuremath{\mathbf{L}}}$. By Theorem \[t:single\] this is equivalent to showing $\hat{B}T_d(h,k)\notin {\ensuremath{\mathbf{L}}}$.
Further our suggested method is to try proving for each fixed $h$ a lower bound of $\Omega(k^{r(h)})$ on the number of states required for a $k$-way BP to solve [$FT_d^h(k)$]{}, where $r(h)$ is any unbounded function (see Corollary \[c:thegoal\] below). Again, according to Theorem \[t:single\] (since $N$ is a constant) technically speaking we may as well assume that all the node functions in the instance of [$FT_d^h(k)$]{} are the same. However in practice this assumption is not helpful in proving a lower bound. For example Theorem \[t:beatittwice\] states that $k^3$ states are required for a deterministic $k$-way BP to solve $FT^3_2(k)$, and the proof assigns three different functions to the three internal nodes of the binary tree of height 3. Pebbling -------- The pebbling game for dags was defined by Paterson and Hewitt [@pahe70] and was used as an abstraction for deterministic Turing machine space in [@co74]. Black-white pebbling was introduced in [@cose76] as an abstraction of nondeterministic Turing machine space (see [@nordstrom] for a recent survey). Here we define and use three versions of the pebbling game. The first is a simple ‘black pebbling’ game: A black pebble can be placed on any leaf node, and in general if all children of a node $i$ have pebbles, then one of the pebbles on the children can be slid to $i$ (this is a “black sliding move”). Any black pebble can be removed at any time. The goal is to pebble the root, using as few pebbles as possible. The second version is ‘whole’ black-white pebbling as defined in [@cose76] with the restriction that we do not allow “white sliding moves”. Thus if node $i$ has a white pebble and each child of $i$ has a pebble (either black or white) then the white pebble can be removed. (A white sliding move would apply if one of the children had no pebble, and the white pebble on $i$ was slid to the empty child. We do not allow this.) A white pebble can be placed on any node at any time.
The goal is to start and end with no pebbles, but to have a black pebble on the root at some time. The third is a new game called [*fractional pebbling*]{}, which generalizes whole black-white pebbling by allowing the black and white pebble value of a node to be any real number between 0 and 1. However the total pebble value of each child of a node $i$ must be 1 before the black value of $i$ is increased or the white value of $i$ is decreased. Figure \[f:bin\_h3\_fract\_ub\] illustrates two configurations in an optimal fractional pebbling of the binary tree of height three using 2.5 pebbles. Our motivation for choosing these definitions is that we want pebbling algorithms for trees to closely correspond to $k$-way branching program algorithms for the tree evaluation problem. We start by defining fractional pebbling, and then define the other two notions as restrictions on fractional pebbling. \[d:pebbling\] A [*fractional pebble configuration*]{} on a rooted $d$-ary tree $T$ is an assignment of a pair of real numbers $(b(i),w(i))$ to each node $i$ of the tree, where $$\begin{aligned} & 0\le b(i),w(i) \label{e:consOne} \\ & b(i)+w(i)\le 1 \label{e:consTwo}\end{aligned}$$ Here $b(i)$ and $w(i)$ are the [*black pebble value*]{} and the [*white pebble value*]{}, respectively, of $i$, and $b(i)+w(i)$ is the [*pebble value*]{} of $i$. The number of pebbles in the configuration is the sum over all nodes $i$ of the pebble value of $i$. The legal pebble moves are as follows (always subject to maintaining the constraints (\[e:consOne\]), (\[e:consTwo\])): (i) For any node $i$, decrease $b(i)$ arbitrarily, (ii) For any node $i$, increase $w(i)$ arbitrarily, (iii) For every node $i$, if each child of $i$ has pebble value 1, then decrease $w(i)$ to 0, increase $b(i)$ arbitrarily, and simultaneously decrease the black pebble values of the children of $i$ arbitrarily. 
A [*fractional pebbling*]{} of $T$ using $p$ pebbles is any sequence of (fractional) pebbling moves on nodes of $T$ which starts and ends with every node having pebble value 0, and at some point the root has black pebble value 1, and no configuration has more than $p$ pebbles. A [*whole black-white pebbling*]{} of $T$ is a fractional pebbling of $T$ such that $b(i)$ and $w(i)$ take values in $\{0,1\}$ for every node $i$ and every configuration. A [*black pebbling*]{} is a black-white pebbling in which $w(i)$ is always 0. Notice that rule (iii) does not quite treat black and white pebbles dually, since the pebble values of the children must each be 1 before any decrease of $w(i)$ is allowed. A true dual move would allow increasing the white pebble values of the children so they all have pebble value 1 while simultaneously decreasing $w(i)$. In other words, we allow black sliding moves, but disallow white sliding moves. The reason for this (as mentioned above) is that nondeterministic branching programs can simulate the former, but not the latter. We use ${\ensuremath{\mathsf{\#pebbles}}}(T)$, ${\ensuremath{\mathsf{\#BWpebbles}}}(T)$, and ${\ensuremath{\mathsf{\#FRpebbles}}}(T)$ respectively to denote the minimum number of pebbles required to black pebble $T$, black-white pebble $T$, and fractional pebble $T$. Bounds for these values are given in Section \[s:PebBounds\]. For example for $d=2$ we have ${\ensuremath{\mathsf{\#pebbles}}}(T^h_2)= h$, ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_2)= \lceil h/2\rceil +1$, and ${\ensuremath{\mathsf{\#FRpebbles}}}(T^h_2) \le h/2+1$. In particular ${\ensuremath{\mathsf{\#FRpebbles}}}(T^3_2) = 2.5$ (see Figure \[f:bin\_h3\_fract\_ub\]). 
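As a concrete illustration of the black pebbling game (our own sketch, not part of the paper), the following simulator runs the standard recursive strategy on the heap-numbered binary tree $T^h_2$ and confirms the value ${\ensuremath{\mathsf{\#pebbles}}}(T^h_2)=h$ quoted above for small heights.

```python
# A sketch simulating the standard black pebbling strategy on T^h_2,
# with heap-numbered nodes (children of i are 2i and 2i+1): pebble the
# left subtree, retain one pebble on its root, pebble the right
# subtree, then slide a pebble from a child up to the root.

def black_pebble(i, height, pebbled, trace):
    """Place a black pebble on node i of a height-`height` subtree,
    appending the total pebble count to `trace` after every move."""
    if height == 1:                    # a leaf may be pebbled directly
        pebbled.add(i)
        trace.append(len(pebbled))
        return
    black_pebble(2 * i, height - 1, pebbled, trace)
    black_pebble(2 * i + 1, height - 1, pebbled, trace)
    # both children are pebbled, so a black sliding move to i is legal
    assert 2 * i in pebbled and 2 * i + 1 in pebbled
    pebbled.remove(2 * i)
    pebbled.add(i)
    trace.append(len(pebbled))
    pebbled.remove(2 * i + 1)          # the sibling's pebble is discarded
    trace.append(len(pebbled))

def pebbles_used(height):
    """Maximum number of pebbles simultaneously on the tree."""
    pebbled, trace = set(), []
    black_pebble(1, height, pebbled, trace)
    return max(trace)
```

For instance `pebbles_used(4)` returns 4, matching $(d-1)h-d+2 = h$ for $d=2$ (Theorem \[t:blackSliding\]).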
Connecting TMs, BPs, and Pebbling {#s:Connecting} ================================= Let [$FT_d(h,k)$]{} be the same as [$FT_d^h(k)$]{} except now the inputs vary with both $h$ and $k$, and we assume the input to [$FT_d(h,k)$]{} is a binary string $X$ which codes $h$ and $k$ and codes each node function $f_i$ for the tree $T^h_d$ by a sequence of $k^d$ binary numbers and each leaf value by a binary number in $[k]$, so $X$ has length $$\label{e:Flength} |X| = \Theta(d^hk^d\log k)$$ The output is a binary number in $[k]$ giving the value of the root. The problem [$BT_d(h,k)$]{} is the Boolean version of [$FT_d(h,k)$]{}: The input is the same, and the instance is true iff the value of the root is 1. Obviously [$BT_d(h,k)$]{} and [$FT_d(h,k)$]{} can be solved in polynomial time, but we can prove a stronger result. \[t:logDCFL\] The problem [$BT_d(h,k)$]{} is in [$\mathbf{LogDCFL}$]{}, even when $d$ is given as an input parameter. By [@su78] it suffices to show that [$BT_d(h,k)$]{} is solved by some deterministic auxiliary pushdown automaton $M$ in $\log$ space and polynomial time. The algorithm for $M$ is to use its stack to perform a depth-first search of the tree $T^h_d$, where for each node $i$ it keeps a partial list of the values of the children of $i$, until it obtains all $d$ values, at which point it computes the value of $i$ and pops its stack, adding that value to the list for the parent node. Note that the length $n$ of an input instance is about $d^hk^d\log k$ bits, so $\log n > d\log k$, so $M$ has ample space on its work tape to write all $d$ values of the children of a node $i$. The best known upper bounds on branching program size for [$FT_d^h(k)$]{} grow as $k^{\Omega(h)}$. The next result shows (Corollary \[c:thegoal\]) that any lower bound with a nontrivial dependency on $h$ in the exponent of $k$ for deterministic (resp. nondeterministic) BP size would separate [$\mathbf{L}$]{} (resp. [$\mathbf{NL}$]{}) from [$\mathbf{LogDCFL}$]{}.
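The depth-first evaluation performed by the auxiliary pushdown automaton $M$ in the proof of Theorem \[t:logDCFL\] can be sketched as follows (our own illustration; the input representation as Python dicts is a convenience, not the binary encoding of the theorem). The stack holds one frame per node on the current root-to-node path: the node number and the partial list of its children's values. Since nodes are heap-numbered, the children of node $i$ are $d(i-1)+2,\ldots,di+1$.

```python
# Stack-based depth-first evaluation of the tree T^h_d.

def eval_tree(d, h, funcs, leaves):
    """funcs: internal node -> d-ary function; leaves: leaf node -> value."""
    first_leaf = (d ** (h - 1) - 1) // (d - 1) + 1   # smallest leaf number
    stack = [(1, [])]                                # start at the root
    while True:
        node, vals = stack[-1]
        if node >= first_leaf:                       # leaf: its value is given
            value = leaves[node]
        elif len(vals) == d:                         # all d children evaluated
            value = funcs[node](*vals)
        else:                                        # descend to the next child
            stack.append((d * (node - 1) + 2 + len(vals), []))
            continue
        stack.pop()                                  # pass the value to the parent
        if not stack:
            return value
        stack[-1][1].append(value)
```

This mirrors the resource bounds in the proof: the stack never holds more than $h$ frames, and each frame stores at most $d$ values from $[k]$, i.e. at most $d\log k < \log n$ bits.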
\[t:goal\] For each $d \ge 2$, if [$BT_d(h,k)$]{} is in [$\mathbf{L}$]{} (resp. [$\mathbf{NL}$]{}) then there is a constant $c_d$ and a function $f_d(h)$ such that ${\ensuremath{\mathsf{\#detFstates}^h_d(k)}}\le f_d(h)k^{c_d}$ (resp. ${\ensuremath{\mathsf{\#ndetFstates}^h_d(k)}}\le f_d(h)k^{c_d}$) for all $h,k\ge 2$. By Lemma \[l:FvsB\] it suffices to prove this for [$\mathsf{\#detBstates}^h_d(k)$]{}and [$\mathsf{\#ndetBstates}^h_d(k)$]{} instead of [$\mathsf{\#detFstates}^h_d(k)$]{} and [$\mathsf{\#ndetFstates}^h_d(k)$]{}. In general a Turing machine which can enter at most $C$ different configurations on all inputs of a given length $n$ can be simulated (for inputs of length $n$) by a binary (and hence $k$-ary) branching program with $C$ states. Each Turing machine using space $O(\log n)$ has at most $n^c$ possible configurations on any input of length $n \ge 2$, for some constant $c$. By (\[e:Flength\]) the input for [$BT_d(h,k)$]{} has length $n=\Theta(d^hk^d\log k)$, so there are at most $(d^hk^d\log k)^{c'}$ possible configurations for a log space Turing machine solving [$BT_d(h,k)$]{}, for some constant $c'$. So we can take $f_d(h) = d^{c'h}$ and $c_d = c'(d+1)$. \[c:thegoal\] Fix $d \ge 2$ and any unbounded function $r(h)$. If ${\ensuremath{\mathsf{\#detFstates}^h_d(k)}}\in\Omega(k^{r(h)})$ then ${\ensuremath{BT_d(h,k)}}\notin{\ensuremath{\mathbf{L}}}$. If ${\ensuremath{\mathsf{\#ndetFstates}^h_d(k)}}\in\Omega(k^{r(h)})$ then ${\ensuremath{BT_d(h,k)}}\notin{\ensuremath{\mathbf{NL}}}$. The next result connects pebbling upper bounds with upper bounds for thrifty branching programs. \[t:pebSim\] (i) If $T^h_d$ can be black pebbled with $p$ pebbles, then deterministic thrifty branching programs with $O(k^p)$ states can solve [$FT_d^h(k)$]{} and [$BT_d^h(k)$]{}. \(ii) If $T^h_d$ can be fractionally pebbled with $p$ pebbles then nondeterministic thrifty branching programs can solve [$BT_d^h(k)$]{} with $O(k^p)$ states. 
Consider the sequence $C_0,C_1,\ldots C_\tau$ of pebble configurations for a black pebbling of $T^h_d$ using $p$ pebbles. We may as well assume that the root is pebbled in configuration $C_\tau$, since all pebbles could be removed in one more step at no extra cost in pebbles. We design a thrifty branching program $B$ for solving [$FT_d^h(k)$]{} as follows. For each pebble configuration $C_t$, program $B$ has $k^p$ states; one state for each possible assignment of a value from $[k]$ to each of the $p$ pebbles. Hence $B$ has $O(k^p)$ states, since $\tau$ is a constant independent of $k$. Consider an input $I$ to [$FT_d^h(k)$]{}, and let $v_i$ be the value in $[k]$ which $I$ assigns to node $i$ in $T^h_d$ (see Definition \[d:treeEval\]). We design $B$ so that on $I$ the computation of $B$ will be a state sequence $\alpha_0,\alpha_1,\ldots,\alpha_\tau$, where the state $\alpha_t$ assigns to each pebble the value $v_i$ of the node $i$ that it is on. (If a pebble is not on any node, then its value is 1.) For the initial pebble configuration no pebbles have been assigned to nodes, so the initial state of $B$ assigns the value 1 to each pebble. In general if $B$ is in a state $\alpha$ corresponding to configuration $C_t$, and the next configuration $C_{t+1}$ places a pebble $j$ on node $i$, then the state $\alpha$ queries the node $i$ to determine $v_i$, and moves to a new state which assigns $v_i$ to the pebble $j$ and assigns 1 to any pebble which is removed from the tree. Note that if $i$ is an internal node, then all children of $i$ must be pebbled at $C_t$, so the state $\alpha$ ‘knows’ the values $v_{j_1},\ldots,v_{j_d}$ of the children of $i$, so $\alpha$ queries $f_i(v_{j_1},\ldots,v_{j_d})$. When the computation of $B$ reaches a state $\alpha_\tau$ corresponding to $C_\tau$, then $\alpha_\tau$ determines the value of the root (since $C_\tau$ has a pebble on the root), so $B$ moves to a final state corresponding to the value of the root. 
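The deterministic simulation just described can be made concrete by the following sketch (our own illustration for $d=2$; all names are hypothetical). The dict `memory` holds values only for currently pebbled nodes, mirroring a BP state that assigns a value in $[k]$ to each of the $p$ pebbles, and every query to $f_i$ is made exactly at the correct values of the children of node $i$, which is the thrifty condition.

```python
# Thrifty evaluation of FT^h_2(k) driven by the black pebbling strategy.

def thrifty_eval(i, height, funcs, leaves, memory, peak):
    """Evaluate node i; peak[0] records the most values held at once."""
    if height == 1:
        memory[i] = leaves[i]                 # place a pebble on the leaf
    else:
        thrifty_eval(2 * i, height - 1, funcs, leaves, memory, peak)
        thrifty_eval(2 * i + 1, height - 1, funcs, leaves, memory, peak)
        # thrifty query: f_i is applied exactly to the correct values
        # of the children of node i
        v = funcs[i](memory[2 * i], memory[2 * i + 1])
        del memory[2 * i], memory[2 * i + 1]  # remove the children's pebbles
        memory[i] = v                         # slide a pebble up to node i
    peak[0] = max(peak[0], len(memory))
    return memory[i]
```

On $T^3_2$ the peak is 3, matching ${\ensuremath{\mathsf{\#pebbles}}}(T^3_2)=3$: a state of the corresponding thrifty BP need only remember 3 values from $[k]$, giving $O(k^3)$ states per pebbling step.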
The argument for the case of whole black-white pebbling is similar, except now the value for each white pebble represents a guess for the value $v_i$ of the node it is on. If the pebbling algorithm places a white pebble $j$ on a node at some step, then the corresponding state of $B$ nondeterministically moves to any state in which the values of all pebbles except $j$ are the same as before, but the value of $j$ can be any value in $[k]$. If the pebbling algorithm removes a white pebble $j$ from a node $i$, then the corresponding state has a guess $v'_i$ for the value of $i$, and either $i$ is a leaf, or all children of $i$ must be pebbled. The corresponding state of $B$ queries $i$ to determine its true value $v_i$. If $v_i \ne v'_i$ then the computation aborts (i.e. all outedges from the state have label $v'_i$). Otherwise $B$ assigns $j$ the value 1 and continues. When $B$ reaches a state $\alpha$ corresponding to a pebble configuration $C_t$ for which the root has a black pebble $j$, then $\alpha$ knows whether or not the tentative value assigned to the root is 1. All future states remember whether the tentative value is 1. If the computation successfully (without aborting) reaches a state $\alpha_\tau$ corresponding to the final pebble configuration $C_\tau$, then $B$ moves to the final state corresponding to output 1 or output 0, depending on whether the tentative root value is 1. Now we consider the case in which $C_0,\ldots,C_\tau$ represents a fractional pebbling computation. If $b(i),w(i)$ are the black and white pebble values of node $i$ in configuration $C_t$, then a state $\alpha$ of $B$ corresponding to $C_t$ will remember a fraction $b(i) + w(i)$ of the $\log k$ bits specifying the value $v_i$ of the node $i$, where the fraction $b(i)$ of bits are verified, and the fraction $w(i)$ of bits are conjectured. In general these numbers of bits are not integers, so they are rounded up to the next integer.
This rounding introduces at most two extra bits for each node in $T^h_d$, for a total of at most $2T$ extra bits, where $T$ is the number of nodes in $T^h_d$. Since the sum over all nodes of all pebble values is at most $p$, the total number of bits that need to be remembered for a given pebble configuration is at most $p \log k + 2T$; recall that $T$ is a constant independent of $k$. Associated with each step in the fractional pebbling there are $2^{p\log k +2T} = O(k^p)$ states in the branching program, one for each setting of these bits. These bits can be updated for each of the three possible fractional pebbling moves (i), (ii), (iii) in Definition \[d:pebbling\] in a manner similar to that for whole black-white pebbling. It is easy to see that in all cases the branching programs described satisfy the thrifty requirement that an internal node is queried only at the correct values for its children (or, in the black-white and fractional cases, the program aborts if an incorrect query is made because of an incorrect guess for the value of a white-pebbled node). ${\ensuremath{\mathsf{\#detFstates}^h_d(k)}}= O(k^{{\ensuremath{\mathsf{\#pebbles}}}(T^h_d)})$ and ${\ensuremath{\mathsf{\#ndetFstates}^h_d(k)}}= O(k^{{\ensuremath{\mathsf{\#FRpebbles}}}(T^h_d)})$. Pebbling Bounds {#s:PebBounds} =============== Previous results ---------------- We start by summarizing what is known about whole black and black-white pebbling numbers as defined at the end of Definition \[d:pebbling\] (i.e. we allow black sliding moves but not white sliding moves). The following are minor adaptations of results and techniques that have been known since the work of Loui, Meyer auf der Heide, and Lengauer and Tarjan [@loui; @meyeraufderheide; @lengauer-tarjan] in the late ’70s. They considered pebbling games where sliding moves were either disallowed or permitted for both black and white pebbles, in contrast to the convention adopted here. We always assume $h\ge 2$ and $d\ge 2$.
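For tiny trees the whole black pebbling numbers discussed in this section can be confirmed by exhaustive search. The following sketch (ours; the heap indexing of nodes and the function name are our own) explores all configurations of at most $p$ black pebbles on the binary tree, with black sliding moves allowed, and reports the least $p$ for which the root can be pebbled at some point:

```python
# Exhaustive check (ours, not from the paper) of the whole black pebbling
# number of the heap-indexed binary tree of height h: nodes are 1..2**h - 1,
# node 1 is the root, and black sliding moves are allowed.

def min_black_pebbles(h):
    n = 2 ** h - 1
    is_leaf = lambda v: 2 * v > n
    for p in range(1, n + 1):
        seen = {frozenset()}
        stack = [frozenset()]
        while stack:
            cfg = stack.pop()
            succs = []
            for v in range(1, n + 1):
                if v in cfg:
                    succs.append(cfg - {v})               # remove a pebble
                elif is_leaf(v) or {2 * v, 2 * v + 1} <= cfg:
                    if len(cfg) < p:
                        succs.append(cfg | {v})           # place a pebble
                    if not is_leaf(v):                    # black sliding move
                        for u in (2 * v, 2 * v + 1):
                            succs.append((cfg - {u}) | {v})
            for nxt in succs:
                if nxt not in seen:
                    if 1 in nxt:          # the root has been black-pebbled
                        return p
                    seen.add(nxt)
                    stack.append(nxt)
    return n

print([min_black_pebbles(h) for h in (2, 3, 4)])    # prints [2, 3, 4]
```

The output agrees with $(d-1)h-d+2 = h$ for $d=2$ (Theorem \[t:blackSliding\] below). The search is exponential in the number of tree nodes, so this is a sanity check for small $h$ only.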
\[t:blackSliding\] ${\ensuremath{\mathsf{\#pebbles}}}(T^{h}_{d}) = (d-1)h - d +2$. For $h=2$ this gives ${\ensuremath{\mathsf{\#pebbles}}}(T^{2}_{d}) = d$, which is obviously correct. In general we show ${\ensuremath{\mathsf{\#pebbles}}}(T^{h+1}_{d}) = {\ensuremath{\mathsf{\#pebbles}}}(T^h_{d}) + d - 1$, from which the theorem follows. The following pebbling strategy gives the upper bound: Let the root be node $1$ and the children be $2 \ldots d+1$. Pebble the nodes $2 \ldots d+1$ in order using the optimal number of pebbles for $T^{h-1}_{d}$, leaving a black pebble at each node. Note that for the black pebble game, the complexity of pebbling in the game where a pebble remains on the root is the same as for the game where the root has a black pebble on it at some point. The maximum number of pebbles at any point on the tree is $d-1 + {\ensuremath{\mathsf{\#pebbles}}}(T^{h-1}_{d})$. Now slide the black pebble from node $d+1$ to the root, and then remove all pebbles. For the lower bound, consider a time $t$ at which the children of the root all have black pebbles on them. There must be a final time $t'$ before $t$ at which one of the sub-trees rooted at $2,3, \ldots d+1$ had ${\ensuremath{\mathsf{\#pebbles}}}(T^{h}_{d})$ pebbles on it. This is because pebbling any of these subtrees requires at least ${\ensuremath{\mathsf{\#pebbles}}}(T^{h}_{d})$ pebbles, by definition. At time $t'$, all the other subtrees must have at least 1 black pebble each on them. If not, then there is a subtree $T$ which does not, and it would have to be pebbled before time $t$, which contradicts the definition of $t'$. Thus at time $t'$, there are at least ${\ensuremath{\mathsf{\#pebbles}}}(T^{h}_{d}) + d - 1$ pebbles on the tree.
[**Part I:**]{}\ We show (\[e:dOdd\]) when $d$ is odd. For $h=2$ this gives ${\ensuremath{\mathsf{\#BWpebbles}}}(T^{2}_{d}) = d$, which is obviously correct. In general for odd $d$ we show $$\label{e:BSWrec} {\ensuremath{\mathsf{\#BWpebbles}}}(T^{h+1}_{d}) = {\ensuremath{\mathsf{\#BWpebbles}}}(T^h_{d}) + (d - 1)/2$$ from which the theorem follows for this case. For the upper bound for the left hand side, we strengthen the induction hypothesis by asserting that during the pebbling there is a [*critical time*]{} at which the root has a black pebble and there are at most ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)-(d-1)/2$ pebbles on the tree (counting the pebble on the root). This can be made true when $h=2$ by removing all the pebbles on the leaves after the root is pebbled. To pebble the tree $T^{h+1}_d$, note that we are allowed $(d-1)/2$ extra pebbles over those required to pebble $T^h_d$. Start by placing black pebbles on the left-most $(d-1)/2$ children of the root, and removing all other pebbles. Now go through the procedure for pebbling the middle principal subtree, stopping at the critical time, so that there is a black pebble on the middle child of the root and at most ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)-(d-1)/2$ pebbles on the middle subtree. Now place white pebbles on the remaining $(d-1)/2$ children of the root, slide a black pebble to the root, and remove all black pebbles on the children of the root. This is the critical time for pebbling $T^{h+1}_d$: note that there are at most ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)$ pebbles on the tree (we removed the black pebble on the root of the middle subtree). Now remove the pebble on the root and remove all pebbles on the middle subtree by completing its pebbling (keeping the $(d-1)/2$ white pebbles on the children in place). 
Finally remove the remaining $(d-1)/2$ white pebbles one by one, simply by pebbling each subtree, and removing the white pebble at the root of the subtree instead of black-pebbling it. To prove the lower bound for the left hand side of (\[e:BSWrec\]), we strengthen the induction hypothesis so that now a black-white pebbling allows white sliding moves, and the root may be pebbled by either a black pebble or a white pebble. (Note that for the base case the tree $T^2_d$ still requires $d$ pebbles.) Consider such a pebbling of $T^{h+1}_d$ which uses as few moves as possible. Consider a time $t$ at which all children of the root have pebbles on them (i.e. just before the root is black pebbled or just after a white pebble on the root is removed). For each child $i$, let $t_i$ be a time at which the tree rooted at $i$ has ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)$ pebbles on it. We may assume $$t_2<t_3< \ldots < t_{d+1}$$ Let $m = (d+3)/2$ be the middle child. If $t_m < t$ then each of the $(d-1)/2$ subtrees rooted at $i$ for $i<m$ has at least one pebble on it at time $t_m$, since otherwise the effort made to place ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)$ pebbles on it earlier is wasted. Hence (\[e:BSWrec\]) holds for this case. Similarly if $t_m > t$ then each of the $(d-1)/2$ subtrees rooted at $i$ for $i>m$ has at least one pebble on it at time $t_m$, since otherwise the effort to place ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)$ pebbles on it later is wasted, so again (\[e:BSWrec\]) holds.
In general it suffices to prove the following recurrence: $$\label{e:BSWrecEven} {\ensuremath{\mathsf{\#BWpebbles}}}(T^{h+2}_{d}) \le {\ensuremath{\mathsf{\#BWpebbles}}}(T^h_{d}) + d-1$$ We strengthen the induction hypothesis by asserting that during the pebbling of $T^{h}_d$ there is a [*critical time*]{} at which the root has a black pebble and there are at most ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)-(d-1)$ pebbles on the tree (counting the pebble on the root). This is easy to see when $h=2$ and $h=3$. We prove the recurrence as follows. We want to pebble $T^{h+2}_d$ using $d-1$ more pebbles than is required to pebble $T^h_d$. Let us call the children of the root $c_1,\ldots,c_d$. We start by placing black pebbles on $c_1,\ldots c_{d/2}$. We illustrate how to do this by showing how to place a black pebble on $c_{d/2}$ after there are black pebbles on nodes $c_1,\ldots c_{d/2-1}$. At this point we still have $d/2$ extra pebbles left among the original $d-1$. Let us assign the names $c'_1,\ldots c'_d$ to the children of $c_{d/2}$. Use the $d/2$ extra pebbles to put black pebbles on $c'_1,\ldots,c'_{d/2}$. Now run the procedure for pebbling the subtree rooted at $c'_{d/2+1}$ up to the critical time, so there is a black pebble on $c'_{d/2+1}$. Now place white pebbles on the remaining $d/2-1$ children of $c_{d/2}$, slide a black pebble up to $c_{d/2}$, remove the remaining black pebbles on the children of $c_{d/2}$, and complete the pebbling procedure for the subtree rooted at $c'_{d/2+1}$, so that subtree has no pebbles. Now remove the white pebbles on the remaining $d/2-1$ children of $c_{d/2}$ using the remaining $d/2-1$ extra pebbles. At this point there are black pebbles on nodes $c_1,\ldots,c_{d/2}$, and no other pebbles on the tree. We now place a black pebble on $c_{d/2+1}$ as follows. Let us assign the names $c''_1,\ldots c''_d$ to the children of $c_{d/2+1}$. Use the remaining $d/2-1$ extra pebbles to place black pebbles on $c''_1,\ldots,c''_{d/2-1}$. 
Now run the pebble procedure on the subtree rooted at $c''_{d/2}$ up to the critical time, so $c''_{d/2}$ has a black pebble. Now place white pebbles on the remaining $d/2$ children of $c_{d/2+1}$, slide a black pebble up to $c_{d/2+1}$, remove the remaining black pebbles on the children of $c_{d/2+1}$, place white pebbles on the remaining $d/2-1$ children of the root, slide a black pebble up to the root, and remove the remaining black pebbles from the children of the root. This is now the critical time for the procedure pebbling $T^{h+2}_d$. There is a black pebble on the root, $d/2-1$ white pebbles on the children of the root, $d/2$ white pebbles on the children of $c_{d/2+1}$, and at most ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d) -d$ pebbles on the subtree rooted at $c''_{d/2}$ (we’ve removed the black pebble on $c''_{d/2}$), making a total of at most ${\ensuremath{\mathsf{\#BWpebbles}}}(T^h_d)$ pebbles on the tree. Now remove the black pebble from the root and complete the pebble procedure for the subtree rooted at $c''_{d/2}$ to remove all pebbles from that subtree. There remain $d/2-1$ white pebbles on the children of the root and $d/2$ white pebbles on the children of $c_{d/2+1}$, making a total of $d-1$ white pebbles. Now remove each of the white pebbles on the children of $c_{d/2+1}$ by pebbling each of these subtrees in turn. Finally we can remove each of the remaining $d/2-1$ white pebbles on the children of the root by a process similar to the one used to place $d/2$ black pebbles on the children of the root at the beginning of the procedure (we now in effect have one more pebble to work with). [**Part III:**]{}\ Finally we give the lower bound for the case $d=2$: $${\ensuremath{\mathsf{\#BWpebbles}}}(T^{h}_{2}) \geq \lceil h/2 \rceil +1$$ Clearly 2 pebbles are required for the tree of height 2, and it is easy to show that 3 pebbles are required for the height 3 tree. 
In general it suffices to show that the binary tree $T$ of height $h+2$ requires at least one more pebble than the binary tree of height $h$. Suppose otherwise, and consider a pebbling of $T$ that uses the minimum number of pebbles required for the tree of height $h$, and assume that the pebbling is as short as possible. Let $t_1$ be a time when the root has a black pebble. For $i=3,4,5$ there must be a time $t_i$ when all the pebbles are on the subtree rooted at node $i$: each such subtree has height at least $h$, so black-pebbling its root requires all of the available pebbles to be on the subtree at some point. Such a black-pebbling moment exists because node $i$ must be pebbled at some point, and if the pebble is white then right after the white pebble is removed we could have placed a black pebble in its place (since we do not allow white sliding moves). Suppose that $\{t_1,t_3,t_4,t_5\}$ are ordered such that $$t_{i_1}<t_{i_2}<t_{i_3}<t_{i_4}$$ Then $t_1$ cannot be either $t_{i_3}$ or $t_{i_4}$ since otherwise at time $t_{i_2}$ there are no pebbles on the subtree rooted at node $i_1$ and hence its earlier pebbling was wasted (since the root has yet to be pebbled). Similarly if $t_1$ is either $t_{i_1}$ or $t_{i_2}$ then at time $t_{i_3}$ there are no pebbles on the subtree rooted at $i_4$, and since the root has already been pebbled the later pebbling of this subtree is wasted. Results for fractional pebbling ------------------------------- The concept of fractional pebbling is new. Determining the minimum number $p$ of pebbles required to fractionally pebble $T^h_d$ is important since $O(k^p)$ is the best known upper bound on the number of states required by a nondeterministic BP to solve [$FT_d^h(k)$]{} (see Theorem \[t:pebSim\]). It turns out that proving fractional pebbling lower bounds is much more difficult than proving whole black-white pebbling lower bounds. We are able to get exact fractional pebbling numbers for binary trees of height at most 4, but the best general lower bound comes from a nontrivial reduction to results of Klawe [@klawe], who proves bounds for pyramid graphs.
This bound is within $d/2+1$ pebbles of optimal for degree $d$ trees (at most 2 pebbles from optimal for binary trees). Our proof of the exact value of ${\ensuremath{\mathsf{\#FRpebbles}}}(T^4_2) = 3$ led us to conjecture that any nondeterministic BP computing $BT_2^4(k)$ requires $\Omega(k^3)$ states. In section \[s:PBbounds\] we provide evidence for that conjecture by proving that any nondeterministic *thrifty* BP requires $\Omega(k^3)$ states. The lower bound for height 3 and any degree follows from the lower bound of $\Omega(k^{\frac{3}{2}d-\frac{1}{2}})$ states for nondeterministic branching programs computing $BT_d^3(k)$ (Corollary \[c:HtThree\]). ![An optimal fractional pebbling sequence for the height 3 tree using 2.5 pebbles, all configurations included. The grey half circle means the *white* value of that node is $.5$, whereas unshaded area means absence of pebble value. So for example in the seventh configuration, node 2 has black value .5 and white value .5, node 3 has black value 1, and the remaining nodes all have black and white value 0. []{data-label="f:bin_h3_fract_ub"}](bin_h3_fract_ub_full.eps) We start by presenting a general result showing that fractional pebbling can save at most a factor of two over whole black-white pebbling for any DAG (directed acyclic graph). (Here the pebbling rules for a DAG are the same as for a tree, where we require that every sink node (i.e. every ‘root’) must have a whole black pebble at some point.) We will not use this result, but it does provide a simple proof of weaker lower bounds than those given in Theorem \[t:daryFract\] below. \[r:factorTwo\] If a DAG $D$ has a fractional pebbling using $p$ pebbles, then it has a black-white pebbling using $2p$ pebbles. Given a sequence $P$ of fractional pebbling moves for a DAG $D$ in which at most $p$ pebbles are used, we define a corresponding sequence $P'$ of pebbling moves in which at most $2p$ pebbles are used.
The sequence $P'$ satisfies the following invariant with respect to $P$. ($\spadesuit$) A node $v$ has a black pebble (resp. a white pebble) on it at time $t$ with respect to $P'$ iff $b(v) \geq 1/2$ (resp. $w(v) > 1/2$) at time $t$ with respect to $P$. An important consequence of this invariant is that if at time $t$ in $P$ node $v$ satisfies $b(v)+w(v)=1$ then at time $t$ in $P'$ node $v$ is pebbled. We describe when a pebble is placed or removed in $P'$. At the beginning, there are no pebbles on any nodes. $P'$ simulates $P$ as follows. Assume there is a certain configuration of pebbles on $D$, placed according to $P'$ after time $t-1$; we describe how $P$’s move at time $t$ is reflected in $P'$. If in the current move of $P$, $b(v)$ (resp. $w(v)$) increases to $1/2$ or greater (resp. greater than $1/2$) for some node $v$, then the current pebble, if any, on $v$, is removed and a black pebble (resp. a white pebble) is placed on $v$ in $P'$. Note that this is always consistent with the pebbling rules. If in the current configuration of $P'$ there is a black (resp. white) pebble on a vertex $v$, and in the current move of $P$, $b(v)$ falls below $1/2$ (resp. $w(v)$ falls to $1/2$ or below), then the pebble on $v$ is removed. Again, this is always consistent with the pebbling rules for the black-white pebble game and the fractional black-white pebble game. For all other kinds of moves of $P$, the configuration in $P'$ does not change. If $P$ is a valid sequence of fractional pebbling moves, then $P'$ is a valid sequence of pebbling moves. We argue that the cost of $P'$ is at most twice the cost of $P$, and that if there is a point at which the root has black pebble value $1$ with respect to $P$, then there is a point at which the root is black-pebbled in $P'$. These facts together establish the theorem. To demonstrate these facts, we simply observe that the invariant ($\spadesuit$) holds by induction on the time $t$ for the simulation we defined.
This implies that at any point $t$, the number of pebbles on $D$ with respect to $P'$ is at most the number of nodes $v$ for which $b(v) + w(v) \geq 1/2$ with respect to $P$, and is therefore at most twice the total value of pebbles with respect to $P$ at time $t$. Hence the cost of pebbling $D$ using $P'$ is at most twice the cost of pebbling $D$ using $P$. Also, if there is a time $t$ at which the root $r$ has black pebble value $1$ with respect to $P$, then $b(r) \ge 1/2$ at time $t$, so there is a black pebble on $r$ with respect to $P'$ at time $t$. The next result presents our best known bounds for fractionally pebbling trees $T^h_d$. \[t:daryFract\] $$(d-1)h/2 - d/2 \leq {\ensuremath{\mathsf{\#FRpebbles}}}(T^h_d) \leq (d-1)h/2 +1$$ $${\ensuremath{\mathsf{\#FRpebbles}}}(T^3_d) = (3/2)d - 1/2$$ $${\ensuremath{\mathsf{\#FRpebbles}}}(T^4_2) = 3$$ We divide the proof into several parts. First we prove the upper bound: $${\ensuremath{\mathsf{\#FRpebbles}}}(T^h_d) \leq (d-1)h/2 +1$$ Let $A_h$ be the algorithm for height $h \ge 2$. It is composed of two parts, $B_h$ and $C_h$. $B_h$ is run on the empty tree, and finishes with a black pebble on the root and $(d-1)(h-2)$ white half pebbles below the root (of these, $(d-1)(h-3)$ lie below the last child of the root). Next, the black pebble on the root is removed. Then $C_h$ is run on the result, and finishes with the empty tree. $B_h$ and $C_h$ both use $(d-1)h/2 + 1$ pebbles. $A_h'$ is the same as $A_h$ except that it finishes with a black half pebble on the root. It does this in the most straightforward way, by leaving a black half pebble after the root is pebbled, and so it uses $(d-1)h/2 + 1.5$ pebbles for all $h \ge 3$. $B_2$: Pebble the tree of height 2 using $d$ black pebbles. $B_h, h>2$: Run $A_{h-1}'$ on node 2 using $(d-1)(h-1)/2 + 1.5$ pebbles, and then on node 3 (if $3 \leq d$) using a total of $(d-1)(h-1)/2 + 2$ pebbles (counting the half pebble on node 2), and so on for nodes $2,3 \ldots, d$.
So $(d-1)(h-1)/2 + 1 + (d-1)/2 = (d-1)h/2 + 1$ pebbles are used when $A_{h-1}'$ is run on node $d$. Next run $B_{h-1}$ on node $d+1$, using $(d-1)(h-1)/2 + 1$ pebbles on the subtree rooted at $d+1$, for $(d-1)h/2 + 1$ pebbles in total (counting the black half pebbles on nodes $2, \ldots, d$). The result is a black pebble on node $d+1$, $(d-1)(h-3)$ white half pebbles under $d+1$, and from earlier $d-1$ black half pebbles on $2, \ldots, d$, for a total of $(d-1)(h-2)/2 + 1$ pebbles. Add a white half pebble to each of $2, \ldots, d$, then slide the black pebble from $d+1$ onto the root. Remove the black half pebbles from $2, \ldots, d$. Now there are $(d-1)(h-2)$ white half pebbles under the root, and a black pebble on the root. $C_2$: The tree of height 2 is empty, so return. $C_h$: The tree has no black pebbles and $(d-1)(h-2)$ white half pebbles. Note that if a sequence can pebble a tree with $p$ pebbles, then essentially the same sequence can be used to remove a white half pebble from the root with $p + .5$ pebbles. $C_h$ runs $C_{h-1}$ on node $d+1$, resulting in a tree with only a half white pebble on each of $2, \ldots, d$. This takes $(d-1)h/2+1$ pebbles. Then $A_{h-1}$ is run on each of $2,\ldots,d$ in turn, to remove the white half pebbles. The first such call of $A_{h-1}$ is the most expensive, using $(d-1)(h-1)/2+1+(d-1)/2 = (d-1)h/2 + 1$ pebbles. ### {#section .unnumbered} As noted earlier, the tight lower bound for height 3 and any degree: $${\ensuremath{\mathsf{\#FRpebbles}}}(T^3_d) \ge (3/2)d - 1/2$$ follows from the asymptotically tight lower bound of $\Omega(k^{\frac{3}{2}d-\frac{1}{2}})$ states for nondeterministic branching programs computing $BT_d^3(k)$ (Corollary \[c:HtThree\]). We do, however, have a direct proof of ${\ensuremath{\mathsf{\#FRpebbles}}}(T^3_2) \ge 5/2$: Assume to the contrary that there is a fractional pebbling with fewer than $2.5$ pebbles.
It follows that no non-leaf node $i$ can ever have $w(i)\ge 0.5$, since the children of $i$ must each have pebble value 1 in order to decrease $w(i)$. Since there must be some time $t_1$ during the pebbling sequence such that both the nodes $2$ and $3$ (the two children of the root) have pebble value 1, it follows that at time $t_1$, $b(2) > 0.5$ and $b(3)>0.5$. Hence for $i=2,3$ there is a largest $t_i \le t_1$ such that node $i$ is black-pebbled at time $t_i$ and $b(i)>0.5$ during the time interval $[t_i,t_1]$. (By ‘black-pebbled’ we mean at time $t_i-1$ both children of $i$ have pebble value 1, so that at time $t_i$ the value of $b(i)$ can be increased.) Assume w.l.o.g. that $t_2< t_3$. Then at time $t_3-1$ both children of node $3$ have pebble value 1 and $b(2)>0.5$, so the total pebble value exceeds $2.5$. ### {#section-1 .unnumbered} Before we prove the lower bound for all heights, which we do not believe is tight, we prove one more tight lower bound: $${\ensuremath{\mathsf{\#FRpebbles}}}(T^4_2) \ge 3$$ Let $C_0,C_1,\ldots,C_m$ be the sequence of pebble configurations in a fractional pebbling of the binary tree of height 4. We say that $C_t$ is the configuration at time $t$. Thus $C_0$ and $C_m$ have no pebbles, and there is a first time $t_1$ such that $C_{t_1+1}$ has a black pebble on the root. In general we say that step $t$ in the pebbling is the move from $C_t$ to $C_{t+1}$. In particular, if an internal node $i$ is black-pebbled at step $t$ then both children of $i$ have pebble value 1 in $C_t$ and node $i$ has a positive black pebble value in $C_{t+1}$. Note that if any configuration $C_t$ has a whole white pebble on some internal node then both children must have pebble value 1 to remove that pebble, so some configuration will have at least pebble value 3, which is what we are to prove. Hence we may assume that no node in any $C_t$ has white pebble value 1, and hence every node must be black-pebbled at some step.
For each node $i$ we associate a critical time $t_i$ such that $i$ is black-pebbled at step $t_i$ and hence the children of $i$ each have pebble value 1 in configuration $C_{t_i}$. The time $t_1$ associated with the root (as above) is the first step at which the root is black-pebbled, and hence nodes 2 and 3 each have pebble value 1 in $C_{t_1}$. In general if $t_i$ is the critical time for internal node $i$, and $j$ is a child of $i$, then the critical time $t_j$ for $j$ is the largest $t<t_i$ such that $j$ is black-pebbled at step $t$. [**Sibling Assumption:**]{} We may assume w.l.o.g. (by applying an isomorphism to the tree) that if $i$ and $j$ are siblings and $i<j$ then $t_i<t_j$. In general the critical times for a path from root to leaf form a descending chain. In particular $$t_7< t_3 < t_1$$ For each $i>1$ we define $b_i$ and $w_i$ to be the black and white pebble values of node $i$ at the critical time of its parent. Thus for all $i>1$ $$\label{e:bwsum} b_i + w_i =1$$ Now let $p$ be the maximum pebble value of any configuration $C_t$ in the pebbling. Our task is to prove that $p\ge 3$. After the critical time of an internal node $i$ the white pebble values of its two children must be removed. When the first one is removed, both white values are present along with pebble value 1 on two children, so $$w_{2i} + w_{2i+1} +2 \le p$$ In particular for $i = 1,3$ we have $$\begin{aligned} w_2 + w_3 +2 & \le & p \label{e:wtwothree} \\ w_6 + w_7 +2 & \le & p \label{e:wsevtwo}\end{aligned}$$ Now we consider two cases, depending on the order of $t_2$ and $t_7$. [**CASE I:**]{} $t_2<t_7$. Then by the Sibling Assumption, at time $t_7$ (when node 7 is black-pebbled) we have $$\label{e:twosix} b_2+b_6 +2 \le p$$ Now if we also suppose that $w_6$ is not removed until after $t_1$ (CASE IA) then when the first of $w_2,w_6$ is removed we have $$w_2+ w_6 +2 \le p$$ so adding this to (\[e:twosix\]) and using (\[e:bwsum\]) we see that $p\ge 3$ as required.
However if we suppose that $w_6$ is removed before $t_1$ (CASE IB) (but necessarily after $t_3 > t_2$) then we have $$b_2+b_3 + w_6 +2 \le p$$ and we can add this to (\[e:wtwothree\]) to again obtain $p \ge 3$. [**CASE II:**]{} $t_7< t_2$. Then $t_6<t_7<t_2<t_3$ so at time $t_2$ we have $$b_6+b_7 +2 \le p$$ so adding this to (\[e:wsevtwo\]) we again obtain $p\ge 3$. ### {#section-2 .unnumbered} To prove the general lower bound, we need the following lemma: \[l:rational\] For every finite DAG there is an optimal fractional B/W pebbling in which all pebble values are rational numbers. (This result is robust under various definitions of pebbling; for example with or without sliding moves, and whether or not we require the root to end up pebbled.) Consider an optimal B/W fractional pebbling algorithm. Let the variables $b_{v,t}$ and $w_{v,t}$ stand for the black and white pebble values of node $v$ at step $t$ of the algorithm. [**Claim:**]{} We can define a set of linear inequalities with 0–1 coefficients which suffice to ensure that the pebbling is legal. For example, all variables are non-negative, $b_{v,t} + w_{v,t} \le 1$, initially all variables are 0, and finally the nodes have the values that we want, node values remain the same on steps in which nothing is added or subtracted, and if the black value of a node is increased at a step then all its children must be 1 in the previous step, etc. Now let $p$ be a new variable representing the maximum pebble value of the algorithm. We add an inequality for each step $t$ that says the sum of all pebble values at step $t$ is at most $p$. Any optimal solution to the linear programming problem (minimize $p$ subject to all of the above inequalities) gives an optimal pebbling algorithm for the graph. But every LP program with rational coefficients has a rational optimal solution (if it has any optimal solution).
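Lemma \[l:rational\] guarantees a rational optimal pebbling; for the height-3 binary tree the optimal value $5/2$ established earlier is in fact achieved with half-integral values. The following sketch (ours, not from the paper; the move encoding is our rendering of the fractional rules with black sliding moves, cf. Definition \[d:pebbling\]) replays such a pebbling, checks that every move is legal, and reports the peak total pebble value:

```python
# Replay and validate a fractional pebbling (ours) of the height-3 binary
# tree with nodes 1 (root), 2,3 (children), 4..7 (leaves).  Removing black
# value and adding white value is always free; adding black value, removing
# white value, or sliding a whole black pebble requires all children to
# have total value 1.
from fractions import Fraction as F

HALF = F(1, 2)
CHILDREN = {1: (2, 3), 2: (4, 5), 3: (6, 7)}

def replay(moves):
    """Check each move is legal and return the peak total pebble value."""
    b = {v: F(0) for v in range(1, 8)}
    w = {v: F(0) for v in range(1, 8)}
    peak, root_done = F(0), False
    for op, v, x in moves:
        kids = CHILDREN.get(v, ())
        full = all(b[u] + w[u] == 1 for u in kids)   # children all value 1
        if op == 'b+':                                # needs full children
            assert full
            b[v] += x
        elif op == 'b-':                              # always legal
            b[v] -= x
        elif op == 'w+':                              # always legal
            w[v] += x
        elif op == 'w-':                              # needs full children
            assert full
            w[v] -= x
        elif op == 'slide':   # move a whole black pebble from child x to v
            assert full and b[x] == 1
            b[x], b[v] = F(0), b[v] + 1
        assert 0 <= b[v] and 0 <= w[v] and b[v] + w[v] <= 1
        root_done = root_done or b[1] == 1
        peak = max(peak, sum(b.values()) + sum(w.values()))
    assert root_done and all(b[v] == w[v] == 0 for v in b)
    return peak

moves = [
    ('b+', 4, 1), ('b+', 5, 1), ('slide', 2, 4),   # black-pebble node 2
    ('b-', 5, 1), ('b-', 2, HALF),                 # keep only half of it
    ('b+', 6, 1), ('b+', 7, 1), ('slide', 3, 6),   # black-pebble node 3
    ('b-', 7, 1), ('w+', 2, HALF),                 # node 2 back to value 1
    ('slide', 1, 3), ('b-', 1, 1),                 # pebble the root, drop it
    ('b-', 2, HALF),                               # w(2) = 1/2 remains
    ('b+', 4, 1), ('b+', 5, 1), ('w-', 2, HALF),   # discharge the white debt
    ('b-', 4, 1), ('b-', 5, 1),
]
print(replay(moves))    # prints 5/2
```

After the tenth move the configuration matches the seventh configuration described in the caption of Figure \[f:bin\_h3\_fract\_ub\]: node 2 carries black value $.5$ and white value $.5$, node 3 a whole black pebble. The white half on node 2 is then discharged at the end by re-pebbling leaves 4 and 5.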
### {#section-3 .unnumbered} Now we can prove the lower bound for all heights: $${\ensuremath{\mathsf{\#FRpebbles}}}(T^h_d) \geq (d-1)h/2 - d/2$$ The high-level strategy for the proof is as follows. Given $d$ and $h$, we transform the tree $T_{d}^{h}$ into a DAG $G_{d,h}$ such that a lower bound on ${\ensuremath{\mathsf{\#BWpebbles}}}(G_{d,h})$ gives a lower bound for ${\ensuremath{\mathsf{\#FRpebbles}}}(T_{d}^{h})$. To analyze ${\ensuremath{\mathsf{\#BWpebbles}}}(G_{d,h})$, we use a result of Klawe [@klawe], who shows that for a DAG $G$ that satisfies a certain “niceness” property, ${\ensuremath{\mathsf{\#BWpebbles}}}(G)$ can be given in terms of ${\ensuremath{\mathsf{\#pebbles}}}(G)$ (and the relationship is tight to within a constant less than one). The black pebbling cost is typically easier to analyze. In our case, $G_{d,h}$ does not satisfy the niceness property as-is, but just by removing some edges from $G_{d,h}$, we get a new DAG $G'_{d,h}$ which is nice. We then show how to exactly compute ${\ensuremath{\mathsf{\#pebbles}}}(G'_{d,h})$, which yields a lower bound on ${\ensuremath{\mathsf{\#BWpebbles}}}(G_{d,h})$, and hence on ${\ensuremath{\mathsf{\#FRpebbles}}}(T_{d}^{h})$. We first motivate the construction $G_{d,h}$ and show that the whole black-white pebbling number of $G_{d,h}$ is related to the fractional pebbling number of $T_{d}^{h}$. We first use Lemma \[l:rational\] to “discretize” the fractional pebble game. The following are the rules for the discretized game, where $c$ is a parameter: (i) for any node $v$, decrease $b(v)$ or increase $w(v)$ by $1/c$; (ii) for any node $v$, including leaf nodes, if all the children of $v$ have value 1, then increase $b(v)$ or decrease $w(v)$ by $1/c$. By Lemma \[l:rational\], we can assume all pebble values are rational, and if we choose $c$ large enough it is not a restriction that pebble values can only be changed by $1/c$.
Since sliding moves are not allowed, the pebbling cost for this game is at most one more than the cost of fractional pebbling with black sliding moves. Now we show how to construct $G_{d,h}$ (for an example, see figure \[f:reductionG\]). We will split up each node of $T_{d}^{h}$ into $c$ nodes, so that the discretized game corresponds to the whole black-white pebble game on the new graph. Specifically, the cost of the whole black-white pebble game on the new graph will be exactly $c$ times the cost of the discretized game on $T_{d}^{h}$. In place of each node $v$ of $T_{d}^{h}$, $G_{d,h}$ has $c$ nodes $v[1], \ldots, v[c]$; having $c'$ of the $v[i]$ pebbled simulates $v$ having value $c'/c$. In place of each edge $(u,v)$ of $T_{d}^{h}$ is a copy of the complete bipartite graph $(U,V)$, where $U$ contains nodes $u[1] \ldots u[c]$ and $V$ contains nodes $v[1] \ldots v[c]$. If $u$ was a parent of $v$ in the tree, then all the edges go from $V$ to $U$ in the corresponding complete bipartite graph. Finally, a new “root” is added at height $h+1$ with edges from each of the $c$ nodes at height $h$[^2]. So every node at height $h-1$ and lower has $c$ parents, and every internal node except for the root has $dc$ children. ![$G_{2,3}$ with $c=3$[]{data-label="f:reductionG"}](fractpeb_lb_G_c3.eps) To lower bound ${\ensuremath{\mathsf{\#BWpebbles}}}(G_{d,h})$, we will use Klawe’s result [@klawe]. Klawe showed that for “nice” graphs $G$, the black-white pebbling cost of $G$ (with black and white sliding moves) is at least $\lfloor {\ensuremath{\mathsf{\#pebbles}}}(G)/2 \rfloor + 1$. Of course, the black-white pebbling cost without sliding moves is at least the cost with them. We define what it means for a graph to be nice in Klawe’s sense. \[d:nice\] A DAG $G$ is nice if the following conditions hold: 1.
If $u_1$, $u_2$ and $u$ are nodes of $G$ such that $u_1$ and $u_2$ are children of $u$ (i.e., there are edges from $u_1$ and $u_2$ to $u$), then the cost of black pebbling $u_1$ is equal to the cost of black pebbling $u_2$ 2. If $u_1$ and $u_2$ are children of $u$, then there is no path from $u_1$ to $u_2$ or from $u_2$ to $u_1$. 3. If $u, u_1, \ldots, u_m$ are nodes none of which has a path to another, then there are node-disjoint paths $P_1, \ldots, P_m$ such that $P_i$ is a path from a leaf (a node with in-degree 0) to $u_i$ and there is no path between $u$ and any node in $P_i$. $G_{d,h}$ is not nice in Klawe’s sense. We will delete some edges from $G_{d,h}$ to produce a nice graph $G'_{d,h}$ and we will analyze ${\ensuremath{\mathsf{\#pebbles}}}(G'_{d,h})$. Note that a lower bound on ${\ensuremath{\mathsf{\#BWpebbles}}}(G'_{d,h})$ is also a lower bound on ${\ensuremath{\mathsf{\#BWpebbles}}}(G_{d,h})$. The following definition will help in explaining the construction of $G'_{d,h}$ as well as for specifying and proving properties of certain paths. For $u \in G_{d,h}$, let $T_{d}^{h}(u)$ be the node in $T_{d}^{h}$ such that $T_{d}^{h}(u)[i] = u$ for some $i \leq c$. For $v,v' \in T_{d}^{h}$, we say $v < v'$ if $v$ is visited before $v'$ in an inorder traversal of $T_{d}^{h}$. For $u,u' \in G_{d,h}$, we say $u < u'$ if $T_{d}^{h}(u) < T_{d}^{h}(u')$ or if for some $v \in T_{d}^{h}$, $u = v[i]$, $u' = v[j]$, and $i < j$. $G_{d,h}'$ is obtained from $G_{d,h}$ by removing $c-1$ edges from each internal node except the root, as follows (for an example, see figure \[f:reductionGprime\]). For each internal node $v$ of $T$, consider the corresponding nodes $v[1], v[2], \ldots, v[c]$ of $G_{d,h}$. Remove the edges from $v[i]$ to its $i-1$ smallest and $c-i$ largest children. So in the end each internal node except the root has $c(d-1)+1$ children. 
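The pruning step can be checked mechanically. The following sketch (ours, not from the paper; it is restricted to $d=2$, uses heap-indexed tree nodes, and the name `build_G_prime` is our own) builds the pruned graph, ordering the copies of the left child before those of the right child as in the definition of ‘$<$’, and confirms that every internal node except the new root keeps exactly $c(d-1)+1$ children:

```python
# Sketch (ours) of the pruned graph G'_{2,h} for the heap-indexed binary
# tree: each tree node v becomes copies (v,1),...,(v,c), and copy (v,i)
# keeps all but the i-1 smallest and c-i largest of its dc children.
# A single new root sits above the c copies of the tree root.

def build_G_prime(h, c):
    n = 2 ** h - 1
    children = {'root': [(1, i) for i in range(1, c + 1)]}
    for v in range(1, n + 1):
        for i in range(1, c + 1):
            if 2 * v > n:                    # leaf copies have no children
                children[(v, i)] = []
                continue
            # children copies in increasing '<' order: all copies of the
            # left child (by copy index) precede those of the right child
            ordered = [(2 * v, j) for j in range(1, c + 1)] + \
                      [(2 * v + 1, j) for j in range(1, c + 1)]
            children[(v, i)] = ordered[i - 1 : len(ordered) - (c - i)]
    return children

G = build_G_prime(3, 3)
internal = [x for x in G if x != 'root' and G[x]]
# every pruned internal copy keeps c(d-1) + 1 = 4 of its dc = 6 children
print(sorted({len(G[x]) for x in internal}))    # prints [4]
```

For $d=2$ and $c=3$ each internal copy thus keeps $c(d-1)+1 = 4$ children, while the new root keeps its $c$ edges to the copies of the tree root.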
![$G_{2,3}'$ with $c=3$[]{data-label="f:reductionGprime"}](fractpeb_lb_Gprime_c3.eps) We first analyze ${\ensuremath{\mathsf{\#pebbles}}}(G'_{d,h})$ and then show that $G'_{d,h}$ is nice. We show that ${\ensuremath{\mathsf{\#pebbles}}}(G_{d,h}') = c[(d-1)(h-1) + 1]$. Note that an upper bound of $c[(d-1)(h-1) + 1]$ is attained using a simple recursive algorithm similar to that used for the binary tree. For the lower bound, consider the earliest time $t$ when all paths from a leaf to the root are blocked. Figure \[f:fract\_bottleneck\] is an example of the type of pebbling configuration that we are about to analyze. The last pebble placed must have been placed at a leaf, since otherwise $t-1$ would be an earlier time when all paths from a leaf to the root are blocked. Let $P$ be the newly-blocked path from a leaf to the root. Consider the set $S = \{ u \in G_{d,h}' \ \vert\ u \not \in P \text{ and $u$ is a child of a node in } P \}$ of size $c (d-1)(h-1) + (c-1) = c[(d-1)(h-1) + 1] - 1$ (the $c-1$ is contributed by nodes at height $h$). We will give a set of mutually node-disjoint paths $\{P_u\}_{u \in S}$ such that $P_u$ is a path from a leaf to $u$ and $P_u$ does not intersect $P$. At time $t-1$, there must be at least one pebble on each $P_u$, since otherwise there would still be an open path from a leaf to the root at time $t$. Also counting the leaf node that is pebbled at $t$ gives $c[(d-1)(h-1) + 1]$ pebbles. The left-most (right-most) path to $u$ is the unique path ending at $u$ determined by choosing the smallest (largest) child at every level. $P(l)$ is the node of path $P$ at height $l$, if it exists. For each $u \in S$ at height $l$, if $u$ is less than (greater than) $P(l)$ then make $P_u$ the left-most (right-most) path to $u$. Now we need to show that the paths $\{P_u\}_{u \in S} \cup \{P\}$ are disjoint. The following fact is clear from the definition of $G_{d,h}'$.
\[l:thefact\] For any $u,u' \in G_{d,h}'$, if $u < u'$ then the smallest child of $u$ is not a child of $u'$, and the largest child of $u'$ is not a child of $u$. First we show that $P_u$ and $P$ are disjoint. The following lemma will help now and in the proof that $G'_{d,h}$ is nice. \[l:paths\] For $u,v \in G_{d,h}'$ with $u < v$, if there is no path from $u$ to $v$ or from $v$ to $u$ then the left-most path to $u$ does not intersect any path to $v$ from a leaf, and the right-most path to $v$ does not intersect any path to $u$ from a leaf. Suppose otherwise and let $P_u'$ be the left-most path to $u$, and $P_v'$ a path to $v$ that intersects $P_u'$. Since there is no path between $u$ and $v$, there is a height $l$, one greater than the height where the two paths first intersect, such that $P_u'(l), P_v'(l)$ are defined and $P_u'(l) < P_v'(l)$. But then from Lemma \[l:thefact\] $P_u'(l-1) \not = P_v'(l-1)$, a contradiction. The proof for the second part of the lemma is similar. That $P_u$ and $P$ are disjoint follows from using Lemma \[l:paths\] on $u$ and the sibling of $u$ in $P$. Next we show that for distinct $u,u' \in S$, $P_u$ does not contain $u'$. Suppose it does. Assume $P_u$ is the left-most path to $u$ (the other case is similar). Since $u \not = u'$, there must be a height $l$ such that $P_u(l)$ is defined and $P_u(l-1) = u'$. From the definition of $S$, we know $P(l)$ is also a parent of $u'$. From the construction of $P_u$, since we assumed $P_u$ is the left-most path to $u$, it must be that $P_u(l) < P(l)$. But then Lemma \[l:thefact\] tells us that $u'$ cannot be a child of $P(l)$, a contradiction. The proof that $P_u$ and $P_{u'}$ do not intersect is by contradiction. Assuming that there are $u,u' \in S$ such that $P_u$ and $P_{u'}$ intersect, there is a height $l$, one greater than the height where they first intersect, such that $P_u(l) \not = P_{u'}(l)$. 
Note that $P_u$ and $P_{u'}$ are both left-most paths or both right-most paths, since otherwise in order for them to intersect they would need to cross $P$. But then from Lemma \[l:thefact\] $P_u(l-1) \not = P_{u'}(l-1)$, a contradiction. This is an example of a bottleneck of the specified structure for $G_{d,h}'$ corresponding to the height 3 binary tree, with $c=3$: ![A possible black pebbling bottleneck of $G_{2,3}'$, with $c=3$[]{data-label="f:fract_bottleneck"}](fractpeb_lb_bottleneck_height3_c3.eps) The last step is to prove that $G_{d,h}'$ is nice. There are three properties specified in Definition \[d:nice\]. Property 2 is obviously satisfied. For property 1, the argument used to give the black pebbling lower bound of $c[(d-1)(h-1) + 1]$ can be used to give a black pebbling lower bound of $c(d-1)(l-1) + 1$ for any node at height $l \leq h$ (the 1 is for the last node pebbled, and recall the root is at height $h+1$), and that bound is tight. For property 3, choose $P_i$ to be the left-most (right-most) path from $u_i$ if $u_i$ is less than (greater than) $u$. Then use Lemma \[l:paths\] on each pair of nodes in $\{u, u_1,\ldots, u_m\}$. Since ${\ensuremath{\mathsf{\#pebbles}}}(G'_{d,h}) = c[(d-1)(h-1)+ 1]$, we have $${\ensuremath{\mathsf{\#BWpebbles}}}(G_{d,h}) \geq {\ensuremath{\mathsf{\#BWpebbles}}}(G'_{d,h}) \geq c[(d-1)(h-1) + 1]/2$$ and thus that the pebbling cost for the discretized game on $T_{d}^{h}$ is at least $(d-1)(h-1)/2 + .5$, which implies ${\ensuremath{\mathsf{\#FRpebbles}}}(T_{d}^{h}) \geq (d-1)(h-1)/2 - .5$. White sliding moves {#s:wSlide} ------------------- In the definition of fractional pebbling (Definition \[d:pebbling\]) we allow black sliding moves but not white sliding moves. 
To allow white sliding moves we would add a clause (iv): For every internal node $i$, if $w(i)=1$ and $j$ is a child of $i$ and every child of $i$ except $j$ has total pebble value 1, then decrease $w(i)$ to 0 and increase $w(j)$ so that node $j$ has total pebble value 1. We did not include this move in the original definition because a nondeterministic $k$-way BP solving [$FT_d^h(k)$]{} or [$BT_d^h(k)$]{} does not naturally simulate it. The natural way to simulate such a move would be to verify the conjectured value of node $i$ (conjectured when the white pebble was placed on $i$) by comparing it with $f_i(v_{j_1},\ldots,v_{j_d})$, where $j_1,\ldots,j_d$ are the children of $i$. But this would require the BP to remember a $(d+1)$-tuple of values, whereas potentially only $d$ pebbles are involved. White sliding moves definitely reduce the number of pebbles required to pebble some trees. For example, the binary tree $T^3_2$ can easily be pebbled with 2 pebbles using white sliding moves, but requires 2.5 pebbles without (Theorem \[t:daryFract\]). The next result shows that $8/3$ pebbles suffice for pebbling $T^4_2$ with white sliding moves, whereas 3 pebbles are required without (Theorem \[t:daryFract\]). \[t:eight-thirds\] The binary tree of height 4 can be pebbled with $8/3$ pebbles using white sliding moves. The height 3 binary tree can be pebbled with 2 pebbles. Use that sequence on node 2, but leave a third black pebble on node 2. That takes 7/3 pebbles. Put black pebbles on nodes 12 and 13. Slide a third black pebble up to node 6. Remove the pebbles on nodes 12 and 13. Put black pebbles on nodes 14 and 15 – this is the first configuration with 8/3 pebbles. Slide the pebble on node 14 up to node 7. Remove the pebble from 15. Put 2/3 of a white pebble on node 6. Slide the black pebble on node 7 up to node 3. Remove a third black pebble from node 6. Put 2/3 of a white pebble on node 2 – the resulting configuration has 8/3 pebbles.
Slide the black pebble on node 3 up to the root. Remove all black pebbles. At this point there is 2/3 of a white pebble on both node 2 and node 6. Put a black pebble on node 12 and a third black pebble on node 13 – another bottleneck. Slide the 2/3 white pebble on node 6 down to node 13. Remove the pebbles from nodes 12 and 13. Finally, use 8/3 pebbles to remove the 2/3 white pebble from node 2. Branching Program Bounds {#s:PBbounds} ======================== In this section we prove tight bounds (up to a constant factor) for the number of states required for both deterministic and nondeterministic $k$-way branching programs to solve the Boolean problems [$BT_d^h(k)$]{} for all trees of heights 2 and 3. (The bound is obviously $\Theta(k^d)$ for trees of height 2, because there are $d + k^d$ input variables.) For every height $h\ge 2$ we prove upper bounds for deterministic [*thrifty*]{} programs which solve [$FT_d^h(k)$]{} (Theorem \[t:BPUpper\], (\[e:dFUpper\])), and show that these bounds are optimal for degree $d=2$ even for the Boolean problem [$BT_d^h(k)$]{} (Theorem \[t:detThriftLB\]). We prove upper bounds for nondeterministic thrifty programs solving [$BT_d^h(k)$]{} in general, and show that these are optimal for binary trees of height 4 or less (Theorems \[t:BPUpper\] and \[t:thrifFourtwo\]). For the nondeterministic case our best BP upper bounds for every $h\ge 2$ come from fractional pebbling algorithms via Theorem \[t:pebSim\]. For the deterministic case our best bounds for the function problem [$FT_d^h(k)$]{} come from black pebbling via the same theorem, although we can improve on them for the Boolean problem [$BT_2^h(k)$]{} by a factor of $\log k$ (for $h\ge 3$).
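The move sequence in the proof of Theorem \[t:eight-thirds\] above can be double-checked by simple bookkeeping. The sketch below (hypothetical code, not part of the proof) uses the heap numbering of $T^4_2$ (root 1, children of node $i$ are $2i$ and $2i+1$), starts from the third of a black pebble left on node 2 after the height-3 phase, and records each listed move as value changes, with a slide recorded as a removal followed by an addition; it tracks only total pebble value and does not check that each move is legal.

```python
from fractions import Fraction as F

third = F(1, 3)
val = {n: F(0) for n in range(1, 16)}  # combined pebble value per node
val[2] = third                         # left over from pebbling node 2's subtree

moves = [
    (12, 1), (13, 1),              # pebble leaves 12 and 13
    (12, -third), (6, third),      # slide a third black pebble up to node 6
    (12, -F(2, 3)), (13, -1),      # remove the pebbles on 12 and 13
    (14, 1), (15, 1),              # first configuration with 8/3 pebbles
    (14, -1), (7, 1),              # slide the pebble on 14 up to node 7
    (15, -1),                      # remove the pebble from 15
    (6, F(2, 3)),                  # 2/3 of a white pebble on node 6
    (7, -1), (3, 1),               # slide the pebble on 7 up to node 3
    (6, -third),                   # remove a third black pebble from node 6
    (2, F(2, 3)),                  # 2/3 of a white pebble on node 2: 8/3 again
    (3, -1), (1, 1),               # slide the pebble on 3 up to the root
    (1, -1), (2, -third),          # remove all black pebbles
    (12, 1), (13, third),          # another bottleneck
    (6, -F(2, 3)), (13, F(2, 3)),  # the white pebble slides down from 6 to 13
    (12, -1), (13, -1),            # remove the pebbles from 12 and 13
]

totals = []
for node, delta in moves:
    val[node] += delta
    totals.append(sum(val.values()))

assert max(totals) == F(8, 3)        # never more than 8/3 pebbles
assert sum(val.values()) == F(2, 3)  # only 2/3 of a white pebble on node 2 left
```

The remaining $2/3$ white pebble on node 2 is the one removed, using $8/3$ pebbles, in the final step of the proof.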
\[t:BPUpper\] For all $h,d\ge 2$ $$\begin{aligned} {\ensuremath{\mathsf{\#detFstates}^h_d(k)}}& = & O(k^{(d-1)h-d+2}) \label{e:dFUpper} \\ {\ensuremath{\mathsf{\#detBstates}^h_d(k)}}& = & O(k^{(d-1)h-d+2} / \log k), \mbox{ for $h\ge 3$} \label{e:dBUpper} \\ {\ensuremath{\mathsf{\#ndetBstates}^h_d(k)}}& = & O(k^{(d-1)(h/2)+1}) \label{e:nBUpper} \end{aligned}$$ The first and third bounds are realized by thrifty programs. The first and third bounds follow from Theorem \[t:pebSim\] (which states that pebbling upper bounds give rise to upper bounds for the size of thrifty BPs) and from Theorems \[t:blackSliding\] and \[t:daryFract\] (which give the required pebbling upper bounds). To prove (\[e:dBUpper\]) we use a branching program which implements the algorithm below. Here we have a parameter $m$, and choosing $m= \lceil \log k^{d-1} - \log\log k^{d-1} \rceil$ suffices to show ${\ensuremath{\mathsf{\#detBstates}^h_d(k)}}= O(k^{(d-1)(h-1)+1} / \log k^{d-1})$, from which (\[e:dBUpper\]) follows. We estimate the number of states required up to a constant factor. 1) Compute $v_2$ (the value of node 2 in the heap ordering), using the black pebbling algorithm for the principal left subtree. This requires $k^{(d-1)(h-2)+1}$ states. Divide the $k$ possible values for $v_2$ into $\lceil k/m \rceil$ blocks of size $m$. 2) Remember the block number for $v_2$, and compute $v_3,\ldots,v_{d+1}$. This requires $k/m \times k^{d-2} \times k^{(d-1)(h-2) + 1} = k^{(d-1)(h-1)+1}/m$ states. 3) Remember $v_3,\ldots,v_{d+1}$ and the block number for $v_2$. Compute $f_1(a, v_3, \ldots, v_{d+1})$ for each of the $m$ possible values $a$ for $v_2$ in its block, and keep track of the set of $a$’s for which $f_1 = 1$. This requires $k^{d-1} \times k/m \times m \times 2^m = k^d 2^m$ states. 4) Remember just the set of possible $a$’s (within its block) from above (there are $2^m$ possibilities). Compute $v_2$ again and accept or reject depending on whether $v_2$ is in the subset.
This requires $k^{(d-1)(h-2)+1} 2^m$ states. The total number of states has order the maximum of $ k^{(d-1)(h-1)+1}/m$ and $k^{(d-1)(h-2)+1} 2^m$, which is at most $$k^{(d-1)(h-1)+1} / (\log k^{d-1} - \log \log k^{d-1})$$ for $m = \log k^{d-1} - \log \log k^{d-1}$. We combine the above upper bounds with the Neciporuk lower bounds in Subsection \[s:NecLB\], Figure \[neci\], to obtain the following. \[c:HtThree\] For all $d\ge 2$ $$\begin{aligned} {\ensuremath{\mathsf{\#detFstates}^3_d(k)}}& = & \Theta(k^{2d-1}) \\ {\ensuremath{\mathsf{\#detBstates}^3_d(k)}}& = & \Theta(k^{2d-1}/\log k) \\ {\ensuremath{\mathsf{\#ndetBstates}^3_d(k)}}& = & \Theta(k^{(3/2)d - 1/2}) \end{aligned}$$ The Neciporuk method {#s:NecLB} -------------------- The Neciporuk method still yields the strongest explicit binary branching program size lower bounds known today, namely $\Omega(\frac{n^2}{(\log n)^2})$ for deterministic [@ne66] and $\Omega(\frac{n^{3/2}}{\log n})$ for nondeterministic (albeit for a weaker nondeterministic model in which states have bounded outdegree [@pu87], see [@ra91]). By *applying the Neciporuk method* to a $k$-way branching program $B$ computing a function $f:[k]^m \rightarrow R$, we mean the following well known steps [@ne66]: 1. Upper bound the number $N(s,v)$ of (syntactically) distinct branching programs of type $B$ having $s$ non-final states, each labelled by one of $v$ variables. 2. Pick a partition $\{V_1,\ldots, V_p\}$ of $[m]$. 3. For $1\leq i\leq p$, lower bound the number $r_{V_i}(f)$ of restrictions $f_{V_i}: [k]^{|V_i|} \rightarrow R$ of $f$ obtainable by fixing values of the variables in $[m]\setminus V_i$. 4. Then size($B$) $\geq$ $|R|+\sum_{1\leq i\leq p} s_i$, where $s_i = \min \{\ s : N(s,|V_i|) \geq r_{V_i}(f)\ \}$. \[necilowerbound\] Applying the Neciporuk method yields Figure \[neci\]. 
Figure \[neci\] tabulates the resulting lower bounds, with one row for each model: deterministic $k$-way branching programs, deterministic binary branching programs, nondeterministic $k$-way BPs, and nondeterministic binary BPs. The $\Omega(n^{3/2}/(\log n)^{3/2})$ binary nondeterministic BP lower bound for the [$BT_d^h(k)$]{} problem and in particular for [$BT_2^3(k)$]{} applies to the number of *states* when these can have arbitrary outdegree. This seems to improve on the previously best known bound of $\Omega(n^{3/2}/\log n)$, slightly larger but obtained for the weaker model in which states have bounded degree, or equivalently, for the switching and rectifier network model in which size is defined as the number of edges [@pu87; @ra91]. We have ${\ensuremath{N_{\mbox{\scriptsize det}}^{\mbox{\scriptsize $k$-way}}}}(s,v)\leq v^s \cdot (s+|R|)^{sk}$ for the number of deterministic BPs and ${\ensuremath{N_{\mbox{\scriptsize nondet}}^{\mbox{\scriptsize $k$-way}}}}(s,v) \leq v^s \cdot (|R|+1)^{sk}\cdot(2^s)^{sk}$ for nondeterministic BPs having $s$ non-final states, each labelled with one of $v$ variables. To see the bound on ${\ensuremath{N_{\mbox{\scriptsize nondet}}^{\mbox{\scriptsize $k$-way}}}}(s,v)$, note that edges labelled $i\in[k]$ can connect a state $S$ to zero or one state among the final states and can connect $S$ independently to any number of states among the non-final states. The only decision to make when applying the Neciporuk method is the choice of the partition of the input variables. Here every entry in Figure \[neci\] is obtained using the same partition (with the proviso that a $k$-ary variable in the partition is replaced by $\log k$ binary variables when we treat $2$-way branching programs). We will only partition the set $V$ of $k$-ary [$FT_d^h(k)$]{} or [$BT_d^h(k)$]{} variables that pertain to internal tree nodes other than the root (we will neglect the root and leaf variables). Each internal tree node has $d-1$ siblings and each sibling involves $k^d$ variables.
By a *litter* we will mean any set of $d$ $k$-ary variables that pertain to precisely $d$ such siblings. We obtain our partition by writing $V$ as a union of $$k^d \cdot \sum_{i=0}^{h-3}d^i = k^d \cdot \frac{d^{h-2}-1}{d-1}$$ litters. (Specifically, each litter can be defined as $$\{f_i(j_1,j_2,\ldots,j_d),f_{i+1}(j_1,j_2,\ldots,j_d),\ldots,f_{i+d-1}(j_1,j_2,\ldots,j_d)\}$$ for some $1\leq j_1,j_2,\ldots,j_d\leq k$ and some $d$ siblings $i,i+1,\ldots,i+d-1$.) Consider such a litter $L$. We claim that $|R|^{k^d}$ distinct functions $f_L : [k]^d \rightarrow R$ can be induced by setting the variables outside of $L$, where $|R|=k$ in the case of [$FT_d^h(k)$]{} and $|R|=2$ in the case of [$BT_d^h(k)$]{}. Indeed, to induce any such function, fix the “descendants of the litter $L$” to make each variable in $L$ relevant to the output; then, set the variables pertaining to the immediate ancestor node $\nu$ of the siblings forming $L$ to the appropriate $k^d$ values, as if those were the final output desired; finally, set all the remaining variables in a way such that the values in $\nu$ percolate from $\nu$ to the root. It remains to do the calculations. We illustrate two cases. Similar calculations yield the other entries in Figure \[neci\]. *Nondeterministic $k$-way branching programs computing [$FT_d^h(k)$]{}*. Here $|R|=k$. In a correct program, the number $s$ of states querying one of the $d$ litter $L$ variables must satisfy $$k^{k^d} \leq {\ensuremath{N_{\mbox{\scriptsize nondet}}^{\mbox{\scriptsize $k$-way}}}}(s,d) \leq d^s \cdot (k+1)^{sk}\cdot(2^s)^{sk} \leq s^s \cdot k^{2sk}\cdot(2^s)^{sk}$$ since $d\leq s$ (because [$FT_d^h(k)$]{} depends on all its variables), and thus $$k^d\log k \leq s(\log s + 2k\log k) + s^2k.$$ Suppose to the contrary that $s< (k^{\frac{d-1}{2}}\sqrt{\log k})/2$.
Then $$s(\log s + 2k\log k) + s^2k < s (\frac{d-1}{2}\log k + \frac{\log\log k}{2} + 2k\log k) + s^2k < s(sk) + s^2k < k^d\log k$$ for large $k$ and all $d\geq 2$, a contradiction. Hence $s\geq (k^{\frac{d-1}{2}}\sqrt{\log k})/2$. Since this holds for every litter, recalling step 4 in the Neciporuk method as described prior to Theorem \[necilowerbound\], the total number of states in the program is at least $$k + k^d \cdot \frac{d^{h-2}-1}{d-1} \cdot (k^{\frac{d-1}{2}}\sqrt{\log k})/2 \geq \frac{d^{h-2}-1}{2d-2} \cdot k^{\frac{3d}{2}-\frac{1}{2}}\sqrt{\log k}.$$ *Nondeterministic binary (i.e., $2$-way) branching programs deciding [$BT_d^h(k)$]{}*. Here $|R|=2$. When the program is binary, the $d$ variables in the litter $L$ become $d \log k$ Boolean variables. The number $s$ of states querying one of these $d \log k$ variables then satisfies $$2^{k^d} \leq {\ensuremath{N_{\mbox{\scriptsize nondet}}^{\mbox{\scriptsize $2$-way}}}}(s,d\log k) \leq (d\log k)^s\cdot (2+1)^{2s} \cdot (2^s)^{2s} < (s\log k)^s \cdot 2^{4s+2s^2}$$ since $d\leq s$ and thus $$k^d \leq s\log s + s\log\log k + 4s + 2s^2 \leq 3s^2 + 5s \log\log k.$$ It follows that $s\geq k^{\frac{d}{2}}/2$. Hence the total number of states in a binary nondeterministic program deciding [$BT_d^h(k)$]{} is at least $$k^d \cdot \frac{d^{h-2}-1}{d-1} \cdot \frac{k^{d/2}}{2} \geq \frac{d^{h-2}-1}{2(d-1)} \cdot k^{\frac{3d}{2}} = \frac{d^{h-2}-1}{2(d-1)} \cdot \frac{(k^d\log k)^{3/2}}{(\log k)^{3/2}} = \Omega(n^{3/2}/(\log n)^{3/2})$$ where $n=\Theta(k^d\log k)$ is the length of the binary encoding of [$BT_d^h(k)$]{}. The next two results show limitations on the Neciporuk method that are not necessarily present in the state sequence method (see Theorems \[t:childLB\] and \[t:beatittwice\]). Let [$Children_d^h(k)$]{} have the same input as [$FT_d^h(k)$]{} with the exception that the root function is deleted. The output is the tuple $(v_2, v_3,\ldots, v_{d+1})$ of values for the children of the root.
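For concreteness, here is a hypothetical toy evaluator for this function, with nodes heap-numbered so that the root is $1$ and the children of node $i$ are $d(i-1)+2,\ldots,di+1$, and with node values taken in $\{0,\ldots,k-1\}$; the post-order recursion visits the tree in the same order as a black pebbling of it.

```python
def eval_node(i, tables, leafval, d, h):
    """Value of heap-numbered node i: tables[i] maps a d-tuple of child
    values to the node's value; leafval[i] holds a leaf's value."""
    n_internal = (d ** (h - 1) - 1) // (d - 1)   # internal nodes are 1..n_internal
    if i > n_internal:                           # node i is a leaf
        return leafval[i]
    kids = tuple(eval_node(d * (i - 1) + 2 + j, tables, leafval, d, h)
                 for j in range(d))
    return tables[i][kids]

def children_fn(tables, leafval, d, h):
    """Children: the tuple (v_2, ..., v_{d+1}); the root table is unused."""
    return tuple(eval_node(i, tables, leafval, d, h) for i in range(2, d + 2))

# Example: d = 2, h = 3, k = 3, both internal tables addition mod 3,
# leaves (v_4, ..., v_7) = (1, 2, 2, 2): v_2 = (1+2) % 3, v_3 = (2+2) % 3.
k = 3
add = {(a, b): (a + b) % k for a in range(k) for b in range(k)}
assert children_fn({2: add, 3: add}, {4: 1, 5: 2, 6: 2, 7: 2}, 2, 3) == (0, 1)
```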
[$Children_d^h(k)$]{} can be computed by a $k$-way deterministic BP with $O(k^{(d-1)h-d+2})$ states using the same black pebbling method which yields the bound (\[e:dFUpper\]) in Theorem \[t:BPUpper\]. \[t:rootfunction\] For any $d,h\geq 2$, the best $k$-way deterministic BP size lower bound attainable for [$Children_d^h(k)$]{} by applying the Neciporuk method is $\Omega(k^{2d-1})$. The function ${\ensuremath{Children_d^h(k)}}: [k]^m \rightarrow R$ has $m=\Theta(k^d)$. Any partition $\{V_1,\ldots,V_p\}$ of the set of $k$-ary input variables thus has $p=O(k^d)$. Claim: for each $i$, the best attainable lower bound on the number of states querying variables from $V_i$ is $O(k^{d-1})$. Consider such a set $V_i$, $|V_i|=v\geq 1$. Here $|R|=k^d$, so the number ${\ensuremath{N_{\mbox{\scriptsize det}}^{\mbox{\scriptsize $k$-way}}}}(s,v)$ of distinct deterministic BPs having $s$ non-final states querying variables from $V_i$ satisfies $${\ensuremath{N_{\mbox{\scriptsize det}}^{\mbox{\scriptsize $k$-way}}}}(s,v) \geq 1^s \cdot (s+|R|)^{sk} \geq (1+k^d)^{sk} \geq k^{dsk}.$$ Hence the estimate used in the Neciporuk method to upper bound ${\ensuremath{N_{\mbox{\scriptsize det}}^{\mbox{\scriptsize $k$-way}}}}(s,v)$ will be at least $k^{dsk}$. On the other hand, the number of functions $f_{V_i}:[k]^v \rightarrow R$ obtained by fixing variables outside of $V_i$ cannot exceed $k^{O(k^d)}$ since the number of variables outside $V_i$ is $\Theta(k^d)$. Hence the best lower bound on the number of states querying variables from $V_i$ obtained by applying the method will be no larger than the smallest $s$ satisfying $k^{ck^d}\leq k^{dsk}$ for some $c$ depending on $d$ and $k$. This proves our claim since then this number is at most $s = O(k^{d-1})$. Let [$\mathit{SumMod}_d^h(k)$]{} have the same input as [$FT_d^h(k)$]{} with the exception that the root function is preset to the sum modulo $k$. In other words, the output is $v_2+ v_3+ \cdots + v_{d+1}\mod k$.
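As a small illustration of how constrained this function is (hypothetical code, not part of any proof): for $k=2$, $d=2$, $h=3$, fixing all inputs of ${\ensuremath{\mathit{SumMod}_2^3(k)}}$ except a single entry of $f_2$ leaves only four distinct one-variable restrictions, each either a constant or a translate $a \mapsto a + c \bmod k$; values are taken in $\{0,\ldots,k-1\}$ rather than $[k]$.

```python
from itertools import product

k = 2
# All restrictions of SumMod_2^3(2) to the single variable f2(0,0):
# fix the other three f2 entries, the four f3 entries and the four
# leaf values, then let a = f2(0,0) range over {0,1}.
induced = set()
for e01, e10, e11 in product(range(k), repeat=3):   # rest of f2
    for f3 in product(range(k), repeat=4):          # f3 as a flat table
        for l1, l2, l3, l4 in product(range(k), repeat=4):
            v3 = f3[k * l3 + l4]
            restriction = []
            for a in range(k):                      # the free variable
                f2 = ((a, e01), (e10, e11))
                restriction.append((f2[l1][l2] + v3) % k)
            induced.add(tuple(restriction))

constants = {(c,) * k for c in range(k)}
translates = {tuple((a + c) % k for a in range(k)) for c in range(k)}
assert induced == constants | translates            # 4 functions for k = 2
```

If the restricted variable's leaf arguments miss the entry $(0,0)$, the restriction is constant; otherwise it is $a \mapsto a + v_3 \bmod k$.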
\[t:lasttheorem\] The best $k$-way deterministic BP size lower bound attainable for [$\mathit{SumMod}_2^3(k)$]{} by applying the Neciporuk method is $\Omega(k^2)$. The function ${\ensuremath{\mathit{SumMod}_2^3(k)}}: [k]^m \rightarrow R$ has $m=\Theta(k^2)$. Consider a set $V_i$ in any partition $\{V_1,\ldots,V_p\}$ of the set of $k$-ary input variables, $|V_i|=v$. Here $|R|=k$, so the number ${\ensuremath{N_{\mbox{\scriptsize det}}^{\mbox{\scriptsize $k$-way}}}}(s,v)$ of distinct deterministic BPs having $s$ non-sink states querying variables from $V_i$ satisfies $${\ensuremath{N_{\mbox{\scriptsize det}}^{\mbox{\scriptsize $k$-way}}}}(s,v) \geq 1^s \cdot (s+|R|)^{sk} \geq (1+k)^{sk} \geq k^{sk}.$$ If $V_i$ contains a leaf variable, then perhaps the number of functions induced by setting variables complementary to $V_i$ can reach the maximum $k^{k^2}$. Neciporuk would conclude that $k$ states querying the variables from such a $V_i$ are necessary. Note that there are at most $4$ sets $V_i$ containing a leaf variable (hence a total of $4k$ states required to account for the variables in these $4$ sets). Now suppose that $V_i$ does not contain a leaf variable. Then setting the variables complementary to $V_i$ can either induce a constant function (there are $k$ of those), or the sum of a constant plus a variable (there are at most $k\cdot |V_i|$ of those) or the sum of two of the variables (there are at most $|V_i|^2$ of those). So the maximum number of induced functions is $|V_i|^2=O(k^4)$. The number of states querying variables from $V_i$ is found by Neciporuk to be $s\geq 4/k$. In other words $s=1$. So for any of the at least $p-4$ sets in the partition not containing a leaf variable, the method gets one state. Since $p-4=O(k^2)$, the total number of states accounting for all the $V_i$ is $O(k^2)$. The state sequence method {#s:beating} ------------------------- Here we give alternative proofs for some of the lower bounds given in Section \[s:NecLB\]. 
These proofs are more intricate than the Neciporuk proofs but they do not suffer a priori from a quadratic limitation. The method also yields stronger lower bounds for [$Children_2^4(k)$]{} and [$\mathit{SumMod}_2^3(k)$]{} (Theorems \[t:childLB\] and \[t:beatittwice\]) than those obtained by applying Neciporuk’s method (Theorems \[t:rootfunction\] and \[t:lasttheorem\]). \[t:newLB\] ${\ensuremath{\mathsf{\#ndetBstates}^3_2(k)}}\ge k^{2.5}$ for sufficiently large $k$. Consider an input $I$ to [$BT_2^3(k)$]{}. We number the nodes in $T^3_2$ as in Figure \[sample\], and let $v^I_j$ denote the value of node $j$ under input $I$. We say that a state in a computation on input $I$ [*learns*]{} $v^I_j$ if that state queries $f_j^I(v^I_{2j},v^I_{2j+1})$ (recall $2j,2j+1$ are the children of node $j$). [**Definition \[Learning Interval\]**]{} Thus type 2 learning intervals begin with $\gamma_0$ or a state which learns $v^I_2$, and never learn $v^I_3$ until the last state, and type 3 learning intervals begin with a state which learns $v^I_3$ and never learn $v^I_2$ until the last state. Now let $B$ be as above, and for $j\in\{2,3\}$ let $\Gamma_j$ be the set of all states of $B$ which query the input function $f_j$. We will prove the theorem by showing that for large $k$ $$\label{e:proof} |\Gamma_2| + |\Gamma_3| > k^2\sqrt{k}.$$ For $r,s\in[k]$ let $F^{r,s}_{yes}$ be the set of inputs $I$ to $B$ whose four leaves are labelled $r,s,r,s$ respectively, whose middle node functions $f_2^I$ and $f_3^I$ are identically 1 except $f^I_2(r,s)=v^I_2$ and $f^I_3(r,s)=v^I_3$, and $f^I_1(v^I_2,v^I_3)=1$ (so $v^I_1 = 1$). Thus each such $I$ is a ‘YES input’, and should be accepted by $B$. Note that each member $I$ of $F_{yes}^{r,s}$ is uniquely specified by a triple $$\label{e:triple} (v^I_2,v^I_3,f^I_1) \mbox{ where $f^I_1(v^I_2,v^I_3)=1$}$$ and hence $F_{yes}^{r,s}$ has exactly $k^2(2^{k^2-1})$ members.
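This count is easy to confirm by brute force for tiny $k$; the following hypothetical snippet (values taken in $\{0,\ldots,k-1\}$) enumerates the triples of (\[e:triple\]) for $k=2$:

```python
from itertools import product

k = 2
# Members of F_yes^{r,s} correspond to triples (v2, v3, f1) with
# f1(v2, v3) = 1; f1 is represented as a flat 0/1 table of size k*k.
count = sum(1
            for v2, v3 in product(range(k), repeat=2)
            for f1 in product((0, 1), repeat=k * k)
            if f1[k * v2 + v3] == 1)
assert count == k ** 2 * 2 ** (k * k - 1)   # 4 * 8 = 32 for k = 2
```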
For $j\in\{2,3\}$ and $r,s\in [k]$ let $\Gamma_j^{r,s}$ be the subset of $\Gamma_j$ consisting of those states which query $f_j(r,s)$. Then $\Gamma_j$ is the disjoint union of $\Gamma_j^{r,s}$ over all pairs $(r,s)$ in $[k]\times [k]$. Hence to prove (\[e:proof\]) it suffices to show $$\label{e:gammas} |\Gamma_2^{r,s}| + |\Gamma_3^{r,s}| > \sqrt{k}$$ for large $k$ and all $r,s$ in $[k]$. We will show this by showing $$\label{e:product} (|\Gamma_2^{r,s}|+1) (|\Gamma_3^{r,s}|+1) \ge k/2$$ for all $k\ge 2$. (Note that given the product, the sum is minimized when the summands are equal.) For each input $I$ in $F_{yes}^{r,s}$ we associate a fixed accepting computation ${\cal C}(I)$ of $B$ on input $I$. Now fix $r,s\in [k]$. For $a,b\in [k]$ and $f:[k]\times [k] {\rightarrow}\{0,1\}$ with $f(a,b)=1$ we use $(a,b,f)$ to denote the input $I$ in $F_{yes}^{r,s}$ it represents as in (\[e:triple\]). To prove (\[e:product\]), the idea is that if it is false, then as $I$ varies through all inputs $(a,b,f)$ in $F_{yes}^{r,s}$ there are too few states learning $v^I_2 = a$ and $v^I_3 =b$ to verify that $f(a,b)=1$. Specifically, we can find $a,b,f,g$ such that $f(a,b)=1$ and $g(a,b)=0$, and by cutting and pasting the accepting computation ${\cal C}(a,b,f)$ with accepting computations of the form ${\cal C}(a,b',g)$ and ${\cal C}(a',b,g)$ we can construct an accepting computation of the ‘NO input’ $(a,b,g)$. We may assume that the branching program $B$ has a unique initial state $\gamma_0$ and a unique accepting state $\delta_{ACC}$. For $j\in \{2,3\}$, $a,b\in [k]$ and $f:[k]\times [k]{\rightarrow}\{0,1\}$ with $f(a,b)=1$ define $\varphi_j(a,b,f)$ to be the set of all state pairs $(\gamma,\delta)$ such that there is a type $j$ learning interval in ${\cal C}(a,b,f)$ which begins with $\gamma$ and ends with $\delta$. 
Note that if $j=2$ then $\gamma\in(\Gamma_2^{r,s} \cup \{\gamma_0\})$ and $\delta \in (\Gamma_3^{r,s} \cup \{\delta_{ACC}\})$, and if $j=3$ then $\gamma\in\Gamma_3^{r,s}$ and $\delta \in (\Gamma_2^{r,s} \cup \{\delta_{ACC}\})$. To complete the definition, define $\varphi_j(a,b,f)=\varnothing$ if $f(a,b)=0$. For $j\in\{2,3\}$ and $f:[k]\times [k] {\rightarrow}\{0,1\}$ we define a function $\varphi_j[f]$ from $[k]$ to sets of state pairs as follows: $$\begin{aligned} \varphi_2[f](a) & = &\bigcup_{b\in[k]} \varphi_2(a,b,f) \ \subseteq S_2 \\ \varphi_3[f](b) & = &\bigcup_{a\in[k]} \varphi_3(a,b,f) \ \subseteq S_3\end{aligned}$$ where $S_2 = (\Gamma_2^{r,s}\cup \{\gamma_0\}) \times (\Gamma_3^{r,s} \cup \{\delta_{ACC}\})$ and $S_3 = \Gamma_3^{r,s} \times (\Gamma_2^{r,s} \cup \{\delta_{ACC}\})$. For each $f$ the function $\varphi_j[f]$ can be specified by listing a $k$-tuple of subsets of $S_j$, and hence there are at most $2^{k|S_j|}$ distinct such functions as $f$ ranges over the $2^{k^2}$ Boolean functions on $[k]\times[k]$, and hence there are at most $2^{k(|S_2|+|S_3|)}$ pairs of functions $(\varphi_2[f],\varphi_3[f])$. If we assume that (\[e:product\]) is false, we have $|S_2|+|S_3| < k$. Hence by the pigeonhole principle there must exist distinct Boolean functions $f,g$ such that $\varphi_2[f] = \varphi_2[g]$ and $\varphi_3[f] = \varphi_3[g]$. Since $f$ and $g$ are distinct we may assume that there exist $a,b$ such that $f(a,b)=1$ and $g(a,b)=0$. Since $\varphi_2[f](a) = \varphi_2[g](a)$, if $(\gamma,\delta)$ are the endpoints of a type 2 learning interval in ${\cal C}(a,b,f)$ there exists $b'$ such that $(\gamma,\delta)$ are the endpoints of a type 2 learning interval in ${\cal C}(a,b',g)$ (and hence $g(a,b')=1$). Similarly, if $(\gamma,\delta)$ are endpoints of a type 3 learning interval in ${\cal C}(a,b,f)$ there exists $a'$ such that $(\gamma,\delta)$ are the endpoints of a type 3 learning interval in ${\cal C}(a',b,g)$ (and hence $g(a',b)=1$).
Now we can construct an accepting computation for the ‘NO input’ $(a,b,g)$ from ${\cal C}(a,b,f)$ by replacing each learning interval beginning with some $\gamma$ and ending with some $\delta$ by the corresponding learning interval in ${\cal C}(a,b',g)$ or ${\cal C}(a',b,g)$. (The new accepting computation has the same sequence of critical states as ${\cal C}(a,b,f)$.) This works because a type 2 learning interval never learns $v_3$ and a type 3 learning interval never learns $v_2$. This completes the proof of (\[e:product\]) and the theorem. \[t:detThree\] Every deterministic branching program that solves [$BT_2^3(k)$]{} has at least $k^3/\log k$ states for sufficiently large $k$. We modify the proof of Theorem \[t:newLB\]. Let $B$ be a deterministic BP which solves [$BT_2^3(k)$]{}, and for $j\in\{2,3\}$ let $\Gamma_j$ be the set of states in $B$ which query $f_j$ (as before). It suffices to show that for sufficiently large $k$ $$\label{e:gammaSum} |\Gamma_2|+|\Gamma_3|\ge k^3/\log k.$$ For $r,s \in [k]$ we define the set $F^{r,s}$ to be the same as $F_{yes}^{r,s}$ except that we remove the restriction on $f^I_1$. Hence there are exactly $k^2 2^{k^2}$ inputs in $F^{r,s}$. As before, for $j\in\{2,3\}$, $\Gamma_j$ is the disjoint union of $\Gamma_j^{r,s}$ for $r,s\in [k]$. Thus to prove (\[e:gammaSum\]) it suffices to show that for sufficiently large $k$ and all $r,s$ in $[k]$ $$\label{e:newGsum} |\Gamma_2^{r,s}| + |\Gamma_3^{r,s}| \ge k/\log k.$$ We may assume there are unique start, accepting, and rejecting states $\gamma_0$, $\delta_{ACC}$, $\delta_{REJ}$. Fix $r,s\in[k]$.
For each root function $f:[k]\times[k]{\rightarrow}\{0,1\}$ we define the functions $$\begin{aligned} \psi_2[f] : [k]\times (\Gamma_2^{r,s} \cup \{\gamma_0\}) & {\rightarrow}& (\Gamma_3^{r,s} \cup \{\delta_{ACC},\delta_{REJ}\})\\ \psi_3[f] : [k]\times \Gamma_3^{r,s} & {\rightarrow}& (\Gamma_2^{r,s} \cup \{\delta_{ACC},\delta_{REJ}\})\end{aligned}$$ by $\psi_2[f](a,\gamma) = \delta$ if $\delta$ is the next critical state after $\gamma$ in a computation with input $(a,b,f)$ (this is independent of $b$), or $\delta=\delta_{REJ}$ if there is no such critical state. Similarly $\psi_3[f](b,\delta)=\gamma$ if $\gamma$ is the next critical state after $\delta$ in a computation with input $(a,b,f)$ (this is independent of $a$), or $\gamma=\delta_{REJ}$ if there is no such critical state. CLAIM: Distinct root functions $f$ yield distinct pairs $(\psi_2[f],\psi_3[f])$. For suppose otherwise. Then there are $f,g$ such that $\psi_2[f]=\psi_2[g]$ and $\psi_3[f]=\psi_3[g]$ but $f(a,b)\ne g(a,b)$ for some $a,b$. But then the sequences of critical states in the two computations $C(a,b,f)$ and $C(a,b,g)$ must be the same, and hence the computations either accept both $(a,b,f)$ and $(a,b,g)$ or reject both. So the computations cannot both be correct. Finally we prove (\[e:newGsum\]) from the CLAIM. Let $s_2 = |\Gamma_2^{r,s}|$ and let $s_3=|\Gamma_3^{r,s}|$, and let $s=s_2+s_3$. Then the number of distinct pairs $(\psi_2,\psi_3)$ is at most $$(s_3+2)^{k(s_2+1)}(s_2+2)^{ks_3} \le (s+2)^{k(s+1)}$$ and since there are $2^{k^2}$ functions $f$ we have $$2^{k^2} \le (s+2)^{k(s+1)}$$ so taking logs, $k^2 \le k(s+1)\log (s+2)$ so $k/\log(s+2) \le s+1$, and (\[e:newGsum\]) follows. Recall from Theorem \[t:rootfunction\] that applying the Neciporuk method to [$Children_2^4(k)$]{} yields an $\Omega(k^3)$ size lower bound and from Theorem \[t:lasttheorem\] that applying it to [$\mathit{SumMod}_2^3(k)$]{} yields $\Omega(k^2)$.
The next two results improve on these bounds using the state sequence method. The new lower bounds match the upper bounds given by the pebbling method used to prove (\[e:dFUpper\]) in Theorem \[t:BPUpper\]. \[t:childLB\] Any deterministic $k$-way BP for [$Children_2^4(k)$]{} has at least $k^4/2$ states. Let $E_{4true}$ be the set of all inputs $I$ to [$Children_2^4(k)$]{} such that $f_2^I=f^I_3 = +_k$ (addition mod $k$), and for $i\in\{4,5,6,7\}$, $f^I_i$ is identically 0 except for $f^I_i(v^I_{2i},v^I_{2i+1})$. Let $B$ be a branching program as in the theorem. For each $I\in E_{4true}$ let ${\cal C}(I)$ be the computation of $B$ on input $I$. For $r,s\in[k]$ let $E^{r,s}_{4true}$ be the set of inputs $I$ in $E_{4true}$ such that for $i\in\{4,5,6,7\}$, $v^I_{2i}=r$ and $v^I_{2i+1}=s$. Then for each pair $r,s$ each input $I$ in $E^{r,s}_{4true}$ is completely specified by the quadruple $v^I_4,v^I_5,v^I_6,v^I_7$, so $|E^{r,s}_{4true}| = k^4$. For $r,s\in[k]$ and $i\in\{4,5,6,7\}$ let $\Gamma^{r,s}_i$ be the set of states of $B$ that query $f_i(r,s)$, and let $$\label{e:Gammars} \Gamma^{r,s} = \Gamma^{r,s}_4\cup\Gamma^{r,s}_5\cup\Gamma^{r,s}_6 \cup \Gamma^{r,s}_7.$$ The theorem follows from the following Claim. CLAIM 1: $|\Gamma^{r,s}| \ge k^2/2$ for all $r,s \in [k]$. To prove CLAIM 1, suppose to the contrary that for some $r,s$ $$\label{e:gammak} |\Gamma^{r,s}| < k^2/2.$$ We associate a pair $$T(I) = (\gamma^I,v^I_i)$$ with $I$ as follows: $\gamma^I$ is the last state in the computation ${\cal C}(I)$ that is in $\Gamma^{r,s}$ (such a state clearly exists), and $i\in\{4,5,6,7\}$ is the node queried by $\gamma^I$. (Here $v^I_i$ is the value of node $i$.)
We also associate a triple $U(I)$ with each input $I$ in $E^{r,s}_{4true}$ as follows: $$U(I) = \left\{ \begin{array}{ll} (v^I_4,v^I_5,v^I_3) & \mbox{if $\gamma^I$ queries node 4 or 5 } \\ (v^I_6,v^I_7,v^I_2) & \mbox{otherwise.} \end{array} \right.$$ CLAIM 2: As $I$ ranges over $E^{r,s}_{4true}$, $U(I)$ ranges over at least $k^3/2$ triples in $[k]^3$. To prove CLAIM 2, consider the subset $E'$ of inputs in $E^{r,s}_{4true}$ whose values for nodes 4,5,6,7 have the form $a,b,a,c$ for arbitrary $a,b,c \in [k]$. For each such $I$ in $E'$ an adversary trying to minimize the number of triples $U(I)$ must choose one of the two triples $(a,b,a+_k c)$ or $(a,c,a+_k b)$. There are a total of $k^3$ distinct triples of each of the two forms, and the adversary must choose at least half the triples from one of the two forms, so there must be at least $k^3/2$ distinct triples of the form $U(I)$. This proves CLAIM 2. On the other hand by (\[e:gammak\]) there are fewer than $k^3/2$ possible values for $T(I)$. Hence there exist inputs $I,J \in E^{r,s}_{4true}$ such that $U(I) \ne U(J)$ but $T(I)=T(J)$. Since $U(I) \ne U(J)$ but $v^I_i = v^J_i$ (where $i$ is the node queried by $\gamma^I=\gamma^J$) it follows that either $v^I_2 \ne v^J_2$ or $v^I_3 \ne v^J_3$, so $I$ and $J$ give different values to the function [$Children_2^4(k)$]{}. But since $T(I)=T(J)$ it follows that the two computations ${\cal C}(I)$ and ${\cal C}(J)$ are in the same state $\gamma^I=\gamma^J$ the last time any of the nodes $\{4,5,6,7\}$ is queried, and the answers $v^I_i=v^J_i$ to the queries are the same, so both computations give identical outputs. Hence one of them is wrong. \[t:beatittwice\] Any deterministic $k$-way BP for [$\mathit{SumMod}_2^3(k)$]{} requires at least $k^3$ states. We adapt the previous proof.
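The adversary argument in CLAIM 2 can be checked exhaustively for a tiny parameter: for $k=2$ there are only $2^{k^3}=256$ ways for an adversary to choose one of the two triples per input in $E'$, and every choice leaves at least $k^3/2=4$ distinct triples. A brute-force sketch (illustrative only; Python is used just for the enumeration):

```python
from itertools import product

k = 2
# inputs in E': nodes 4, 5, 6, 7 take values a, b, a, c
inputs = list(product(range(k), repeat=3))
# for each input the adversary picks (a, b, a+c) or (a, c, a+b) (addition mod k);
# minimize the number of distinct triples over all 2^(k^3) strategies
worst = min(
    len({(a, b, (a + c) % k) if pick else (a, c, (a + b) % k)
         for (a, b, c), pick in zip(inputs, picks)})
    for picks in product([0, 1], repeat=len(inputs))
)
assert worst >= k**3 // 2   # every strategy leaves at least k^3/2 distinct triples
```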
Now $E^{r,s}$ is the set of inputs $I$ to [$\mathit{SumMod}_2^3(k)$]{} such that for $i\in\{2,3\}$, $f^I_i$ is identically one except possibly for $f^I_i(r,s)$, and $v^I_4 = v^I_6 = r$ and $v^I_5 = v^I_7 = s$. Note that an input in $E^{r,s}$ can be specified by the pair $(v^I_2,v^I_3)$, so $E^{r,s}$ has exactly $k^2$ elements. Define $$\Gamma^{r,s} = \Gamma^{r,s}_2 \cup \Gamma^{r,s}_3.$$ Now we claim that an input $I$ in $E^{r,s}$ can be specified by the pair $(\gamma^I,v^I_i)$, where $\gamma^I$ is the last state in the computation ${\cal C}(I)$ that is in $\Gamma^{r,s}$, and $i\in\{2,3\}$ is the node queried by $\gamma^I$. The Claim holds because $(\gamma^I,v^I_i)$ determines the output of the computation, which in turn (together with $v^I_i$) determines $v^I_j$, where $j$ is the sibling of $i$. From the Claim it follows that $|\Gamma^{r,s}| \ge k$ for all $r,s\in [k]$, and hence there must be at least $k^3$ states in total. Thrifty lower bounds {#s:thriftyLB} -------------------- See Definition \[d:thrifty\] for thrifty programs. Theorem \[t:detThriftLB\] below shows that the upper bound given in Theorem \[t:BPUpper\] (\[e:dFUpper\]) is optimal for deterministic thrifty programs solving the function problem [$FT_d^h(k)$]{} for $d=2$ and all $h\ge 2$. Theorem \[t:thrifFourtwo\] shows that the upper bound given in Theorem \[t:BPUpper\] (\[e:nBUpper\]) is optimal for nondeterministic thrifty programs solving the Boolean problem [$BT_d^h(k)$]{} for $d=2$ and $h=4$ (it is optimal for $h \le 3$ by Corollary \[c:HtThree\]). \[t:detThriftLB\] For any $h,k$, every deterministic thrifty branching program solving $BT_2^{h}(k)$ has at least $k^h$ states. Fix a deterministic thrifty BP $B$ that solves $BT_2^{h}(k)$. Let $E$ be the inputs to $B$. Let ${\mathsf{Vars}}$ be the set of $k$-valued input variables (so $|E| = k^{|{\mathsf{Vars}}|}$). Let $Q$ be the states of $B$.
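The final counting step here is a pigeonhole bound: the map $I \mapsto (\gamma^I, v^I_i)$ is injective on the $k^2$ inputs in $E^{r,s}$, and the pairs take at most $|\Gamma^{r,s}|\cdot k$ values, so $|\Gamma^{r,s}| \ge k$. A minimal numeric illustration (Python used only for the arithmetic; the value of $k$ is arbitrary):

```python
k = 50
inputs = k * k          # |E^{r,s}|: one input per pair (v_2, v_3)

def pair_count(g):
    """Number of possible values of (gamma^I, v_i^I) when |Gamma^{r,s}| = g."""
    return g * k

# an injective map from E^{r,s} into the pairs forces |Gamma^{r,s}| >= k
assert pair_count(k - 1) < inputs <= pair_count(k)
```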
If $i$ is an internal node then the $i$ variables are $f_i(a,b)$ for $a,b \in [k]$, and if $i$ is a leaf node then there is just one $i$ variable $l_i$. We sometimes say “$f_i$ variable” just as an in-line reminder that $i$ is an internal node. Let ${{\sf var}}(q)$ be the input variable that $q$ queries. Let ${{\sf node}}$ be the function that maps each variable $X$ to the node $i$ such that $X$ is an $i$ variable, and each state $q$ to ${{\sf node}}({{\sf var}}(q))$. When it is clear from the context that $q$ is on the computation path of $I$, we just say “$q$ queries $i$” instead of “$q$ queries the thrifty $i$ variable of $I$”. Fix an input $I$, and let $P$ be its computation path. We will choose $n$ states on $P$ as [**critical states**]{} for $I$, one for each node. Note that $I$ must visit a state that queries the root (i.e. queries the thrifty root variable of $I$), since otherwise the branching program would make a mistake on an input $J$ that is identical to $I$ except $f_1^J(v_2^I,v_3^I) := k - f_1^I(v_2^I,v_3^I)$; hence $J \in BT^h_2(k)$ iff $I \not \in BT^h_2(k)$. So, we can choose the root critical state for $I$ to be the last state on $P$ that queries the root. The remainder of the definition relies on the following small lemma: \[l:basic\_thrifty\] For any $J$ and internal node $i$, if $J$ visits a state $q$ that queries $i$, then for each child $j$ of $i$, there is an earlier state on the computation path of $J$ that queries $j$. Suppose otherwise, and wlog assume the previous statement is false for $j=2i$. For every $a \not = v_{2i}^J$ there is an input $J_a$ that is identical to $J$ except $v_{2i}^{J_a} = a$. But the computation paths of $J_a$ and $J$ are identical up to $q$, so $J_a$ queries the variable $f_i(v_{2i}^J,b)$ with $b = v_{2i+1}^{J_a}$, where $v_{2i}^J \not = v_{2i}^{J_a}$, which contradicts the thrifty assumption.
For $i$ an internal node, if $q$ is the node $i$ critical state for $I$ then the node $2i$ (resp. $2i+1$) critical state for $I$ is the last state on $P$ before $q$ that queries $2i$ (resp. $2i+1$). We say that a collection of nodes is a [*minimal cut*]{} of the tree if every path from root to leaf contains exactly one of the nodes. Now we assign a pebbling configuration to each state on $P$, such that the set of pebbled nodes in each configuration is a minimal cut of the tree or a subset of some minimal cut (and once it becomes a minimal cut, it remains so), and any two adjacent configurations are either identical, or else the later one follows from the earlier one by a valid pebbling move. (Here we allow the removal of the pebbles on the children of a node $i$ as part of the move that places a pebble on $i$.) This assignment can be described inductively by starting with the last state on $P$ and working backwards. Note that implicitly we will be using the following fact: \[f:basic\_crit\_state\] For any input $I$, if $j$ is a descendant of $i$ then the node $j$ critical state for $I$ occurs earlier on the computation path of $I$ than the node $i$ critical state for $I$. The pebbling configuration for the output state has just a black pebble on the root. Assume we have defined the pebbling configurations for $q$ and every state following $q$ on $P$, and let $q'$ be the state before $q$ on $P$. If $q'$ is not critical, then we make its pebbling configuration be the same as that of $q$. If $q'$ is critical then it must query a node $i$ that is pebbled in $q$. The pebbling configuration for $q'$ is obtained from the configuration for $q$ by removing the pebble from $i$ and adding pebbles to $2i$ and $2i+1$ (if $i$ is an internal node; otherwise the move only removes the pebble from $i$). Now consider the last critical state in the computation path $P^I$ that queries a height 2 node (i.e. a parent of leaves).
We use $r^I$ to denote this state and call it the [**supercritical state**]{} of $I$. The pebbling configuration associated with $r^I$ is called the bottleneck configuration, and its pebbled nodes are called [**bottleneck nodes**]{}. The two children of ${{\sf node}}(r^I)$ must be bottleneck nodes, and the bottleneck nodes form a minimal cut of the tree. The path from the root to ${{\sf node}}(r^I)$ is the [**bottleneck path**]{}, and by Fact \[f:basic\_crit\_state\] it cannot contain any bottleneck nodes. From all this it is easy to see that there must be at least $h$ bottleneck nodes. Here is the main property of the pebbling sequences that we need: \[f:basic\_peb\_seq\] For any input $I$, if non-root node $i$ with parent $j$ is pebbled at a state $q$ on $P^I$, then the node $j$ critical state $q'$ of $I$ occurs later on $P^I$, and there is no state (critical or otherwise) between $q$ and $q'$ on $P^I$ that queries $i$. Let $R$ be the states that are supercritical for at least one input. Let $E_r$ be the inputs with supercritical state $r$. Now we can state the main lemma. \[l:thrifty\_advice\_main\_lemma\] For every $r \in R$, there is a surjective function from $[k]^{|{\mathsf{Vars}}|-h}$ to $E_r$. The lemma gives us that $|E_r| \le k^{|{\mathsf{Vars}}|-h}$ for every $r \in R$. Since $\{E_r\}_{r\in R}$ is a partition of $E$, there must be at least $|E| / k^{|{\mathsf{Vars}}|-h} = k^h$ sets in the partition, i.e. there must be at least $k^h$ supercritical states. So the theorem follows from the lemma. Fix $r \in R$ and let $D := E_r$. Let ${i_{{\sf sc}}}:= {{\sf node}}(r)$. Since the query made at $r$ is thrifty for every $I$ in $D$, there are values $v_{2{i_{{\sf sc}}}}^D$ and $v_{2{i_{{\sf sc}}}+1}^D$ such that $v_{2{i_{{\sf sc}}}}^I = v_{2{i_{{\sf sc}}}}^D$ and $v_{2{i_{{\sf sc}}}+1}^I = v_{2{i_{{\sf sc}}}+1}^D$ for every $I$ in $D$.
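The step from the lemma to the theorem is pure counting: if each part $E_r$ of the partition of $E$ has at most $k^{|\mathsf{Vars}|-h}$ inputs, then there are at least $k^{|\mathsf{Vars}|}/k^{|\mathsf{Vars}|-h} = k^h$ parts, hence at least that many supercritical states. A toy numeric check (Python for arithmetic only; the values of $k$, the number of variables $V$, and $h$ below are arbitrary sample choices):

```python
k, V, h = 4, 6, 3
total_inputs = k**V          # |E| = k^{|Vars|}
max_part = k**(V - h)        # |E_r| <= k^{|Vars| - h}, by the lemma
# a partition into parts of size <= max_part needs at least k^h parts
min_parts = -(-total_inputs // max_part)   # ceiling division
assert min_parts == k**h
```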
The surjective function of the lemma is computed by a procedure $\IntAdv$ that takes as input a $[k]$-string (the advice), tries to interpret it as the code of an input in $D$, and when successful outputs that input. We want to show that for every $I \in D$ we can choose ${{\sf adv}}^I \in [k]^{|{\mathsf{Vars}}|-h}$ such that $\IntAdv({{\sf adv}}^I){{\downarrow}}= I$. The idea is that the procedure $\IntAdv$ traces the computation path $P$ starting from state $r$, using the advice string ${{\sf adv}}^I$ when necessary to answer queries made by each state $q$ along the path. By the thrifty property, the procedure can ‘learn’ the values $a,b$ of the children of $i={{\sf node}}(q)$ (if $i$ is an internal node) from the query $f_i(a,b)$ of $q$. Each such child that has not been queried earlier in the trace saves one advice value for the future. By Fact \[f:basic\_peb\_seq\] the parent of each of the $h$ bottleneck nodes will be queried before the node itself, making a total savings of at least $h$ values in the advice string. After the trace is completed, the remaining advice values complete the specification of the input $I\in E_r$. In more detail, during the execution of the procedure we maintain a current state $q$, a partial function $v^*$ from nodes to $[k]$, and a set of nodes ${U_{\mathsf{L}}}$. Once we have added a node to ${U_{\mathsf{L}}}$, we never remove it, and once we have added $v^*(i) := a$ to the definition of $v^*$, we never change $v^*(i)$. We have reached $q$ by following a *consistent partial computation path* starting from $r$, meaning there is at least one input in $D$ that visits exactly the states and edges that we visited between $r$ and $q$. So initially $q = r$. Intuitively, $v^*(i){{\downarrow}}= a$ for some $a$ when we have “committed” to interpreting the advice we have read so-far as being the initial segment of *some* complete advice string ${{\sf adv}}^I$ for an input $I$ with $v_i^I = a$. Initially $v^*$ is undefined everywhere. 
As the procedure goes on, we may often have to use an element of the advice in order to set a value of $v^*$; however, by exploiting the properties of the critical state sequences, for each $I \in D$, when given the complete advice ${{\sf adv}}^I$ for $I$ there will be at least $h$ nodes ${U_{\mathsf{L}}}^I$ that we “learn” without directly using the advice. Such an opportunity arises when we visit a state that queries some variable $f_i(b_1,b_2)$ and we have not yet committed to a value for at least one of $v^*(2i)$ or $v^*(2i+1)$ (if both, then we learn two nodes). When this happens, we add that child or children of $i$ to ${U_{\mathsf{L}}}$ (the [L]{} stands for “learned”). So initially ${U_{\mathsf{L}}}$ is empty. There is a loop in the procedure $\IntAdv$ that iterates until $|{U_{\mathsf{L}}}| = h$. Note that the children of ${i_{{\sf sc}}}$ will be learned immediately. Let $v^*(D)$ be the inputs in $D$ consistent with $v^*$, i.e. $I \in v^*(D)$ iff $I \in D$ and $v_i^I = v^*(i)$ for every $i \in {{\sf Dom}}(v^*)$. Following is the complete pseudocode for $\IntAdv$, together with the most important of the invariants that are maintained.\
[**Procedure**]{} $\IntAdv(\vec{a} \in [k]^*)$:\
$q := r$, ${U_{\mathsf{L}}}:= \emptyset$, $v^* := \text{undefined everywhere}$.\
(Invariant: if $N$ elements of $\vec{a}$ have been used, then $|{{\sf Dom}}(v^*)| = N + |{U_{\mathsf{L}}}|$.)\
Repeat until $|{U_{\mathsf{L}}}| = h$:\
$i := {{\sf node}}(q)$; let $b_1,b_2$ be such that ${{\sf var}}(q) = f_i(b_1,b_2)$.\
If $v^*(2i)$ is undefined: $v^*(2i) := b_1$ and ${U_{\mathsf{L}}}:= {U_{\mathsf{L}}}+ 2i$.\
If $v^*(2i+1)$ is undefined: $v^*(2i+1) := b_2$ and ${U_{\mathsf{L}}}:= {U_{\mathsf{L}}}+ (2i+1)$.\
If $v^*(i)$ is undefined: let $a$ be the next unused element of $\vec{a}$ and set $v^*(i) := a$.\
$q := $ the state reached by taking the edge out of $q$ labeled $v^*(i)$.\
Let $\vec{b}$ be the next $|{\mathsf{Vars}}| - |{{\sf Dom}}(v^*)|$ unused elements of $\vec{a}$. \[line:lastadviceuse\] Let $I_1,\ldots,I_{|v^*(D)|}$ be the inputs in $v^*(D)$ sorted according to some globally fixed order on $E$.
\[line:secondtolastline\] if $\vec{b}$ is the $t$-largest string in the lexicographical ordering of $[k]^{|{\mathsf{Vars}}| - |{{\sf Dom}}(v^*)|}$, and $t \le |v^*(D)|$, then return $I_t$.[^3] \[line:lastline\] If the loop finishes, then there are at most $|E|/k^{|{{\sf Dom}}(v^*)|} = k^{|{\mathsf{Vars}}|-|{{\sf Dom}}(v^*)|}$ inputs in $v^*(D)$. So for each of the inputs $I$ enumerated on line \[line:secondtolastline\], there is a way of setting $\vec{a}$ so that $I$ will be chosen on line \[line:lastline\]. Recall we are trying to show that for every $I$ in $D$ there is a string ${{\sf adv}}^I \in [k]^{|{\mathsf{Vars}}|-h}$ such that $\IntAdv({{\sf adv}}^I){{\downarrow}}= I$. This is easy to see under the assumption that there is such a string that makes the loop finish while maintaining the loop invariant; since the loop invariant ensures we have used $|{{\sf Dom}}(v^*)| - h$ elements of advice when we reach line \[line:lastadviceuse\], and since line \[line:lastadviceuse\] is the last time when the advice is used, in all we use at most $|{\mathsf{Vars}}| - h$ elements of advice. To remove that assumption, first observe that for each $I$, we can set the advice to some ${{\sf adv}}^I$ so that $I \in v^*(D)$ is maintained when $\IntAdv$ is run on ${{\sf adv}}^I$. Moreover, for that ${{\sf adv}}^I$, we will never use an element of advice to set the value of a bottleneck node of $I$, and $I$ has at least $h$ bottleneck nodes. Note, however, that this does not necessarily imply that ${U_{\mathsf{L}}}^I$ (the $h$ nodes ${U_{\mathsf{L}}}$ we obtain when running $\IntAdv$ on ${{\sf adv}}^I$) is a subset of the bottleneck nodes of $I$. Finally, note that we are of course implicitly using the fact that no advice elements are “wasted”; each is used to set a different node value. For any $h,k$, every deterministic thrifty branching program solving $BT_2^{h}(k)$ has at least $\sum_{2 \le l \le h} k^l$ states. The previous theorem only counts states that query height 2 nodes.
The same proof is easily adapted to show there are at least $k^{h-l+2}$ states that query height $l$ nodes, for $l = 2,\ldots,h$. \[t:thrifFourtwo\] Every nondeterministic thrifty branching program solving [$BT_2^4(k)$]{} has $\Omega(k^3)$ states. As in the proof of the previous theorem we restrict attention to inputs $I$ in which the function $f_i$ associated with each internal node $i$ (except $i=1$) satisfies $f_i(x,y)=0$ except possibly when $x,y$ are the values of its children. For $r,s\in[k]$ let $E^{r,s}$ be the set of all such inputs $I$ such that for all $j\in\{4,5,6,7\}$, $v^I_{2j}=r$ and $v^I_{2j+1}=s$ (i.e. each pair of sibling leaves has values $r,s$), and $f_1$ is identically 1 (so $I$ is a YES instance). Thus $I$ is determined by the values of its 6 middle nodes $\{2,3,4,5,6,7\}$, so $$|E^{r,s}|= k^6.$$ Let $B$ be a nondeterministic thrifty branching program that solves [$BT_2^4(k)$]{}, and let $\Gamma$ be the set of states of $B$ which query one of the nodes $4,5,6,7$. We will show $|\Gamma| = \Omega(k^3)$. For $r,s\in [k]$ let $\Gamma^{r,s}$ be the set of states of $\Gamma$ that query $f_j(r,s)$ for some $j\in\{4,5,6,7\}$. We will show $$\label{e:GamLB} |\Gamma^{r,s}| +1 \ge k/\sqrt{3}.$$ Since $\Gamma$ is the disjoint union of $\Gamma^{r,s}$ for all $r,s\in [k]$, it will follow that $|\Gamma|=\Omega(k^3)$ as required. For each $I\in E^{r,s}$ let ${\cal C}(I)$ be an accepting computation of $B$ on input $I$. Let $t^I_1$ be the first time during ${\cal C}(I)$ that the root $f_1$ is queried. Let $\gamma^I$ be the last state in $\Gamma^{r,s}$ before $t^I_1$ in ${\cal C}(I)$ (or the initial state $\gamma_0$ if there is no such state) and let $\delta^I$ be the first state in $\Gamma^{r,s}$ after $t^I_1$ (or the ACCEPT state $\delta_{acc}$ if there is no such state).
We associate with each $I\in E^{r,s}$ a tuple $$U(I) = (u,\gamma^I,\delta^I,x_1,x_2,x_3,x_4)$$ where $u\in\{1,2,3\}$ is a tag, and $x_1,x_2,x_3,x_4$ are in $[k]$ and are chosen so that $U(I)$ uniquely determines $I$ (by determining the values of all 6 middle nodes). Specifically, $x_1 = v^I_i$, where $i$ is the node queried by $\gamma^I$ (or $i=4$ if $\gamma^I=\gamma_0$). We partition $E^{r,s}$ into three sets $E^{r,s}_1,E^{r,s}_2,E^{r,s}_3$ according to which of the nodes $v_2,v_3$ the computation ${\cal C}(I)$ queries during the segment of the computation between $\gamma^I$ and $\delta^I$. (The tag $u$ tells us that $I$ lies in set $E^{r,s}_u$.) Let node $j\in\{2,3\}$ be the parent of node $i$ (where $i$ is defined above) and let $j'\in\{2,3\}$ be the sibling of $j$. - $E^{r,s}_1$ consists of those inputs $I$ for which ${\cal C}(I)$ queries neither $v_2$ nor $v_3$. - $E^{r,s}_2$ consists of those inputs $I$ for which ${\cal C}(I)$ queries $v_{j'}$. - $E^{r,s}_3$ consists of those inputs $I$ for which ${\cal C}(I)$ queries $v_j$ but not $v_{j'}$. To complete the definition of $U(I)$ we need only specify the meaning of $x_2,x_3,x_4$. Let ${\cal S}(I)$ denote the segment of the computation ${\cal C}(I)$ between $\gamma^I$ and $\delta^I$ (not counting the action of the last state $\delta^I$). This segment always queries the root $f_1(v_2,v_3)$, but does not query any of the nodes $4,5,6,7$ except that $\gamma^I$ may query node $i$. The idea is that the segment ${\cal S}(I)$ will determine (using the definition of [*thrifty*]{}) the values of (at least) two of the six middle nodes, and $x_1,x_2,x_3,x_4$ will specify the remaining four values. We require that $x_1,x_2,x_3,x_4$ specify the value of any node (except the root) that is queried during the segment; on the other hand, the state that queries a node determines the values of that node's children (by the thrifty property), so those values need not be specified.
In case the tag $u=1$, the computation queries $f_1(v_2,v_3)$, and hence determines $v_2,v_3$, so $x_1,x_2,x_3,x_4$ specify the four values $v_4,v_5,v_6,v_7$. In case $u=2$, the computation queries $f_{j'}$ at the values of its children, so $x_1,x_2,x_3,x_4$ do not specify the values of these children, but instead specify $v_2,v_3$. In case $u=3$, $x_1,x_2,x_3,x_4$ do not specify the value of the sibling of node $i$ and do not specify $v_{j'}$, but do specify $v_j$ and the values of the other level 2 nodes. [**Claim:**]{} If $I,J\in E^{r,s}$ and $U(I) = U(J)$, then $I=J$. Inequality (\[e:GamLB\]) (and hence the theorem) follows from the Claim, because if $|\Gamma^{r,s}|+1< k/\sqrt{3}$ then there would be fewer than $k^6$ choices for $U(I)$ as $I$ ranges over the $k^6$ inputs in $E^{r,s}$. To prove the Claim, suppose $U(I)=U(J)$ but $I\ne J$. Then we can define an accepting computation on input $I$ which violates the definition of thrifty. Namely, follow the computation ${\cal C}(I)$ up to $\gamma^I$. Now follow the segment of ${\cal C}(J)$ between $\gamma^I$ and $\delta^I$, and complete the computation by following ${\cal C}(I)$. Notice that the segment of ${\cal C}(J)$ never queries any of the nodes $4,5,6,7$ except for $v_i$, and $U(I) = U(J)$ (together with the definition of $E^{r,s}$) specifies the values of the other nodes that it queries. However, since $I\ne J$, this segment of ${\cal C}(J)$ with input $I$ will violate the definition of [*thrifty*]{} while querying at least one of the three nodes $v_1,v_2,v_3$. Conclusion {#s:conclu} ========== The Thrifty Hypothesis (page ) states that thrifty branching programs are optimal among $k$-way BPs solving [$FT_d^h(k)$]{}. For the deterministic case, this says that the black pebbling method is optimal. Proving this would separate [$\mathbf{L}$]{} from [$\mathbf{P}$]{} (Corollary \[c:thegoal\]).
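The Claim yields (\[e:GamLB\]) by counting: a tuple $U(I)=(u,\gamma^I,\delta^I,x_1,x_2,x_3,x_4)$ admits $3$ tags, at most $|\Gamma^{r,s}|+1$ choices for each of $\gamma^I$ and $\delta^I$ (the $+1$ accounting for $\gamma_0$ and $\delta_{acc}$), and $k^4$ choices for the $x$'s, so $|\Gamma^{r,s}|+1 < k/\sqrt{3}$ would give fewer than $3\cdot(k/\sqrt{3})^2\cdot k^4 = k^6$ tuples for the $k^6$ inputs. A quick numeric sanity check (Python, with an arbitrary sample $k$; the chosen value of $g$ is a hypothetical state count just below the threshold):

```python
import math

k = 1000
g = math.floor(k / math.sqrt(3)) - 1   # hypothetical |Gamma^{r,s}| with g + 1 < k/sqrt(3)
tuples = 3 * (g + 1) ** 2 * k ** 4     # upper bound on the number of tuples U(I)
assert g + 1 < k / math.sqrt(3)
assert tuples < k ** 6                 # too few tuples for the k^6 inputs in E^{r,s}
```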
Even disproving this would be interesting, since it would show that one can improve upon this obvious application of pebbling. The next important step is to extend the tight branching program bounds given in Corollary \[c:HtThree\] for height 3 trees to height 4 trees. The upper bound given in Theorem \[t:BPUpper\] (\[e:dFUpper\]) for the height 4 function problem $FT^4_d(k)$ for deterministic BPs is $O(k^{3d-2})$. If we could match this with a similar lower bound when $d=4$ (e.g. by using a variation of the state sequence method in Section \[s:beating\]) this would yield $\Omega(k^{10})$ states for the function problem and hence (by Lemma \[l:FvsB\]) $\Omega(k^9)$ states for the Boolean problem $BT^4_4(k)$. This would break the Neciporuk $\Omega(n^2)$ barrier for branching programs (see Section \[s:NecLB\]). For nondeterministic BPs, the upper bound given by Theorem \[t:BPUpper\] for the Boolean problem for height 4 trees is $O(k^{2d-1})$. This comes from the upper bound on fractional pebbling given in Theorem \[t:daryFract\], which we suspect is optimal for $h=4$ and degree $d=3$. The corresponding lower bound for nondeterministic BPs for $BT^4_3(k)$ would be $\Omega(k^5)$. A proof would break the Neciporuk $\Omega(n^{3/2})$ barrier for nondeterministic BPs. Other (perhaps more accessible) open problems are to generalize Theorem \[t:thrifFourtwo\] to get general lower bounds for nondeterministic thrifty BPs solving $BT^h_2(k)$, and to improve Theorem \[t:daryFract\] to get tight bounds on the number of pebbles required to fractionally pebble $T^h_d$. The proof of Theorem \[t:detThriftLB\], which states that deterministic thrifty BPs require at least $k^h$ states to solve $BT_2^{h}(k)$, is taken from [@wehr]. That paper also proves the same lower bound for the more general class of ‘less-thrifty’ BPs, which are allowed to query $f_i(a,b)$ provided that either $(a,b)$ correctly specify the values of both children of $i$, or neither $a$ nor $b$ is correct. 
[@wehr] also calculates $(k+1)^h$ as the exact number of states required to solve $FT_2^h(k)$ using the black pebbling method, and proves this is optimal when $h=2$. So far we have not been able to beat this BP upper bound by even one state, for any $h$ and any $k$ using any method. That this bound might actually be unbeatable (at least for all $h$ and all sufficiently large $k$) makes an intriguing hypothesis. [**Acknowledgment**]{} James Cook played a helpful role in the early parts of this research. [^1]: We thank Yann Strozecki, who posed this question [^2]: The reason for this is quite technical: Klawe’s definition of pebbling is slightly different from ours in that it requires that the root remain pebbled. Adding a new root forces there to be a time when all $c$ of the height $h$ nodes, which represent the root of $T_d^h$, are pebbled. Adding one more pebble to $G_{d,h}$ changes the relationship between the cost of pebbling $T_d^h$ and the cost of pebbling $G_{d,h}$ by a negligible amount. [^3]: See after this code for argument that $|v^*(D)| \le k^{|{\mathsf{Vars}}| - |{{\sf Dom}}(v^*)|}$.
--- abstract: 'Let $q$ be a large prime, and $\chi$ the quadratic character modulo $q$. Let $\phi$ be a self-dual Hecke–Maass cusp form for $SL(3,{\ensuremath{\mathbb{Z}}})$, and $u_j$ a Hecke–Maass cusp form for $\Gamma_0(q)\subseteq SL(2,{\ensuremath{\mathbb{Z}}})$ with spectral parameter $t_j$. We prove the hybrid subconvexity bounds for the twisted $L$-functions $$L(1/2,\phi\times u_j\times\chi)\ll_{\phi,{\varepsilon}} (qt_j)^{3/2-\theta+{\varepsilon}},\quad L(1/2+it,\phi\times\chi)\ll_{\phi,{\varepsilon}} (qt)^{3/4-\theta/2+{\varepsilon}},$$ for any ${\varepsilon}>0$, where $\theta=1/23$ is admissible.' address: - | School of Mathematics\ Shandong University\ Jinan\ Shandong 250100\ China - | Current address: Department of Mathematics\ Columbia University\ New York\ NY 10027\ USA author: - Bingrong Huang bibliography: - 'hbrbib.bib' title: 'Hybrid subconvexity bounds for twisted $L$-functions on $GL(3)$' --- Introduction {#sec: Introduction} ============ Bounding $L$-functions on their critical lines is one of the central problems in analytic number theory. For $GL(1)$ $L$-functions, subconvexity bounds are due to Weyl [@weyl1921abschatzung] in the $t$-aspect, and Burgess [@burgess1963character] in the $q$-aspect. Hybrid bounds for Dirichlet $L$-functions were given by Heath-Brown [@heath1978hybrid; @heath1980hybrid]. For $GL(2)$ $L$-functions, in the weight aspect, this was achieved by Peng [@peng2001zeros]. In the conductor aspect, Conrey–Iwaniec [@conrey2000cubic] used the cubic moment to give a strong subconvexity bound. And recently, Young [@young2014weyl] generalized their method to obtain Weyl-type hybrid subconvexity bounds for twisted $L$-functions. In the level aspect, this was first given by Duke–Friedlander–Iwaniec [@duke1994bounds]. Subconvexity bounds for Rankin–Selberg $L$-functions on $GL(2)\times GL(2)$ were known due to Sarnak [@sarnak2001estimates], Kowalski–Michel–Vanderkam [@kowalski2002rankin], and Lau–Liu–Ye [@lau2006new], etc.
Now for $L$-functions on $GL(1)$ and $GL(2)$, this was solved completely, due to the work of Michel–Venkatesh [@michel2010subconvexity] and many other important contributions on the way. For $GL(3)$ $L$-functions, Li [@li2011bounds] gave the first subconvexity bound in the $t$-aspect for self-dual forms. Recently, Mckee–Sun–Ye [@mckee2015improved] improved Li’s results. Blomer [@blomer2012subconvexity] considered the conductor aspect for twisted $L$-functions on $GL(3)$. On the other hand, in a series of papers [@munshi2015circle2; @munshi2015circle3; @munshi2015circle4], Munshi used the circle method and the $GL(3)$ Voronoi formula to give subconvexity bounds. So far, there are mainly two methods to solve the subconvexity problem for $GL(3)$ $L$-functions: the moment method and the circle method. They work in different situations. In this paper, we consider certain types of twisted $L$-functions of degree $3$ and $6$ in both $q$ and $t$ aspects. More precisely, let $q$ be a large prime, and $\chi$ the primitive quadratic character modulo $q$. Let $u_j$ be an even Hecke–Maass cusp newform with spectral parameter $t_j$ of level $q'|q$. We denote the Hecke eigenvalues by $\lambda_j(n)$. Let $\phi$ be a self-dual Hecke–Maass form of type $(\nu,\nu)$ for $SL(3,{\ensuremath{\mathbb{Z}}})$, with Fourier coefficients $A(m,n)=A(n,m)$, normalized so that the first Fourier coefficient $A(1,1)=1$. We define the $L$-function $$\label{eqn: L(s,phi)} L(s,\phi) = \sum_{n=1}^{\infty} \frac{A(1,n)}{n^s},$$ for ${\operatorname{Re}}(s)>1$. The twisted $L$-function $$\label{eqn: L(s,phi.chi)} L(s,\phi\times\chi) = \sum_{n=1}^{\infty} \frac{A(1,n)\chi(n)}{n^s}$$ is defined for ${\operatorname{Re}}(s)>1$, and can be continued to an entire function with a functional equation of conductor $q^3$.
Similarly, we define the Rankin–Selberg $L$-function $$\label{eqn: L(s,phi.u.chi)} L(s,\phi\times u_j\times\chi) = \sum_{m\geq1}\sum_{n\geq1}\frac{A(m,n)\lambda_j(n)\chi(n)}{(m^2n)^s},$$ for ${\operatorname{Re}}(s)>1$; it can be continued to an entire function with conductor $q^6$. Our main result is \[thm: main\] With notation as above, we have $$L(1/2,\phi\times u_j\times\chi) \ll_{\phi,{\varepsilon}} (qt_j)^{3/2-\theta+{\varepsilon}},$$ and $$L(1/2+it,\phi\times\chi) \ll_{\phi,{\varepsilon}} (qt)^{3/4-\theta/2+{\varepsilon}},$$ for any ${\varepsilon}>0$, where $\theta=(35-\sqrt{1057})/56$. In order to prove Theorem \[thm: main\], we will use two different methods to show the following two theorems. And with some modifications, we will give the proof of Theorem \[thm: main\] at the end of §\[sec: MT\]. \[thm: q\] With notation as above, we have $$L(1/2,\phi\times u_j\times\chi) \ll_{\phi,{\varepsilon}} q^{5/4+{\varepsilon}}t_j^{3/2+{\varepsilon}},$$ and $$L(1/2+it,\phi\times\chi) \ll_{\phi,{\varepsilon}} q^{5/8+{\varepsilon}}t^{3/4+{\varepsilon}},$$ for any ${\varepsilon}>0$. and \[thm: t\] With notation as above, we have $$L(1/2,\phi\times u_j\times\chi) \ll_{\phi,{\varepsilon}} q^{4+{\varepsilon}}t_j^{4/3+{\varepsilon}},$$ and $$L(1/2+it,\phi\times\chi) \ll_{\phi,{\varepsilon}} q^{2+{\varepsilon}}t^{2/3+{\varepsilon}},$$ for any ${\varepsilon}>0$. Theorem \[thm: main\] gives the first hybrid subconvexity bound for $GL(3)$ $L$-functions. Note that the convexity bound for $L(1/2,\phi\times u_j\times\chi)$ is $(qt_j)^{3/2+{\varepsilon}}$, and for $L(1/2+it,\phi\times\chi)$ is $(qt)^{3/4+{\varepsilon}}$. Theorem \[thm: q\], which is a generalization of Blomer’s results in [@blomer2012subconvexity], is crucial, since the bounds there are subconvex in the $q$-aspect but only convex in the $t$-aspect.
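As a quick numeric cross-check of the exponent (not part of the proof): the abstract states that $\theta=1/23$ is admissible, which is consistent with the theorem's value $\theta=(35-\sqrt{1057})/56\approx 0.0444$ being slightly larger than $1/23\approx 0.0435$. In Python:

```python
import math

theta = (35 - math.sqrt(1057)) / 56   # the exponent saving in Theorem [thm: main]
assert theta > 1 / 23                 # so the weaker value theta = 1/23 is admissible
assert abs(theta - 0.0444) < 1e-3
```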
So any bound which is subconvex in terms of $t$ and of polynomial growth in terms of $q$ is sufficient to get a hybrid subconvexity bound by combining with Theorem \[thm: q\]. Theorem \[thm: t\] is a generalization of Li’s results in [@li2011bounds] and Mckee–Sun–Ye’s improvements in [@mckee2015improved]. Let $f$ be a weight $2k$ holomorphic modular form for $\Gamma_0(q)$. One may prove $$L(1/2,\phi\times f\times \chi)\ll_{\phi,{\varepsilon}} (qk)^{3/2-\theta+{\varepsilon}}.$$ The proof of the above result is similar to Theorem \[thm: main\], see Li [@li2011bounds Appendix] for example. One can also think about the hybrid subconvexity bounds for $GL(3)$ $L$-functions in other cases, such as Munshi [@munshi2013bounds1; @munshi2013bounds3; @munshi2015circle4]. We will return to these topics elsewhere. We end the introduction with a brief outline of the proof of our theorems. In our work, we will assume $q\ll T^B$, for some fixed $B>0$. Note that Blomer’s method showed an upper bound of the form $T^A q^{5/4+{\varepsilon}}$. To prove our theorems, the basic idea is similar to Li [@li2011bounds] and Blomer [@blomer2012subconvexity]. We consider the average of $L(1/2,\phi\times u_j\times \chi)$ over the spectrum of the Laplacian on $\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}}$, see Propositions \[prop: q\] and \[prop: t\] below. Then our results follow from a theorem of Lapid [@lapid2003nonnegativity], which shows that $L(1/2,\phi\times u_j\times \chi)$ is always a non-negative real number. (We can drop all but one term to obtain an individual bound; similarly for $L(1/2+it,\phi\times\chi)$.) To prove Proposition \[prop: q\], which is strong in the $q$-aspect, after applying the approximate functional equations for the Rankin–Selberg $L$-functions, the $GL(2)$ Kuznetsov formula, and the $GL(3)$ Voronoi formula, we are led to bound ${\mathcal{S}}_\sigma(q,N;\delta)$, see .
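To see how the two theorems complement each other, one can compare each bound with the convexity bound $(qt)^{3/2+\varepsilon}$ on a log scale: Theorem \[thm: q\] beats convexity for every large $q$ (its saving is purely in the $q$-aspect), while a comparison of exponents suggests Theorem \[thm: t\] is subconvex roughly when $t > q^{15}$. This regime reading is our gloss on the statements, not a claim from the paper; the numeric sketch below uses arbitrary sample values and ignores $\varepsilon$ and implied constants.

```python
import math

def conv(q, t):   # log of the convexity bound (q*t)^(3/2)
    return 1.5 * (math.log(q) + math.log(t))

def thm_q(q, t):  # log of q^(5/4) * t^(3/2), as in Theorem [thm: q]
    return 1.25 * math.log(q) + 1.5 * math.log(t)

def thm_t(q, t):  # log of q^4 * t^(4/3), as in Theorem [thm: t]
    return 4 * math.log(q) + (4 / 3) * math.log(t)

q, t = 10**6, 10**3
assert thm_q(q, t) < conv(q, t)    # strong when q is large relative to t
q, t = 10**2, 10**40
assert thm_t(q, t) < conv(q, t)    # strong when t >> q (roughly t > q^15)
assert min(thm_q(q, t), thm_t(q, t)) < conv(q, t)
```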
To estimate ${\mathcal{S}}_\sigma(q,N;\delta)$, we will use the hybrid large sieve inequality and many results from Conrey–Iwaniec [@conrey2000cubic], Blomer [@blomer2012subconvexity], and Young [@young2014weyl]. This approach is inspired by Young [@young2014weyl]. However, this alone will not give us subconvexity bounds in both the $q$- and $t$-aspects. In order to prove results as in Theorem \[thm: main\], we still need to handle the case where $q$ is much smaller than $t$. That is, we need a result such as Theorem \[thm: t\], which is strong in the $t$-aspect and will follow from Proposition \[prop: t\]. Now, to prove Proposition \[prop: t\], it turns out that Li’s method still works. The key point here is that we can apply the Voronoi formula a second time. To get a better bound, we will also use an $n$th-order asymptotic expansion of a weighted stationary phase integral, as Mckee–Sun–Ye [@mckee2015improved] did. Throughout the paper, $e(x)$ means $e^{2\pi ix}$, negligible means $O(T^{-A})$ for any $A>0$, and ${\varepsilon}$ is an arbitrarily small positive number which may not be the same at each occurrence. Preliminaries ============= In this section, we introduce notation and recall some standard facts about automorphic forms on $GL(2)$ and $GL(3)$. Automorphic forms ----------------- We start by reviewing automorphic forms for $\Gamma_0(q)$. Let $\mathbb{H}$ be the upper half-plane. Let ${\mathcal{A}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}})$ denote the space of automorphic functions of weight zero, i.e., the functions $f:{\ensuremath{\mathbb{H}}}\rightarrow{\ensuremath{\mathbb{C}}}$ which are $\Gamma_0(q)$-periodic.
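As a concrete illustration of the $\Gamma_0(q)$-action just mentioned (a sanity check with sample numbers, using only the standard identity ${\operatorname{Im}}(\gamma z)={\operatorname{Im}}(z)/|cz+d|^2$ for determinant-one real matrices):

```python
def moebius(a, b, c, d, z):
    """Action of the real matrix [[a, b], [c, d]] with ad - bc = 1 on z."""
    return (a * z + b) / (c * z + d)

# a sample matrix in Gamma_0(5): lower-left entry divisible by 5, determinant 1
a, b, c, d = 2, 1, 5, 3
assert a * d - b * c == 1 and c % 5 == 0

z = 0.3 + 1.7j
w = moebius(a, b, c, d, z)
assert w.imag > 0  # the upper half-plane is preserved
# Im(gamma z) = Im(z)/|cz + d|^2, the identity behind invariance of y^{-2} dx dy
assert abs(w.imag - z.imag / abs(c * z + d) ** 2) < 1e-12
```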
Let ${\mathcal{L}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}})$ denote the subspace of square-integrable functions with respect to the inner product $$\label{eqn: inner product} {\langle}f,g {\rangle}= \int_{\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}}} f(z)\overline{g(z)}d\mu z,$$ where $d\mu z = y^{-2}dxdy$ is the invariant measure on ${\ensuremath{\mathbb{H}}}$. The Laplace operator $${\Delta}= -y^2\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right)$$ acts in the dense subspace of smooth functions in ${\mathcal{L}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}})$ such that $f$ and ${\Delta}f$ are both bounded; it has a self-adjoint extension which yields the spectral decomposition $${\mathcal{L}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}}) = {\ensuremath{\mathbb{C}}}\oplus{\mathcal{C}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}})\oplus{\mathcal{E}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}}).$$ Here ${\ensuremath{\mathbb{C}}}$ is the space of constant functions, ${\mathcal{C}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}})$ is the space of cusp forms and ${\mathcal{E}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}})$ is the space of Eisenstein series. We choose an orthonormal basis ${\mathcal{B}}(q)$ of even Hecke–Maass forms of level $q$ as follows: for each even newform $u_j$ of level $q'|q$ we choose an orthonormal basis ${\mathcal{V}}(u_j)$ of the space generated by $\{u_j(dz):d|(q/q')\}$ containing $u_j/\|u_j\|$, and let ${\mathcal{B}}(q)$ be the union of all ${\mathcal{V}}(u_j)$ for $u_j$ ranging over the newforms of level dividing $q$. Let ${\mathcal{B}}^*(q)$ be the subset of all newforms in ${\mathcal{B}}(q)$. 
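The power function $y^s$ (the building block of the Eisenstein series below) is an eigenfunction of ${\Delta}$ with eigenvalue $s(1-s)$; a small finite-difference check (illustrative only) confirms this:

```python
def laplace_quotient(s, y, h=1e-3):
    """Approximate (Delta f)/f at the point iy for f(z) = y^s.

    Since y^s does not depend on x, Delta f = -y^2 * d^2f/dy^2,
    and the quotient should equal s(1 - s)."""
    f = lambda t: t ** s
    fyy = (f(y + h) - 2 * f(y) + f(y - h)) / h ** 2
    return -y ** 2 * fyy / f(y)

s = 0.3
assert abs(laplace_quotient(s, 2.0) - s * (1 - s)) < 1e-5
```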
Each $u_j\in{\mathcal{B}}(q)$ with spectral parameter $t_j$ has a Fourier expansion $$u_j(z) = \sum_{n\neq0} \rho_j(n)W_{s_j}(nz),$$ where $W_s(z)$ is the $GL(2)$ Whittaker function given by $$W_s(z) := 2|y|^{1/2} K_{s-1/2}(2\pi|y|)e(x),$$ and $K_s(y)$ is the $K$-Bessel function with $s=1/2+it$. The Hecke operators act on $u_j$ by $$\label{eqn: HO} (T_n u_j)(z) := \frac{1}{\sqrt{n}}\sum_{ad=n} \sum_{b({\operatorname{mod}\ }d)}u_j\left(\frac{az+b}{d}\right) = \lambda_j(n)u_j(z),$$ for all $n$ with $(n,q)=1$. We have $$\rho_j(\pm n) = \rho_j(\pm1)\lambda_j(n)n^{-1/2},$$ for $n>0$. Moreover, the reflection operator $R$ defined by $(Ru_j)(z)=u_j(-\bar{z})$ commutes with $\Delta$ and all $T_n$, so that we can also require $$\label{eqn: Ru_j} Ru_j=\epsilon_j u_j.$$ Since $R$ is an involution, the space ${\mathcal{C}}(\Gamma_0(q){\backslash}{\ensuremath{\mathbb{H}}})$ splits into the subspaces of even and odd cusp forms according to whether $\epsilon_j=1$ or $\epsilon_j=-1$. We define $$\label{eqn: omega_j} \omega_j := \frac{4\pi}{\cosh(\pi t_j)}|\rho_j(1)|^2.$$ By [@iwaniec1990small Theorem 2], we have $$\label{eqn: omega^*_j} \omega_j^*:=\frac{4\pi}{\cosh(\pi t_j)}\sum_{f\in{\mathcal{V}}(u_j)}|\rho_f(1)|^2 \gg q^{-1}(qt_j)^{-{\varepsilon}}.$$ The Eisenstein series $E_{\mathfrak{a}}(z,s)$ is defined by $$E_{\mathfrak{a}}(z,s) := \sum_{\gamma\in\Gamma_{\mathfrak{a}}{\backslash}\Gamma_0(q)} {\operatorname{Im}}(\sigma_{\mathfrak{a}}^{-1}\gamma z)^s.$$ It has the Fourier expansion $$E_{\mathfrak{a}}(z,s) = \delta_{\mathfrak{a}}y^s+\varphi_{\mathfrak{a}}(s)y^{1-s}+\sum_{n\neq0}\varphi_{\mathfrak{a}}(n,s)W_s(nz),$$ where $\delta_{\mathfrak{a}}=1$ if ${\mathfrak{a}}\sim\infty$, and $\delta_{\mathfrak{a}}=0$ otherwise. Let $$\label{eqn: eta(n,s)} \eta(n,s) = \sum_{ad=|n|}\left(\frac{a}{d}\right)^{s-1/2}.$$ The Eisenstein series $E_{\mathfrak{a}}(z,s)$ is even, and we have $$T_n E_{\mathfrak{a}}(z,s) = \eta(n,s)E_{\mathfrak{a}}(z,s),$$ if $(n,q)=1$.
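The eigenvalues $\eta(n,s)$ satisfy the same Hecke multiplicativity relation as the cuspidal eigenvalues $\lambda_j(n)$, namely $\eta_t(m)\eta_t(n)=\sum_{d\mid(m,n)}\eta_t(mn/d^2)$. A quick numerical check of this standard identity (illustrative only):

```python
import cmath
import math

def eta(n, t):
    """eta(n, 1/2 + it) = sum over factorizations a*d = n of (a/d)^{it}."""
    return sum(cmath.exp(1j * t * math.log(a / (n // a)))
               for a in range(1, n + 1) if n % a == 0)

def hecke_rhs(m, n, t):
    """sum over d | (m, n) of eta(m*n/d^2, t)."""
    g = math.gcd(m, n)
    return sum(eta(m * n // (d * d), t) for d in range(1, g + 1) if g % d == 0)

t = 0.7
for m, n in [(6, 10), (4, 9), (12, 18)]:
    assert abs(eta(m, t) * eta(n, t) - hecke_rhs(m, n, t)) < 1e-9
```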
Write $\eta_t(n)=\eta(n,1/2+it)$ and $E_{{\mathfrak{a}},t}(z)=E_{\mathfrak{a}}(z,1/2+it)$. We define $$\label{eqn: omega(t)} \omega_{\mathfrak{a}}(t) := \frac{4\pi}{\cosh(\pi t)}|\varphi_{\mathfrak{a}}(1,1/2+it)|^2.$$ And, by [@conrey2000cubic p. 1188], we have $$\label{eqn: omega^*(t)} \omega^*(t):=\sum_{{\mathfrak{a}}}\omega_{\mathfrak{a}}(t) \gg q^{-1-{\varepsilon}}\min(|t|^{-{\varepsilon}},|t|^2).$$ Now we recall some background on Maass forms for $SL(3,{\ensuremath{\mathbb{Z}}})$. We will follow the notation in Goldfeld’s book [@goldfeld2006automorphic]. Let $\phi$ be a Maass form of type $(\nu_1,\nu_2)$. We have the following Fourier–Whittaker expansion $$\label{eqn: phi FE} \phi(z) = \sum_{\gamma\in U_2({\ensuremath{\mathbb{Z}}}){\backslash}\Gamma_0(q)} \sum_{m_1=1}^{\infty} \sum_{m_2\neq0} \frac{A(m_1,m_2)}{m_1|m_2|} W_J\left(M\begin{pmatrix} \gamma & \\ & 1 \end{pmatrix}z,\nu_1,\nu_2,\psi_{1,1}\right),$$ where $U_2({\ensuremath{\mathbb{Z}}})$ is the group of $2\times2$ upper triangular matrices with integer entries and one on the diagonal, $W_J(z,\nu_1,\nu_2,\psi_{1,1})$ is the Jacquet–Whittaker function, and $M={\operatorname{diag}}(m_1|m_2|,m_1,1)$. From now on, let $\phi$ be a self-dual Hecke–Maass form of type $(\nu,\nu)$ for $SL(3,{\ensuremath{\mathbb{Z}}})$, normalized to have the first Fourier coefficient $A(1,1)=1$. For later purposes, we record the Hecke relation $$\label{eqn: HR} A(m,n) = \sum_{d|(m,n)}\mu(d)A\left(\frac{m}{d},1\right)A\left(1,\frac{n}{d}\right).$$ Moreover, the Rankin–Selberg theory implies the bound $$\label{eqn: RS bound} \sum_{n\ll x}|A(1,n)|^2 \ll x,$$ for all $x\geq1$. We will also need the following estimate (see Blomer [@blomer2012subconvexity Eq. 
(10) and (11)]) $$\label{eqn: blomer} \sum_{n\leq x}|A(na,b)|^2 \ll x(ab)^{7/16+{\varepsilon}}, \quad \textrm{and}\quad \sum_{n\leq x}|A(na,b)| \ll x(ab)^{7/32+{\varepsilon}}.$$ $L$-functions and the approximate functional equations ------------------------------------------------------ The $L$-function attached to $\phi$ is $L(s,\phi)=\sum_{n=1}^{\infty}A(1,n)n^{-s}$, and the completed $L$-function is given by $$\Lambda(s,\phi) = \pi^{-3s/2}\prod_{j=1}^{3}\Gamma\left(\frac{s-\alpha_j}{2}\right) L(s,\phi),$$ where $\alpha_1=3\nu-1$, $\alpha_2=0$, and $\alpha_3=1-3\nu$. The $L$-function attached to the twist $\phi\times\chi$ is $$L(s,\phi\times\chi)=\sum_{n=1}^{\infty}\frac{A(1,n)\chi(n)}{n^s},$$ whose completed version is $$\label{eqn: Lambda(s,phi.chi)} \Lambda(s,\phi\times\chi) = q^{3s/2}L_\infty(s,\phi\times\chi)L(s,\phi\times\chi),$$ where $$L_\infty(s,\phi\times\chi) = \pi^{-3s/2}\prod_{j=1}^{3}\Gamma\left(\frac{s+\delta-\alpha_j}{2}\right),$$ with $\delta=0$ or $1$ according to whether $\chi(-1)=1$ or $-1$. Then $\Lambda(s,\phi\times\chi)$ is entire, and its functional equation is $$\Lambda(s,\phi\times\chi)=\Lambda(1-s,\phi\times\chi).$$ Note that the root number of $\Lambda(s,\phi\times\chi)$ is 1. Next we consider the Rankin–Selberg convolution of $\phi$ with $u_j\times\chi$ given by $$L(s,\phi\times u_j\times\chi) = \sum_{m,n=1}^{\infty}\frac{A(m,n)\lambda_j(n)\chi(n)}{(m^2n)^s}.$$ The completed version of $L(s,\phi\times u_j\times\chi)$ is $$\Lambda(s,\phi\times u_j\times\chi) = q^{3s} L_\infty(s,\phi\times u_j\times\chi)L(s,\phi\times u_j\times\chi),$$ where $$L_\infty(s,\phi\times u_j\times\chi) = \pi^{-3s} \prod_{\pm}\prod_{j=1}^{3} \Gamma\left(\frac{s+\delta\pm it_j-\alpha_j}{2}\right).$$ The function $\Lambda(s,\phi\times u_j\times\chi)$ is entire, and its functional equation is $$\Lambda(s,\phi\times u_j\times\chi)=\Lambda(1-s,\phi\times u_j\times\chi).$$ Note that again the root number is 1.
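For orientation (a standard heuristic, not needed in the proofs): the completed $L$-functions above show that $L(s,\phi\times u_j\times\chi)$ has degree $6$ with analytic conductor of size $q^6(1+|t_j|)^6$, while $L(s,\phi\times\chi)$ has degree $3$ with analytic conductor of size $q^3(1+|t|)^3$, and the Phragmén–Lindelöf convexity principle then recovers the convexity bounds quoted in the introduction:

$$L(1/2,\phi\times u_j\times\chi) \ll \left(q^6t_j^6\right)^{1/4+{\varepsilon}} = (qt_j)^{3/2+{\varepsilon}}, \qquad L(1/2+it,\phi\times\chi) \ll \left(q^3|t|^3\right)^{1/4+{\varepsilon}} = (q|t|)^{3/4+{\varepsilon}}.$$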
Finally, we consider the convolution with the Eisenstein series, defined by $$L(s,\phi\times E_{{\mathfrak{a}},t}\times\chi) = \sum_{m,n=1}^{\infty}\frac{A(m,n)\eta_t(n)\chi(n)}{(m^2n)^s}.$$ By the definition of $\eta_t(n)$ and a comparison of Euler products, we see that $$L(s,\phi\times E_{{\mathfrak{a}},t}\times\chi) = L(s+it,\phi\times\chi)L(s-it,\phi\times\chi).$$ Now we consider the approximate functional equations for $L(s,\phi\times u_j\times\chi)$ and $L(s,\phi\times E_{{\mathfrak{a}},t}\times\chi)$. We use the results from Blomer [@blomer2012subconvexity §2]. Let $$\label{eqn: G(u)} G(u) = e^{u^2}.$$ We have the following approximate functional equations (see [@iwaniec2004analytic Theorem 5.3]). \[lemma: AFE\] We have $$\label{eqn: AFE cusp} L(1/2,\phi\times u_j\times\chi) = 2\sum_{m,n=1}^{\infty}\frac{A(m,n)\lambda_j(n)\chi(n)}{(m^2n)^{1/2}}V_{t_j}\left(\frac{m^2n}{q^3}\right),$$ where $$V_t(y) = \frac{1}{2\pi i}\int_{(3)}(\pi^3y)^{-u}\prod_\pm \prod_{j=1}^{3} \frac{\Gamma\left(\frac{1/2+u+\delta\pm it-\alpha_j}{2}\right)} {\Gamma\left(\frac{1/2+\delta\pm it-\alpha_j}{2}\right)} G(u)\frac{du}{u}.$$ And similarly, we have $$\label{eqn: AFE eis} L(1/2,\phi\times E_{{\mathfrak{a}},t}\times\chi) = 2\sum_{m,n=1}^{\infty}\frac{A(m,n)\eta_t(n)\chi(n)}{(m^2n)^{1/2}}V_t\left(\frac{m^2n}{q^3}\right).$$ The function $V_t(y)$ has the following properties, which effectively limit the sums in these two approximate functional equations to $m^2n\ll (q(1+|t_j|))^{3+{\varepsilon}}$ and $m^2n\ll(q(1+|t|))^{3+{\varepsilon}}$ respectively. Note that the second part of the following lemma allows us to separate the variables $t$ and $y$ in $V_t(y)$; moreover, the $u$-integral can then be handled easily. In our later application, we will take $U=\log^2(qT)$.
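The choice $G(u)=e^{u^2}$ makes the integrand in $V_t(y)$ decay extremely fast on vertical lines: $|G({\varepsilon}+iv)|=e^{{\varepsilon}^2-v^2}$, which is why truncating the contour at height $U=\log^2(qT)$ only costs an error of size $e^{-U}$, i.e. negligible. A small numerical check (illustrative only):

```python
import math

# |G(eps + iv)| = exp(eps^2 - v^2) for G(u) = exp(u^2)
eps = 0.01
for v in (5.0, 10.0, 20.0):
    assert math.exp(eps ** 2 - v ** 2) < math.exp(-v ** 2 / 2)

# already at height |v| = 20 the weight is astronomically small
assert math.exp(eps ** 2 - 20.0 ** 2) < 1e-150
```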
\[lemma: V\_t\] - We have $$y^k V_t^{(k)}(y) \ll \left(1+\frac{y}{(1+|t|)^3}\right)^{-A},$$ and $$y^k V_t^{(k)}(y) = \delta_k + O\left(\left(\frac{y}{(1+|t|)^3}\right)^{\alpha}\right),$$ where $\delta_0=1$, $\delta_k=0$ if $k>0$, and $0<\alpha\leq (1/2-|{\operatorname{Re}}(3\nu-1)|)/3$ (for example, we can take $\alpha=3/32$). - For any $1<U\ll T^{\varepsilon}$, ${\varepsilon}>0$, and $|t-T|\ll T^{1-2{\varepsilon}}$, we have the following approximation $$\begin{split} V_t(y) & = \sum_{k=0}^{K/2}\sum_{l=0}^{K} t^{-2k} \left(\frac{t^2-T^2}{T^2}\right)^l V_{k,l}\left(\frac{y}{T^3}\right) + O\left(y^{-{\varepsilon}}(1+|T|)^{\varepsilon}e^{-U}\right) \\ & \quad + O\left(\left(\frac{1+|t-T|}{T}\right)^{K+1}\left(1+\frac{y}{T^3}\right)^{-A}\right), \end{split}$$ where $$\label{eqn: V_kl} V_{k,l}(y) = \frac{1}{2\pi i} \int_{{\varepsilon}-iU}^{{\varepsilon}+iU} P_{k,l}(u)(2\pi)^{-3u} y^{-u} G(u)\frac{du}{u},$$ for some polynomial $P_{k,l}$. - See Iwaniec–Kowalski [@iwaniec2004analytic Proposition 5.4]. - By Stirling’s formula and contour shifts as in Iwaniec–Kowalski [@iwaniec2004analytic p. 100], we have (see Blomer [@blomer2012subconvexity Lemma 1]) $$V_t(y) = \frac{1}{2\pi i}\int_{{\varepsilon}-iU}^{{\varepsilon}+iU} \pi^{-3u}\prod_\pm \prod_{j=1}^{3} \frac{\Gamma\left(\frac{1/2+u+\delta\pm it-\alpha_j}{2}\right)} {\Gamma\left(\frac{1/2+\delta\pm it-\alpha_j}{2}\right)} y^{-u} G(u)\frac{du}{u} + O\left(y^{-{\varepsilon}}(1+|t|)^{\varepsilon}e^{-U}\right).$$ The rest of the proof follows Young [@young2014weyl §5] very closely. First, by Stirling’s formula, if $|{\operatorname{Im}}(z)|\rightarrow\infty$ (with fixed real part), but $|u|\ll |z|^{1/2}$, then $$\label{eqn: Gamma} \frac{\Gamma(z+u)}{\Gamma(z)} = z^u \left(1+\sum_{k=1}^{K}\frac{P_k(u)}{z^k}+O\left(\frac{(1+|u|)^{2K+2}}{|z|^{K+1}}\right) \right),$$ for certain polynomials $P_k(u)$ of degree $2k$.
So for $|{\operatorname{Im}}(u)|\ll U$, and $t\asymp T$, we have that $$\prod_\pm \prod_{j=1}^{3} \frac{\Gamma\left(\frac{1/2+u+\delta\pm it-\alpha_j}{2}\right)} {\Gamma\left(\frac{1/2+\delta\pm it-\alpha_j}{2}\right)} = \left(\frac{t}{2}\right)^{3u} \left(1+\sum_{k=1}^{K/2}\frac{P_{2k}(u)}{t^{2k}}+O\left(\frac{(1+|u|)^{2K+2}}{t^{K+1}}\right) \right),$$ for a different collection of $P_k(u)$. Note that, in fact, the factor $\left(t/2\right)^{3u}$ is $\left((t/2)^2\right)^{3u/2}$, which is even as a function of $t$. For convenience, set $P_0(u)=1$. Hence $$\label{eqn: V_t(y)=sum_k} \begin{split} V_t(y) & = \sum_{k=0}^{K/2}t^{-2k} \frac{1}{2\pi i}\int_{{\varepsilon}-iU}^{{\varepsilon}+iU} P_{2k}(u)\left(\frac{t}{2\pi}\right)^{3u} y^{-u}G(u)\frac{du}{u} \\ & \quad + O\left(t^{-K-1}\left(1+\frac{y}{t^3}\right)^{-A}\right) + O\left(y^{-{\varepsilon}}(1+|t|)^{\varepsilon}e^{-U}\right), \end{split}$$ where the extra factor $\left(1+\frac{y}{t^3}\right)^{-A}$ arises from moving the contour to ${\operatorname{Re}}(u)=A$ if $y\geq t^3$, and to ${\operatorname{Re}}(u)=-1/4$ if $y\leq t^3$ (here we use the fact $|{\operatorname{Re}}(\alpha_j)|\leq 7/32$). We further refine by approximating $t$ by $T$. Since in our application $h(t)$ is very small unless $|t-T|\ll M\log^2 T$, where $M\ll T^{1/2}$ and $T$ large, our assumption $|t-T|\ll T^{1-2{\varepsilon}}$ is flexible enough. Note that $\left|u\frac{t^2-T^2}{T^2}\right|\ll \left|u\frac{t-T}{T}\right| \ll T^{-{\varepsilon}}$. By Taylor expansion, we have $$\label{eqn: t^u} \begin{split} t^{3u} = T^{3u} e^{\frac{3u}{2}\log(1+\frac{t^2-T^2}{T^2})} & = T^{3u}\sum_{l=0}^{K} Q_l(u)\left(\frac{t^2-T^2}{T^2}\right)^l \\ & \quad + O\left((1+|u|)^{K+1}\left(\frac{|t-T|}{T}\right)^{K+1}\right), \end{split}$$ for certain polynomials $Q_l(u)$ of degree $\leq l$. Combining the two expansions above proves the lemma.
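The first-order case of the Stirling expansion used above, $\Gamma(z+u)/\Gamma(z)=z^u(1+O(1/z))$, is easy to verify numerically for real arguments (a sanity check only, using the standard library):

```python
import math

def gamma_ratio(z, u):
    """Gamma(z + u)/Gamma(z) for positive real z, u, computed via log-gamma."""
    return math.exp(math.lgamma(z + u) - math.lgamma(z))

u = 0.3
errors = [abs(gamma_ratio(z, u) / z ** u - 1) for z in (100.0, 1000.0, 10000.0)]
assert errors[0] < 1e-2 and errors[1] < 1e-3 and errors[2] < 1e-4
assert errors[2] < errors[1] < errors[0]  # the error decays like 1/z
```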
The Kuznetsov formula for $\Gamma_0(q)$ --------------------------------------- The two central tools we need in this paper are the Kuznetsov formula for $\Gamma_0(q)$ and the Voronoi formula for $SL(3,{\ensuremath{\mathbb{Z}}})$. In this subsection, we recall the Kuznetsov formula, and then in the next subsection, we will review the Voronoi formula. As usual, let $$S(a,b;c) = {\sum_{d(c)}}^* e\left(\frac{ad+b\bar{d}}{c}\right)$$ be the classical Kloosterman sum. For any $m,n\geq1$, and any test function $h(t)$ which is even and satisfies the following conditions: - $h(t)$ is holomorphic in $|{\operatorname{Im}}(t)|\leq 1/2+{\varepsilon}$, - $h(t)\ll (1+|t|)^{-2-{\varepsilon}}$ in the above strip, we have the following Kuznetsov formula (see [@conrey2000cubic Eq. (3.17)] for example). \[lemma: KTF\] We have $$\begin{split} & \quad {\sum_j}'h(t_j)\omega_j^*\lambda_j(m)\lambda_j(n) + \frac{1}{4\pi}\int_{-\infty}^{\infty}h(t)\omega^*(t)\eta_t(m)\eta_t(n)dt \\ & = \frac{1}{2}\delta_{m,n}H + \frac{1}{2}\sum_{q|c}\frac{1}{c}\sum_{\pm} S(n,\pm m;c)H^{\pm}\left(\frac{4\pi\sqrt{mn}}{c}\right), \end{split}$$ where $\sum'$ restricts to the even Hecke–Maass cusp forms, $\delta_{m,n}$ is the Kronecker symbol, $$\label{eqn: H} \begin{split} H & = \frac{2}{\pi}\int_{0}^{\infty} h(t) \tanh(\pi t)t dt, \\ H^+(x) & = 2i \int_{-\infty}^{\infty} J_{2it}(x)\frac{h(t)t}{\cosh(\pi t)}dt, \\ H^-(x) & = \frac{4}{\pi} \int_{-\infty}^{\infty} K_{2it}(x)\sinh(\pi t)h(t)t dt, \end{split}$$ and $J_\nu(x)$ and $K_\nu(x)$ are the standard $J$-Bessel function and $K$-Bessel function respectively. The Voronoi formula for $SL(3,{\ensuremath{\mathbb{Z}}})$ --------------------------------------------------------- Let $\psi$ be a smooth compactly supported function on $(0,\infty)$, and let $\tilde{\psi}(s):=\int_{0}^{\infty}\psi(x)x^s\frac{dx}{x}$ be its Mellin transform. 
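As a quick illustration of the Mellin transform just defined (using the non-compactly supported but rapidly decaying test function $\psi(x)=e^{-x}$, whose Mellin transform is $\Gamma(s)$; the numerical parameters here are for illustration only):

```python
import math

def mellin(psi, s, a=1e-8, b=50.0, n=100000):
    """Approximate the Mellin transform of psi at s by a midpoint rule."""
    h = (b - a) / n
    return sum(psi(a + (k + 0.5) * h) * (a + (k + 0.5) * h) ** (s - 1)
               for k in range(n)) * h

# Mellin transform of e^{-x} is Gamma(s); check at s = 2.5
s = 2.5
assert abs(mellin(lambda x: math.exp(-x), s) - math.gamma(s)) < 1e-3
```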
For $\sigma>7/32$, we define $$\label{eqn: Psi} \begin{split} \Psi^{\pm}(x) & := x\frac{1}{2\pi i} \int_{(\sigma)} (\pi^3x)^{-s} \prod_{j=1}^{3} \frac{\Gamma\left(\frac{s+\alpha_j}{2}\right)}{\Gamma\left(\frac{1-s-\alpha_j}{2}\right)}\tilde{\psi}(1-s)ds \\ & \qquad \pm \frac{x}{i}\frac{1}{2\pi i} \int_{(\sigma)} (\pi^3x)^{-s} \prod_{j=1}^{3} \frac{\Gamma\left(\frac{1+s+\alpha_j}{2}\right)}{\Gamma\left(\frac{2-s-\alpha_j}{2}\right)}\tilde{\psi}(1-s)ds. \end{split}$$ Here $\alpha_j$ has the same meaning as above, that is, $\alpha_1=3\nu-1,\alpha_2=0$, and $\alpha_3=1-3\nu$. Note that changing $\psi(y)$ to $\psi(y/N)$ for a positive real number $N$ has the effect of changing $\Psi^\pm(x)$ to $\Psi^\pm(xN)$. The Voronoi formula on $GL(3)$ was first proved by Miller and Schmid [@miller2006automorphic]. The present version is due to Goldfeld and Li [@goldfeld2006voronoi] with slightly renormalized variables (see Blomer [@blomer2012subconvexity Lemma 3]). \[lemma: VSF\] Let $c,d,\bar{d}\in{\ensuremath{\mathbb{Z}}}$ with $c\neq0$, $(c,d)=1$, and $d\bar{d}\equiv1\pmod{c}$. Then we have $$\begin{split} &\quad \sum_{n=1}^{\infty} A(m,n)e\left(\frac{n\bar{d}}{c}\right)\psi(n) \\ & = \frac{c\pi^{3/2}}{2} \sum_{\pm} \sum_{n_1|cm} \sum_{n_2=1}^{\infty} \frac{A(n_2,n_1)}{n_1n_2} S\left(md,\pm n_2;\frac{mc}{n_1}\right) \Psi^{\pm}\left(\frac{n_1^2n_2}{c^3m}\right). \end{split}$$ To prove Theorem \[thm: q\] by applying Lemma \[lemma: VSF\], we need to know the asymptotic behaviour of $\Psi^{\pm}$. This will be done in §\[sec: Psi\]. Our work differs from Blomer [@blomer2012subconvexity] in the nature of the weight function $\Psi^{\pm}$. We will use the method of Young [@young2014weyl]. See Young [@young2014weyl §8] for a more detailed discussion of this method. However, to prove Theorem \[thm: t\], we will need the following asymptotic formula for $\Psi^\pm$. \[lemma: Psi=M+O\] Suppose $\psi(y)$ is a smooth function, compactly supported on $[N,2N]$. Let $\Psi^\pm(x)$ be defined as above.
Then for any fixed integer $K\geq1$, and $xN\gg1$, we have $$\Psi^\pm(x) = x\int_0^\infty \psi(y) \sum_{\ell=1}^{K} \frac{\gamma_\ell}{(xy)^{\ell/3}} e\left(\pm3(xy)^{1/3}\right) dy + O\left((xN)^{1-K/3}\right),$$ where $\gamma_\ell$ are constants depending only on $\alpha_1,\alpha_2,\alpha_3$, and $K$. See Li [@li2009central Lemma 6.1] and Blomer [@blomer2012subconvexity Lemma 6]. The stationary phase lemma -------------------------- In this subsection, we will recall a result in Mckee–Sun–Ye [@mckee2015improved], which will be used to prove Theorem \[thm: t\]. Let $f(x)$ be a real function, $2n+3$ times continuously differentiable for $\alpha\leq x\leq\beta$. Suppose that $f'(x)$ changes signs only at $x=\gamma$, from negative to positive, with $\alpha<\gamma<\beta$. Let $g(x)$ be a real function, $2n+1$ times continuously differentiable for $\alpha\leq x\leq\beta$. Denote $$\label{eqn: H_i} H_1(x):=\frac{g(x)}{2\pi if^\prime(x)},\quad \textrm{and} \quad H_i(x):=-\frac{H_{i-1}^\prime(x)}{2\pi if^\prime(x)}.$$ Let $$\label{eqn: lambda_k} \lambda_k := \frac{f^{(k)}(\gamma)}{k!}, \quad \textrm{for}\quad k=2,\ldots,2n+2,$$ $$\label{eqn: eta_k} \eta_k := \frac{g^{(k)}(\gamma)}{k!}, \quad \textrm{for}\quad k=0,\ldots,2n,$$ and $\varpi_k$ be defined by the Taylor expansion of $g(x)\frac{dx}{dy}$, where $y=h(x-\gamma)$ with $f(x)-f(\gamma)=\lambda_2 h^2(x-\gamma)$ such that $y=h(x-\gamma)$ has the same sign as that of $x-\gamma$. By [@mckee2015improved Lemma 3.4] we have $$\label{eqn: varpi_k} \varpi_k = \eta_k + \sum_{\ell=0}^{k-1}\eta_\ell \sum_{j=1}^{k-\ell}\frac{C_{k\ell j}}{\lambda_{2}^j} \sum_{\substack{3\leq n_1,\ldots,n_j\leq 2n+3 \\ n_1+\cdots+n_j=k-\ell+2j}} \lambda_{n_1}\cdots\lambda_{n_j},$$ for some constant coefficients $C_{k\ell j}$. See [@mckee2015improved §2 and §3] for more details. \[lemma: MSY\] Let $f(x)$, $g(x)$, and $H_k(x)$ be defined as above. 
Suppose that there are positive parameters $M_0$, $N_0$, $T_0$, $U_0$, with $$\label{eqn: M0} M_0 > \beta-\alpha,$$ and positive constants $C_r$ such that for $\alpha\leq x\leq\beta$, $$\label{eqn: f^(r)} |f^{(r)}(x)|\leq C_r \frac{T_0}{M_0^r},\quad \text{for}\quad r=2,3,...,2n+3,$$ $$\label{eqn: f''} f^{\prime\prime}(x)\geq \frac{T_0}{C_2M_0^2},$$ and $$\label{eqn: g^(s)} |g^{(s)}(x)|\leq C_s\frac{U_0}{N_0^s},\quad \text{for}\quad s=0,1,2,...,2n+1.$$ If $T_0$ is sufficiently large comparing to the constants $C_r$, we have for $n\geq2$ that $$\label{eqn: MSY} \begin{split} & \qquad \int_{\alpha}^{\beta}g(x)e(f(x))dx \\ & = \frac{e\Big(f(\gamma)+\frac{1}{8}\Big)}{\sqrt{f''(\gamma)}} \Big(g(\gamma)+\sum_{j=1}^{n}\varpi_{2j}\frac{(-1)^{j}(2j-1)!!}{(4\pi i\lambda_2)^j}\Big) + \Big[e(f(x))\cdot\sum_{i=1}^{n+1}H_{i}(x)\Big]_{\alpha}^{\beta} \\ & \quad + O\left(\frac{U_0M_0^{2n+5}}{N_0T_0^{n+2}} \sum_{j=1}^{[\frac{n+1}{2}]}\Big(\frac{1}{(\gamma-\alpha)^{n+2+j}}+\frac{1}{(\beta-\gamma)^{n+2+j}}\Big) \sum_{t=j}^{n+1-j}\frac{1}{N_0^{n+1-j-t}M_0^{t}}\right) \\ & \quad + O\left(\frac{U_0M_0^{2n+4}}{T_0^{n+2}N_0^{n+1}}\Big(\frac{M_0}{N_0}+1\Big) \Big(\frac{1}{(\gamma-\alpha)^{n+2}}+\frac{1}{(\beta-\gamma)^{n+2}}\Big)\right) \\ & \quad + O\left(\frac{U_0M_0^{2n+4}}{T_0^{n+2}}\sum_{j=1}^{n+1} \Big(\frac{1}{(\gamma-\alpha)^{n+2+j}}+\frac{1}{(\beta-\gamma)^{n+2+j}}\Big) \sum_{t=0}^{n+1-j}\frac{1}{N_0^{n+1-j-t}M_0^{t}}\right) \\ & \quad + O\left(\frac{U_0}{T_0^{n+1}}\Big(\frac{M_0^{2n+2}}{N_0^{2n+1}}+M_0\Big)\right). \end{split}$$ See Mckee–Sun–Ye [@mckee2015improved Theorem 3.6]. Initial setup of Theorem \[thm: q\] {#sec: setup q} =================================== We are now ready to start with the proof of Theorem \[thm: q\]. As indicated in the introduction, both results follow rather easily from the following bound. 
\[prop: q\] With notation as above, for any ${\varepsilon}>0$, $T$ large, and $M\asymp T^{1/2}$, we have $$\sum_{u_j\in{\mathcal{B}}^*(q)\atop T-M\leq t_j\leq T+M} L(1/2,\phi\times u_j\times\chi) + \frac{1}{4\pi}\int_{T-M}^{T+M}|L(1/2+it,\phi\times\chi)|^2dt \ll_{\phi,{\varepsilon}} q^{5/4}TM(qT)^{{\varepsilon}}.$$ Theorem \[thm: q\] follows from the above proposition. The key ingredient is Lapid’s theorem [@lapid2003nonnegativity] about the nonnegativity of $L(1/2,\phi\times u_j\times\chi)$. See Blomer [@blomer2012subconvexity §4] for more details. To prove Proposition \[prop: q\], we introduce the spectrally normalized first moment of the central values of $L$-functions $$\label{eqn: cW} {\mathcal{M}}:= \sum_{u_j\in{\mathcal{B}}^*(q)}h(t_j)\omega_j^* L(1/2,\phi\times u_j\times\chi) + \frac{1}{4\pi}\int_{-\infty}^{\infty}h(t)\omega^*(t)|L(1/2+it,\phi\times\chi)|^2dt,$$ where $$\label{eqn: f(t)} h(t) := \frac{1}{\cosh\left(\frac{t-T}{M}\right)} + \frac{1}{\cosh\left(\frac{t+T}{M}\right)}.$$ Here we choose the above weight because we can use Young’s results [@young2014weyl] directly. However, Li’s weight function $$ h(t) = e^{-\frac{(t-T)^2}{M^2}} + e^{-\frac{(t+T)^2}{M^2}}$$ would probably work as well. By the lower bounds for $\omega_j^*$ and $\omega^*(t)$ recorded above, we have $$\sum_{u_j\in{\mathcal{B}}^*(q)\atop T-M\leq t_j\leq T+M} L(1/2,\phi\times u_j\times\chi) + \frac{1}{4\pi}\int_{T-M}^{T+M}|L(1/2+it,\phi\times\chi)|^2dt \\ \ll {\mathcal{M}}q^{1+{\varepsilon}}T^{\varepsilon},$$ for any ${\varepsilon}>0$.
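The weight $h(t)$ is of size $1$ on the window $|t-T|\leq M$ and decays exponentially outside it, which is what allows the terms with $t_j$ outside $[T-M,T+M]$ to be dropped at negligible cost. A quick numerical check (sample values of $T$ and $M$ chosen for illustration):

```python
import math

def h(t, T, M):
    """The spectral weight h(t) = 1/cosh((t-T)/M) + 1/cosh((t+T)/M)."""
    return 1 / math.cosh((t - T) / M) + 1 / math.cosh((t + T) / M)

T, M = 10000.0, 100.0  # M of size T^{1/2}
assert h(T, T, M) > 0.99            # of size 1 at the center of the window
assert h(T + M, T, M) > 0.5         # still of size 1 for |t - T| <= M
assert h(T + 50 * M, T, M) < 1e-20  # exponentially small far outside
```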
Therefore, to prove Proposition \[prop: q\], we just need to prove $$\label{eqn: cW<<} {\mathcal{M}}\ll_{\phi,{\varepsilon}} q^{1/4}TM(qT)^{\varepsilon}.$$ Applying Lemma \[lemma: AFE\] to ${\mathcal{M}}$, we have $$\begin{split} {\mathcal{M}}& = 2\sum_{u_j\in{\mathcal{B}}^*(q)}h(t_j)\omega_j^* \sum_{m,n=1}^{\infty}\frac{A(m,n)\lambda_j(n)\chi(n)}{(m^2n)^{1/2}}V_{t_j}\left(\frac{m^2n}{q^3}\right) \\ & \quad + 2\frac{1}{4\pi}\int_{-\infty}^{\infty}h(t)\omega^*(t) \sum_{m,n=1}^{\infty}\frac{A(m,n)\eta_t(n)\chi(n)}{(m^2n)^{1/2}}V_{t}\left(\frac{m^2n}{q^3}\right)dt. \end{split}$$ By Lemma \[lemma: V\_t\], we can truncate the $m,n$-sums at $$m^2n\leq (qT)^{3+{\varepsilon}}$$ at the cost of a negligible error. Now we handle the weight $V_t(y)$. By Lemma \[lemma: V\_t\] (we choose $U=\log^2(qT)$), in order to prove this bound on ${\mathcal{M}}$, it suffices to show $$\begin{split} 2\sum_{m^2n\leq (qT)^{3+{\varepsilon}}} & \frac{A(m,n)\chi(n)}{(m^2n)^{1/2+u}} \bigg(\sum_{u_j\in{\mathcal{B}}^*(q)}h_{k,l}(t_j)\omega_j^* \lambda_j(n) \\ & \quad + \frac{1}{4\pi}\int_{-\infty}^{\infty}h_{k,l}(t)\omega^*(t)\eta_t(n)dt \bigg) \ll_{\phi,{\varepsilon}} q^{1/4}TM(qT)^{\varepsilon}, \end{split}$$ uniformly in $u\in[{\varepsilon}-i\log^2(qT),{\varepsilon}+i\log^2(qT)]$, where $$\label{eqn: h_kl} h_{k,l}(t) = t^{-2k} T^{-2l} \left(t^2-T^2\right)^l h(t).$$ Applying the Kuznetsov formula with $m=1$ (note that $m$ has a different meaning there), we arrive at bounding $$\sum_{m^2n\leq (qT)^{3+{\varepsilon}}}\frac{A(m,n)\chi(n)}{(m^2n)^{1/2+u}} \left(\delta_{n,1}H + \sum_{q|c}\frac{1}{c}\sum_{\pm}S(n,\pm1;c)H^{\pm}\left(\frac{4\pi \sqrt{n}}{c}\right)\right),$$ where $H,\ H^\pm$ are defined as above with $h(t)=h_{k,l}(t)$. We will only deal with the case $k=l=0$, since the others can be handled similarly.
By the coefficient bounds of Blomer quoted above and the fact $H\ll TMT^{\varepsilon}$, the diagonal term is bounded by $$\label{eqn: cD} {\mathcal{D}}:= \sum_{m^2n\leq (qT)^{3+{\varepsilon}}}\frac{A(m,n)\chi(n)}{(m^2n)^{1/2+u}} \delta_{n,1}H \ll \sum_{m^2\leq (qT)^{3+{\varepsilon}}}\frac{|A(m,1)|}{m}|H| \ll TM (qT)^{\varepsilon}.$$ Now we need to bound the off-diagonal terms $$\label{eqn: cR} {\mathcal{R}}^{\pm} := \sum_{m^2n\leq (qT)^{3+{\varepsilon}}}\frac{A(m,n)\chi(n)}{(m^2n)^{1/2+u}} \sum_{q|c}\frac{1}{c}S(n,\pm1;c)H^{\pm}\left(\frac{4\pi \sqrt{n}}{c}\right).$$ By the argument in Blomer [@blomer2012subconvexity §5], it is then enough to show that $$\label{eqn: sum over cS} \frac{1}{N^{1/2}} \sum_{m^2\delta^3\leq (qT)^{3+{\varepsilon}}\atop (\delta,q)=1,\ |\mu(\delta)|=1} \frac{|A(1,m)|}{m\delta^{3/2}}|{\mathcal{S}}_\sigma(q,N;\delta)| \ll q^{1/4}TM(qT)^{\varepsilon},$$ for $\sigma\in\{\pm1\}$, where $$\label{eqn: cS} {\mathcal{S}}_\sigma(q,N;\delta) := \sum_{q|c}\frac{1}{c} \sum_{n}A(n,1)\chi(n) S(\delta n,\sigma;c) \psi_{\sigma}\left(\frac{n}{N};\frac{\sqrt{\delta N}}{c}\right),$$ with $$\label{eqn: psi_s} \psi_\sigma(y;D) := \left\{\begin{array}{ll} w(y)y^{-u}H^+(4\pi \sqrt{y}D), & \textrm{if } \sigma=1, \\ w(y)y^{-u}H^-(4\pi \sqrt{y}D), & \textrm{if } \sigma=-1, \end{array} \right.$$ where $w$ is a suitable fixed smooth function with support in $[1,2]$, and $$\label{eqn: N} N \leq \frac{(qT)^{3+{\varepsilon}}}{m^2\delta^3}.$$ Here we suppress the dependence on $u$ in $\psi_\sigma(y;D)$.
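The Kloosterman sums appearing in ${\mathcal{S}}_\sigma(q,N;\delta)$ can be computed directly from the definition; a small sanity check (illustrative parameters) that $S(a,b;c)$ is real and obeys the Weil bound $|S(a,b;c)|\leq d(c)\sqrt{(a,b,c)}\sqrt{c}$, which is invoked in the next section:

```python
import cmath
import math

def kloosterman(a, b, c):
    """S(a, b; c) = sum over d mod c with (d, c) = 1 of e((a*d + b*dbar)/c)."""
    total = 0j
    for d in range(1, c + 1):
        if math.gcd(d, c) == 1:
            dbar = pow(d, -1, c)  # modular inverse (Python 3.8+)
            total += cmath.exp(2j * cmath.pi * (a * d + b * dbar) / c)
    return total

def tau(c):
    """Divisor function d(c)."""
    return sum(1 for d in range(1, c + 1) if c % d == 0)

for a, b, c in [(1, 1, 5), (3, 7, 12), (2, 5, 13)]:
    S = kloosterman(a, b, c)
    assert abs(S.imag) < 1e-9  # Kloosterman sums are real
    assert abs(S) <= tau(c) * math.sqrt(math.gcd(a, math.gcd(b, c)) * c) + 1e-9
```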
Following Blomer [@blomer2012subconvexity §5], by the Voronoi formula we have $$\label{eqn: cS_s} \begin{split} {\mathcal{S}}_\sigma(q,N;\delta) & = \frac{\pi^{3/2}}{2} \sum_{\pm} \sum_{q|c}\frac{1}{c^2} \sum_{c_1|c} c_1 \sum_{n_1|c_1}\sum_{n_2=1}^{\infty}\frac{A(n_2,n_1)}{n_1n_2} \\ & \quad \times \Psi_{\sigma}^{\pm}\left(\frac{n_1^2n_2N}{c_1^3};\frac{\sqrt{\delta N}}{c}\right) {\mathcal{T}}_{\delta,c_1,n_1,n_2}^{\pm,\sigma}(c,q), \end{split}$$ where $$\label{eqn: cT} \begin{split} {\mathcal{T}}_{\delta,c_1,n_1,n_2}^{\pm,\sigma}(c,q) & = {\sum_{b(c_1)}}^* {\sum_{d(c)}}^{*}e\left(\sigma\frac{\bar{d}}{c}\right) \sum_{a(c)}\chi(a)e\left(\frac{-\bar{b}a}{c_1}\right) \\ & \quad \times e\left(\frac{\delta da}{c}\right) \underset{f(c_1/n_1)}{{\sum}^*}e\left(\frac{bf\pm n_2\bar{f}}{c_1/n_1}\right), \end{split}$$ and $\Psi_{\sigma}^{\pm}(x;D)$ is defined as in the previous section with $\psi(x)=\psi_\sigma(x;D)$. We end this section by truncating the $c$-sum. Define $${\mathcal{S}}_\sigma(q,N;C;\delta) = \sum_{c\asymp C \atop q|c}\frac{1}{c} \sum_{n}A(n,1)\chi(n) S(\delta n,\sigma;c) \psi_{\sigma}\left(\frac{n}{N};\frac{\sqrt{\delta N}}{c}\right).$$ Using the weak bound $H^\pm(y)\ll Ty^{3/4}$, and the Weil bound for Kloosterman sums, we have $${\mathcal{S}}_\sigma(q,N;C;\delta) \ll T (qT)^{33/8+{\varepsilon}} C^{-1/4+{\varepsilon}},$$ which is good enough if $C$ is a large power of $qT$. Therefore, it suffices to bound ${\mathcal{S}}_\sigma(q,N;C;\delta)$ with $C\ll (qT)^B$ for some large but fixed $B$. Analytic separation of variables {#sec: Psi} ================================ Our goal in this section is to handle $\Psi_{\sigma}^{\pm}(x;D)$. We follow the approach in Young [@young2014weyl]. The following argument will need the stationary phase method. We will use the following lemma (see [@blomer2013distribution Lemma 8.1 and Proposition 8.2]).
\[lemma: SP\] Suppose that $w$ is a smooth weight function with compact support on $[X, 2X]$, satisfying $w^{(j)}(t) \ll X^{-j}$, for $X \gg 1$ (in particular, $w$ is inert with uniformity in $X$). Also suppose that $\phi$ is smooth and satisfies $\phi^{(j)}(t) \ll \frac{Y}{X^j}$ for some $Y \gg X^{\varepsilon}$. Let $$I = \int_{-\infty}^{\infty} w(t) e^{i \phi(t)} dt.$$ 1. If $\phi'(t) \gg \frac{Y}{X}$ for all $t$ in the support of $w$, then $I \ll_A Y^{-A}$ for $A$ arbitrarily large. 2. If $\phi''(t) \gg \frac{Y}{X^2}$ for all $t$ in the support of $w$, and there exists $t_0 \in {\ensuremath{\mathbb{R}}}$ such that $\phi'(t_0) = 0$ (note $t_0$ is necessarily unique), then $$I = \frac{e^{i \phi(t_0)}}{\sqrt{\phi''(t_0)}} F(t_0) + O(Y^{-A}),$$ where $F$ is an inert function (depending on $A$, but uniformly in $X$ and $Y$) supported on $t_0 \asymp X$. Here, following Young [@young2014weyl], we say a smooth function $f(x_1,\ldots,x_n)$ on ${\ensuremath{\mathbb{R}}}^n$ is *inert* if $$\label{eqn: inert} x_1^{k_1}\cdots x_n^{k_n} f^{(k_1,\ldots,k_n)}(x_1,\ldots,x_n) \ll 1,$$ with an implied constant depending on $k_1,\ldots,k_n$ and with the superscript denoting partial differentiation. Now, we recall some results about $H^\pm$ from Young [@young2014weyl §7]. \[lemma: H+\] Let $H^+$ be as defined in the Kuznetsov formula. There exists a function $g$ depending on $T$ and $M$ satisfying $g^{(j)}(y)\ll_{j,A} (1+|y|)^{-A}$, so that $$\label{eqn: H^+=int} H^+(x) = MT\int_{|v|\leq \frac{M^{\varepsilon}}{M}}\cos(x\cosh(v))e\left(\frac{vT}{\pi}\right)g(Mv)dv + O(T^{-A}).$$ Furthermore, $H^+(x)\ll T^{-A}$ unless $x\gg MT^{1-{\varepsilon}}$. And if $x\gg MT^{1-{\varepsilon}}$, then $H^+(x)\ll TMx^{-1/2}$. See Young [@young2014weyl Lemma 7.1]. The upper bound for $H^+$ when $x\gg MT^{1-{\varepsilon}}$ follows from the integral representation and Lemma \[lemma: SP\]. \[lemma: H-\] Let $H^-$ be as defined in the Kuznetsov formula.
There exists a function $g$ depending on $T$ and $M$ satisfying $g^{(j)}(y)\ll_{j,A} (1+|y|)^{-A}$, so that $$\label{eqn: H^-=} H^-(x) = MT\int_{|v|\leq \frac{M^{\varepsilon}}{M}}\cos(x\sinh(v))e\left(\frac{vT}{\pi}\right)g(Mv)dv + O(T^{-A}).$$ Furthermore, $H^-(x)\ll (x+T)^{-A}$ unless $x\asymp T$. And if $x\asymp T$, then we have $H^{-}(x)\ll T^{1+{\varepsilon}}$. See Young [@young2014weyl Lemma 7.2]. The upper bound for $H^-$ when $x\asymp T$ is an easy consequence of the integral representation above. Define $$\label{eqn: tilde(Psi)} \check{\Psi}_\sigma^\pm(x;D) := e\left(\mp\sigma\frac{x}{D^2}\right)\Psi_\sigma^\pm(x;D),$$ and $$\label{eqn: Upsilon} \Upsilon(t)=\Upsilon_{X,D}(t):=\int_{0}^{\infty}w\left(\frac{x}{X}\right)\check{\Psi}_\sigma^\pm(x;D)x^{it}\frac{dx}{x},$$ where $w$ is a fixed smooth function supported on $[1/2,3]$, and with value 1 on $[1,2]$. Together with the Mellin technique, we can now prove the following lemma, which will help us separate the variables. \[lemma: Psi\] Let $x\asymp X$ with $X \gg (qT)^{-B}$ for some large but fixed $B$. 1. We have $\check{\Psi}_\sigma^\pm(x;D)\ll T^{-A}$ unless $$\label{eqn: D} \left\{ \begin{array}{ll} D\gg MT^{1-{\varepsilon}}, & \textrm{if } \sigma=1, \\ D\asymp T, & \textrm{if } \sigma=-1. \end{array} \right.$$ 2. When $X\ll T^{\varepsilon}$, if $D$ satisfies the above condition, then we have $$\label{eqn: dPsi} x^k\frac{d^k}{dx^k}\check{\Psi}_\sigma^\pm(x;D)\ll_{k,{\varepsilon}} \left\{\begin{array}{ll} X^{2/3}TMD^{-1/2}T^{{\varepsilon}}, & \textrm{if } \sigma=1, \\ X^{2/3}T^{1+{\varepsilon}}, & \textrm{if } \sigma=-1. \end{array}\right.$$ Note that here the ${\varepsilon}$ on the right hand side may depend on $k$.
Furthermore, if $x\in[X,2X]$, we have $$\label{eqn: Psi x<} \check{\Psi}_\sigma^\pm(x;D) = \frac{1}{2\pi}\int_{-T^{\epsilon}}^{T^{\epsilon}} \Upsilon(t)x^{-it}dt + O(T^{-A}).$$ And for $|t|\ll T^{\epsilon}$, we have $$\label{eqn: Upsilon<<} \Upsilon(t) \ll \left\{\begin{array}{ll} X^{2/3}TMD^{-1/2}T^{{\varepsilon}}, & \textrm{if } \sigma=1, \\ X^{2/3}T^{1+{\varepsilon}}, & \textrm{if } \sigma=-1. \end{array}\right.$$ 3. When $X\gg T^{\varepsilon}$, we have $$\label{eqn: Psi_s^pm} \check{\Psi}_\sigma^\pm(x;D) = \sum_{\ell=1}^{K} \gamma_\ell \frac{x^{5/6}M}{x^{\ell/3}} L(x;D) + O(T^{-A}),$$ where $L$ is a function that takes the form $$\label{eqn: L} L(x;D) = \int_{|t|\ll U} \lambda_{X,T}(t) \left(\frac{x}{D^2}\right)^{it}dt$$ with the following parameters. Here $\lambda_{X,T}(t)\ll1$ does not depend on $x$ and $D$. If $\sigma=1$, then $U=T^2/D$; furthermore, $L$ vanishes unless $$\label{eqn: X&D+} X\asymp D^3, \quad \textrm{and} \quad D\gg MT^{1-{\varepsilon}}.$$ If $\sigma=-1$, then $U=T^{2/3}X^{1/3}D^{-2/3}$; in addition, $L$ vanishes unless $$\label{eqn: X&D-} X \ll D^3 M^{{\varepsilon}-3}, \quad \textrm{and} \quad D\asymp T.$$ We first handle the case $X\gg T^{\varepsilon}$. By Blomer [@blomer2012subconvexity Lemma 6], we have $$\label{eqn: Psi= Blomer} \Psi_\sigma^{\pm}(x;D) = x \int_{0}^{\infty} \psi_\sigma(y;D) \sum_{\ell=1}^{K} \frac{\gamma_\ell}{(xy)^{\ell/3}}e\left(\pm3(xy)^{1/3}\right)dy + O(T^{-A}),$$ for some constants $\gamma_\ell$ depending only on $\alpha_1,\alpha_2,\alpha_3$. Recall the definition of $\psi_\sigma(y;D)$ . By Lemmas \[lemma: H+\] and \[lemma: H-\], we arrive at $$ \sum_{\ell=1}^{K} \frac{xMT\gamma_\ell}{x^{\ell/3}} \int_{|v|\ll \frac{M^{{\varepsilon}}}{M}} g(Mv)e\left(\frac{vT}{\pi}\right) \left(\int_{0}^{\infty} w(y)y^{-u} e\left(2\sqrt{y}D\phi_\sigma(v) \pm 3(xy)^{1/3}\right) y^{-\ell/3} dy \right) dv,$$ where $\phi_\sigma(v)=\pm\cosh(v)$ for $\sigma=1$, and $\phi_\sigma(v)=\pm\sinh(v)$ for $\sigma=-1$. 
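The $y$-integral above will be handled by the stationary-phase bounds of Lemma \[lemma: SP\]. As an aside, the leading term in part (2) of that lemma can be sanity-checked numerically; the quadratic model phase, bump function, and parameter sizes below are illustrative choices only, not the phases appearing in the proof:

```python
import numpy as np

# Smooth bump supported on (1, 2): a standard C^infinity cutoff (illustrative choice).
def w(t):
    out = np.zeros_like(t)
    inside = (t > 1.0) & (t < 2.0)
    ti = t[inside]
    out[inside] = np.exp(-1.0 / ((ti - 1.0) * (2.0 - ti)))
    return out

lam = 2000.0  # plays the role of phi''(t_0), i.e. Y / X^2 with X = 1
t0 = 1.5      # the stationary point, inside the support of w

# I = int w(t) e^{i phi(t)} dt with the model phase phi(t) = (lam / 2)(t - t0)^2,
# computed by the trapezoid rule on a fine grid.
t = np.linspace(1.0, 2.0, 200001)
h = t[1] - t[0]
f = w(t) * np.exp(1j * lam * (t - t0) ** 2 / 2)
I_num = (f[:-1] + f[1:]).sum() * h / 2

# Leading stationary-phase term: here phi(t0) = 0 and, at leading order,
# F(t0) = sqrt(2 pi) e^{i pi/4} w(t0), so I ~ sqrt(2 pi / lam) e^{i pi/4} w(t0).
I_sp = np.sqrt(2 * np.pi / lam) * np.exp(1j * np.pi / 4) * np.exp(-1.0 / 0.25)

rel_err = abs(I_num - I_sp) / abs(I_sp)
print(rel_err)  # small; the next correction has relative size ~ |w''(t0)| / (2 lam w(t0))
```

The relative error here is governed by the first correction term of the stationary-phase expansion, so it shrinks as $\lambda$ (the analogue of $Y/X^2$) grows.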
The $y$-integral can be analyzed by stationary phase. By Lemma \[lemma: SP\], we know the above integral is small unless a stationary point exists, which implies $$\label{eqn: phi(v)} |\phi_\sigma(v)|=\pm \phi_\sigma(v) \asymp X^{1/3}/D.$$ In addition, since $|v|\ll M^{\varepsilon}/M$, we have $X\asymp D^3$ if $\sigma=1$, and $X\ll D^3M^{{\varepsilon}-3}$ if $\sigma=-1$. At this point, we can restrict the size of $X$. Recall that in our application, $D=\sqrt{\delta N}/c$ with $N$ satisfying , so we have $D\ll (qT)^2$. Hence we can assume that $X\ll (qT)^6$; otherwise, we get $\Psi_\sigma^{\pm}(x;D)\ll T^{-A}$. Now we determine the range of $D$ for which $\Psi_\sigma^\pm(x;D)$ is not negligible. First, by the above argument, we may restrict ourselves to the case $(qT)^{-B}\ll X\ll (qT)^B$. By and Parseval’s formula, we have $$\label{eqn: Psi Parseval} \Psi_\sigma^\pm(x;D) = x\int_{0}^{\infty} \psi_\sigma(y;D) g^{\pm}(\pi^3 xy)dy,$$ where $$g^{\pm}(y) = \frac{1}{2\pi i} \int_{(c)} G^{\pm}(s)y^{-s}ds$$ is the inverse Mellin transform of $$G^{\pm}(s) = \prod_{j=1}^{3} \frac{\Gamma\left(\frac{s+\alpha_j}{2}\right)}{\Gamma\left(\frac{1-s-\alpha_j}{2}\right)} \pm \frac{1}{i} \prod_{j=1}^{3} \frac{\Gamma\left(\frac{1+s+\alpha_j}{2}\right)}{\Gamma\left(\frac{2-s-\alpha_j}{2}\right)}.$$ Now, by Lemmas \[lemma: H+\] and \[lemma: H-\], we can assume that $D\gg MT^{1-{\varepsilon}}$ if $\sigma=1$, and $D\asymp T$ if $\sigma=-1$. Otherwise, we have $\Psi_\sigma^{\pm}(x;D)\ll T^{-A}$. This proves part (i). Assuming , the phase $2\sqrt{y}D\phi_\sigma(v) \pm 3(xy)^{1/3}$ has $y$-derivative $D\phi_\sigma(v)y^{-1/2} \pm x^{1/3}y^{-2/3}$, so the stationary point is at $y_0=x^2(\phi_\sigma(v)D)^{-6}\asymp1$. Hence, by Lemma \[lemma: SP\], we have $$\begin{split} & \quad \int_{0}^{\infty} w(y)y^{-u} e\left(2\sqrt{y}D\phi_\sigma(v) \pm 3(xy)^{1/3}\right) y^{-\ell/3} dy \\ & = x^{-1/6} e\left(\mp \frac{x}{\phi_\sigma^2(v)D^2}\right) w_1(v) + O(T^{-A}), \end{split}$$ where $w_1$ is inert in terms of $v$, and $w_1$ has support on . The fact that $w_1$ is inert in terms of $v$ needs some discussion. 
We naturally obtain an inert function in terms of $\phi_\sigma(v)$, but since $\phi_\sigma(v)$ has bounded derivatives for $|v|\leq1$, we indeed obtain an inert function of $v$. Hence, to bound $\Psi_\sigma^{\pm}(x;D)$, we only need to estimate $$\sum_{\ell=1}^{K} \frac{x^{5/6}MT\gamma_\ell}{x^{\ell/3}} e\left(\mp\sigma\frac{x}{D^2}\right) \Phi_\sigma\left(\frac{x}{D^2}\right),$$ where $$\label{eqn: Phi} \Phi_\sigma(y) = \int_{|v|\ll \frac{M^{{\varepsilon}}}{M}} g(Mv)e\left(\frac{vT}{\pi}\right) e\left(\pm y(\sigma-\phi_\sigma^{-2}(v))\right) w_1(v) dv.$$ Finally, we shall use the Mellin technique to analyze $\Phi_\sigma(y)$. By the same proof as in Young [@young2014weyl Lemma 8.2], for $$y\asymp Y\asymp X/D^2 \asymp \left\{\begin{array}{ll} X^{1/3}, & \textrm{if } \sigma=1, \\ XT^{-2}, & \textrm{if } \sigma=-1, \end{array}\right.$$ we have $$\label{eqn: Phi=} \Phi_\sigma(y) = \frac{1}{T} \int_{|t|\ll U} \lambda_{Y,T}(t)y^{it}dt + O(T^{-A}),$$ where $\lambda_{Y,T}$ and $U$ depend on $Y,T$. Precisely, we have $\lambda_{Y,T}(t)\ll 1$, and $$\label{eqn: U} \left\{\begin{array}{ll} U=T^2/Y, & \textrm{if } \sigma=1, \\ U=Y^{1/3}T^{2/3}, & \textrm{if } \sigma=-1. \end{array}\right.$$ (Note that the assumption $Y\gg1$ in Young [@young2014weyl Lemma 8.2] is not used. Here we just need $Y/|v_0|^2\gg T^{{\varepsilon}}$, where $|v_0|=x^{1/3}/D$ in the proof. And we can derive this from $Y/|v_0|^2 \asymp X/(D|v_0|)^2\asymp X^{1/3}\gg T^{{\varepsilon}}$.) Note that in either case, we have $U\gg T^{\varepsilon}$. Now, by , we have $Y\asymp D$ if $\sigma=1$. This proves part (iii). It remains to deal with the case $X\ll T^{\varepsilon}$. By Blomer [@blomer2012subconvexity Lemma 7], for $D$ satisfying , we have $$\label{eqn: blomer Psi} x^k\frac{d^k}{dx^k}\check{\Psi}_\sigma^{\pm}(x;D) \ll_k (1+x^{1/3})^k x^{2/3}\|\psi_\sigma\|_\infty.$$ Now, by and Lemmas \[lemma: H+\] and \[lemma: H-\], we obtain the upper bound . Next, we want to use the Mellin technique to separate the variables. 
Recall that $$\Upsilon(t)=\int_{0}^{\infty}w\left(\frac{x}{X}\right)\check{\Psi}_\sigma^\pm(x;D)x^{it}\frac{dx}{x}.$$ Note that for $|t|\gg T^\epsilon$ (taking $\epsilon>2{\varepsilon}$), we have $t/x \gg T^{\varepsilon}$. So, using integration by parts repeatedly, for $|t|\gg T^\epsilon$ we have $\Upsilon(t)\ll (tT)^{-A}$. By Mellin inversion, for $x\in[X,2X]$, we have $$\check{\Psi}_\sigma^\pm(x;D) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \Upsilon(t) x^{-it}dt = \frac{1}{2\pi}\int_{-T^{\epsilon}}^{T^{\epsilon}} \Upsilon(t)x^{-it}dt + O(T^{-A}).$$ And for $|t|\ll T^{\epsilon}$, the upper bound of $\Upsilon(t)$ is a consequence of and . This proves part (ii) and finishes the proof of the lemma. Lemma \[lemma: Psi\] is good enough to give a nice bound for the terms related to the $K$-Bessel function. However, we do not know how to apply both the large sieve inequalities and a second application of the Voronoi formula when we want to bound the terms related to the $J$-Bessel function. So, on the one hand, in the following sections, we will obtain a bound without using the Voronoi formula twice. This bound is good in the $q$-aspect and not too bad in the $t$-aspect. On the other hand, in §\[sec: Psi II\], we will use another method to deal with the integral transforms that appear on the right hand side of the Voronoi formula, following Blomer [@blomer2012subconvexity §3], Li [@li2011bounds §4], and McKee–Sun–Ye [@mckee2015improved §6]. In this way, we will obtain a bound which is good in the $t$-aspect and not too bad in the $q$-aspect. Combining these two bounds, one can get a hybrid subconvexity bound. Applying the large sieve ======================== Let $$\label{eqn: H(w;q)} H(w;q) = \sum_{u,v({\operatorname{mod}\ }q)} \chi_q(uv(u+1)(v+1))e_q((uv-1)w).$$ By Conrey–Iwaniec [@conrey2000cubic Eq. 
(11.7)], we have $$\label{eqn: H to H^*} H(w;q) = \sum_{q_1q_2=q} \mu(q_1)\chi_{q_1}(-1)H^*(\overline{q_1}w;q_2),$$ where $$\label{eqn: H^*} H^*(w;q) = \sum_{\substack{u,v({\operatorname{mod}\ }q)\\(uv-1,q)=1}} \chi_q(uv(u+1)(v+1))e_q((uv-1)w),$$ and from [@conrey2000cubic Eq. (11.9)], we have $$\label{eqn: H^*=} H^*(w;q) = \frac{1}{\varphi(q)}\sum_{\psi({\operatorname{mod}\ }q)} \tau(\bar{\psi})g(\chi,\psi)\psi(w),$$ where $\tau(\psi)$ is the Gauss sum, and $g(\chi,\psi)\ll q^{1+{\varepsilon}}$. We first recall the following hybrid large sieve. \[lemma: HLS\] Suppose $U\geq1$, and let $a_n$ be a sequence of complex numbers. Then $$\label{eqn: HLS} \int_{-U}^{U} \sum_{\psi({\operatorname{mod}\ }q)} \bigg|\sum_{n\leq N}a_n\psi(n)n^{it}\bigg|^2dt \ll (qU+N)\sum_{n\leq N}|a_n|^2.$$ See Gallagher [@gallagher1970large Theorem 2]. \[lemma: average H\] Suppose that $q$ is squarefree. Let $a_1,a_2,b_1,b_2\in{\ensuremath{\mathbb{Z}}}$, $c\in{\ensuremath{\mathbb{Z}}}$, such that $(b_1b_2c,q)=(a_1a_2,c)=1$. Let $N_2,D_2\geq1$. Suppose $\alpha_d,\beta_n\in{\ensuremath{\mathbb{C}}}$ with $|\alpha_d|\leq1$ for $1\leq d\leq D_2$, $1\leq n\leq N_2$, and $U\geq1$. Then for any ${\varepsilon}>0$, we have $$\label{eqn: average H} \begin{split} & \qquad \int_{|t|\ll U} \bigg|\sum_{\substack{n\asymp N_2,d\asymp D_2\\(nd,qc)=1}} \alpha_d\beta_ne(a_1\overline{a_2}\bar{d}n/c)H(b_1\overline{b_2}\bar{d}n;q)\left(\frac{n}{d}\right)^{it}\bigg|dt \\ & \ll \frac{q^{1/2+{\varepsilon}}}{c^{1/2-{\varepsilon}}} (qcU+D_2)^{1/2} D_2^{1/2} (qcU+N_2)^{1/2}\|\beta\|, \end{split}$$ where as usual $\|\beta\|=\left(\sum|\beta_n|^2\right)^{1/2}$. This is a variation of [@conrey2000cubic Lemma 11.1]. We combine the ingredients in both [@blomer2012subconvexity Lemma 13] and [@young2014weyl Lemma 9.2]. 
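The expansion and the hybrid large sieve of Lemma \[lemma: HLS\] both rest on orthogonality of Dirichlet characters. As a minimal numerical illustration (the modulus $q=7$ and the primitive root $3$ are merely convenient test choices), one can build all characters mod $7$ from the discrete logarithm and verify that $\sum_{\psi({\operatorname{mod}\ }q)}\psi(n)\overline{\psi(m)}$ equals $\varphi(q)$ when $n\equiv m ({\operatorname{mod}\ }q)$ and $0$ otherwise, for $(nm,q)=1$:

```python
import cmath

q = 7  # prime modulus (illustrative choice)
g = 3  # a primitive root mod 7
# Discrete logarithm table: ind[n] = k with g^k = n (mod q).
ind = {pow(g, k, q): k for k in range(q - 1)}

def psi(idx, n):
    """The idx-th Dirichlet character mod q, built from the discrete log."""
    return cmath.exp(2j * cmath.pi * idx * ind[n % q] / (q - 1))

def orth(n, m):
    """Sum over all characters mod q of psi(n) * conj(psi(m))."""
    return sum(psi(idx, n) * psi(idx, m).conjugate() for idx in range(q - 1))

same = orth(10, 3)  # 10 = 3 (mod 7): expect phi(7) = 6
diff = orth(2, 3)   # 2 != 3 (mod 7): expect 0
print(round(same.real, 8), round(abs(diff), 8))  # -> 6.0 0.0
```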
By , we have $$\begin{split} & \qquad \int_{|t|\ll U} \bigg|\sum_{\substack{n\asymp N_2,d\asymp D_2\\(nd,qc)=1}} \alpha_d\beta_ne(a_1\overline{a_2}\bar{d}n/c)H(b_1\overline{b_2}\bar{d}n;q)\left(\frac{n}{d}\right)^{it}\bigg|dt \\ & \leq \sum_{q_1q_2=q}\int_{|t|\ll U} \bigg|\sum_{\substack{n\asymp N_2,d\asymp D_2\\(nd,qc)=1}} \alpha_d\beta_ne(a_1\overline{a_2}\bar{d}n/c)H^*(b_1\overline{b_2}\bar{d}n\overline{q_1};q_2)\left(\frac{n}{d}\right)^{it}\bigg|dt. \end{split}$$ We just handle the case $q_2=q$, since the other cases turn out to have a smaller upper bound. By , we have $$\begin{split} & \qquad \int_{|t|\ll U} \bigg|\sum_{\substack{n\asymp N_2,d\asymp D_2\\(nd,qc)=1}} \alpha_d\beta_ne(a_1\overline{a_2}\bar{d}n/c)H^*(b_1\overline{b_2}\bar{d}n;q)\left(\frac{n}{d}\right)^{it}\bigg|dt \\ & \ll \int_{|t|\ll U} \frac{1}{\varphi(qc)}\sum_{\psi(q)}\sum_{\omega(c)}\tau(\bar{\psi})\tau(\bar{\omega})g(\chi,\psi)\bigg|\sum_{\substack{n\asymp N_2,d\asymp D_2\\(nd,qc)=1}} \alpha_d\beta_n\psi\omega(\bar{d}n)\left(\frac{n}{d}\right)^{it}\bigg|dt \\ & \ll \frac{q^{1/2+{\varepsilon}}}{c^{1/2-{\varepsilon}}} \bigg(\int_{|t|\ll U}\sum_{\psi\omega(qc)}\bigg|\sum_d \alpha_d \psi\omega(\bar{d})d^{-it}\bigg|^2 dt \bigg)^{1/2} \bigg(\int_{|t|\ll U}\sum_{\psi\omega(qc)}\bigg|\sum_n \beta_n \psi\omega(n)n^{it}\bigg|^2 dt \bigg)^{1/2}, \end{split}$$ where $|\alpha_d|\leq 1$. Now after applying Lemma \[lemma: HLS\], we prove the lemma. Proof of Theorem \[thm: q\] {#sec: thm q} =========================== Denote $R_q(n)=S(n,0;q)$ the Ramanujan sum. By a long and complicated computation, Blomer [@blomer2012subconvexity Eq. 
(51)] gave $$\label{eqn: cS=} \begin{split} {\mathcal{S}}_\sigma(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{c_1'c_2'=q} \sum_{\substack{f_1,f_2,d_2'\\ (f_1f_2d_2',c_2'\delta)=1\\ (f_1,f_2)=1,\mu^2(f_1)=1\\(f_1f_2,qd_2')=1}} \sum_{\substack{n_1'|f_1c_1'\\(n_1',d_2')=1}} \sum_{\substack{n_2\\ (n_2,d_2')=1}}\frac{d_2'\mu(f_2)}{c_2'} \\ & \quad \times \frac{A(n_2,n_1'f_2)}{n_1'n_2} \frac{\varphi(f_1f_2d_2'c_1')\varphi(f_1d_2'c_1'/n_1')}{\varphi(f_1f_2d_2'c_1'c_2')^2} e\left(\pm\sigma\frac{(n_1'c_2')^2f_2n_2\delta_0\overline{d_2'c_1'}}{f_1\delta'}\right) \\ & \quad \times \frac{h\chi_h(-1)}{\varphi(k)} R_k(n_2n_1'f_2c_2')R_k(c_2'\delta_0)R_k(n_1'f_2c_2') \\ & \quad \times H(\mp\sigma\overline{f_1d_2'hk}n_2(n_1'c_2')^2f_2c_2'\delta_0\overline{\delta'},\ell) \check{\Psi}_\sigma^\pm\left(\frac{n_2(n_1')^2N}{(f_1d_2'c_1')^3f_2},\frac{\sqrt{\delta N}}{qf_1f_2d_2'\delta_0}\right), \end{split}$$ where $\varphi$ is the Euler function, $\gamma=\pi^{3/2}\chi(-1)/2$, and $$\label{eqn: h,k,l} h=(f_1f_2d_2',q)=(d_2',q),\quad k=(n_2(n_1'f_2c_2')^2c_2,q/h), \quad \ell=q/(hk).$$ We summarize here the relations between these variables and the previous ones, although we do not need them in this section (see Blomer [@blomer2012subconvexity §6]): $$\begin{gathered} c=qr, \quad c_2=c/c_1, \quad \delta_0=(\delta,r), \quad \delta'=\delta/\delta_0,\nonumber\\ c'=c/\delta_0, \quad c_2'=c_2/\delta_0,\quad r'=r/\delta_0, \label{eqn: relations} \\ n_1'=n_1/f_2, \quad c_1'=c_1/r', \quad f_1f_2d_2'=r'.\nonumber \end{gathered}$$ As in Blomer [@blomer2012subconvexity §7], in the $q$-aspect one can use the decay conditions of $\check{\Psi}_\sigma^\pm$ to show that several variables can be dropped. In our case things become much more complicated, but the argument is still similar to that of Blomer [@blomer2012subconvexity §7]; in addition, we need to track the dependence on $T$ and $M$. 
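In the case analysis below, coprimality conditions such as $(d_2',n_2')=1$ are repeatedly opened by Möbius inversion, using the identity $\sum_{r\mid(d,n)}\mu(r)=1$ if $(d,n)=1$ and $=0$ otherwise. A brute-force numerical check of this standard identity (purely illustrative):

```python
from math import gcd

def mobius(n):
    """Naive Mobius function via trial division."""
    if n == 1:
        return 1
    count = 0
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # p^2 divides n
            count += 1
        p += 1
    if m > 1:
        count += 1
    return -1 if count % 2 else 1

def coprime_indicator(d, n):
    """Sum of mu(r) over r | gcd(d, n); equals 1 iff gcd(d, n) = 1, else 0."""
    s = gcd(d, n)
    return sum(mobius(r) for r in range(1, s + 1) if s % r == 0)

# Brute-force verification on a small box of pairs (d, n).
ok = all(coprime_indicator(d, n) == (1 if gcd(d, n) == 1 else 0)
         for d in range(1, 60) for n in range(1, 60))
print(ok)  # -> True
```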
One can see that the argument in §\[subsec: c\_2’=q s=-1\]–\[subsec: k=q s=-1\] is similar to §\[subsec: main case s=-1\], and even easier. In the $q$-aspect, results in §\[subsec: c\_2’=q s=-1\]–\[subsec: k=q s=-1\] are better. However, it seems that to get a good bound in the $t$-aspect, we have to use the large sieve in all cases. The main case {#subsec: main case s=-1} ------------- We first deal with the main case, that is, $c_1'=q,\ c_2'=h=k=1$. This is the most important case (at least in the $q$-aspect), so we give the details of the treatment of this case. Denote these terms in as ${\mathcal{S}}_\sigma^\dag(q,N;\delta)$. Note that we have $(d_2'n_1'n_2,q)=1$. Write $f_1=n_1'g$. Then we have $$\label{eqn: S_s^d} \begin{split} {\mathcal{S}}_\sigma^\dag(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta} \frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1',f_2,d_2'\\(gn_1'f_2d_2',q\delta)=1\\(gn_1',f_2)=1,\mu^2(gn_1')=1\\(gn_1'f_2,d_2')=1}} \frac{d_2'\mu(f_2)}{\varphi(n_1'f_2)n_1'} \\ & \quad \times \sum_{\substack{n_2\\(n_2,qd_2')=1}}\frac{A(n_2,n_1'f_2)}{n_2} e\left(\pm\sigma\frac{n_1'f_2n_2\delta_0\overline{d_2'q}}{g\delta'}\right) \\ & \quad \times H(\mp\sigma\overline{gd_2'}n_2n_1'f_2\delta_0\overline{\delta'},q) \check{\Psi}_\sigma^\pm\left(\frac{n_2N}{(gd_2'q)^3n_1'f_2}, \frac{\sqrt{\delta N}}{qgn_1'f_2d_2'\delta_0}\right). 
\end{split}$$ Since we have $(n_1'f_2\delta_0\overline{d_2'q},g\delta')=1$, let $$s=(n_2,g\delta'), \quad n_2=n_2's,\quad (n_2',g\delta'/s)=1.$$ We cancel the factor $s$ from the numerator and denominator of the exponential getting $$\label{eqn: cS<<} \begin{split} {\mathcal{S}}_\sigma^\dag(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta} \frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1',f_2\\(gn_1'f_2,q\delta)=1\\(gn_1',f_2)=1,\mu^2(gn_1')=1}} \frac{\mu(f_2)}{\varphi(n_1'f_2)n_1'} \sum_{s|g\delta'} \frac{1}{s} \\ & \quad \times \sum_{\substack{d_2'\\(d_2',q\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\\(n_2',qd_2'g\delta'/s)=1}}\frac{d_2'A(n_2's,n_1'f_2)}{n_2'} e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2'q}}{g\delta'/s}\right) \\ & \quad \times H(\mp\sigma\overline{gd_2'}n_2'sn_1'f_2\delta_0\overline{\delta'},q) \check{\Psi}_\sigma^\pm\left(\frac{n_2'sN}{(gd_2'q)^3n_1'f_2}, \frac{\sqrt{\delta N}}{qgn_1'f_2d_2'\delta_0}\right). \end{split}$$ The main actors in are the variables $d_2'$ and $n_2'$. We open the coprimality condition $(d_2',n_2')=1$ by Möbius inversion. We introduce a new variable $r$ and get $$\begin{split} {\mathcal{S}}_\sigma^\dag(q,N;\delta) & \ll (qT)^{\varepsilon}\sup_{N_2,D_2} \sum_{\pm} \sum_{\delta_0\delta'=\delta} \frac{1}{\delta_0} \sum_{\substack{g,n_1',f_2,r\\(gn_1'f_2,q\delta)=1\\(gn_1',f_2)=1,\mu^2(gn_1')=1\\(r,q\delta gn_1'f_2)=1}} \sum_{s|g\delta'} \frac{1}{f_2(n_1')^2s} \\ & \quad \times \bigg| \sum_{\substack{N_2\leq n_2'\leq 2N_2\\(n_2',qg\delta'/s)=1}} \sum_{\substack{D_2\leq d_2'\leq 2D_2 \\ (d_2',q\delta gn_1'f_2)=1}} \frac{d_2'A(n_2'rs,n_1'f_2)}{n_2'} e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2'q}}{g\delta'/s}\right) \\ & \quad \times H(\mp\sigma\overline{gd_2'}n_2'sn_1'f_2\delta_0\overline{\delta'},q) \check{\Psi}_\sigma^\pm\left(\frac{n_2'sN}{(gd_2'q)^3r^2n_1'f_2}, \frac{\sqrt{\delta N}}{qgn_1'f_2d_2'r\delta_0}\right)\bigg|. 
\end{split}$$ By Lemma \[lemma: Psi\], we can assume that $$\label{eqn: D_2} \left\{\begin{array}{ll} 1\leq D_2 \leq \frac{\sqrt{\delta N}}{qgn_1'f_2r\delta_0 TM^{1-{\varepsilon}}}, & \textrm{if } \sigma=1, \\ 1\leq D_2 \asymp \frac{\sqrt{\delta N}}{qgn_1'f_2r\delta_0 T}, & \textrm{if } \sigma=-1, \end{array}\right.$$ and $$\label{eqn: N_2} \left\{\begin{array}{ll} 1\leq N_2 \asymp \frac{(\delta^3N)^{1/2}}{(n_1'f_2)^2 r\delta_0^3 s}, & \textrm{if } \sigma=1, \\ 1\leq N_2 \leq \frac{(\delta^3N)^{1/2}}{(n_1'f_2)^2 r\delta_0^3 s M^{3-{\varepsilon}}}, & \textrm{if } \sigma=-1. \end{array}\right.$$ Now we consider the case $\sigma=-1$, and $x=\frac{n_2'sN}{(gd_2'q)^3r^2n_1'f_2}\gg T^{\varepsilon}$. By Lemma \[lemma: Psi\], we infer $$\begin{split} {\mathcal{S}}_\sigma^\dag(q,N;\delta) & \ll (qT)^{\varepsilon}M \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{\substack{g,n_1',f_2,r\\(gn_1'f_2,q\delta)=1\\(gn_1',f_2)=1,\mu^2(gn_1')=1\\(r,q\delta gn_1'f_2)=1}} \sum_{s|g\delta'}\frac{1}{(gf_2s)^{3/2}(n_1)^{7/2}r} \\ & \quad \times \sup_{D_2,N_2} \frac{N^{1/2}}{q^{3/2}N_2^{1/2}D_2^{1/2}} \int_{|t|\ll U} \bigg| \sum_{\substack{N_2\leq n_2'\leq 2N_2\\(n_2',qg\delta'/s)=1}} \sum_{\substack{D_2\leq d_2'\leq 2D_2 \\ (d_2',q\delta gn_1'f_2)=1}} \alpha(d_2')\beta(n_2') \\ & \quad \times A(n_2'rs,n_1'f_2) e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2'q}}{g\delta'/s}\right) H(\mp\sigma\overline{gd_2'}n_2'sn_1'f_2\delta_0\overline{\delta'},q) \left(\frac{n_2'}{d_2'}\right)^{it}\bigg|dt. 
\end{split}$$ Note that $U\asymp x^{1/3}\ll T^{1+{\varepsilon}}/M$, and from we have $$\bigg(\sum_{n_2'\asymp N_2}|A(n_2'rs,n_1'f_2)|^2\bigg)^{1/2} \ll (qT)^{\varepsilon}N_2^{1/2}(rsn_1'f_2)^{7/32}.$$ Applying Lemma \[lemma: average H\], we obtain $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^\dag(q,N;\delta) & \ll (qT)^{\varepsilon}M q^{-1}r^{7/32} \sup_{D_2,N_2} (qU+D_2)^{1/2} (qU\delta+N_2)^{1/2} \\ & \ll (qT)^{\varepsilon}M q^{-1} r^{7/32} \left(\frac{qT}{M}+\frac{(qT)^{1/2}}{r}\right)^{1/2} \left(\frac{qT}{M}\delta+\frac{(qT)^{3/2}}{rM^3}\right)^{1/2} \\ & \ll (qT)^{\varepsilon}\left( T\delta^{1/2}r^{7/32} + q^{1/4}T^{5/4}M^{-1}\right) \\ & \ll (qT)^{\varepsilon}\left( T\delta^{1/2}(qT)^{7/64} + q^{1/4}TM \right), \end{split}$$ provided $T^{1/8+{\varepsilon}}\ll M \ll T^{1/2}$. Here we use the fact $r\ll (qT)^{1/2+{\varepsilon}}$, which is a consequence of . Note that if $\sigma=1$, and $x\gg T^{\varepsilon}$, then the same argument will give $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^\dag(q,N;\delta) & \ll (qT)^{\varepsilon}M q^{-1}r^{7/32} \sup_{D_2,N_2} (qU+D_2)^{1/2} (qU\delta+N_2)^{1/2} \\ & \ll (qT)^{\varepsilon}M q^{-1} r^{7/32} \left(\frac{qT}{M}+\frac{(qT)^{1/2}}{r}\right)^{1/2} \left(\frac{qT}{M}\delta+\frac{(qT)^{3/2}}{r}\right)^{1/2} \\ & \ll (qT)^{\varepsilon}\left( T\delta^{1/2}r^{7/32} + q^{1/4}TM + q^{1/4}T^{5/4}M^{1/2}\right) \\ & \ll (qT)^{\varepsilon}\left( T\delta^{1/2}(qT)^{7/64} + q^{1/4}TM \right), \end{split}$$ provided $M\asymp T^{1/2}$. This will not give us a subconvexity bound in the $t$-aspect, so we have to sum over $n_2$ non-trivially. Now we consider the case $x\ll T^{\varepsilon}$, i.e., $N_2 \ll \frac{(\delta^3N)^{1/2}T^{\varepsilon}}{(n_1'f_2)^2 r\delta_0^3 s T^{3}}$. Note that the upper bound for $N$ implies that this will happen only if $q\gg T^{1-{\varepsilon}}$. 
By Lemma \[lemma: Psi\], for both $\sigma=\pm1$, we have $$\begin{split} {\mathcal{S}}_\sigma^\dag(q,N;\delta) & \ll (qT)^{\varepsilon}T \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{\substack{g,f_2,n_1',r\\(n_1'f_2g,q)=1\\(n_1'f_2\delta_0,g\delta')=1}} \sum_{s|g\delta'}\frac{1}{(gf_2s)^{3/2}(n_1)^{7/2}r} \\ & \quad \times \sup_{D_2,N_2} \frac{N^{1/2}}{q^{3/2}N_2^{1/2}D_2^{1/2}} \bigg| \sum_{\substack{N_2\leq n_2'\leq 2N_2\\(n_2',qg\delta'n_1'f_2/s)=1}} \sum_{\substack{D_2\leq d_2'\leq 2D_2 \\ (d_2',\delta n_1'gf_2q)=1}} \alpha(d_2')\beta(n_2') \\ & \quad \times A(n_2'rs,n_1'f_2) e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2'q}}{g\delta'/s}\right) H(\mp\sigma\overline{gd_2'}n_2'sn_1'f_2\delta_0\overline{\delta'},q) \bigg|. \end{split}$$ Now by Blomer [@blomer2012subconvexity Lemma 13], we have $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^\dag(q,N;\delta) & \ll (qT)^{\varepsilon}T q^{-1}r^{7/32} \sup_{D_2,N_2} (q+D_2)^{1/2} (q\delta+N_2)^{1/2} \\ & \ll (qT)^{\varepsilon}T q^{-1} r^{7/32} \left(q+\frac{(qT)^{1/2}}{r}\right)^{1/2} \left(q\delta+\frac{q^{3/2}}{rT^{3/2}}\right)^{1/2} \\ & \ll (qT)^{\varepsilon}T r^{7/32} \left(\delta+\frac{q^{1/2}}{rT^{3/2}}\right)^{1/2} \\ & \ll (qT)^{\varepsilon}\left( T\delta^{1/2}(qT)^{7/64} + (qT)^{1/4} \right). \end{split}$$ Hence in both cases, we prove . This finishes the estimation of ${\mathcal{S}}_\sigma^\dag(q,N;\delta)$ when $\sigma=-1$, and also when $\sigma=1$ and $x\ll T^{\varepsilon}$. In the next section, we will focus on the case $\sigma=1$ and $x\gg T^{\varepsilon}$. The case $c_2'=q$ {#subsec: c_2'=q s=-1} ----------------- In this subsection, we prove that the terms with $c_2'=q$, $c_1'=1$ have a good bound; that is, we show that we can assume $c_2'=1$. Denote these terms in by ${\mathcal{S}}_\sigma^\flat(q,N;\delta)$. Let $D=\frac{\sqrt{\delta N}}{qf_1f_2d_2'\delta_0}$. 
By Lemma \[lemma: Psi\], we can assume $$\left\{ \begin{array}{ll} D\gg TM^{1-{\varepsilon}}, & \textrm{if } \sigma=1, \\ D\asymp T, & \textrm{if } \sigma=-1. \end{array}\right.$$ Hence, by , we have $$x=\frac{n_2(n_1')^2N}{(f_1d_2'c_1')^3f_2} =\frac{n_2(n_1')^2 f_2^2\delta_0^3 (qD)^3}{(\delta^3 N)^{1/2}} \gg T^{\varepsilon}.$$ By the condition $(f_1f_2d_2',q\delta)=1$, we have $h=(d_2',q)=1$, and then $k=q$ and $\ell=1$. Hence, after writing $n_1'g=f_1$, we have $$\label{eqn: S_b^f} \begin{split} {\mathcal{S}}_\sigma^\flat(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1',f_2,d_2'\\ (gn_1'f_2d_2',q\delta)=1\\ (gn_1',f_2)=1,\mu^2(gn_1')=1\\(gn_1'f_2,qd_2')=1}} \frac{1}{\varphi(n_1'f_2)} \sum_{\substack{n_2\\ (n_2,d_2')=1}}\frac{d_2'\mu(f_2)}{q} \\ & \quad \times \frac{A(n_2,n_1'f_2)}{n_1'n_2} e\left(\pm\sigma\frac{n_1'q^2f_2n_2\delta_0\overline{d_2'}}{g\delta'}\right) \check{\Psi}_\sigma^\pm\left(\frac{n_2 N}{(gd_2')^3n_1'f_2},\frac{\sqrt{\delta N}}{qgn_1'f_2d_2'\delta_0}\right). \end{split}$$ Since we have $(n_1'q^2f_2\delta_0\overline{d_2'},g\delta')=1$, let $$s=(n_2,g\delta'), \quad n_2=n_2's,\quad (n_2',g\delta'/s)=1.$$ We cancel the factor $s$ from the numerator and denominator of the exponential, and open the coprimality condition $(n_2',d_2')=1$ by Möbius inversion (introducing the new variable $r$), to get $$\begin{split} {\mathcal{S}}_\sigma^\flat(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1',f_2,r\\ (gn_1'f_2,q\delta)=1\\ (gn_1',f_2)=1,\mu^2(gn_1')=1\\(r,q\delta gn_1'f_2)=1}} \frac{\mu(f_2)\mu(r)}{n_1'\varphi(n_1'f_2)} \sum_{s|g\delta'}\frac{1}{s} \\ & \quad \times \sum_{\substack{d_2'\\(d_2',q\delta gn_1'f_2)=1}}\sum_{\substack{n_2'\\(n_2',g\delta'/s)=1}} \frac{d_2'A(n_2'rs,n_1'f_2)}{qn_2'} \\ & \quad \times e\left(\pm\sigma\frac{n_1'q^2f_2n_2'\delta_0\overline{d_2'}}{g\delta'/s}\right) 
\check{\Psi}_\sigma^\pm\left(\frac{n_2's N}{(gd_2')^3r^2n_1'f_2},\frac{\sqrt{\delta N}}{qgn_1'f_2d_2'r\delta_0}\right). \end{split}$$ Then by Lemma \[lemma: Psi\], we have $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^\flat(q,N;\delta) & \ll (qT)^{\varepsilon}\sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1',f_2,r} \frac{1}{(gf_2)^{3/2}(n_1')^{5/2}r} \sum_{s|g\delta'}\frac{1}{s^{1/2}} \\ & \quad \times \sup_{D_2,N_2} \frac{M}{qD_2^{1/2}N_2^{1/2}}\int_{|t|\ll U} \bigg|\sum_{\substack{d_2'\asymp D_2\\(d_2',q\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\asymp N_2\\(n_2',g\delta'/s)=1}} \alpha(d_2')\beta(n_2') \\ & \quad \times A(n_2'rs,n_1'f_2) e\left(\pm\sigma\frac{n_1'q^2f_2n_2'\delta_0\overline{d_2'}}{g\delta'/s}\right) \left(\frac{n_2'}{d_2'}\right)^{it}\bigg|dt. \end{split}$$ Here we can assume that $$ \left\{\begin{array}{ll} 1\leq D_2 \leq \frac{\sqrt{\delta N}}{qgn_1'f_2r\delta_0 TM^{1-{\varepsilon}}}, & \textrm{if } \sigma=1, \\ 1\leq D_2 \asymp \frac{\sqrt{\delta N}}{qgn_1'f_2r\delta_0 T}, & \textrm{if } \sigma=-1, \end{array}\right.$$ and $$ \left\{\begin{array}{ll} 1\leq N_2 \asymp \frac{(\delta^3N)^{1/2}}{q^3(n_1'f_2)^2 r\delta_0^3 s}, & \textrm{if } \sigma=1, \\ 1\leq N_2 \leq \frac{(\delta^3N)^{1/2}}{q^3(n_1'f_2)^2 r\delta_0^3 s M^{3-{\varepsilon}}}, & \textrm{if } \sigma=-1. \end{array}\right.$$ Note that in both cases, we have $U\ll T^{1+{\varepsilon}}/M$. We can use the multiplicative characters to separate the variables in the exponential function, together with Cauchy–Schwarz inequality and Lemma \[lemma: HLS\], we obtain $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^\flat(q,N;\delta) & \ll (qT)^{\varepsilon}\sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1',f_2,r} \frac{1}{(gn_1')^{3/2}\varphi(n_1'f_2)f_2^{1/2}r} \sum_{s|g\delta'}\frac{1}{s^{1/2}} \\ & \quad \times \sup_{D_2,N_2} \frac{M}{q} (g\delta'/s)^{1/2} (U+D_2)^{1/2}(U+N_2)^{1/2} (rsn_1'f_2)^{7/32}. 
\end{split}$$ In the case $\sigma=-1$, we have $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^\flat(q,N;\delta) & \ll (qT)^{\varepsilon}\sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1',f_2,r} \frac{1}{(gn_1')^{3/2}\varphi(n_1'f_2)f_2^{1/2}r} \sum_{s|g\delta'}\frac{1}{s^{1/2}} \\ & \quad \times \frac{M}{q^{3/4}} (g\delta'/s)^{1/2} \left(\frac{T}{M} + \frac{T^{5/4}}{M^2r^{1/2}} \right) (rsn_1'f_2)^{7/32} \\ & \ll (qT)^{\varepsilon}\delta^{1/2} TM, \end{split}$$ provided $T^{1/8+{\varepsilon}}\ll M\ll T^{1/2}$, which is good enough for our purpose by . In the case $\sigma=1$, the same argument will give $$\frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^\flat(q,N;\delta) \ll (qT)^{\varepsilon}\delta^{1/2} TM,$$ provided $M\asymp T^{1/2}$. From now on we assume $c_2'=1$ and $c_1'=q$. The case $h=q$ {#subsec: h=q s=-1} -------------- Next we show that $h=q$ is negligible. Denote these terms in by ${\mathcal{S}}_\sigma^\natural(q,N;\delta)$. In this case, we have $k=\ell=1$ and $q|d_2'$. Write $f_1=gn_1'$, $d_2'=qd_2''$. We get $$\label{eqn: S_s^n} \begin{split} {\mathcal{S}}_\sigma^\natural(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1',f_2,r\\ (gn_1'f_2,q\delta)=1\\ (gn_1',f_2)=1,\mu^2(gn_1')=1\\(r,q\delta gn_1'f_2)=1}} \frac{\mu(f_2)\mu(r)}{n_1'\varphi(n_1'f_2)} \sum_{s|g\delta'} \frac{1}{s} \\ & \quad \times q^2\chi_q(-1) \sum_{\substack{d_2''\\ (d_2'',\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\\ (n_2',qg\delta'/s)=1}} \frac{d_2''A(n_2'rs,n_1'f_2)}{n_2'} \\ & \quad \times e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2''q^2}}{g\delta'/s}\right) \check{\Psi}_\sigma^\pm\left(\frac{n_2'sN}{(gd_2''q^2)^3r^2n_1'f_2}, \frac{\sqrt{\delta N}}{q^2gn_1'f_2d_2''r\delta_0}\right). \end{split}$$ Let $s=(n_2,g\delta')$ and $n_2=n_2's$ as before. 
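The cancellation of $s$ from the exponential, used here and in §\[subsec: main case s=-1\], rests on the elementary fact that for $s\mid n$ and $s\mid m$ with $(d,m)=1$ one has $e_m(a\bar{d}n)=e_{m/s}(a\bar{d}(n/s))$, where on the right the inverse $\bar{d}$ is taken modulo $m/s$. A quick numerical check with arbitrary sample values (chosen only for illustration):

```python
import cmath

def e(x):
    """e(x) = exp(2 pi i x)."""
    return cmath.exp(2j * cmath.pi * x)

# Sample values (illustrative only): s divides both n and the modulus m, and (d, m) = 1.
m, s, n, d, a = 45, 9, 18, 7, 4
assert n % s == 0 and m % s == 0

lhs = e(a * pow(d, -1, m) * n / m)                     # e_m(a * dbar * n), dbar mod m
rhs = e(a * pow(d, -1, m // s) * (n // s) / (m // s))  # after cancelling s, dbar mod m/s
print(abs(lhs - rhs) < 1e-9)  # -> True
```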
After opening the condition $(n_2',d_2'')=1$ by Möbius inversion, we obtain $$\begin{split} {\mathcal{S}}_\sigma^\natural(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1',f_2,r\\ (gn_1'f_2,q\delta)=1\\ (gn_1',f_2)=1,\mu^2(gn_1')=1\\(r,q\delta gn_1'f_2)=1}} \frac{\mu(f_2)\mu(r)}{n_1'\varphi(n_1'f_2)} \sum_{s|g\delta'} \frac{1}{s} \\ & \quad \times q^2\chi_q(-1) \sum_{\substack{d_2''\\ (d_2'',\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\\ (n_2',qg\delta'/s)=1}} \frac{d_2''A(n_2'rs,n_1'f_2)}{n_2'} \\ & \quad \times e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2''q^2}}{g\delta'/s}\right) \check{\Psi}_\sigma^\pm\left(\frac{n_2'sN}{(gd_2''q^2)^3r^2n_1'f_2}, \frac{\sqrt{\delta N}}{q^2gn_1'f_2d_2''r\delta_0}\right). \end{split}$$ Now we consider the sum over $d_2''\asymp D_2, n_2'\asymp N_2$, and we can assume $$ \left\{\begin{array}{ll} 1\leq D_2 \leq \frac{\sqrt{\delta N}}{q^2gn_1'f_2r\delta_0 TM^{1-{\varepsilon}}}, & \textrm{if } \sigma=1, \\ 1\leq D_2 \asymp \frac{\sqrt{\delta N}}{q^2gn_1'f_2r\delta_0 T}, & \textrm{if } \sigma=-1, \end{array}\right.$$ and $$ \left\{\begin{array}{ll} 1\leq N_2 \asymp \frac{(\delta^3N)^{1/2}}{(n_1'f_2)^2 r\delta_0^3 s}, & \textrm{if } \sigma=1, \\ 1\leq N_2 \leq \frac{(\delta^3N)^{1/2}}{(n_1'f_2)^2 r\delta_0^3 s M^{3-{\varepsilon}}}, & \textrm{if } \sigma=-1. \end{array}\right.$$ The following argument will depend on the size of $x=\frac{n_2'sN}{(gd_2''q^2)^3r^2n_1'f_2}$. 
If $x\gg T^{\varepsilon}$ and $\sigma=-1$, then by Lemma \[lemma: Psi\], we have $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\natural(q,N;\delta) & \ll \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1',f_2,r} \frac{1}{(gf_2)^{3/2}(n_1')^{5/2}r} \sum_{s|g\delta'} \frac{1}{s^{1/2}} \\ & \quad \times \sup_{D_2,N_2} \frac{M}{q(D_2N_2)^{1/2}} \int_{|t|\ll U} \bigg| \sum_{\substack{d_2''\asymp D_2\\ (d_2'',\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\asymp N_2 \\ (n_2',qg\delta'/s)=1}} \alpha(d_2'')\beta(n_2') \\ & \quad \times A(n_2'rs,n_1'f_2) e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2''q^2}}{g\delta'/s}\right) \left(\frac{n_2'}{d_2''}\right)^{it} \bigg|dt. \end{split}$$ Hence by Lemma \[lemma: HLS\] again, we have $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\natural(q,N;\delta) & \ll \sup_{D_2,N_2} (qT)^{\varepsilon}\frac{M}{q}r^{7/32}(U+D_2)^{1/2}(\delta U+N_2)^{1/2}\\ & \ll (qT)^{\varepsilon}\delta^{1/2}\left(\frac{T}{q}r^{7/32}+\frac{T^{5/4}}{q^{1/4}M}\right) \ll (qT)^{\varepsilon}TM\delta^{1/2}, \end{split}$$ provided $T^{1/8+{\varepsilon}}\ll M\ll T^{1/2}$. And again, if $x\gg T^{\varepsilon}$ and $\sigma=1$, the same argument shows that $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\natural(q,N;\delta) & \ll (qT)^{\varepsilon}\delta^{1/2}\left(\frac{T}{q}r^{7/32}+\frac{T^{5/4}M^{1/2}}{q^{1/4}}\right) \ll (qT)^{\varepsilon}TM\delta^{1/2}, \end{split}$$ provided $M\asymp T^{1/2}$. 
Now if $x\ll T^{\varepsilon}$, then by Lemma \[lemma: Psi\], for both $\sigma=\pm1$, we have $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\natural(q,N;\delta) & \ll \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1',f_2,r} \frac{1}{(gf_2)^{3/2}(n_1')^{5/2}r} \sum_{s|g\delta'} \frac{1}{s^{1/2}} \\ & \quad \times \sup_{D_2,N_2} \frac{T}{q(D_2N_2)^{1/2}} \bigg| \sum_{\substack{d_2''\asymp D_2\\ (d_2'',\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\asymp N_2 \\ (n_2',qg\delta'/s)=1}} \alpha(d_2'')\beta(n_2') \\ & \quad \times A(n_2'rs,n_1'f_2) e\left(\pm\sigma\frac{n_1'f_2n_2'\delta_0\overline{d_2''q^2}}{g\delta'/s}\right)\bigg|. \end{split}$$ Hence by Blomer [@blomer2012subconvexity Lemma 13], we have $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\natural(q,N;\delta) & \ll \sup_{D_2,N_2} (qT)^{\varepsilon}\frac{T}{q}r^{7/32}(1+D_2)^{1/2}(\delta+N_2)^{1/2}\\ & \ll (qT)^{\varepsilon}\delta^{1/2}\left(\frac{T}{q}r^{7/32}+\frac{T^{1/4}}{q^{1/4}}\right) \ll (qT)^{\varepsilon}TM\delta^{1/2}, \end{split}$$ provided $T^{1/8+{\varepsilon}}\ll M\ll T^{1/2}$. This proves in this case. So from now on, we can assume $c_2'=h=1$. The case $k=q$ {#subsec: k=q s=-1} -------------- Now, we show that we can also exclude the case $k=q$. First note that we can simplify $k=(n_2n_1',q)$. Hence we distinguish two cases and show that the contribution of $q|n_1'$ and $q|n_2$ is negligible. We first deal with the case $q|n_1'$. Denote these terms in as ${\mathcal{S}}_\sigma^\sharp(q,N;\delta)$. As before, we have $$x \gg \frac{q^2N}{q^3}\left(\frac{\sqrt{\delta N}}{Dq\delta_0}\right)^{-3} \gg \frac{q^2 D^3}{(\delta^3N)^{1/2}} \gg q^{1/2-{\varepsilon}}T^{3/2-{\varepsilon}} \gg T^{\varepsilon}.$$ Note that for $q$ prime, we have $R_q(b)=\varphi(q)$ if $q|b$, and $R_q(b)=-1$ if $q\nmid b$. 
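This evaluation of the Ramanujan sum at a prime modulus is easy to confirm directly from the definition $R_q(b)=\sum_{(a,q)=1}e_q(ab)$; the test modulus $q=11$ below is arbitrary:

```python
import cmath
from math import gcd

def ramanujan_sum(q, b):
    """R_q(b) = sum over a mod q with (a, q) = 1 of e(ab / q)."""
    return sum(cmath.exp(2j * cmath.pi * a * b / q)
               for a in range(1, q + 1) if gcd(a, q) == 1)

q = 11  # an arbitrary prime test modulus
# Expect R_q(b) = phi(q) = q - 1 when q | b, and R_q(b) = -1 otherwise.
vals = [round(ramanujan_sum(q, b).real, 6) for b in range(1, 2 * q + 1)]
print(vals[q - 1], vals[0])  # b = q and b = 1 -> 10.0 -1.0
```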
By the same process, (writing $n_1'=qn_1''$), we obtain $$\begin{split} {\mathcal{S}}_\sigma^\sharp(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{-\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1'',f_2,r\\ (gn_1''f_2,q\delta)=1\\ (gn_1'',f_2)=1,\mu^2(gn_1'')=1\\(r,q\delta gn_1'' f_2)=1}} \frac{\mu(f_2)\mu(r)}{n_1''\varphi(f_2)} \sum_{\substack{s|g\delta'\\(s,d_2')=1}}\frac{1}{s} \\ & \quad \times \sum_{\substack{d_2'\\ (d_2',q\delta gn_1''f_2)=1}} \sum_{\substack{n_2'\\ (n_2',g\delta'/s)=1}} \frac{d_2' A(n_2'rs,n_1''qf_2)}{qn_2'} \\ & \quad \times e\left(\pm\sigma\frac{q^2n_1''f_2n_2'\delta_0\overline{d_2'q}}{g\delta'/s}\right) \check{\Psi}_\sigma^\pm\left(\frac{n_2'sN}{(gd_2'q)^3r^2qn_1''f_2}, \frac{\sqrt{\delta N}}{q^2gn_1''f_2d_2'r\delta_0}\right). \end{split}$$ Now we consider the sum over $d_2'\asymp D_2, n_2'\asymp N_2$, and by Lemma \[lemma: Psi\], we can assume $$ \left\{\begin{array}{ll} 1\leq D_2 \leq \frac{\sqrt{\delta N}}{q^2gn_1''f_2r\delta_0 TM^{1-{\varepsilon}}}, & \textrm{if } \sigma=1, \\ 1\leq D_2 \asymp \frac{\sqrt{\delta N}}{q^2gn_1''f_2r\delta_0 T}, & \textrm{if } \sigma=-1, \end{array}\right.$$ and $$ \left\{\begin{array}{ll} 1\leq N_2 \asymp \frac{(\delta^3N)^{1/2}}{q^2(n_1''f_2)^2 r\delta_0^3 s}, & \textrm{if } \sigma=1, \\ 1\leq N_2 \leq \frac{(\delta^3N)^{1/2}}{q^2(n_1''f_2)^2 r\delta_0^3 s M^{3-{\varepsilon}}}, & \textrm{if } \sigma=-1. 
\end{array}\right.$$ If $\sigma=-1$, then by Lemma \[lemma: Psi\], we have $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\sharp(q,N;\delta) & \ll \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1'',f_2,r} \frac{1}{(gn_1''f_2)^{3/2}r} \sum_{\substack{s|g\delta'\\(s,d_2')=1}}\frac{1}{s^{1/2}} \\ & \quad \times \sup_{D_2,N_2} \frac{M}{q^3(D_2N_2)^{1/2}} \int_{|t|\ll U}\bigg| \sum_{\substack{d_2'\asymp D_2\\ (d_2',q\delta gn_1''f_2)=1}} \sum_{\substack{n_2'\asymp N_2\\ (n_2',d_2'g\delta'/s)=1}} \alpha(d_2')\beta(n_2') \\ & \quad \times A(n_2'rs,n_1''qf_2) e\left(\pm\sigma\frac{q^2n_1''f_2n_2'\delta_0\overline{d_2'q}}{g\delta'/s}\right) \left(\frac{n_2'}{d_2'}\right)^{it} \bigg|dt. \end{split}$$ Hence by Lemma \[lemma: HLS\] again, we have $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\sharp(q,N;\delta) & \ll \sup_{D_2,N_2} (qT)^{\varepsilon}\frac{M}{q^3}r^{7/32}(U+D_2)^{1/2}(\delta U+N_2)^{1/2}\\ & \ll (qT)^{\varepsilon}\delta^{1/2}\left(\frac{T}{q^3}r^{7/32}+\frac{T^{5/4}}{q^{3}M}\right) \ll (qT)^{\varepsilon}TM\delta^{1/2}, \end{split}$$ provided $T^{1/8+{\varepsilon}}\ll M\ll T^{1/2}$. If $\sigma=1$, we have $$\begin{split} \frac{1}{N^{1/2}} {\mathcal{S}}_\sigma^\sharp(q,N;\delta) & \ll (qT)^{\varepsilon}\delta^{1/2}\left(\frac{T}{q^3}r^{7/32}+\frac{T^{5/4}M^{1/2}}{q^{3}}\right) \ll (qT)^{\varepsilon}TM\delta^{1/2}, \end{split}$$ provided $M\asymp T^{1/2}$. From now on we assume $(q,n_1')=1$. Since $c_1'=q$ and $n_1'|f_1c_1'$, we have $n_1'|f_1$. We write $$n_1'g=f_1.$$ Now we treat the case $q|n_2$. Denote these terms in as ${\mathcal{S}}_\sigma^{\sharp\sharp}(q,N;\delta)$. Write $n_2=qn_2'$. 
By a similar argument, we get $$\begin{split} {\mathcal{S}}_\sigma^{\sharp\sharp}(q,N;\delta) & = \gamma \sum_{\pm} \sum_{\delta_0\delta'=\delta}\frac{\mu(\delta_0)\chi(\delta)}{\delta_0} \sum_{\substack{g,n_1',f_2,r\\ (gn_1'f_2,q\delta)=1\\ (gn_1',f_2)=1,\mu^2(gn_1')=1\\(r,q\delta gn_1'f_2)=1}} \frac{\mu(f_2)\mu(r)}{n_1'\varphi(n_1'f_2)} \sum_{\substack{s|g\delta'\\(s,d_2')=1}} \frac{1}{s} \\ & \quad \times \frac{1}{q} \sum_{\substack{d_2'\\(d_2',q\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\\ (n_2',g\delta'/s)=1}} \frac{d_2'A(n_2'rqs,n_1'f_2)}{n_2'} \\ & \quad \times e\left(\pm\sigma\frac{n_1'f_2n_2'q\delta_0\overline{d_2'q}}{g\delta'/s}\right) \check{\Psi}_\sigma^\pm\left(\frac{n_2'sqN}{(gd_2'q)^3r^2n_1'f_2},\frac{\sqrt{\delta N}}{qgn_1'f_2d_2'r\delta_0}\right). \end{split}$$ We consider the sum over $d_2'\asymp D_2, n_2'\asymp N_2$, and by Lemma \[lemma: Psi\], we can assume $$ \left\{\begin{array}{ll} 1\leq D_2 \leq \frac{\sqrt{\delta N}}{qgn_1'f_2r\delta_0 TM^{1-{\varepsilon}}}, & \textrm{if } \sigma=1, \\ 1\leq D_2 \asymp \frac{\sqrt{\delta N}}{qgn_1'f_2r\delta_0 T}, & \textrm{if } \sigma=-1, \end{array}\right.$$ and $$ \left\{\begin{array}{ll} 1\leq N_2 \asymp \frac{(\delta^3N)^{1/2}}{q(n_1'f_2)^2 r\delta_0^3 s}, & \textrm{if } \sigma=1, \\ 1\leq N_2 \leq \frac{(\delta^3N)^{1/2}}{q(n_1'f_2)^2 r\delta_0^3 s M^{3-{\varepsilon}}}, & \textrm{if } \sigma=-1. 
\end{array}\right.$$ If $x\gg T^{\varepsilon}$ and $\sigma=-1$, then by Lemma \[lemma: Psi\], we have $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^{\sharp\sharp}(q,N;\delta) & \ll \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1',f_2,r} \frac{1}{(gf_2)^{3/2}(n_1')^{5/2}r} \sum_{\substack{s|g\delta'\\(s,d_2')=1}} \frac{1}{s^{1/2}} \\ & \quad \times \sup_{D_2,N_2} \frac{M}{q^2(D_2N_2)^{1/2}} \int_{|t|\ll U} \bigg|\sum_{\substack{d_2'\asymp D_2\\(d_2',q\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\asymp N_2\\ (n_2',g\delta'/s)=1}} \alpha(d_2')\beta(n_2') \\ & \quad \times A(n_2'rqs,n_1'f_2) e\left(\pm\sigma\frac{n_1'f_2n_2'q\delta_0\overline{d_2'q}}{g\delta'/s}\right) \left(\frac{n_2'}{d_2'}\right)^{it} \bigg|dt. \end{split}$$ The same argument shows that $$\frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^{\sharp\sharp}(q,N;\delta) \ll (qT)^{\varepsilon}q^{-1}TM,$$ provided $T^{1/8+{\varepsilon}}\ll M\ll T^{1/2}$. And if $x\gg T^{\varepsilon}$ and $\sigma=1$, we get $$\frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^{\sharp\sharp}(q,N;\delta) \ll (qT)^{\varepsilon}q^{-1}TM,$$ provided $M\asymp T^{1/2}$ again. If $x\ll T^{\varepsilon}$, then by Lemma \[lemma: Psi\] again, for both $\sigma=\pm1$, we have $$\begin{split} \frac{1}{N^{1/2}}{\mathcal{S}}_\sigma^{\sharp\sharp}(q,N;\delta) & \ll \sum_{\delta_0\delta'=\delta}\frac{1}{\delta_0} \sum_{g,n_1',f_2,r} \frac{1}{(gf_2)^{3/2}(n_1')^{5/2}r} \sum_{\substack{s|g\delta'\\(s,d_2')=1}} \frac{1}{s^{1/2}} \\ & \quad \times \sup_{D_2,N_2} \frac{T}{q^2(D_2N_2)^{1/2}} \bigg|\sum_{\substack{d_2'\asymp D_2\\(d_2',q\delta gn_1'f_2)=1}} \sum_{\substack{n_2'\asymp N_2\\ (n_2',g\delta'/s)=1}} \alpha(d_2')\beta(n_2') \\ & \quad \times A(n_2'rqs,n_1'f_2) e\left(\pm\sigma\frac{n_1'f_2n_2'q\delta_0\overline{d_2'q}}{g\delta'/s}\right) \bigg|. \end{split}$$ A better bound will show up under the assumption $T^{1/8+{\varepsilon}}\ll M\ll T^{1/2}$. Conclusion ---------- From the above discussion, we can take $M\asymp T^{1/2}$. 
This proves Proposition \[prop: q\], and hence Theorem \[thm: q\]. On the other hand, recalling the definition of ${\mathcal{R}}^\pm$, we have $$\label{eqn: cR^-} {\mathcal{R}}^- \ll q^{1/4}TM(qT)^{\varepsilon},$$ provided $T^{1/8+{\varepsilon}}\ll M \ll T^{1/2}$. In the rest of this paper, we will use another method to bound ${\mathcal{R}}^+$, and then prove Theorem \[thm: t\] and Theorem \[thm: main\]. Initial setup of Theorem \[thm: t\] {#sec: setup t} =================================== As in Section \[sec: setup q\], we will use the moment method to prove Theorem \[thm: t\]. Since $q^{5/4}T^{3/2}\leq q^{4}T^{4/3}$ if $q\geq T^{2/33}$, Theorem \[thm: t\] follows from Theorem \[thm: q\] when $q\geq T^{2/33}$. To prove Theorem \[thm: t\], we therefore only need to consider the case $q\leq T^{2/33}$. However, for most of the following arguments we only require $q\leq T^{1/4}$, since this is what we will need in the proof of Theorem \[thm: main\]; see the end of §\[sec: MT\]. As before, we will first prove the following proposition. \[prop: t\] With notation as before, for any ${\varepsilon}>0$, and $T$ large, assuming $$\label{eqn: q&M} q\ll T^{1/6}, \quad \textrm{and} \quad T^{1/3+{\varepsilon}}\ll M \ll T^{1/2},$$ we have $$\sum_{u_j\in{\mathcal{B}}^*(q)\atop T-M\leq t_j\leq T+M} L(1/2,\phi\times u_j\times\chi) + \frac{1}{4\pi}\int_{T-M}^{T+M}|L(1/2+it,\phi\times\chi)|^2dt \ll_{\phi,{\varepsilon}} q^{4}TM(qT)^{{\varepsilon}}.$$ It is easy to see that Theorem \[thm: t\] will follow from Proposition \[prop: t\]. As in §\[sec: setup q\], it suffices to prove $$\label{eqn: cR^pm<<} {\mathcal{R}}^\pm \ll q^{3}TM(qT)^{{\varepsilon}},$$ provided $T^{1/3+{\varepsilon}}\ll M\ll T^{1/2}$. Recall that ${\mathcal{R}}^\pm$ is defined as before. Note that we have (\[eqn: cR^-\]), which gives a better bound for ${\mathcal{R}}^-$. So we only need to prove (\[eqn: cR^pm<<\]) for ${\mathcal{R}}^+$. 
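The exponent comparison at the start of this section ($q^{5/4}T^{3/2}\leq q^{4}T^{4/3}$ precisely when $q\geq T^{2/33}$) reduces to comparing exponents of $q$ and $T$; as a sketch, the rational arithmetic can be verified exactly:

```python
from fractions import Fraction as F

# q^(5/4) T^(3/2) <= q^4 T^(4/3)  <=>  T^(3/2 - 4/3) <= q^(4 - 5/4)
t_exp = F(3, 2) - F(4, 3)   # leftover exponent of T
q_exp = F(4) - F(5, 4)      # leftover exponent of q
assert t_exp == F(1, 6)
assert q_exp == F(11, 4)

# So the condition is q >= T^{(1/6)/(11/4)} = T^{2/33}.
assert t_exp / q_exp == F(2, 33)
```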
As Blomer [@blomer2012subconvexity §5] did, opening the Kloosterman sum, splitting the $n$-sum into residue classes mod $c$, and detecting the summation congruence condition by primitive additive characters, it suffices to prove that, for $T^{1/3+{\varepsilon}}\ll M\ll T^{1/2}$ (see Blomer [@blomer2012subconvexity p. 1406–1407]), $$\label{eqn: cR<<} \sum_{m^2\delta^3\leq (qT)^{3+{\varepsilon}}\atop (\delta,q)=1,\ |\mu(\delta)|=1} \frac{|A(1,m)|}{m\delta^{3/2}}|{\mathcal{S}}(q,N;\delta)| \ll q^{3}TM(qT)^{\varepsilon},$$ where $$\label{eqn: cS(n)} \begin{split} {\mathcal{S}}(q,N;\delta) & := \sum_{q|c}\frac{1}{c^2} \sum_{c_1|c}\underset{b(c_1)}{{\sum}^*} \underset{d(c)}{{\sum}^*} e\left(\frac{\bar{d}}{c}\right) \sum_{a(c)}\chi(a) e\left(-\frac{\bar{b}a}{c_1}\right) e\left(\frac{\delta da}{c}\right)\\ & \quad \times \sum_n A(n,1)e\left(\frac{\bar{b}n}{c_1}\right) v\left(\frac{n}{N}\right)n^{-1/2-u}H^+\left(\frac{4\pi \sqrt{\delta n}}{c}\right), \end{split}$$ where $v$ is a suitable fixed smooth function with support in $[1,2]$, and $N$ is as before. Note that, as pointed out at the end of §\[sec: setup q\], we can restrict the $c$-sum to $c\leq (qT)^B$ for some fixed $B>0$. Now we want to apply the Voronoi formula to the $n$-sum in (\[eqn: cS(n)\]). Before doing so, we need an asymptotic formula for $H^+$. This will be done in the next section. Integral transforms and special functions {#sec: Psi II} ========================================= In this section, we follow Blomer [@blomer2012subconvexity §3], Li [@li2011bounds §4], and McKee–Sun–Ye [@mckee2015improved §6] to give an estimate for the $n$-sum in (\[eqn: cS(n)\]). As in Li [@li2011bounds Proposition 4.1], we first give an asymptotic formula for $H^+$. We shall follow Li [@li2011bounds §4] and McKee–Sun–Ye [@mckee2015improved §6] rather closely, so readers who are familiar with their work can safely skip this section on a first reading. 
As Li [@li2011bounds §4] did, we have $$\label{eqn: H^+=} H^+(x) = H^+_1(x) + H^+_2(x) + O(T^{-A}),$$ where $$\label{eqn: H^+_1} H^+_1(x) = \frac{4TM}{\pi} \int_{t=-\infty}^\infty \int_{\zeta=- T^\varepsilon}^{T^\varepsilon} \frac{1}{\cosh t}\cos(x\cosh\zeta)e\left(\frac{tM\zeta}{\pi}\right) e\left(\frac{T\zeta}{\pi}\right)dtd\zeta,$$ and $$\label{eqn: H^+_2} H^+_2(x) = \frac{4M^2}{\pi} \int_{t=-\infty}^\infty \int_{\zeta=- T^\varepsilon}^{T^\varepsilon} \frac{t}{\cosh t}\cos(x\cosh\zeta)e\left(\frac{tM\zeta}{\pi}\right) e\left(\frac{T\zeta}{\pi}\right)dtd\zeta.$$ In the following we only treat $H^+_1(x)$, since $H^+_2(x)$ is a lower order term which can be handled in a similar way. It is clear that $$\label{eqn: H^+_1=} \begin{split} H^+_1(x) & =\frac{4MT}{\pi}\int_{-T^\varepsilon}^{T^\varepsilon} \widehat{k} \left(-\frac{M\zeta}{\pi}\right) \cos(x\cosh\zeta)e\left(\frac{T\zeta}{\pi}\right)d\zeta \\ & = 4T\int_{-\frac{MT^\varepsilon}{\pi}}^{\frac{MT^\varepsilon}{\pi}} \widehat{k}(\zeta)\cos\left(x\cosh\frac{\zeta\pi}{M}\right)e\left(-\frac{T\zeta}{M}\right)d\zeta, \end{split}$$ by making the change of variable $-\frac{M\zeta}{\pi}\mapsto\zeta$, where $$\label{eqn: k(t)} k(t)=\frac{1}{\cosh t},$$ and $$\widehat{k}(\zeta)=\int_{-\infty}^\infty k(t)e(-t\zeta)dt$$ is its Fourier transform. Since $\widehat{k}(\zeta)$ is a Schwartz class function, one can extend the integral in (\[eqn: H^+_1=\]) to $(-\infty, \infty)$ with a negligible error term. Now let $$\label{eqn: W(x)} W(x) := T\int_\mathbb{R} \widehat{k} (\zeta) \cos \left( x\cosh \frac{\zeta \pi}{M}\right) e\left( -\frac{T\zeta}{M} \right) d\zeta;$$ then we have $$H^+_1(x) = 4W(x) + O\left(T^{-A}\right).$$ \[lemma: W asymp\] - For $|x|\leq T^{1-{\varepsilon}}M$, we have $$W(x) \ll _{{\varepsilon},A} T^{-A}.$$ - Assume $MT^{1-{\varepsilon}}\leq x\leq T^2$, and $T^{1/3+2{\varepsilon}}\leq M\leq T^{1/2}$. Let $L_1,L_2\in {\ensuremath{\mathbb{Z}}}_{+}$. 
We have $$\label{eqn: H^+ asymp} \begin{split} W(x) & = \frac{MT}{\sqrt{x}}\sum_\pm e\left(\mp\frac{x}{2\pi}\pm \frac{T^2}{\pi x}\right) \sum_{l=0}^{L_1}\sum_{0\leq l_1\leq 2l}\sum_{\frac{l_1}{4}\leq l_2\leq L_2} c_{l,l_1,l_2} \frac{M^{2l-l_1}T^{4l_2-l_1}}{x^{l+3l_2-l_1}} \\ & \qquad \times \bigg[ \widehat{k}^{(2l-l_1)}\left(\mp\frac{2MT}{\pi x}\right) - \frac{\pi^6 ix}{6!M^6}(y^6\widehat{k}(y))^{(2l-l_1)}\left(\mp\frac{2MT}{\pi x}\right) \\ & \qquad\qquad + \frac{\pi^{12} i^2x^2}{2!(6!)^2M^{12}} (y^{12}\widehat{k}(y))^{(2l-l_1)}\left(\mp\frac{2MT}{\pi x}\right)\bigg] \\ & \quad + O\left(\frac{TM}{\sqrt{x}}\left(\frac{T^4}{x^3}\right)^{L_2+1} + T\left(\frac{M}{\sqrt{x}}\right)^{2L_2+3} + \frac{Tx^3}{M^{18}}\right), \end{split}$$ where $c_{l,l_1,l_2}$ are constants depending only on the indices. See Mckee–Sun–Ye [@mckee2015improved Proposition 6.1]. Now we estimate ${\mathcal{S}}(q,N;\delta)$. Let $x=\frac{4\pi\sqrt{\delta n}}{c}$ in the above lemma. Assume $MT^{1-{\varepsilon}}\leq x\leq T^2$. By choosing $L_1,L_2$ large enough (depending on ${\varepsilon}$) in , the contribution to ${\mathcal{S}}$ from the first two error terms can be made as small as desired. We need to estimate the contribution from the last error term. By the support of $v$, we may assume that $x\ll \frac{(qT)^{3/2+{\varepsilon}}}{c}$. Note that for $q\ll T^{1/4}$, we always have $x\leq T^2$. So the contribution from the last error term is bounded by $$\label{eqn: error1} \begin{split} &\quad \sum_{q|c}\frac{1}{c^2} \sum_{c_1|c}\underset{b(c_1)}{{\sum}^*} \sum_{a(c)}|S(\delta a,1;c)| \sum_{n\ll N} \frac{|A(n,1)|}{n^{1/2}} \frac{TN^{3/2}}{M^{18}c^3} \\ &\ll (qT)^{\varepsilon}\frac{T (qT)^{6}}{M^{18}} \sum_{q|c}\frac{1}{c^{5/2}} \ll (qT)^{\varepsilon}q^{7/2} TM \frac{T^6}{M^{19}} \ll (qT)^{\varepsilon}q^{3/2} TM, \end{split}$$ provided $T^{1/3+2{\varepsilon}}\leq M\leq T^{1/2}$ and $q\leq T^{1/6}$. 
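The kernel $k(t)=1/\cosh t$ introduced above has the explicit Fourier transform $\widehat{k}(\zeta)=\pi/\cosh(\pi^2\zeta)$, a standard Fourier pair not spelled out in the text; in particular this confirms that $\widehat{k}$ is Schwartz, with exponential decay. A quick numerical sanity check of this closed form (a sketch, using a plain Riemann sum, which is essentially exact here since the integrand is smooth and decays exponentially):

```python
import math

def k_hat_numeric(zeta, half_width=30.0, n=60_001):
    """Approximate k-hat(zeta) = integral of sech(t) * e(-t*zeta) dt.

    By symmetry the imaginary part vanishes, so only the cosine part
    is integrated; endpoint contributions are ~e^{-30}, negligible.
    """
    h = 2.0 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        t = -half_width + i * h
        total += math.cos(2.0 * math.pi * t * zeta) / math.cosh(t)
    return h * total

for zeta in (0.0, 0.1, 0.3):
    closed_form = math.pi / math.cosh(math.pi**2 * zeta)
    assert abs(k_hat_numeric(zeta) - closed_form) < 1e-8
```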
In the finite series , with our assumptions, we always have $$\frac{M^{2l-l_1}T^{4l_2-l_1}}{x^{l+3l_2-l_1}} \ll 1.$$ All the terms in are similar, and can be estimated in a similar way, so we will only work with the first term, that is, the term with $l=l_1=l_2=0$. We are led to estimate $$\label{eqn: tilde cS} \begin{split} \tilde{{\mathcal{S}}}(q,N;\delta) & := \frac{TM}{\delta^{1/4}} \sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{3/2}} \sum_{c_1|c}\underset{b(c_1)}{{\sum}^*} \underset{d(c)}{{\sum}^*} e\left(\frac{\bar{d}}{c}\right) \sum_{a(c)}\chi(a) e\left(-\frac{\bar{b}a}{c_1}\right) e\left(\frac{\delta da}{c}\right)\\ & \quad \times \sum_n A(n,1)e\left(\frac{\bar{b}n}{c_1}\right) \psi(n), \end{split}$$ where $$\label{eqn: C} C=\frac{\sqrt{\delta N}}{T^{1-{\varepsilon}}M},$$ and $$\label{eqn: psi} \psi(y) = y^{-3/4-u} v\left(\frac{y}{N}\right) \sum_\pm e\left(\mp\frac{2\sqrt{\delta y}}{c}\pm \frac{T^2c}{4\pi^2\sqrt{\delta y}}\right) \widehat{k}\left(\mp\frac{MTc}{2\pi^2\sqrt{\delta y}}\right).$$ Now we apply the Voronoi formula for the $n$-sum in , getting $$\label{eqn: tilde cS Voronoi} \begin{split} \tilde{{\mathcal{S}}}(q,N;\delta) & = \frac{TM}{\delta^{1/4}} \sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{3/2}} \sum_{c_1|c}\underset{b(c_1)}{{\sum}^*} \underset{d(c)}{{\sum}^*} e\left(\frac{\bar{d}}{c}\right) \sum_{a(c)}\chi(a) e\left(-\frac{\bar{b}a}{c_1}\right) e\left(\frac{\delta da}{c}\right)\\ & \quad \times \frac{c_1\pi^{3/2}}{2}\sum_{\pm}\sum_{n_1|c_1}\sum_{n_2\geq1} \frac{A(n_2,n_1)}{n_1n_2} S\left(b,\pm n_2;c_1/n_1\right) \Psi^\pm\left(\frac{n_1^2n_2}{c_1^3}\right) \\ & = \frac{\pi^{3/2}TM}{2}\sum_{\pm}\sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{3/2}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\sum_{n_2\geq1} \frac{A(n_2,n_1)}{n_1n_2} {\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q) \Psi^\pm\left(\frac{n_1^2n_2}{c_1^3}\right), \end{split}$$ where $\Psi^\pm(x)$ is defined as in with $\psi$ in , and 
${\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q)={\mathcal{T}}_{\delta,c_1,n_1,n_2}^{\pm,\sigma}(c,q)$ with $\sigma=1$. Now we will deal with $\Psi^\pm(x)$, where $x=\frac{n_1^2n_2}{c_1^3}$. Note that for $q\ll T^{1/4}$, we have $$xN = \frac{n_1^2n_2}{c_1^3}N \geq NC^{-3} \geq M^3 T^{1-{\varepsilon}}.$$ By Lemma \[lemma: Psi=M+O\], we have $$\Psi^\pm(x) = \gamma_1 x^{2/3} \sum_{\sigma\in\{\pm1\}} \int_0^\infty a_{\sigma}(y) e\left(\sigma\frac{2\sqrt{\delta y}}{c}\pm3(xy)^{1/3}\right) dy + \textrm{lower order terms},$$ where $$ a_{\sigma}(y) = y^{-13/12-u} v\left(\frac{y}{N}\right) e\left(-\sigma \frac{T^2c}{4\pi^2\sqrt{\delta y}}\right) \widehat{k}\left(\sigma\frac{MTc}{2\pi^2\sqrt{\delta y}}\right).$$ Note that for $\Psi^+$, the phase with $\sigma=1$ has no stationary point, so its contribution to $\tilde{{\mathcal{S}}}$ is negligible; the same holds for $\Psi^-$ with $\sigma=-1$. Hence, we have $$\label{eqn: Psi(x)=} \Psi^\pm(x) = \gamma_1 x^{2/3} \int_0^\infty a^{\pm}(y) e\left(\mp\frac{2\sqrt{\delta y}}{c}\pm3(xy)^{1/3}\right) dy + \textrm{lower order terms},$$ where $$\label{eqn: a(y)} a^{\pm}(y) = v\left(\frac{y}{N}\right) e\left(\pm \frac{T^2c}{4\pi^2\sqrt{\delta y}}\right) \widehat{k}\left(\mp\frac{MTc}{2\pi^2\sqrt{\delta y}}\right) y^{-13/12-u}.$$ By the above, to prove Proposition \[prop: t\], we only need to show $$\label{eqn: tilde cR} \begin{split} \tilde{{\mathcal{R}}}=\tilde{{\mathcal{R}}}(q,N;\delta) & := \sum_{\pm}\sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{3/2}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\sum_{n_2\geq1} \frac{A(n_2,n_1)}{n_1n_2} \\ &\qquad \qquad \times {\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q) \Psi^\pm_0\left(\frac{n_1^2n_2}{c_1^3}\right) \ll (qT)^{\varepsilon}q^3, \end{split}$$ where $$\label{eqn: Psi(x)_0} \Psi^\pm_0(x) := x^{2/3} \int_0^\infty a^{\pm}(y) e\left(\mp\frac{2\sqrt{\delta y}}{c}\pm3(xy)^{1/3}\right) dy.$$ Now we will use the stationary phase method to deal with (\[eqn: Psi(x)_0\]). 
Denote $$\label{eqn: phi(y)} \phi(y)=\mp\frac{2\sqrt{\delta y}}{c}\pm3(xy)^{1/3}.$$ By the first derivative of $\phi$ and the support of $y$, we know $\Psi^\pm_0(x)$ is negligible unless $$\label{eqn: x&n_2} \frac{2}{3}\frac{(\delta^3N)^{1/2}}{c^3} \leq x \leq 2\frac{(\delta^3N)^{1/2}}{c^3}, \quad \textrm{that is,} \quad \frac{2}{3}\frac{(\delta^3N)^{1/2}c_1^3}{n_1^2c^3}\leq n_2 \leq 2\frac{(\delta^3N)^{1/2}c_1^3}{n_1^2c^3}.$$ Since the support of $a^{\pm}$ is in $[N,2N]$, we have $$\label{eqn: int a} \int_0^\infty a^{\pm}(y) e\left(\mp\frac{2\sqrt{\delta y}}{c}\pm3(xy)^{1/3}\right) dy = \int_{\frac{1}{4}x^2c^6/\delta^3}^{\frac{9}{2}x^2c^6/\delta^3} a^{\pm}(y) e\left(\mp\frac{2\sqrt{\delta y}}{c}\pm3(xy)^{1/3}\right) dy.$$ There is a stationary phase point $y_0=x^2c^6/\delta^3$ such that $\phi'(y_0)=0$. Note that we have $$N\leq y\leq 2N, \quad \textrm{and} \quad c\ll C=\frac{\sqrt{\delta N}}{T^{1-{\varepsilon}}M}.$$ Write $a(y)=a^{\pm}(y)$. Let $n_0\in{\ensuremath{\mathbb{N}}}$ which will be chosen later. Simple calculus estimates give us $$\phi^{(r)}(y) \ll \frac{\sqrt{\delta N}}{c}N^{-r}, \quad \textrm{for}\ r=2,\ldots,2n_0+3,$$ and $$a^{(r)}(y) \ll N^{-13/12}\left(\frac{\delta^{1/2}N^{3/2}}{T^2c}\right)^{-r}, \quad \textrm{for}\ r=0,1,\ldots,2n_0+1,$$ for $y\asymp N$. To apply Lemma \[lemma: MSY\], set $$\label{eqn: MTNU} M_0=10N, \quad T_0=\frac{\sqrt{\delta N}}{c}, \quad N_0=\frac{\delta^{1/2}N^{3/2}}{T^2c}, \quad U_0=N^{-13/12}.$$ Note that $\phi''(y)\gg T_0M_0^{-2}$ for $y\in[\frac{1}{4}x^2 c^6/\delta^3,\frac{9}{2}x^2 c^6/\delta^3]$, and the condition $N_0\geq M_0^{1+{\varepsilon}}/\sqrt{T_0}$ is consistent with our assumption $c\leq C$ when $M\geq T^{1/3+2{\varepsilon}}$. We are ready to apply Lemma \[lemma: MSY\] (where we take $n=n_0$). 
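As a concrete sanity check on this setup: for the phase $\phi(y)=\mp\frac{2\sqrt{\delta y}}{c}\pm3(xy)^{1/3}$, one can verify numerically, at illustrative parameter values (not arithmetic data from the paper), that $\phi'$ vanishes at $y_0=x^2c^6/\delta^3$ and that $|\phi''(y_0)|/2$ equals $\delta^{1/2}/(12\,c\,y_0^{3/2})$:

```python
# Illustrative parameter values, chosen only to exercise the formulas:
delta, c, x = 2.0, 3.0, 5.0
y0 = x**2 * c**6 / delta**3          # claimed stationary point

def phi(y):
    # phase with the upper sign choice: -2*sqrt(delta*y)/c + 3*(x*y)^(1/3)
    return -2.0 * (delta * y) ** 0.5 / c + 3.0 * (x * y) ** (1.0 / 3.0)

h = y0 * 1e-5
d1 = (phi(y0 + h) - phi(y0 - h)) / (2 * h)             # central first derivative
d2 = (phi(y0 + h) - 2 * phi(y0) + phi(y0 - h)) / h**2  # central second derivative

assert abs(d1) < 1e-9                                  # phi'(y0) = 0
lam2 = delta**0.5 / (12.0 * c * y0**1.5)               # lambda_2 = |phi''(y0)|/2
assert abs(abs(d2) / 2 - lam2) < 1e-2 * lam2           # agreement within 1%
```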
The main term of the integral in (\[eqn: int a\]) is $$\label{eqn: mainterm} \frac{e(\phi(y_0)+ {1}/{8})}{\sqrt{\phi^{''}(y_0)}} \Big( a(y_0) + \sum_{j=1}^{n_0}\varpi_{2j}\frac{(-1)^{j}(2j-1)!!}{(4\pi i\lambda_2)^j} \Big),$$ where $\lambda_2 = |\phi''(y_0)|/{2}$. Notice we have used $$\gamma - \alpha \asymp \beta - \gamma \asymp M_0 ,$$ with $\alpha = \frac{1}{4}x^2 c^6/\delta^3$, $\beta = \frac{9}{2}x^2 c^6/\delta^3$, and $\gamma = y_0 = x^2 c^6/\delta^3$. Conveniently, there are no boundary terms here; that is, the terms related to $H_i$ in Lemma \[lemma: MSY\] vanish. This is because $a$ is compactly supported, so $a$ and all of its derivatives are zero at $\frac{1}{4}x^2 c^6/\delta^3$ and $\frac{9}{2}x^2 c^6/\delta^3$. The sum of the four error terms in Lemma \[lemma: MSY\] can be simplified to $$\label{eqn: ET1} O\left( \frac{U_0 M_0^{2n_0 +2}}{T_0^{n_0 + 1} N_0^{2n_0 +1}} \right) = O\left( c^{3n_0+2}T^{4n_0+2}N^{-\frac{3}{2}n_0-\frac{13}{12}} \delta^{-\frac{3}{2}n_0-1}\right).$$ This estimate uses the current assumptions on $c$, and the size of $N$ compared to $q$ and $T$. Note that $$M_0 \gg N_0, \quad \textrm{if} \quad q\ll T^{1/4}.$$ We now need to deal with the $\varpi_{2j}$ terms in (\[eqn: mainterm\]), where $2 \leq 2j \leq 2n_0$. Recalling the expression for $\varpi_{2j}$, one sees that the main term in $\varpi_{2j}$ is $a^{(2j)}(y_0)/(2j)!$. (Here $a$, given in (\[eqn: a(y)\]), and $\phi$, given in (\[eqn: phi(y)\]), take the place of $g$ and $f$ in Lemma \[lemma: MSY\], and $y_0$ takes the place of $\gamma$.) Using the above estimates, we have $$\varpi_{2j} - \frac{a^{(2j)}(y_0)}{(2j)!} = O \left(\frac{U_0}{M_0 N_0^{2j-1}}\right).$$ The implied constant ultimately depends on $n_0$, and we have used $M_0 \gg N_0$. To estimate this error term contribution to $\tilde{\mathcal{R}}$, we must divide by $\lambda_2^{j+ 1/2}$ and sum over $j$. (See (\[eqn: mainterm\]).) Since $y_0 \asymp N$, we have $\lambda_2 \asymp \frac{\delta^{1/2}}{cN^{3/2}}$. 
We have then that this contribution is $$\label{eqn: ET2} O\left( N^{-\frac{25}{12}} \left(\frac{T^2 c}{\delta^{1/2}N^{\frac{3}{2}}}\right)^{2j-1} \left(\frac{cN^{\frac{3}{2}}}{\delta^{1/2}}\right)^{j + \frac{1}{2}} \right) = O\left( c^{3j - \frac{1}{2}} T^{4j-2} N^{-\frac{3}{2}j + \frac{1}{6}} \delta^{-\frac{3}{2}j+\frac{1}{4}} \right).$$ We must now estimate the $a^{(2j)}(y_0)$ term in $\varpi_{2j}$ in . Let $i_1$ be the number of times $v\left(\frac{y}{N}\right)$ is differentiated plus the number of times $y^{-{13}/{12}-u}$ is differentiated. So at every differentiation either the factor $\frac{1}{N}$ comes out, or up to a constant, the factor $\frac{1}{y}$ comes out. Notice that $\frac{1}{y} \asymp \frac{1}{N}$. Let $i_2$ be the number of times $\widehat{k}\Big(\frac{MTc}{2 \pi^2 \sqrt{y}}\Big)$ is differentiated, and denote $i_3$ to be the number of times $e\Big(\frac{-T^2c}{4 \pi^2 \sqrt{y}}\Big)$ is differentiated. Then $i_1 + i_2 + i_3 = 2j$, and neglecting coefficients (which ultimately depend on $n_0$), $a^{(2j)}(y_0)$ is the sum over all combinatorial possibilities of $$N^{-\frac{13}{12}-i_1} \left( \frac{MTc}{\delta^{1/2}N^{\frac{3}{2}}} \right)^{i_2} \left( \frac{T^2 c}{\delta^{1/2}N^{\frac{3}{2}}} \right)^{i_3}.$$ The main term is when $i_3 = 2j$ and we will estimate this separately, below. So we can assume in all terms, now, that $i_1 + i_2 \geq 1$. To estimate this error term, which is all but one term in $a^{(2j)}(y_0)$, as before, in , we must divide by $\lambda_2^{j + \frac{1}{2}}$ where $\lambda_2 \asymp \frac{\delta^{1/2}}{cN^{3/2}}$ with our assumption on $y_0$. 
We then have a sum of error terms, each of size $$O\left( M^{i_2} c^{j + i_2 + i_3 + \frac{1}{2}} T^{i_2 + 2i_3} N^{\frac{3}{2}j -i_1 -\frac{3}{2}i_2 -\frac{3}{2}i_3 - \frac{1}{3}} \delta^{-\frac{1}{2}(j+i_2+i_3+\frac{1}{2})} \right).$$ Using $i_3=2j-i_1-i_2$, the above bound becomes $$\label{eqn: ET3} O\left( M^{i_2} c^{3j - i_1 + \frac{1}{2}} T^{4j-2i_1-i_2} N^{-\frac{3}{2}j -\frac{5}{2}i_1 - \frac{1}{3}} \delta^{-\frac{3}{2}j+\frac{1}{2}i_1-\frac{1}{4}} \right).$$ We will bound the contribution of all these error terms to $\tilde{{\mathcal{R}}}$ in the next section. This leaves the main term of $a^{(2j)}(y_0)$ (where $i_3 = 2j$ and $i_1 = i_2 = 0$), which is $$\label{eqn: a2jalpha} a_{2j}(y_0) := \alpha_j \Big( \frac{T^2c}{\delta^{1/2}y_0^{3/2}} \Big)^{2j} v\Big(\frac{y_0}{N}\Big) \widehat{k}\Big(\frac{MTc}{2 \pi^2 \sqrt{\delta y_0}}\Big) e\Big(\frac{-T^2c}{4 \pi^2 \sqrt{\delta y_0}}\Big) y_0^{-{13}/{12}-u},$$ where the constant $\alpha_j$ depends on $j$ and can ultimately be bounded in terms of $n_0$. As Li [@li2011bounds §4] did, we cannot bound these terms trivially. Instead, we will apply the Voronoi formula a second time. This will be done in Section \[sec: MT\]. Contribution from the error terms {#sec: ET} ================================= In this section, we will bound the contribution of the error terms (\[eqn: ET1\]), (\[eqn: ET2\]), and (\[eqn: ET3\]) to $\tilde{{\mathcal{R}}}$; see (\[eqn: tilde cR\]). To do this, we need to recall the result of Blomer [@blomer2012subconvexity] for ${\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q)$. \[lemma: cT\] Assume $(q,\delta)=1$, $\delta_0|c_2$, $(r',c_2)=1$. 
Then $$\label{eqn: cT=} \begin{split} {\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q) & = e\left(\mp\frac{n_1^2n_2(c_2')^2c_2}{c'\delta'}\right) \frac{\varphi(c_1)\varphi(c_1/n_1)}{\varphi(c')^2} \frac{\mu(\delta_0)\chi(\delta)}{\delta_0} r^2 q \chi(-1) \\ & \quad \times \sum_{\substack{f_1f_2d_2'=r'\\(d_2',f_1n_1n_2)=1\\ (f_1,f_2)=1,\mu^2(f_1)=1\\(f_1f_2,q)=1,f_2|n_1}} \frac{\mu(f_2)}{f_1} e\left(\pm\frac{(n_1'c_2')^2f_2n_2\delta_0\overline{d_2'c_1'}}{f_1\delta'}\right) {\mathcal{V}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q), \end{split}$$ where $$\label{eqn: cV:=} \begin{split} {\mathcal{V}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q) & := \sum_{g_5,g_6(q)}\chi(g_5g_6) \chi(g_5r'+c_2g_6r'\mp c_2n_2n_1c_2')\\ & \qquad \times \chi(r'g_6\mp n_2n_1c_2')e\left(\bar{\delta'}n_1c_2'\frac{g_5+c_2g_6}{q}\right). \end{split}$$ Furthermore, we have $$\label{eqn: cV} \begin{split} {\mathcal{V}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q) & = \chi_h(-1)\frac{h}{\varphi(k)} R_k(n_2n_1c_2')R_k(c_2)R_k(n_1c_2') H(\mp\overline{r'hk}n_2(n_1c_2')^2c_2\overline{\delta'},\ell). \end{split}$$ Recall that $h,k,\ell$ are defined in , and the relations of the variables are summarized in . If one of the conditions $\delta_0|c_2$ and $(r',c_2)=1$ is not satisfied, then ${\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q)=0$. See Blomer [@blomer2012subconvexity Lemma 12, eq. (50), and §7]. 
By the definition of ${\mathcal{V}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q)$, we have the following trivial bound $$|{\mathcal{V}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q)|\leq q^2.$$ A trivial estimate shows $${\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q) \ll \frac{\delta_0r^2q^3}{c_2^2n_1}\tau_3(c).$$ The contribution to from the error term in is bounded by $$\begin{split} E_1 & = \sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{\frac{3}{2}}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\sum_{n_2} \frac{|A(n_2,n_1)|}{n_1n_2} \frac{\delta_0r^2q^3}{c_2^2n_1}\tau_3(c) \left(\frac{n_1^2n_2}{c_1^3}\right)^{\frac{2}{3}} c^{3n_0+2}T^{4n_0+2}N^{-\frac{3}{2}n_0-\frac{13}{12}} \delta^{-\frac{3}{2}n_0-1} \\ & \leq (qT)^{\varepsilon}T^{4n_0+2} N^{-\frac{3}{2}n_0-\frac{13}{12}} \delta^{-\frac{3}{2}n_0-1} q \sum_{\substack{q|c\\c\ll C}} c^{3n_0+\frac{1}{2}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\frac{1}{n_1^{2/3}} \sum_{n_2} \frac{|A(n_2,n_1)|}{n_2^{1/3}}, \end{split}$$ where $n_2$ satisfies . We have $$\begin{split} E_1 & \ll (qT)^{\varepsilon}T^{4n_0+2} N^{-\frac{3}{2}n_0-\frac{13}{12}} \delta^{-\frac{3}{2}n_0-1} q \sum_{\substack{q|c\\c\ll C}} c^{3n_0+\frac{1}{2}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\frac{1}{n_1^{2/3}} \frac{(\delta^3N)^{1/3}c_1^2}{n_1^{4/3}c^2} n_1 \\ & \ll (qT)^{\varepsilon}T^{4n_0+2} N^{-\frac{3}{2}n_0-\frac{3}{4}} \delta^{-\frac{3}{2}n_0} q \sum_{\substack{q|c\\c\ll C}} c^{3n_0+\frac{3}{2}} \\ & \ll (qT)^{\varepsilon}T^{4n_0+2} N^{-\frac{3}{2}n_0-\frac{3}{4}} \delta^{-\frac{3}{2}n_0} C^{3n_0+\frac{5}{2}}. \end{split}$$ By and , (noting that we can let the ${\varepsilon}$ in the upper bound of $C$ be much smaller than the ${\varepsilon}$ in the lower bound of $M$), we have $$\label{eqn: error2E1} E_1 \ll (qT)^{\varepsilon}q^{3/2} T^{n_0+1} M^{-3n_0-5/2} \ll (qT)^{\varepsilon}q^{3/2},$$ provided $T^{1/3+{\varepsilon}}\ll M \ll T^{1/2}$ and $n_0\geq 1/{\varepsilon}$. 
Similarly, the contribution to from the error term in is bounded by $$\label{eqn: error2E2} \begin{split} E_2 & = \sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{\frac{3}{2}}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\sum_{n_2} \frac{|A(n_2,n_1)|}{n_1n_2} \frac{\delta_0r^2q^3}{c_2^2n_1}\tau_3(c) \\ & \qquad \times \left(\frac{n_1^2n_2}{c_1^3}\right)^{\frac{2}{3}} c^{3j-\frac{1}{2}} T^{4j-2} N^{-\frac{3}{2}j + \frac{1}{6}} \delta^{-\frac{3}{2}j+\frac{1}{4}} \\ & \ll (qT)^{\varepsilon}T^{4j-2} N^{-\frac{3}{2}j + \frac{1}{2}} \delta^{-\frac{3}{2}j+\frac{5}{4}} q \sum_{\substack{q|c\\c\ll C}} c^{3j-1} \\ & \ll (qT)^{\varepsilon}T^{4j-2} N^{-\frac{3}{2}j + \frac{1}{2}} \delta^{-\frac{3}{2}j+\frac{5}{4}} C^{3j} \\ & \ll (qT)^{\varepsilon}T^{j-2} N^{\frac{1}{2}} \delta^{\frac{5}{4}} M^{-3j} \ll (qT)^{\varepsilon}q^{\frac{3}{2}} T^{j-2} M^{-3j} \ll (qT)^{\varepsilon}q^{\frac{3}{2}}, \end{split}$$ for $0\leq j \leq n_0$, provided $T^{1/3}\ll M \ll T^{1/2}$. Finally, the contribution to from the error term in is bounded by $$\label{eqn: error2E3} \begin{split} E_3 & = \sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{\frac{3}{2}}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\sum_{n_2} \frac{|A(n_2,n_1)|}{n_1n_2} \frac{\delta_0r^2q^3}{c_2^2n_1}\tau_3(c) \left(\frac{n_1^2n_2}{c_1^3}\right)^{\frac{2}{3}} \\ & \qquad \times M^{i_2} c^{3j - i_1 + \frac{1}{2}} T^{4j-2i_1-i_2} N^{-\frac{3}{2}j -\frac{5}{2}i_1 - \frac{1}{3}} \delta^{-\frac{3}{2}j+\frac{1}{2}i_1-\frac{1}{4}} \\ & \ll (qT)^{\varepsilon}M^{i_2} T^{4j-2i_1-i_2} N^{-\frac{3}{2}j -\frac{5}{2}i_1} \delta^{-\frac{3}{2}j+\frac{1}{2}i_1+\frac{3}{4}} q \sum_{\substack{q|c\\c\ll C}} c^{3j-i_1} \\ & \ll (qT)^{\varepsilon}M^{-3j+i_1+i_2-1} T^{j-i_1-i_2-1} N^{-2i_1+\frac{1}{2}} \delta^{\frac{5}{4}} \\ & \ll (qT)^{\varepsilon}q^{\frac{3}{2}} M^{-3j+i_1+i_2-1} T^{j-i_1-i_2+\frac{1}{2}} \ll (qT)^{\varepsilon}q^{\frac{3}{2}}, \end{split}$$ for $1\leq i_1+i_2 \leq 2j$, provided $T^{1/3}\ll M \ll T^{1/2}$. We finish the estimate of these error terms. 
Completion of the proof of Theorem \[thm: t\] and Theorem \[thm: main\] {#sec: MT} ======================================================================= \[sec: thm t\] In this section, we will give the proof of Theorem \[thm: t\] and Theorem \[thm: main\]. At first, we will estimate the contribution to $\tilde{{\mathcal{R}}}$ of $a_{2j}(y_0)$ in . To bound this, we only need to estimate $$\label{eqn: tilde cR j} \begin{split} \tilde{{\mathcal{R}}}_j =\tilde{{\mathcal{R}}}_j(q,N;\delta) & := \sum_{\pm}\sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{3/2}} \sum_{c_1|c}c_1 \sum_{n_1|c_1}\sum_{n_2\geq1} \frac{A(n_2,n_1)}{n_1n_2} \\ & \qquad \times x^{2/3} \frac{a_{2j}(y_0)}{\lambda_2^{j+1/2}} e(\phi(y_0)){\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q), \end{split}$$ for all $0\leq j\leq n_0$, where $x=\frac{n_1^2n_2}{c_1^3}$, $y_0=\frac{x^2c^6}{\delta^3}$, and $\lambda_2=\frac{\delta^{1/2}}{12cy_0^{3/2}}$. Inserting these values, together with , we have $$\begin{split} \tilde{{\mathcal{R}}}_j & = 12^{j+1/2}\alpha_j T^{4j} \delta^{3j+3/4+3u} \sum_{\pm}\sum_{\substack{q|c\\c\ll C}}\frac{1}{c^{6j+3+6u}} \sum_{c_1|c}c_1^{9j+1+6u} \sum_{n_1|c_1}\frac{1}{n_1^{6j+1+4u}} \\ & \quad \times \sum_{n_2\geq1} \frac{A(n_2,n_1)}{n_2^{3j+1+2u}} v\Big(\frac{n_1^4n_2^2c^6}{c_1^6\delta^3N}\Big) \widehat{k}\Big(\frac{MT\delta c_1^3}{2 \pi^2 n_1^2n_2c^2}\Big) e\Big(\frac{-T^2\delta c_1^3}{4 \pi^2 n_1^2n_2c^2}\Big) \check{{\mathcal{T}}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q), \end{split}$$ where $$\check{{\mathcal{T}}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q):= e\Big(\pm\frac{n_1^2n_2c^2}{c_1^3\delta}\Big){\mathcal{T}}_{c_1,n_1,n_2}^{\pm,\delta}(c,q).$$ By Lemma \[lemma: cT\], after some simplification, we have $$\label{eqn: tilde cR_j=} \begin{split} \tilde{{\mathcal{R}}}_j & = \frac{\gamma_j T^{4j} \delta^{3j+3/4+3u}}{q} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{\mu(\delta_0)\chi(\delta)}{\delta_0^{6j+2+6u}} \sum_{c_1'c_2'=q} \frac{(c_1')^{3j}}{(c_2')^{6j+1+6u}} \\ & \quad \times 
\sum_{\substack{f_1f_2d_2'\ll C/(q\delta_0) \\ (f_1f_2d_2',c_2'\delta)=1 \\(f_1,f_2)=1,\mu^2(f_1)=1\\(f_1f_2,qd_2')=1}} \frac{\mu(f_2) f_1^{3j} (d_2')^{3j}}{f_2^{3j+4u}f_1f_2} \sum_{\substack{n_1'|f_1c_1'\\(n_1',d_2')=1}} \frac{1}{(n_1')^{6j+1+4u}} \frac{\varphi(f_1f_2d_2'c_1')\varphi(f_1d_2'c_1'/n_1')}{\varphi(f_1f_2d_2'c_1'c_2')^2} \\ & \quad \times \sum_{\substack{n_2\\(n_2,d_2')=1}} \frac{A(n_2,n_1'f_2)}{n_2^{3j+1+2u}} e\left(\pm\frac{(n_1'c_2')^2f_2n_2\delta_0\overline{d_2'c_1'}}{f_1\delta'}\right) \frac{h\chi_h(-1)}{\varphi(k)} \\ & \quad \times R_k(n_2n_1'f_2c_2')R_k(c_2'\delta_0)R_k(n_1'f_2c_2') H(\mp\overline{f_1d_2'hk}n_2(n_1'c_2')^2f_2c_2'\delta_0\overline{\delta'},\ell) \\ & \quad \times v\Big(\frac{n_1^4n_2^2(c_2')^6\delta_0^3}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTc_1'f_1d_2'\delta'}{2 \pi^2 \delta_0f_2(c_2'n_1')^2n_2}\Big) e\Big(\frac{-T^2c_1'f_1d_2'\delta'}{4 \pi^2 \delta_0f_2(c_2'n_1')^2n_2}\Big), \end{split}$$ where $\gamma_j=12^{j+\frac{1}{2}}\chi(-1)\alpha_j$. Recall that the relations of the new variables and the old variables are $$\label{eqn: var} c=c_1'f_1f_2d_2'c_2'\delta_0, \quad c_1=c_1'f_1f_2d_2',\quad c_2=c_2'\delta_0,\quad r=f_1f_2d_2'\delta_0,\quad n_1=n_1'f_2.$$ Note that $u\in[{\varepsilon}-i\log^2(qT),{\varepsilon}+i\log^2(qT)]$, so the appearance of $u$ in the exponents is harmless. As in §\[sec: thm q\], we have the following four cases to handle. Since all these cases are similar, we will only deal with the main case, that is the case $c_1'=q,\ c_2'=h=k=1$. Denote these terms in as $\tilde{{\mathcal{R}}}_j^\dag$. Note that we have $(d_2'n_1'n_2,q)=1$. Write $f_1=n_1'g$. 
Then we have $$\label{eqn: R_j^d} \begin{split} \tilde{{\mathcal{R}}}_j^\dag & = \gamma_j q^{3j-1} T^{4j} \delta^{3j+3/4+3u} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{\mu(\delta_0)\chi(\delta)}{\delta_0^{6j+2+6u}} \sum_{\substack{gn_1'f_2\ll C/(q\delta_0) \\ (gn_1'f_2,q\delta)=1 \\(gn_1',f_2)=1,\mu^2(gn_1')=1}} \\ & \quad \times \frac{\mu(f_2) g^{3j-1} }{f_2^{3j+1+4u}\varphi(n_1'f_2)(n_1')^{3j+2+4u}} \sum_{\substack{d_2'\ll C/(q\delta_0gn_1'f_2)\\(d_2',qgn_1'f_2\delta)=1}} (d_2')^{3j} \\ & \quad \times \sum_{\substack{n_2\\(n_2,d_2')=1}} \frac{A(n_2,n_1'f_2)}{n_2^{3j+1+2u}} e\left(\pm\frac{n_1'f_2\delta_0\overline{d_2'q}n_2}{g\delta'}\right) H(\mp\overline{gd_2'}n_1'f_2\delta_0\overline{\delta'}n_2,q) \\ & \quad \times v\Big(\frac{(n_1'f_2)^4\delta_0^3n_2^2}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'n_2}\Big) e\Big(\frac{-T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'n_2}\Big). \end{split}$$ To remove the coprime condition $(n_2,d_2')=1$, we split the $n_2$-sum into residue classes mod $d_2'$, and then detect the summation congruence condition by additive characters mod $d_2'$, getting $$ \begin{split} \tilde{{\mathcal{R}}}_j^\dag & \ll (qT)^{\varepsilon}q^{3j-1} T^{4j} \delta^{3j+3/4} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{1}{\delta_0^{6j+2}} \sum_{\substack{gn_1'f_2\ll C/(q\delta_0) \\ (gn_1'f_2,q\delta)=1 \\(gn_1',f_2)=1,\mu^2(gn_1')=1}} \\ & \quad \times \frac{g^{3j-1}}{f_2^{3j+2}(n_1')^{3j+3}} \sum_{\substack{d_2'\ll C/(q\delta_0gn_1'f_2)\\(d_2',qgn_1'f_2\delta)=1}} (d_2')^{3j} \Bigg| \underset{a_1(d_2')}{{\sum}^*} \frac{1}{d_2'} \sum_{b_1(d_2')}e\left(-\frac{b_1a_1}{d_2'}\right)\\ & \quad \times \sum_{n_2} \frac{A(n_2,n_1'f_2)}{n_2^{3j+1+2u}} e\left(\frac{b_1n_2}{d_2'}\right) e\left(\pm\frac{n_1'f_2\delta_0\overline{d_2'q}n_2}{g\delta'}\right) H(\mp\overline{gd_2'}n_1'f_2\delta_0\overline{\delta'}n_2,q) \\ & \quad \times v\Big(\frac{(n_1'f_2)^4\delta_0^3n_2^2}{(\delta')^3N}\Big) 
\widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'n_2}\Big) e\Big(\frac{-T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'n_2}\Big)\Bigg|. \end{split}$$ Now by , , and , we have $$\label{eqn: cR_j^d=} \tilde{{\mathcal{R}}}_j^\dag \ll \tilde{{\mathcal{R}}}_j^{\dag,1}+\tilde{{\mathcal{R}}}_j^{\dag,2},$$ where $$\label{eqn: cR_j^d1} \begin{split} \tilde{{\mathcal{R}}}_j^{\dag,1} & := (qT)^{\varepsilon}q^{3j-1} T^{4j} \delta^{3j+3/4} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{1}{\delta_0^{6j+2}} \sum_{\substack{gn_1'f_2\ll C/(q\delta_0) \\ (gn_1'f_2,q\delta)=1 \\(gn_1',f_2)=1,\mu^2(gn_1')=1}} \\ & \quad \times \frac{g^{3j-1} }{f_2^{3j+2}(n_1')^{3j+3}} \sum_{\substack{d_2'\ll C/(q\delta_0gn_1'f_2)\\(d_2',qgn_1'f_2\delta)=1}} (d_2')^{3j} \frac{1}{d_2'} \sum_{b_1(d_2')} |S(0,-b_1;d_2')| \\ & \quad \times \Bigg| \sum_{n_2} \frac{A(n_2,n_1'f_2)}{n_2^{3j+1+2u}} e\left(\frac{b_1n_2}{d_2'}\right) e\left(\pm\frac{n_1'f_2\delta_0\overline{d_2'q}n_2}{g\delta'}\right)\\ & \quad \times v\Big(\frac{(n_1'f_2)^4\delta_0^3n_2^2}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'n_2}\Big) e\Big(\frac{-T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'n_2}\Big)\Bigg|, \end{split}$$ and $$\label{eqn: cR_j^d2} \begin{split} \tilde{{\mathcal{R}}}_j^{\dag,2} & := (qT)^{\varepsilon}q^{3j+\frac{1}{2}} T^{4j} \delta^{3j+3/4} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{1}{\delta_0^{6j+2}} \sum_{\substack{gn_1'f_2\ll C/(q\delta_0) \\ (gn_1'f_2,q\delta)=1 \\(gn_1',f_2)=1,\mu^2(gn_1')=1}} \\ & \quad \times \frac{g^{3j-1} }{f_2^{3j+2}(n_1')^{3j+3}} \sum_{\substack{d_2'\ll C/(q\delta_0gn_1'f_2)\\(d_2',qgn_1'f_2\delta)=1}} (d_2')^{3j} \frac{1}{d_2'} \sum_{b_1(d_2')} |S(0,-b_1;d_2')| \\ & \quad \times \frac{1}{\varphi(q)}\sum_{\psi(q)} \Bigg| \sum_{n_2} \frac{A(n_2,n_1'f_2)}{n_2^{3j+1+2u}} e\left(\frac{b_1n_2}{d_2'}\right) e\left(\pm\frac{n_1'f_2\delta_0\overline{d_2'q}n_2}{g\delta'}\right) \psi(n_2)\\ & \quad \times 
v\Big(\frac{(n_1'f_2)^4\delta_0^3n_2^2}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'n_2}\Big) e\Big(\frac{-T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'n_2}\Big)\Bigg|. \end{split}$$ We will focus on $\tilde{{\mathcal{R}}}_j^{\dag,2}$, since it turns out that $\tilde{{\mathcal{R}}}_j^{\dag,1}$ is easier and has a better upper bound. At first we need to remove the factor $\psi(n_2)$ in the innermost sum of . Again, we split the $n_2$-sum into residue classes mod $q$, and then detect the summation congruence condition by additive characters mod $q$, getting $$ \begin{split} \tilde{{\mathcal{R}}}_j^{\dag,2} & = (qT)^{\varepsilon}q^{3j+\frac{1}{2}} T^{4j} \delta^{3j+3/4} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{1}{\delta_0^{6j+2}} \sum_{\substack{gn_1'f_2\ll C/(q\delta_0) \\ (gn_1'f_2,q\delta)=1 \\(gn_1',f_2)=1,\mu^2(gn_1')=1}} \frac{g^{3j-1} }{f_2^{3j+2}(n_1')^{3j+3}} \\ & \quad \times \sum_{\substack{d_2'\ll C/(q\delta_0gn_1'f_2)\\(d_2',qgn_1'f_2\delta)=1}} \frac{(d_2')^{3j}}{d_2'} \sum_{b_1(d_2')} |S(0,-b_1;d_2')| \frac{1}{\varphi(q)}\sum_{\psi(q)} \frac{1}{q}\sum_{b_2(q)} |S_\psi(0,-b_2;q)| \\ & \quad \times \Bigg| \sum_{n_2} \frac{A(n_2,n_1'f_2)}{n_2^{3j+1+2u}} e\left(\frac{b_1n_2}{d_2'}\right)e\left(\frac{b_2n_2}{q}\right) e\left(\pm\frac{n_1'f_2\delta_0\overline{d_2'q}n_2}{g\delta'}\right)\\ & \qquad\qquad \times v\Big(\frac{(n_1'f_2)^4\delta_0^3n_2^2}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'n_2}\Big) e\Big(\frac{-T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'n_2}\Big)\Bigg|, \end{split}$$ where $$S_\psi(m,n;c)=\underset{d(c)}{{\sum}^*} \psi(d) e\left(\frac{m\bar{d}+nd}{c}\right)$$ is the Kloosterman sum with character $\psi$. 
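Two standard facts are used repeatedly in the estimates that follow: the congruence-detection identity behind the splitting into residue classes above, and the classical evaluations of the degenerate sums $S(0,-b_1;d_2')$ and $S_\psi(0,-b_2;q)$ that have just appeared. We record them here for convenience (the Gauss-sum identity is stated for primitive $\psi$):

```latex
% Detection of n \equiv a (mod c) by additive characters:
\frac{1}{c}\sum_{b \,(\mathrm{mod}\, c)} e\Big(\frac{b(n-a)}{c}\Big)
  = \begin{cases} 1, & n\equiv a \ (\mathrm{mod}\ c),\\ 0, & \text{otherwise}; \end{cases}
% Ramanujan sum and Gauss sum (the latter for primitive \psi and (b,q)=1):
S(0,-b;c) = \sum_{d \mid (b,c)} d\,\mu\Big(\frac{c}{d}\Big) \ll (b,c)\,c^{\varepsilon},
\qquad
S_{\psi}(0,-b;q) = \overline{\psi}(-b)\,\tau(\psi), \quad |\tau(\psi)| = q^{1/2}.
```

The first identity lets one replace a sum over $n\equiv a\ (\mathrm{mod}\ c)$ by a free sum over $n$, at the cost of an extra average over $b$ modulo $c$.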
Note that $S(0,-b_1;d_2')$ is related to the Ramanujan sum, and $S_\psi(0,-b_2;q)$ is related to the Gauss sum. Inserting the upper bounds for these sums, we have $$\label{eqn: cR_j^d2<<} \begin{split} \tilde{{\mathcal{R}}}_j^{\dag,2} & \ll (qT)^{\varepsilon}q^{3j+1} T^{4j} \delta^{3j+3/4} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{1}{\delta_0^{6j+2}} \sum_{\substack{gn_1'f_2\ll C/(q\delta_0) \\ (gn_1'f_2,q\delta)=1 \\(gn_1',f_2)=1,\mu^2(gn_1')=1}}\\ & \quad \times \frac{g^{3j-1} }{f_2^{3j+2}(n_1')^{3j+3}} \sum_{\substack{d_2'\ll C/(q\delta_0gn_1'f_2)\\(d_2',qgn_1'f_2\delta)=1}} \frac{(d_2')^{3j}}{d_2'} \sum_{b_1(d_2')} (b_1,d_2') \frac{1}{q}\sum_{b_2(q)} \\ & \quad \times \Bigg| \sum_{n_2} \frac{A(n_2,n_1'f_2)}{n_2^{3j+1+2u}} e\left(\frac{(b_1qg\delta'+b_2d_2'g\delta'\pm n_1'f_2\delta_0)n_2}{d_2'qg\delta'}\right)\\ & \qquad\qquad \times v\Big(\frac{(n_1'f_2)^4\delta_0^3n_2^2}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'n_2}\Big) e\Big(\frac{-T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'n_2}\Big)\Bigg|. 
\end{split}$$ Now we will handle the inner $n_2$-sum, that is, $$\label{eqn: n_2-sum} \sum_{n_2} A(n_2,n_1'f_2) e\left(\frac{b'n_2}{c'}\right) w_j(n_2),$$ where $$\label{eqn: w_j} w_j(y) := \frac{1}{y^{3j+1+2u}} v\Big(\frac{(n_1'f_2)^4\delta_0^3y^2}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'y}\Big) e\Big(\frac{-T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'y}\Big),$$ and $$\label{eqn: b'/c'} \frac{b'}{c'} := \frac{b_1qg\delta'+b_2d_2'g\delta'\pm n_1'f_2\delta_0}{d_2'qg\delta'}, \quad \textrm{with}\quad (b',c')=1, \quad \textrm{and}\quad c'| d_2'qg\delta'.$$ We apply the Voronoi formula on $GL(3)$ a second time, getting $$\label{eqn: VSF2} \begin{split} & \qquad \sum_{n_2} A(n_2,n_1'f_2) e\left(\frac{b'n_2}{c'}\right) w_j(n_2) \\ & = \frac{c'\pi^{3/2}}{2} \sum_{\pm} \sum_{l_1|c'n_1'f_2} \sum_{l_2=1}^{\infty} \frac{A(l_2,l_1)}{l_1l_2} S\left(n_1'f_2\bar{b'},\pm l_2;\frac{n_1'f_2c'}{l_1}\right) {\mathcal{W}}_j^{\pm}\left(\frac{l_1^2l_2}{(c')^3n_1'f_2}\right), \end{split}$$ where ${\mathcal{W}}_j^\pm$ is defined by with $\psi=w_j$. By the support of $v$, we know $w_j$ is supported in $\big[Y,\sqrt{2}Y\big]$, with $$Y:=\frac{\sqrt{\delta^3 N}}{(n_1'f_2)^2\delta_0^3} \geq 1.$$ By the facts $c'\leq d_2'qg\delta'$ and $q\delta_0gn_1'f_2d_2'\ll C$, the bounds for $N$ and $C$, i.e., and , and the bounds for $q$ and $M$, i.e., , writing $x=\frac{l_1^2l_2}{(c')^3n_1'f_2}$, we have $$xY=\frac{l_1^2l_2}{(c')^3n_1'f_2} \frac{\sqrt{\delta^3 N}}{(n_1'f_2)^2\delta_0^3} \gg \frac{\sqrt{\delta^3 N}}{C^3(\delta')^3} \gg \frac{M^3T^{-{\varepsilon}}}{q^3} \gg T^{{\varepsilon}}.$$ Now by Lemma \[lemma: Psi=M+O\], we have $${\mathcal{W}}_j^\pm(x) = x\int_0^\infty w_j(y) \sum_{\ell=1}^{K} \frac{\gamma_\ell}{(xy)^{\ell/3}} e\left(\pm3(xy)^{1/3}\right) dy + O\left(T^{-A}\right),$$ for some large $K$ and $A$. We will only deal with the term with $\ell=1$, since the others can be handled similarly. 
By , we are led to estimate $$ {\mathcal{W}}_{j,0}^+(x) := x^{2/3} \int_0^\infty b(y) e\left(\phi_1(y)\right) dy, \quad \textrm{and}\quad {\mathcal{W}}_{j,0}^-(x) := x^{2/3} \int_0^\infty b(y) e\left(\phi_2(y)\right) dy,$$ where $$b(y) := \frac{1}{y^{3j+\frac{4}{3}+2u}} v\Big(\frac{(n_1'f_2)^4\delta_0^3y^2}{(\delta')^3N}\Big) \widehat{k}\Big(\frac{MTqgd_2'\delta'}{2 \pi^2 \delta_0f_2n_1'y}\Big),$$ and $$\phi_1(y) := -\frac{T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'y} + 3(xy)^{1/3}, \quad \textrm{and}\quad \phi_2(y) := -\frac{T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'y} - 3(xy)^{1/3}.$$ By the support of $v$, we have $$ \int_0^\infty b(y) e\left(\phi_i(y)\right) dy = \int_{Y}^{\sqrt{2}Y} b(y) e\left(\phi_i(y)\right) dy, \quad \textrm{for } i=1,2. $$ For $y\in[Y,\sqrt{2}Y]$, we have $$|\phi_i^{(r)}(y)| \leq C_r T_1/M_1^{r}, \quad |b^{(s)}(y)| \leq C_s U_1/N_1^{s},$$ where $$T_1 = \max\left(\frac{T^2qgd_2'\delta'}{\delta_0f_2n_1'Y},(xY)^{1/3}\right),\quad M_1 = Y,\quad U_1 = \frac{1}{Y^{3j+4/3}},\quad N_1 = Y.$$ Since we have $$\phi_1'(y) = \frac{T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'y^2} + x^{1/3}y^{-2/3} \asymp T_1/M_1,$$ by integrating by parts $r$ times, we have $${\mathcal{W}}_{j,0}^+(x) \ll x^{2/3}\frac{U_1}{T_1^r} \ll \frac{x^{2/3}}{T_1^2} \frac{U_1}{T_1^{r-2}} \ll T^{-A},$$ for sufficiently large $r$. So ${\mathcal{W}}_{j,0}^+(x)$ is negligible. Now we turn to ${\mathcal{W}}_{j,0}^-(x)$. Since $$\phi_2'(y) = \frac{T^2qgd_2'\delta'}{4 \pi^2 \delta_0f_2n_1'y^2} - x^{1/3}y^{-2/3},$$ if $$\label{eqn: x><} x\geq \frac{T^6(qgd_2'n_1'f_2\delta_0)^3(n_1'f_2)^2}{10\pi^6 (\delta')^3 N^2}, \quad \textrm{or}\quad x\leq \frac{T^6(qgd_2'n_1'f_2\delta_0)^3(n_1'f_2)^2}{1100\pi^6 (\delta')^3 N^2},$$ one has $$|\phi_2'(y)| \asymp T_1/M_1.$$ The same argument then shows that ${\mathcal{W}}_{j,0}^-(x)$ is negligible. 
For the remaining case $$\label{eqn: x&l_2} \frac{T^6(qgd_2'n_1'f_2\delta_0)^3(n_1'f_2)^2}{1100\pi^6 (\delta')^3 N^2} \leq x \leq \frac{T^6(qgd_2'n_1'f_2\delta_0)^3(n_1'f_2)^2}{10\pi^6 (\delta')^3 N^2}, \quad \textrm{i.e.}\quad \frac{L_2}{1100}\leq l_2 \leq \frac{L_2}{10},$$ with $$L_2 = \frac{T^6(qgd_2'n_1'f_2\delta_0)^3(n_1'f_2)^3(c')^3}{\pi^6 (\delta')^3 N^2 l_1^2},$$ we have $$|\phi_2''(y)| \gg T_1/M_1^2, $$ for any $y\in[Y,\sqrt{2}Y]$. Note that in this case we have $$T_1\asymp \frac{T^2qgd_2'\delta'}{\delta_0f_2n_1'Y}, \quad \textrm{and}\quad x\asymp T_1^3/Y.$$ Therefore, by the second derivative test [@huxley1996area Lemma 5.1.3 or Lemma 5.5.6], we have $$\label{eqn: cW_j<<} {\mathcal{W}}_{j,0}^-(x) \ll \frac{x^{2/3} U_1}{(T_1/M_1^2)^{1/2}} \ll \frac{T_1^{3/2}}{Y^{3j+1}} \ll T^3 (qgd_2')^{3/2}(n_1'f_2)^{6j+\frac{7}{2}}\delta_0^{9j+\frac{9}{2}} \delta^{\frac{3}{2}}(\delta^3N)^{-\frac{3}{2}j-\frac{5}{4}}.$$ Combining , , and , and invoking the trivial bound for the Kloosterman sum in , one concludes that $$\begin{split} \tilde{{\mathcal{R}}}_j^{\dag,2} & \ll (qT)^{\varepsilon}q^{3j+1} T^{4j} \delta^{3j+3/4} \sum_{\pm}\sum_{\delta_0\delta'=\delta} \frac{1}{\delta_0^{6j+2}} \sum_{\substack{gn_1'f_2\ll C/(q\delta_0) \\ (gn_1'f_2,q\delta)=1 \\(gn_1',f_2)=1,\mu^2(gn_1')=1}} \frac{g^{3j-1} }{f_2^{3j+2}(n_1')^{3j+3}}\\ & \quad \times \sum_{\substack{d_2'\ll C/(q\delta_0gn_1'f_2)\\(d_2',qgn_1'f_2\delta)=1}} \frac{(d_2')^{3j}}{d_2'} \sum_{b_1(d_2')} (b_1,d_2') \frac{1}{q}\sum_{b_2(q)} c' \sum_{l_1|c'n_1'f_2} \sum_{l_2=1}^{\infty} \frac{|A(l_2,l_1)|}{l_1l_2} \\ & \quad \times \frac{n_1'f_2c'}{l_1} T^3 (qgd_2')^{3/2}(n_1'f_2)^{6j+\frac{7}{2}}\delta_0^{9j+\frac{9}{2}} \delta^{\frac{3}{2}}(\delta^3N)^{-\frac{3}{2}j-\frac{5}{4}} \\ & \ll (qT)^{\varepsilon}q^{3j+9/2} T^{4j+3} N^{-3j/2-5/4} \delta^{-3j/2+1/2} \\ & \quad \times \sum_{gn_1'f_2\delta_0d_2'\ll C/q} \frac{(gn_1'f_2\delta_0d_2')^{3j+7/2}}{gf_2(n_1')^{2}\delta_0} \sum_{l_1|c'n_1'f_2}\frac{1}{l_1^2} 
\sum_{l_2=1}^{\infty} \frac{|A(l_2,l_1)|}{l_2}. \end{split}$$ By , we have $$\tilde{{\mathcal{R}}}_j^{\dag,2} \ll (qT)^{\varepsilon}C^{3j+9/2} T^{4j+3} N^{-3j/2-5/4} \delta^{-3j/2+1/2}.$$ And by and , we have $$\label{eqn: MT<<} \tilde{{\mathcal{R}}}_j^{\dag,2} \ll (qT)^{\varepsilon}\delta^{11/4}NT^{j-3/2}M^{-3j-9/2} \ll (qT)^{\varepsilon}q^3 T^{j+3/2} M^{-3j-9/2} \ll (qT)^{\varepsilon}q^{3},$$ provided $T^{1/3+{\varepsilon}}\leq M\leq T^{1/2}$. This proves Proposition \[prop: t\] and hence Theorem \[thm: t\]. One can use Theorem \[thm: q\] and Theorem \[thm: t\] directly to give a hybrid subconvexity bound with $\theta=1/70$. To get a better bound, that is, to prove Theorem \[thm: main\] with $\theta=(35-\sqrt{1057})/56$, which we fix from now on, we will modify the proof of Theorem \[thm: t\]. First, note that if $q\geq T^{\theta/(1/4-\theta)}=T^{(\sqrt{1057}-23)/44}$, then $$q^{5/4}T^{3/2} \leq (qT)^{3/2-\theta}.$$ Hence in this case, Theorem \[thm: main\] follows from Theorem \[thm: q\]. Now we assume $$\label{eqn: q Delta} q \leq T^{\theta/(1/4-\theta)}=T^{(\sqrt{1057}-23)/44} < T^{1/4}.$$ As in §\[sec: setup t\], we only need to prove $$\sum_{u_j\in{\mathcal{B}}^*(q)\atop T-M\leq t_j\leq T+M} L(1/2,\phi\times u_j\times\chi) + \frac{1}{4\pi}\int_{T-M}^{T+M}|L(1/2+it,\phi\times\chi)|^2dt \ll_{\phi,{\varepsilon}} q^{3/2-\theta}TM(qT)^{{\varepsilon}},$$ provided $$\label{eqn: M Delta} T^{1/2-\theta}\ll M\ll T^{1/2}.$$ As in the proof of Proposition \[prop: t\], we only need to bound , , , , and under our new assumptions and . It is easy to see that the bound in will now be $q^{1/2-\theta}TM(qT)^{\varepsilon}$. Moreover, , , and are easy to handle too. Now we consider . 
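The numerology above is easy to check numerically. The following snippet (purely a verification aid, not part of the argument) confirms that with $\theta=(35-\sqrt{1057})/56$ the exponent $\theta/(1/4-\theta)$ indeed equals $(\sqrt{1057}-23)/44$, and that at the threshold $q=T^{\theta/(1/4-\theta)}$ the two bounds $q^{5/4}T^{3/2}$ and $(qT)^{3/2-\theta}$ carry the same exponent of $T$:

```python
import math

s = math.sqrt(1057)
theta = (35 - s) / 56              # theta = (35 - sqrt(1057))/56, roughly 0.0444
alpha = theta / (0.25 - theta)     # exponent in the case split q <= T^alpha

# theta/(1/4 - theta) = (sqrt(1057) - 23)/44
assert abs(alpha - (s - 23) / 44) < 1e-12

# Write q = T^alpha; then q^{5/4} T^{3/2} and (qT)^{3/2 - theta}
# have T-exponents (5/4)*alpha + 3/2 and (3/2 - theta)*(alpha + 1).
lhs = 1.25 * alpha + 1.5
rhs = (1.5 - theta) * (alpha + 1)
assert abs(lhs - rhs) < 1e-12
```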
By , we have $$\tilde{{\mathcal{R}}}_j^{\dag,2} \ll (qT)^{\varepsilon}q^3 T^{j+\frac{3}{2}} M^{-3j-\frac{9}{2}} \ll (qT)^{\varepsilon}q^3 T^{\frac{3}{2}} M^{-\frac{9}{2}} \ll (qT)^{\varepsilon}q^{\frac{1}{2}-\theta} q^{\frac{5}{2}+\theta} T^{\frac{9}{2}\theta-\frac{3}{4}}.$$ So we want $q\leq T^{(\frac{3}{4}-\frac{9}{2}\theta)/(\frac{5}{2}+\theta)}$, and with our choice of $\theta$ this exponent coincides with the one in . This completes the proof of Theorem \[thm: main\]. [**Acknowledgements.**]{} The author would like to thank Professors Dorian Goldfeld, Jianya Liu, and Wei Zhang for their valuable advice and constant encouragement. He also wishes to thank Professor Matthew Young for explaining some details in his paper [@young2014weyl]. The author is grateful to the China Scholarship Council (CSC) for supporting his studies at Columbia University. He also wants to thank the Department of Mathematics at Columbia University for its hospitality. This project is partly supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2014AQ002).
--- abstract: 'As of today, object categorization algorithms are not able to achieve the level of robustness and generality necessary to work reliably in the real world. Even the most powerful convolutional neural network we can train fails to perform satisfactorily when trained and tested on data from different databases. This issue, known as domain adaptation and/or dataset bias in the literature, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. Recent work showed that by casting the problem into the image-to-class recognition framework, the domain adaptation problem is significantly alleviated [@danbnn]. Here we follow this approach, and show how a very simple, learning free Naive Bayes Nearest Neighbor (NBNN)-based domain adaptation algorithm can significantly alleviate the distribution mismatch between source and target data, especially when the number of classes and the number of sources grow. Experiments on standard benchmarks used in the literature show that our approach (a) is competitive with the current state of the art on small scale problems, and (b) achieves the current state of the art as the number of classes and sources grows, with minimal computational requirements.' author: - Faraz Saeedan - Barbara Caputo title: 'Towards Learning free Naive Bayes Nearest Neighbor-based Domain Adaptation' --- Introduction ============ In recent years the computer vision research community’s attention has been drawn to the existence of differences across predefined image datasets and the necessity to reconcile these idiosyncrasies. The main reason behind this need is the increasing amount of available image data sources and the absence of a unique general learning method that can perform well across all of them. In practice, training a classifier on a dataset (e.g. Flickr photos) and testing on another (e.g. 
images captured with a mobile phone) produces very poor results although the task (i.e. the set of depicted object categories) is the same. In this context the notion of *domain* already used in machine learning for speech and language processing has been extended to visual problems. A source domain ($S$) usually contains a large number of labeled images, while a target domain ($T$) refers broadly to a dataset that is assumed to have different characteristics from the source, and few or no labeled samples. Formally, two domains differ when their probability distributions satisfy $P_S(x,y)\neq P_T(x,y)$, where $x\in\mathcal{X}$ indicates the generic image sample and $y\in\mathcal{Y}$ the corresponding class label. Specific annotator tendencies may influence the conditional distributions, implying $P_S(y|x)\neq P_T(y|x)$. Other typical causes of visual domain shift include changes in the acquisition device, image resolution, lighting, background, viewpoint and post-processing [@bias]. Most of this information is directly encoded in the descriptor space $\mathcal{X}$ chosen to represent the images and may induce a difference between the marginal distributions $P_S(x) \neq P_T(x)$. In 2013, Tommasi and Caputo showed that by casting the domain adaptation problem into the Naive Bayes Nearest Neighbor framework one could achieve a very high level of generalization, thanks to the intrinsic properties of NBNN classifiers [@danbnn]. The proposed approach used distance metric learning to leverage the source knowledge at the local patch level. This brought strong results in the semi-supervised and unsupervised domain adaptation scenarios, but the method is computationally expensive and thus not suitable for real-time systems, like smartphones or robots. Here we propose a simple, learning free domain adaptation method that makes it possible to exploit the generalization power of NBNN in the domain adaptation setting. 
We leverage the source patches by randomly selecting a subset of them, and adding them to the target patches. To further increase the descriptive power of the descriptors, we perform data augmentation both on the source and the target data, as is standard practice in the Convolutional Neural Network literature [@augmentation]. The combined effect of these two simple actions is remarkable: on commonly used benchmark databases, our approach is on par with the current state of the art when there is a single source from which to adapt, and when the number of classes is limited. In the more challenging and more realistic settings of multiple sources and an increasing number of classes, our algorithm achieves the state of the art. The rest of the paper is organized as follows: after reviewing previous work (section \[rel-work\]) we revisit the basic definitions for domain adaptation (section \[da\]) and the NBNN framework (section \[nbnn\]). Section \[rand-da-nbnn\] introduces our approach, while section \[experiments\] presents its thorough experimental evaluation. We conclude with a summary discussion and an outline of possible future research avenues. Related Work {#rel-work} ============ The problem of domain adaptation stems from the fact that supervised learning methods fail to generalize across datasets [@bias]. Although this problem exists in various applications [@bendavid; @quionero; @daume; @mcdonald], the visual recognition community has just recently shown interest in dealing with it [@gopalan; @bergamo; @fritz]. Failure to generalize across datasets has been attributed to the mismatch among various characteristics of the considered databases, and is usually referred to as the ‘dataset bias’ problem [@bias]. The fact that different image datasets vary considerably in quality, point of view and image contents, reveals that addressing the domain adaptation problem can significantly improve the performance of visual recognition applications. 
Several approaches have been adopted for reducing the distance between datasets. These approaches range from transferring source data to the target domain [@fritz] to transferring both source and target to a third space [@gopalan]. Unfortunately, despite all efforts, [@landmark] showed that currently existing selective transfers do not offer significant improvement over random transfers. As an alternative to the enrichment of the target data through instance based transfer from the source, attempts have been made to modify the classifier in order to resolve the mismatch problem [@duan; @bruzzone; @yang; @khosla] during training. While the image-to-image paradigm is the dominant approach in the above-mentioned methods, [@Q1; @Q2] suggested replacing Bag Of Words (BOW) representations combined with an image-to-image classification paradigm with NBNN and its image-to-class recognition framework. This idea frees domain transfer from the known shortcomings of BOW representations [@boiman]. Even though NBNN has been tested on several visual learning applications, the use of this classification paradigm in domain adaptation has been limited. Only in 2013 did [@danbnn] exploit its potential in a metric learning approach, showing that using NBNN, one can easily surpass the state of the art among BOW-based algorithms presented so far. Be that as it may, the applicability of this method, called DA-NBNN, is restricted by its computational complexity. Indeed, once the number of classes, the number of sources and the number of data for each class and source grow, using DA-NBNN becomes computationally prohibitive. Our approach overcomes these computational limitations while preserving, and often significantly surpassing, the performances of DA-NBNN, proposing the first learning free NBNN-based domain adaptation method in the literature. 
Problem Setting and Definitions {#prob-setting} =============================== In this section we set the scene by introducing formal definitions for the domain adaptation problem (section \[da\]) and the NBNN classification framework \[nbnn\]. The notions introduced in this section will then be used to present our algorithm. Domain Adaptation {#da} ----------------- Domain Adaptation is the problem where knowledge from the source domain $\mathcal{D}^s$ is used to enrich and hence improve the performance in the target domain $\mathcal{D}^t$. This knowledge from the source might be in the form of instances or data, or model parameters, or a metric induced by the source. It is usually implicitly assumed that labeled data in the target domain either does not exist (unsupervised setting) or is scarce (semi-supervised setting). Although the source and the target domains are different, they use equal label sets [@musket] $\mathcal{Y}^s=\mathcal{Y}^t$.\ The core cause of mismatch between the two domains is attributed to the difference in the distribution of these labels. The conditional probabilities of labels given features are not completely coincident, $P^s(Y|X) \neq P^t(Y|X)$, and the marginal data distributions are not equal either, $P^s(X)\neq P^t(X)$. In this paper, we will focus exclusively on the semi-supervised setting. Naive Bayes Nearest Neighbor {#nbnn} ---------------------------- In the Naive Bayes Nearest Neighbor (NBNN) classification framework, it is assumed that for each class there exists a distribution from which local descriptors are drawn independently of one another. This leads to the use of a Naive Bayes maximum a posteriori classifier [@boiman] where each feature $m$ votes for one of the classes $c\in\{1,...,C\}$. This voting is realized using the local distance between each feature and its nearest neighbor in class $c$: $D_{f2C}(m,c)=\|f_m-f_m^c\|$, where $f_m^c$ denotes the nearest neighbor of $f_m$ among the descriptors of class $c$. 
The generalization of this distance concept to an image-to-class distance is straightforward: $D_{I2C}(F_i,c)=\sum_{m=1}^{M_i}D_{f2C}(m,c)$, where $M_i$ is the number of local descriptors of image $F_i$.\ The output of the classifier would then be $$p=\operatorname*{argmin}_c D_{I2C}(F_i,c)$$ The distance to this optimal class $p$ is called the positive distance, while the distances $D_{I2C}(F_i,n)$ to the remaining classes $n\neq p$ are called the negative distances. Learning free NBNN-based domain adaptation ========================================== ![An overview of our proposed learning free, NBNN-based domain adaptation approach for the class ‘cow’: after performing data augmentation on both the source and target data, patch-based features are extracted from both, and a new target data set is created by merging all the patch-based features extracted from the target with a fraction of those of the source, randomly selected from the whole sample data. This new pool of patch-based features is then used to build an NBNN classifier in the target domain.](faraz18.jpg "fig:"){width="99mm"} \[diagram\] \[rand-da-nbnn\] As outlined above, the problem of domain adaptation emerges when the training data for the target task is scarce. Were this not the case, any supervised learning algorithm would be capable of learning a classifier, according to its learning abilities. It is also assumed that there exists at least one other dataset with enough samples to learn a good classifier (the source), but since the two datasets have been acquired in two different domains, the performance obtained training on the source and testing on the target is not satisfactory. The NBNN algorithm builds support sets for each class made of the collection of all the features extracted from patches of each of the training examples. Due to the scarcity of data in the target, the support sets that can be built solely using features from the target samples will not contain enough features to guarantee a solid performance. 
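The NBNN decision rule recalled above can be sketched in a few lines of code. This is an illustrative pure-Python version in which descriptors are plain numeric tuples; an actual implementation would use an efficient (approximate) nearest-neighbor structure over the extracted local descriptors:

```python
import math

def nearest_neighbor_distance(f, class_pool):
    """D_f2C(m, c): distance from local descriptor f to its nearest
    neighbor in the pool of descriptors of one class."""
    return min(math.dist(f, g) for g in class_pool)

def image_to_class_distance(descriptors, class_pool):
    """D_I2C(F_i, c): each descriptor votes independently (Naive Bayes)."""
    return sum(nearest_neighbor_distance(f, class_pool) for f in descriptors)

def nbnn_classify(descriptors, support_sets):
    """Return the class p minimizing the image-to-class distance."""
    return min(support_sets,
               key=lambda c: image_to_class_distance(descriptors, support_sets[c]))
```

For instance, with the toy support sets `{"cow": [(0.0, 0.0), (1.0, 0.0)], "car": [(5.0, 5.0)]}`, the descriptor set `[(0.1, 0.0), (0.9, 0.1)]` is assigned to `cow`.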
In order to enrich these support sets, *our proposal is to use features extracted from the patches of the source images*. How to select such patch-based features? In [@landmark], the authors investigate a domain adaptation approach based on the idea of landmark samples from the source domain, which are relevant for the modeling of the target classifier. Although their approach is theoretically sound, experiments show that the learning method proposed to select such landmarks is often statistically on par, and otherwise within a two percent range of performance, with a random selection of the learning samples. Motivated by this result, we apply the same philosophy here to the patch-based features, and we propose to achieve domain adaptation in an NBNN-based framework by randomly sampling a percentage of the patch-based features from the source, adding them to the patch-based features of the target. We will show with experiments in the next section that this extremely simple and learning free strategy achieves amazingly good results on standard domain adaptation benchmark databases, while being reasonably stable with respect to the number of features to be sampled. To further improve performance in our approach, we have tested the effect of performing data augmentation on the source and target data. Data augmentation is a technique that, since the spectacular success of convolutional neural networks in the visual classification arena, has been shown to be very effective in general for any classification algorithm [@augmentation]. Again, our experiments confirm the effectiveness of this strategy, even more so when combined with the instance-based domain adaptation approach based on random sampling of patch-based features from the source. A schematic representation of the overall approach for the class ‘cow’ is given in figure \[rand-da-nbnn\]. 
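The sampling step just described can be sketched as follows (a minimal illustration with hypothetical helper names; the sampled fraction is the only free parameter, and data augmentation is assumed to have happened upstream, before feature extraction):

```python
import random

def adapted_support_set(target_feats, source_feats, fraction, seed=0):
    """Learning free adaptation: keep all target patch features and add a
    randomly sampled fraction of the source ones (no metric learning)."""
    rng = random.Random(seed)
    k = int(round(fraction * len(source_feats)))
    return list(target_feats) + rng.sample(list(source_feats), k)

def build_support_sets(target, source, fraction):
    """Per-class NBNN support sets; target/source map class -> feature list."""
    return {c: adapted_support_set(target[c], source.get(c, []), fraction)
            for c in target}
```

The enlarged per-class pools are then fed unchanged to the standard NBNN classifier.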
Note that adding the data augmentation step to our overall approach does not significantly increase the almost non-existent computational load in training. This characteristic, combined with the remarkably good performances achieved especially as the number of classes and sources grow, makes our approach potentially attractive for applications where computational complexity should be low, like mobile robots or online wearable systems. To the best of our knowledge, there are no previous instance-based, NBNN-based domain adaptation methods in the literature, nor has the random sampling strategy ever been tested in the NBNN learning framework for any learning to learn approach. Experiments =========== In this section we describe the experiments we performed to assess our approach. We first describe the data, features and experimental setup used (section \[Datasets\]), then we report the results obtained (section \[results\]). We discuss our findings in section \[discussion\]. Datasets, Features and setup {#Datasets} ---------------------------- ### Datasets We used the Office dataset, the standard test bed in domain adaptation which addresses the problem of object categorization between any two datasets of objects usually found in offices [@fritz]. This test bed consists of three domains, namely Amazon, Webcam and Dslr. The Amazon dataset contains images obtained from online merchants. The images are centered and usually on a white background. Webcam and Dslr consist of low-resolution and high-resolution images obtained from webcams and SLR cameras, respectively. Unlike Amazon, they could be subject to various environmental disturbances such as lighting or background changes. 
The Office dataset contains 31 classes of images for each domain.\ Having chosen 10 of the original 31 classes from Office, [@geodesic] suggested adding images of the same 10 classes from Caltech-256 [@caltech] to form the Office+Caltech test bed, thus adding a fourth domain to the Office dataset.\ ### Features Following the protocol of [@danbnn], images were all resized to a common width (256px) and then converted to grayscale. SURF features were extracted according to [@surf]. The final result was a set of features of length 64 that were subsequently fed to a 1-nearest neighbor classifier.\ The effect of data augmentation on both domains has also been studied. To this end, we have duplicated the exact procedure suggested in [@augmentation] and each image is converted into 10 images through the procedure of cropping and flipping.\ ### Setup Different pairs of datasets are chosen to act as the source and the target from the Office+Caltech group. From the source dataset, 20 images per class were selected to represent the source data, while only 3 per class were chosen from the target. When the target was Webcam, 15 images were selected instead of 20 as described in [@danbnn]. At this stage, since the Dslr dataset behaves very similarly to Webcam and contains fewer images, we decided not to include it in our benchmarking.\ The same sample selection protocol has been adopted for the 31 class adaptation experiments. The third setup that we considered is domain adaptation from more than one source with one target. To this end, all possible combinations of two sources to one target have been examined and benchmarked against the existing reported results in the literature. Results ------- The first set of experiments was done on a subset of Office+Caltech consisting of 10 classes as explained in [@danbnn]. 
Figure \[fig:accuracies\] shows the results in comparison to the state of the art and some baseline algorithms. Figures \[fig:acc\_caltech256\_classemes\], \[fig:acc\_caltech256\_object\_bank\] and \[fig:acc\_sun09\] show the changes in the recognition rate as the percentage of descriptors randomly transferred to the target from the source increases. For a better understanding of the effects of the different factors, four cases are shown together. Original data is the case where no augmentation is performed on either the target or the source domain. The cases where only the source and only the target domain have been augmented are referred to as Source Augmented and Target Augmented respectively. Source and Target Augmented is the case where both domains have been over-sampled.\ The second set of experiments is done on the 31 class Office dataset. The experiments are run exactly in line with [@danbnn]. Table \[31 class\] shows the results in comparison with the state of the art, both using NBNN and based on methods other than NBNN. Some further baselines are also included for better comparison.\

  Algorithm           $A\longrightarrow W$      $W\longrightarrow D$    $D\longrightarrow W$
  ------------------- ------------------------- ----------------------- -----------------------
  BOW                 $34.9\pm 0.6$             $48.9\pm 0.5$           $38.4\pm 0.4$
  GFK                 $46.4\pm 0.5$             $66.3\pm 0.4$           $61.3\pm 0.4$
  NBNN                $40.0\pm 2.0$             $67.2\pm 2.5$           $70.7\pm 1.2$
  I2CDML              $47.9\pm 1.3$             $72.8\pm 2.1$           $73.8\pm 1.6$
  $H-L2L(hp-\beta)$   $\mathbf{76.2\pm 0.02}$   $67.8\pm 0.05$          $66.0\pm0.01$
  DA-NBNN             $52.8\pm 3.7$             $76.2\pm 2.5$           $76.6\pm1.7$
  OURS                $55.0\pm3.3$              $\mathbf{77.5\pm2.0}$   $\mathbf{78.2\pm1.4}$

  : 31 class Office dataset experiments, semi-supervised setting[]{data-label="31 class"}

The third and last set of experiments consists of those run using more than one source domain. The results can be seen in Figure \[2to1\].
Not all algorithms can be extended to cover the case of several sources, so only those that support it were included in the comparison. For the experiments, the exact test set of [@l2l] has been used. ![Accuracy on target domains with multiple sources (A:Amazon, W:Webcam, D:DSLR), 31 class, semi-supervised[]{data-label="2to1"}](2to1.pdf){width="99mm"} Discussion ---------- The biggest advantage of our proposed method is its simplicity, combined with its strong performance as the number of classes and source domains grows. It also performs surprisingly well in comparison to other algorithms. The results in Figure \[fig:accuracies\] show that, while different algorithms perform differently in the various test settings, our method is never worse than the second best. In particular, compared to DA-NBNN [@danbnn] (the state of the art among methods that exploit an NBNN approach), our method outperforms it in 2 cases (A-W and C-W), while DA-NBNN performs better in two cases (C-A and A-C). In the remaining two cases (W-A, W-C) their performance is close; indeed, a statistical significance test shows that in these two experiments there is no statistical evidence of superiority for either of the algorithms. Our method also performs significantly better than L2L [@l2l], the state of the art among methods that do not use NBNN: in four of the experiments L2L achieves results inferior to ours, while it is superior in only one setting. Note that the accuracy values reported for L2L have been taken from [@l2l], where no result was reported for the C-W experiment. Using the 31 class Office setting, one can study and compare the scalability of the algorithms with respect to the number of classes. Scaling our method in this respect is straightforward: since no training is involved, everything remains simple and fast.
Table \[31 class\] shows that, performance-wise, our method scores higher than DA-NBNN in all three experiments and better than L2L in two out of three cases. Figure \[2to1\] compares the recognition rate for all possible combinations of two sources and one target in the Office dataset. For DA-NBNN it is not clear how it could be extended to this case, and no experiments of the kind have been reported by its authors. L2L supports this case and has been included in the benchmark. It can be seen that our method outperforms all the others in all three experiments. An open issue in our method is of course which percentage of the source data should be randomly selected and then added to the target data, in relation to the data augmentation procedure. The results shown in figures \[fig:acc\_caltech256\_classemes\]-\[fig:acc\_sun09\] indicate that in general the combination of source plus target data augmentation with random sampling of around 20% of the patch-based features from the source achieves strong performance, always better than the original data. Still, as can be seen from the figures, the optimal performance may vary with the sampling percentage and/or the data augmentation strategy in different settings. Although the accuracy results are on average quite stable, and the algorithm could therefore be used in online systems even in its current form with good expectations about performance, it would be desirable to explore the data selection issue further and find principled ways of selecting the patches to transfer from the source to the target, so as to have guarantees about the optimality of the procedure. Of course, that would come at the expense of the current negligible computational cost of the approach.
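The pipeline discussed above, randomly sampling a fraction of the source descriptors into the (possibly augmented) target training pool and then labeling by nearest neighbor, can be sketched as follows. This is our own minimal illustration, not the authors' exact implementation; the function name is ours and the default `ratio=0.2` encodes the 20% rule of thumb mentioned above.

```python
import numpy as np

def transfer_and_classify(src_X, src_y, tgt_X, tgt_y, query_X,
                          ratio=0.2, seed=0):
    """Add a random `ratio` of the source descriptors to the target
    training pool, then label queries by 1-nearest neighbor.
    Minimal sketch; ratio=0.2 is the rule of thumb from the text."""
    rng = np.random.default_rng(seed)
    k = int(ratio * len(src_X))
    idx = rng.choice(len(src_X), size=k, replace=False)
    pool_X = np.vstack([tgt_X, src_X[idx]])
    pool_y = np.concatenate([tgt_y, src_y[idx]])
    labels = []
    for q in query_X:
        d = np.linalg.norm(pool_X - q, axis=1)  # Euclidean distances
        labels.append(pool_y[np.argmin(d)])
    return np.array(labels)
```

In the actual experiments the pool would hold SURF descriptors of the augmented images rather than raw points.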
Conclusions =========== The contribution of this paper is a learning-free Naive Bayes Nearest Neighbor based domain adaptation method that is competitive with the current state of the art on the standard Office-Caltech benchmark database, and that achieves the state of the art when the number of classes and sources grows. The method consists of randomly selecting patch-based local features from the source and adding them to the target, combined with a data augmentation strategy borrowed from the CNN literature. The resulting algorithm is extremely simple but also remarkably effective, especially when the number of classes and sources grows. An open challenge is how to select the best percentage of source data to add to the target: even though our experimental evaluation indicates that, as a rule of thumb, sampling around twenty percent of the overall sample data (i.e. after data augmentation) in general leads to very good results, future work will focus on determining how much to sample in a principled manner, while at the same time not increasing the computational cost of the approach excessively. [5]{} Boiman O. , Shechtman E. , Irani M.: In defense of nearest-neighbor based image classification. In CVPR, 2008. Tommasi T. , Quadrianto N. , Caputo B. , Lampert C.: Beyond Dataset Bias: Multi-task Unaligned Shared Knowledge Transfer. In ACCV, 2012. Ben-David S. , Blitzer J. , Crammer K. , Pereira F.: Analysis of representations for domain adaptation. In NIPS, 2007. Gopalan R. , Li R. , Chellappa R.: Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011. Quionero-Candela J. , Sugiyama M. , Schwaighofer A. , Lawrence N.: Dataset Shift in Machine Learning. The MIT Press, 2009. Bergamo A. , Torresani L.: Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In NIPS, 2010. Daume H. III.: Frustratingly easy domain adaptation. In ACL, 2007. Saenko K. , Kulis B. , Fritz M.
, Darrell T.: Adapting visual category models to new domains. In ECCV, 2010. Blitzer J. , McDonald R. , Pereira F.: Domain adaptation with structural correspondence learning. In EMNLP, 2006. Torralba A. , Efros A. A.: Unbiased look at dataset bias. In CVPR, 2011. Duan L. , Tsang I. W.-H. , Xu D. , Maybank S. J.: Domain transfer svm for video concept detection. In CVPR, 2009. Gong B. , Grauman K. , Sha F.: Connecting the Dots with Landmarks: Discriminatively Learning Domain-Invariant Features for Unsupervised Domain Adaptation. In JMLR, 2013. Gong B. , Shi Y. , Sha F. , Grauman K.: Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012. Yang J. , Yan R. , Hauptmann A. G.: Cross-domain video concept detection using adaptive svms. In ACM Multimedia, 2007. Bruzzone L. , Marconcini M.: Domain adaptation problems: A DASVM classification technique and a circular validation strategy. IEEE PAMI, 32(5):770–787, 2010. Khosla A. , Zhou T. , Malisiewicz T. , Efros A. A. , Torralba A.: Undoing the damage of dataset bias. In ECCV, 2012. Qiu Q. , Patel V. M. , Turaga P. , Chellappa R.: Domain adaptive dictionary learning. In ECCV, 2012. Ni J. , Qiu Q. , Chellappa R.: Subspace interpolation via dictionary learning for unsupervised domain adaptation. In CVPR, 2013. Tommasi T. , Caputo B.: Frustratingly easy NBNN domain adaptation. In ICCV, 2013. Patricia N. , Caputo B.: Learning to Learn, from Transfer Learning to Domain Adaptation: A Unifying Perspective. In CVPR, 2014. Chatfield K. , Simonyan K. , Vedaldi A. , Zisserman A.: Return of the Devil in the Details: Delving Deep into Convolutional Nets. Griffin G. , Holub A. , Perona P.: Caltech 256 object category dataset. Technical Report UCB/USD-04-1366, California Institute of Technology, 2007. Bay H. , Ess A. , Tuytelaars T. , Van Gool L.: SURF: Speeded up robust features. CVIU, 110:346-359, 2008.
--- abstract: 'We consider minimal, aperiodic symbolic subshifts and show how to characterize the combinatorial property of bounded powers by means of a metric property. For this purpose we construct a family of graphs which all approximate the subshift space, and define a metric on each graph which extends to a metric on the subshift space. The characterization of bounded powers is then given by the Lipschitz equivalence of a suitably defined infimum metric with the corresponding supremum metric. We also introduce zeta-functions and relate their abscissa of convergence to various exponents of complexity of the subshift.' author: - | J. Kellendonk$^{1}$, D. Lenz$^{2}$, J. Savinien$^{3}$\ [$^{1}$ Université Lyon I, Lyon, France.]{}\ [$^{2}$ Friedrich Schiller Universität, Jena, Germany.]{}\ [$^{3}$ Université de Lorraine, Metz, France.]{} title: A characterization of subshifts with bounded powers --- Introduction {#sect-intro} ============ In symbolic dynamics one studies subshifts of the so-called full shift over a finite alphabet ${{\mathcal A}}$; the latter is the ${{\mathbb Z}}$-action given by the left shift $\sigma$ on the set of infinite sequences with values in ${{\mathcal A}}$ and a subshift is the restriction of this dynamical system to a closed shift invariant subspace $\Xi$. Among the fields of interest are the combinatorial properties of such subshifts. The most prominent combinatorial properties occurring in the literature are recurrence and its stronger variant linear recurrence, repulsiveness which is equivalent to bounded powers (also referred to as power freeness), richness, and various forms of complexity. Such combinatorial properties often correspond to properties of the dynamical system and hence of the $C^*$-algebras $C(\Xi)$ and $C(\Xi)\rtimes_\sigma{{\mathbb Z}}$. So it is a natural idea to consider non commutative Riemannian geometries [@Co94]\[Chap. 
VI\], that is, spectral triples, on these algebras and see how these can be used to characterize combinatorial properties of the subshift. While spectral triples for crossed product algebras of the above type seem hard to set up - we are only aware of the recent attempt [@BMR] which only gives a partial result, and a version for the related crossed product with ${{\mathbb R}}$ [@Whi] which seems very implicit - there has been quite some activity in constructing spectral triples for commutative $C^*$-algebras $C(X)$ whose space $X$ does not carry an obvious differential Riemannian structure. A series of works has been devoted to metric spaces [@Ri99; @Ri04; @CI07] or more specifically to fractals [@GI03; @GI05; @CIL08] and Cantor sets [@Co94]. In particular, for ultrametric Cantor sets the work of Pearson & Bellissard [@Pea08; @PB09] can be regarded as a milestone. They introduced and emphasized the importance of choice functions. In recent work [@KS10], two of the authors proposed a modification of Pearson & Bellissard’s triple, obtaining in particular a characterization of the combinatorial property of bounded powers for subshifts with a unique right-special word per length. A subshift has bounded powers if its sequences do not contain arbitrarily high powers of words, [*i.e.*]{} there is an integer $p$ such that $n$-fold repetitions $w^n=w\cdots w$ of a word $w$ cannot occur for $n>p$. Note that linearly recurrent subshifts, which are commonly regarded as highly ordered [@LP03; @Dur00; @Dur03], share this property. A subshift has a unique right-special word per length if, for each $n$, there exists a unique word of length $n$ which can be extended to the right in more than one way to a word of length $n+1$. The purpose of the present work is to generalize this characterization of bounded powers to the whole class of minimal and aperiodic subshifts.
The essential ingredient in the construction of [@KS10] is a family of graphs which approximate the subshift: the vertices of each graph are dense in the subshift space and the edges encode adjacencies. Each graph gives rise to a spectral triple and its associated Connes distance, and taking extrema over the family yields two metrics on the subshift space. The result is then that the subshift has bounded powers if and only if the two metrics are Lipschitz equivalent. The generalization to all subshifts given in the present work is based on the use of [*a priori*]{} different approximation graphs. These are obtained by trading right-special words, which played a decisive role for the old graphs, against what we call here [*privileged words*]{}. Privileged words are iterated complete first returns to letters of the alphabet. They have attracted a lot of interest recently. For the class of rich subshifts the privileged words are exactly the palindromes (see Section \[pf11.ssect-spwords\] for further details). As is often the case, once one is led to consider certain objects by an abstract theory (here non commutative Riemannian geometry) and these objects turn out to be useful in the context of another field (here subshifts), one finds out that they can also be defined [*ad hoc*]{}, i.e. without any knowledge of the abstract theory. This is the case here and so we present our construction [*ad hoc*]{} and add a final section in which we explain the spectral triples underlying it. The paper is organized as follows: We recall basic definitions about subshifts in Section \[pf11.sect-subshifts\]. We explain bounded powers and repulsiveness, we introduce [*privileged words*]{} and explain their relation to palindromes (Proposition \[pf11.prop-icr\]), and define subshifts of [*almost finite rank*]{}. Section \[pf11.sect-treegraph\] is devoted to the construction of the approximation graphs. For that we first recall the definition of the [*tree of words*]{} ${{\mathcal T}}$ of a right-infinite subshift $\Xi$.
We introduce two types of [*horizontal edges*]{}: one type for right-special words and another for privileged words (Definition \[def-H\] and \[def-tH\]). The above mentioned main result of this work will make use only of privileged horizontal edges but for comparison with [@KS10] we consider right-special horizontal edges as well. Similar to [@PB09] and as in [@KS10], [*choice functions*]{} (Definition \[pf11.def-choices\]) will play a role to define the approximation graphs for the subshift space and a [*weight function*]{} will be used to give a length to the horizontal edges. In Section \[pf11.sect-metric\], we define [*ad hoc*]{} a metric on $\Xi$ by $${{\widetilde}d_\tau}(\xi, \eta): = \sup_{f\in C(\Xi)} \Bigl\{ |f(\xi)-f(\eta)| \, : \, |f(s(e)) - f(r(e))| \le l(e), \, \forall e \in {\widetilde}E_\tau \Bigr\}$$ where $s(e)$ and $r(e)$ denote the source and range vertex of the edge $e$, $l(e)$ its length, and ${\widetilde}E_\tau$ the realization of the horizontal edges of the approximation graph defined by the choice function $\tau$. We provide an explicit formula for ${\widetilde}d_\tau$ in Lemma \[pf11.lem-specdist\]. We define the extremal metrics ${{\widetilde}{d}_{\text{\rm inf}}}$ and ${{\widetilde}{d}_{\text{\rm sup}}}$ and derive explicit criteria for their Lipschitz equivalence. We also compare the above metrics with the metrics which were obtained in [@KS10] (Prop. \[prop-old-new\]). In Section \[pf11.sect-powerfree\] we state and prove our main result: [**Theorem \[pf11.thm-characterization\]**]{} [*Let $\Xi$ be a minimal and aperiodic ${{\mathbb Z}}$-subshift over a finite alphabet. Then $\Xi$ has bounded powers if and only if ${{\widetilde}{d}_{\text{\rm sup}}}$ and ${{\widetilde}{d}_{\text{\rm inf}}}$ are Lipschitz equivalent.* ]{} In Section \[pf11.sect-Zeta\] we introduce two families of [*zeta-functions*]{}. These are defined by Dirichlet series and their summability is related to various exponents of complexity of the subshift. 
In the last Section \[pf11.sect-spectrip\] we briefly explain the non commutative geometrical constructions underlying this work. We provide the spectral triple associated to an approximation graph, show that the associated Connes distance is ${{\widetilde}d_\tau}$, and relate the zeta-function of the spectral triple to the zeta-functions defined in Section \[pf11.sect-Zeta\]. #### Acknowledgments This work was supported by the ANR grant [*SubTile*]{} no. NT09 564112. The authors would like to thank Luca Zamboni for useful discussions; in particular he explained to them the notion of rich words and showed them Proposition \[pf11.prop-icr\]. Subshifts {#pf11.sect-subshifts} ========= A subshift is a subspace $\Xi \subset {{\mathcal A}}^{{\mathbb Z}}$ of sequences over a finite alphabet ${{\mathcal A}}$, that is closed (for the product topology) and invariant under the left-shift map $\sigma$. A (finite) word occurring in some infinite word $\xi \in \Xi$ is called a [*factor*]{}. The set ${{\mathcal L}}$ of all factors of all $\xi \in \Xi$ is called the [*language*]{} of the subshift. We consider subshifts that are [*aperiodic*]{}: $\forall \xi \in \Xi, \, \sigma^n(\xi) = \xi \Rightarrow n=0$, and for which the dynamical system given by the action of ${{\mathbb Z}}$ by the shift is [*minimal*]{} (every orbit is dense). The length of a word $u$ is written $|u|$. Given $u,v \in {{\mathcal L}}$, we write $v \preceq u$ to mean that $v$ is a prefix of $u$, and $v\prec u$ if $v$ is a proper prefix ([*i.e.*]{} $|v|<|u|$). Similarly we write $ u \succeq v$ or $u \succ v$ if $v$ is a suffix or proper suffix of $u$. Bounded powers {#pf11.ssect-bdpowers} -------------- A subshift $\Xi$ has [*bounded powers*]{} if there exists an integer $p$ such that any word can occur at most $p$ times consecutively: $ \forall u \in {{\mathcal L}},\; u^{p+1} \notin {{\mathcal L}}$. This property is sometimes also called power freeness. The following characterization of bounded powers will be useful.
Define the [*index of repulsiveness*]{} of a subshift $\Xi$ with language ${{\mathcal L}}$ as $$\label{pf11.eq-indexrepuls} \ell := \inf\Big\{ \frac{|W| - |w|}{|w|} \, : \, w, W \in {{\mathcal L}}, \; w \; \text{\rm is a proper prefix and suffix of } W \Big\}\,.$$ A subshift is called [*repulsive*]{} if $\ell>0$. \[pf11.rem-powers\] A subshift has bounded powers if and only if it is repulsive. If $\Xi$ has arbitrarily large powers, then for every integer $p$ there exists a word $u \in {{\mathcal L}}$ such that $u^p \in {{\mathcal L}}$. Take $w=u^{p-1}$ and $W=u^p$ in equation \[pf11.eq-indexrepuls\] to get $\ell \le 1/(p-1)$. Since this must hold for any $p$, we conclude that $\ell = 0$. Conversely, if $\ell=0$, then for any $\epsilon>0$ arbitrarily small, there exist words $w,W \in {{\mathcal L}}$ as in equation \[pf11.eq-indexrepuls\] such that the ratio $(|W|-|w|)/|w|$ is less than $\epsilon$. This implies that the two occurrences of $w$ in $W$ overlap, and in turn that one can write $w=u^{p-1}v$ and $W=u^pv$ for some $u,v\in {{\mathcal L}}$ with $0<|v|\le |u|$, and with $p$ greater than or equal to the integer part of $1/\epsilon$. Hence $\Xi$ has arbitrarily large powers. One defines a right- or left-infinite subshift similarly as a subset $\Xi \subset {{\mathcal A}}^{{\mathbb N}}$ of right- or left-infinite sequences. Given a subshift $\Xi$ one denotes by $\Xi^\pm$ the right- and left-infinite subshifts derived from $\Xi$ (by dropping the left or right parts of infinite words in $\Xi$). \[pf11.lem-repulsive\] Let $\Xi$ be a minimal and aperiodic subshift. The following assertions are equivalent: (i) $\Xi$ has bounded powers; (ii) $\Xi^+$ has bounded powers; (iii) $\Xi^-$ has bounded powers. Since the three subshifts have the same language, the indices of repulsiveness of $\Xi^\pm$ are equal to that of $\Xi$: $\ell^\pm=\ell$.
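Both quantities in this subsection can be computed directly on the factors of a finite word. The sketch below is our own illustration (not code from the paper): a periodic word exhibits large powers and drives the infimum in the definition of $\ell$ towards $0$, while the overlap-free Thue-Morse word contains squares but no cubes, so its index computed on any finite prefix equals $1$.

```python
def factors(s):
    """All non-empty factors (substrings) of a finite word s."""
    return {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}

def repulsiveness_index(language):
    """inf (|W|-|w|)/|w| over pairs with w a proper prefix and suffix of W."""
    best = float("inf")
    for W in language:
        for k in range(1, len(W)):
            if W.endswith(W[:k]):
                best = min(best, (len(W) - k) / k)
    return best

def max_power(s):
    """Largest e such that u^e is a factor of s for some non-empty word u."""
    best = 1
    for i in range(len(s)):
        for L in range(1, len(s) - i):
            e = 1
            while s[i:i + (e + 1) * L] == s[i:i + L] * (e + 1):
                e += 1
            best = max(best, e)
    return best

def thue_morse(iterations):
    """Prefix of the Thue-Morse word (substitution 0 -> 01, 1 -> 10)."""
    s = "0"
    for _ in range(iterations):
        s = "".join("01" if c == "0" else "10" for c in s)
    return s
```

On the factors of a finite word the infimum is only an upper bound for $\ell$ of the subshift, but it already displays the trade-off of the remark: high powers force a small index.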
Privileged words {#pf11.ssect-spwords} ---------------- We consider a minimal and aperiodic [*right-infinite*]{} subshift $\Xi$ with language ${{\mathcal L}}$ over a finite alphabet. As a consequence of minimality, given a word $u\in {{\mathcal L}}$, there exist finitely many non-empty words $u' \in {{\mathcal L}}$, called [*complete first return*]{} words to $u$, such that (i) $u$ is a prefix and a suffix of $u'$, (ii) $u$ occurs exactly twice in $u'$. If $u$ is the empty word, its complete first returns are by definition the letters of the alphabet. An $n$-th iterated complete first return of $u$ is a word $u^{(n)}$ for which there exist words $u^{(j)}, j=0, \cdots n-1$, such that $u^{(0)}=u$ and $u^{(j+1)}$ is a complete first return to $u^{(j)}$, for $j=0, \cdots n-1$. An $n$-th iterated complete first return word $u$ of the empty word will be called an $n$-th order [*privileged word*]{}, and we will denote by ${O}(u)=n$ its order. So for instance the unique $0$-th order privileged word is the empty word, and the $1$-st order privileged words are the letters of the alphabet. We say that a subshift has [*finite privileged rank*]{} if there is a finite number $N$ such that any privileged word $u$ has at most $N$ complete first return words $u'$. Using Bratteli Vershik diagram techniques [@HPS] to describe the subshift, based on Kakutani-Rohlin towers whose bases are cylinder sets of privileged words (see [@Du10]), one easily sees that this implies that the rationalized Čech-cohomology of the subshift space is finitely generated. We will need a generalization: We say that a subshift has [*almost finite privileged rank*]{} if there are constants $a,b>0$ such that the number of complete first return words of a privileged word $u$ is bounded by $ a \log(|u|)^b$. We now show the relation between privileged words and palindromes. An infinite word $\xi$ is called [*rich*]{} [@GJWZ09] if any factor $u$ of $\xi$ contains exactly $|u|+1$ palindromes.
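The recursion implicit in the definition of privileged words is easy to implement: the empty word and the letters are privileged, and a longer word is privileged precisely when it is a complete first return to its longest proper privileged border (a prefix which is also a suffix). The following sketch is our own illustration, not code from the paper.

```python
from functools import lru_cache

def occurrences(v, w):
    """Number of (possibly overlapping) occurrences of v in w."""
    return sum(w[i:i + len(v)] == v for i in range(len(w) - len(v) + 1))

@lru_cache(maxsize=None)
def is_privileged(w):
    """Privileged = iterated complete first return to the empty word.
    The empty word and single letters are privileged by definition;
    a longer word is privileged iff it is a complete first return
    to its longest proper privileged border."""
    if len(w) <= 1:
        return True
    for k in range(len(w) - 1, 0, -1):
        v = w[:k]
        if w.endswith(v) and is_privileged(v):
            return occurrences(v, w) == 2  # exactly two occurrences
    return False
```

For instance `aba` and `abba` are privileged (complete first returns to the letter `a`), while `ab` and `abab` are not.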
The notion of privileged words is a “maximal generalization” of palindromes: indeed one can easily see that any factor $u$ of any infinite word contains exactly $|u|+1$ privileged words. A characteristic property of rich words ([@BLGZ09] Proposition 1) is that any complete first return to a palindrome is a palindrome. \[pf11.prop-icr\] Let $\xi$ be an infinite word over a finite alphabet, and $u$ a factor of $\xi$. (i) If $u$ is a palindrome then it is a privileged word. (ii) If $\xi$ is rich, then $u$ is a palindrome if and only if $u$ is a privileged word. We prove this by induction on $|u|$. The statements are trivial if $|u|=0,1$. \(i) Choose a palindrome $u$, with $|u|>1$, and assume that the statement holds for any word of length less than $|u|$. Let $v$ be the largest proper palindromic prefix of $u$. Since $u$ is a palindrome, $v$ is also a suffix of $u$. Now by maximality of $|v|$, $v$ can only occur twice in $u$. Hence $u$ is a complete first return to $v$, which is privileged by the induction hypothesis, and therefore $u$ is privileged. \(ii) Choose a privileged word $u$, with $|u|>1$, and assume that the statement holds for any word of length less than $|u|$. Let $v$ be the privileged word to which $u$ is the complete first return word (note that $v$ is unique). As $|v|<|u|$, $v$ is a palindrome, and therefore $u$ is a palindrome (as a complete first return to a palindrome). A word $u\in {{\mathcal L}}$ is called [*right-special*]{} if it has more than one one-letter right extension: $\exists a,b \in {{\mathcal A}},\; a\neq b, \; ua, ub\in {{\mathcal L}}$. If for all $n\in{{\mathbb N}}$ the subshift has a unique right-special word of length $n$, one says that the subshift has [*a unique right-special word per length*]{}. Given a word $u$ we denote by $S(u)$ the set of all right-special words $r$, for which there exists a complete first return $u'$ to $u$ such that $ u \preceq r \prec u'$.
The following assertions are equivalent: (i) Given a privileged word $u$ and any complete first return $u'$ to $u$, there exists a unique right-special word $r$ such that $ u \preceq r \prec u'$; (ii) Given a right-special word $r$ and the smallest proper right-special extension $r'$ of $r$, there exists a unique privileged word $u$ such that $ r \preceq u \prec r'$; (iii) Given a privileged word $u$, $S(u)$ contains exactly one (right-special) element. Equivalence of the first two conditions follows easily from aperiodicity, and the fact that if $u$ is privileged and $u'$ a complete first return to $u$ then there exists no privileged word $v$ such that $u\prec v \prec u'$. The third condition clearly implies the first. Suppose the first and consider $u'_1,u'_2$, two different complete first returns to $u$. Then the unique right-special word between $u$ and $u'_1$ coincides with that between $u$ and $u'_2$. It follows that $S(u)$ contains only one element. We call a subshift satisfying the above equivalent conditions [*right-special balanced*]{}. The following lemma shows that subshifts studied in [@KS10] are right-special balanced. If a subshift has a unique right-special word per length then it is right-special balanced. Let $u'$ be a complete first return to $u$ and $r_1,r_2$ two right-special words satisfying $u\preceq r_1\prec r_2 \prec u'$. By uniqueness of right-special factors of length $|r_1|$, $r_1$ must be a suffix of $r_2$. Hence, if $r_2 \neq r_1$, then $r_2$ is a non-trivial complete first return to $r_1$ and thus contains a non-trivial complete first return to $u$, which is a contradiction. Trees and graphs {#pf11.sect-treegraph} ================ We consider a minimal and aperiodic [*right-infinite*]{} subshift $\Xi$ over a finite alphabet ${{\mathcal A}}$, with language ${{\mathcal L}}$. 
The tree of words {#pf11.ssect-tree} ----------------- As in [@KS10] we consider the tree of words ${{\mathcal T}}=({{\mathcal T}}^{(0)},{{\mathcal T}}^{(1)})$: the vertices are the words in ${{\mathcal L}}$ (the root being the empty word), and there is an edge linking a word to each of its one-letter right extensions. The set of infinite rooted paths $\Pi_\infty$ on ${{\mathcal T}}$ can be seen as a subset of ${{\mathcal A}}^{{\mathbb N}}$ and shall be equipped with the relative topology of the product topology on ${{\mathcal A}}^{{\mathbb N}}$. It is well known that $\Pi_\infty$ is homeomorphic to $\Xi$ and hence we identify the two. In fact, the cylinder sets $[v]$, of all infinite rooted paths through $v\in {{\mathcal T}}^{(0)}$, form a basis of clopen (closed and open) sets for the topology. Let us denote by $H^{(0)}$ the set of right-special words and by ${\widetilde}H^{(0)}$ the set of privileged words. It is clear that the above basis of the topology is given by $\{[v]:v\in H^{(0)}\}$. \[pf11.lem-infinitepaths\] The cylinder sets $[v]$ for $v\in {\widetilde}H^{(0)}$ also form a basis of clopen sets for the topology. Fix a word $u \in {{\mathcal L}}$, and let $v_1$ be its first (left) letter. Consider the complete first return $v_2$ of $v_1$ which is a prefix of $u$. Let $v_3$ be the complete first return word of $v_2$ which is a prefix of $u$, and so on. We define this way a finite sequence $v_1, v_2, \cdots v_{p}$ of elements in ${\widetilde}H^{(0)}$, such that $v_1 \prec v_2 \prec \cdots v_{p-1} \preceq u \prec v_p$. Identifying the cylinders $[v], v\in {{\mathcal T}}^{(0)}$, with cylinders of $\Xi$, we have the inclusions $[v_{p}] \subset [u] \subset [v_{p-1}]$, which proves the claim.
Given two distinct infinite words $\xi, \eta \in \Xi$, we denote by $$\begin{array}{cl} \xi \wedge \eta\in H^{(0)}\,, & \text{\rm the longest common prefix to $\xi$ and $\eta$, and by} \\ \xi {\, {\widetilde}\wedge \,}\eta\in {\widetilde}H^{(0)}\,, & \text{\rm the longest common privileged prefix to $\xi$ and $\eta$}\,. \end{array}$$ Notice that $\xi{\, {\widetilde}\wedge \,}\eta$ is always a prefix of $\xi \wedge \eta$. Horizontal edges {#pf11.ssect-horizontal} ---------------- \[pf11.def-av\] For $v\in{{\mathcal T}}^{(0)}$ define: (i) $a(v)=$ number of one-letter right extensions of $v$ minus one; (ii) ${\widetilde}a(v)=$ number of complete first returns to $v$ minus one if $v$ is privileged, and $0$ if $v$ is not privileged. Note that $0\leq a(v)\leq |{{\mathcal A}}|-1$, and $a(v)\geq 1$ whenever $v$ is right-special. By aperiodicity, for all $n$ there is at least one $v$ of length $n$ such that $a(v)\geq 1$. Aperiodicity also implies that ${\widetilde}a(v)\geq 1$ for all privileged words. The following relation between the two definitions will be useful later on. \[lem-av\] If $u$ is privileged then $${\widetilde}a(u) = \sum_{r\in S(u)} a(r)\,.$$ In particular ${\widetilde}a(u)$ bounds the number of right-special words in $S(u)$. The proof is rather straightforward. Figure \[pf11.fig-av\] illustrates the idea of the proof: the white square stands for a privileged word $u$, the white circles for its complete first returns, and the black circles for the right-special words in $S(u)$. ![[]{data-label="pf11.fig-av"}](av.eps){width="9cm"} The following set has also been used in [@KS10]. \[def-H\] Let $H^{(1)}$ be the set of pairs $(u,v)$ given by distinct one-letter right extensions of the same word (necessarily right-special). We view these as new edges in the graph ${{\mathcal T}}$ calling them [*right-special horizontal edges*]{}. We denote by $u\wedge v$ the corresponding right-special word (the longest common prefix of $u$ and $v$). 
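For a concrete example of the branching numbers $a(v)$, consider the Fibonacci subshift generated by the substitution $a\mapsto ab$, $b\mapsto a$: being Sturmian, it has exactly one right-special factor of each length, so $a(v)=1$ for exactly one factor $v$ per length and $a(v)=0$ otherwise. The sketch below is our own illustration, checking this on a long finite prefix (long enough that all extensions of short factors are observed).

```python
def fibonacci_word(iterations):
    """Prefix of the Fibonacci word (substitution a -> ab, b -> a)."""
    s = "a"
    for _ in range(iterations):
        s = s.replace("a", "A").replace("b", "a").replace("A", "ab")
    return s

def branching(factor_set, alphabet, v):
    """a(v): number of observed one-letter right extensions of v, minus one."""
    return sum((v + c) in factor_set for c in alphabet) - 1

fib = fibonacci_word(12)   # 377 letters, ample for factors up to length 10
facts = {fib[i:j] for i in range(len(fib)) for j in range(i + 1, len(fib) + 1)}
for n in range(1, 11):
    specials = [v for v in facts
                if len(v) == n and branching(facts, "ab", v) >= 1]
    assert len(specials) == 1   # a unique right-special factor per length
```

Extensions observed in a prefix are genuine extensions in the infinite word, so the count computed this way can only miss, never overcount, right-special factors; the prefix is taken long enough that nothing is missed at these lengths.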
Note that $H^{(1)}$ contains $a(r)(a(r)+1)$ edges with longest common prefix $r$. The data $({{\mathcal T}}^{(0)},{{\mathcal T}}^{(1)}, H^{(1)})$ together with a choice function and a weight function determine a metric on $\Xi$, as we recall below, and gave rise to the characterization of power boundedness in [@KS10] in the case where $\Xi$ has a unique right-special word per length. The main new idea in this article is to use another set of horizontal edges. \[def-tH\] Let ${\widetilde}H^{(1)}$ be the set of pairs $(u,v)$ given by distinct complete first return words of the same privileged word. We view these as new edges in the graph ${{\mathcal T}}$ calling them [*privileged horizontal edges*]{}. We denote by $u{\, {\widetilde}\wedge \,}v$ the corresponding privileged word (the longest common privileged prefix of $u$ and $v$). As for infinite words, $u {\, {\widetilde}\wedge \,}v$ is always a prefix of $u \wedge v$. The new general characterization of power freeness will be obtained from the data $({{\mathcal T}}^{(0)},{{\mathcal T}}^{(1)}, {\widetilde}H^{(1)})$. \[pf11.rem-Gambaudo\][*The horizontal data ${\widetilde}H^{(0)}$ and ${\widetilde}H^{(1)}$ can be made into a new graph, by adding vertical edges linking a privileged word to any of its complete first returns. This “graph of privileged words” can then be interpreted as a symbolic analogue of a general construction for tilings and Delone sets of ${{\mathbb R}}^d$ introduced by Gambaudo [*et al.*]{} in [@BBG06].* ]{} There are natural maps: $${\varphi}^{(0)}:{\widetilde}H^{(0)}\to H^{(0)} \,, \qquad {\varphi}^{(1)}:{\widetilde}H^{(1)}\to H^{(1)}\,,$$ defined as follows. Given a privileged word $u$, ${\varphi}^{(0)}(u)$ is the shortest right-special word containing $u$ as a prefix (which, by minimality, always exists). Given $(u_1,u_2)\in{\widetilde}H^{(1)}$, $u_1\wedge u_2$ is a right-special word and there is a unique one-letter extension $v_i$ of $u_1\wedge u_2$ which is a prefix of $u_i$, $i=1,2$.
We define ${\varphi}^{(1)}((u_1,u_2)) = (v_1,v_2)$. \[lem-phi\] The map ${\varphi}^{(0)}$ is always injective. It is surjective if and only if the subshift is right-special balanced. For any $(u_1,u_2)\in {\widetilde}H^{(1)}$ we have: $${\varphi}^{(0)}(u_1{\, {\widetilde}\wedge \,}u_2) = u_1\wedge u_2\,.$$ Furthermore, if the subshift is right-special balanced then $a({\varphi}^{(0)}(u)) = {\widetilde}a(u)$. The map ${\varphi}^{(1)}$ is always surjective. It is injective if and only if the subshift is right-special balanced. The statements concerning ${\varphi}^{(0)}$ are obvious. That right-special balancedness implies injectivity is a simple counting argument following from the fact that $a({\varphi}^{(0)}(u))={\widetilde}a(u)$ in that case. As for the converse, if $S(u)$ contains two distinct elements, then it must contain two distinct $r_1,r_2$ with $r_1\prec r_2$. It follows that there are distinct complete first returns $u'_1,u'_2,u'_3$ of $u$ such that $r_1$ is the longest common prefix of them all but $r_2$ is the longest common prefix of $u'_2$ and $u'_3$ only. It follows that ${\varphi}^{(1)}((u'_1,u'_2))={\varphi}^{(1)}((u'_1,u'_3))$. An important technical point for this paper is the following lemma: it says that the set of privileged words keeps track of the combinatorics of powers in the subshift. \[pf11.lem-powers\] Consider a word $u \in {{\mathcal L}}$. If there exists an integer $p\ge 2$ such that $u^p\in{{\mathcal L}}$, then there are $p$ non-empty privileged words $v_1, v_2, \cdots, v_p$, and a prefix ${\widetilde}{u}$ of $u$, satisfying (i) $u^p$ is a proper prefix of $v_p$, (ii) $v_j = u^j {\widetilde}{u}$, for $j=1, 2, \cdots, p-1$, (iii) $v_{j+1}$ is a complete first return to $v_j$, for $j=1, 2, \cdots, p-1$. Let $v_p$ be the shortest privileged proper extension of $u^p$, and let $v_{p-1}$ be the (unique) privileged word whose complete first return is $v_p$.
By minimality of $|v_p|$, $v_{p-1}$ is a prefix of $u^p$, so we have $v_{p-1} \preceq u^p \prec v_p$. Hence there is a prefix ${\widetilde}{u}$ of $u$ such that $v_{p-1}=u^k{\widetilde}{u}$ for some $k\le p-1$. If $k<p-1$, then the first complete first return to $v_{p-1}$, [*i.e.*]{} $v_p$, would be shorter than $u^p$, a contradiction. Thus we have $v_{p-1} = u^{p-1} {\widetilde}{u}$. Consider now the (unique) privileged word $v_{p-2}$ whose complete first return is $v_{p-1}$. The same reasoning, namely that its first complete first return $v_{p-1}$ must be longer than $u^{p-1}$, shows that $v_{p-2} = u^{p-2} {\widetilde}{u}'$, for some prefix ${\widetilde}{u}'$ of $u$. But $v_{p-2}$ is also a suffix of $v_{p-1}$, and hence ${\widetilde}{u}'={\widetilde}{u}$. We complete the proof by a finite induction.

Approximation graphs {#pf11.ssect-approxgraph}
--------------------

We consider a minimal and aperiodic right-infinite subshift $\Xi$ with language ${{\mathcal L}}$ over a finite alphabet, its tree of words ${{\mathcal T}}=({{\mathcal T}}^{(0)},{{\mathcal T}}^{(1)})$, and the horizontal structures $H$ and ${\widetilde}H$ as defined in the previous Sections \[pf11.ssect-tree\] and \[pf11.ssect-horizontal\]. \[pf11.def-choices\] A [*choice function*]{} is a map $ \tau : {{\mathcal T}}^{(0)} \rightarrow \Pi_\infty$ which satisfies (i) $\tau(v)$ goes through $v$, (ii) if $\tau(v)$ goes through $w$, with $|w|>|v|$, then $\tau(w) = \tau(v)$.
Given a choice function $\tau$ we define the [*approximation graphs*]{} $\Gamma_\tau = (V,E)$ and ${\widetilde}\Gamma_\tau = ({\widetilde}V,{\widetilde}E)$ by $$V = \tau(H^{(0)})\,, \qquad \qquad E = \bigl\{ \bigl(\tau(u) ,\tau(v) \bigr) \, : \, (u,v) \in H^{(1)} \bigr\} \,,$$ and $${\widetilde}V = \tau({\widetilde}H^{(0)})\,, \qquad \qquad {\widetilde}E = \bigl\{ \bigl(\tau(u) ,\tau(v) \bigr) \, : \, (u,v) \in {\widetilde}H^{(1)} \bigr\} \,.$$ Given an edge $e=(\xi,\eta)$ in $E$ or ${\widetilde}E$, we write $s(e)=\xi$ and $r(e)=\eta$ for its source and range vertices, and $e^{\text{\rm op}}= (\eta,\xi)$ for its opposite edge. Notice that $\Gamma_\tau$ and ${\widetilde}\Gamma_\tau$ are both connected graphs. The graph $\Gamma_\tau$ was introduced in [@KS10]. For the class of subshifts studied in [@KS10], the two graphs are the same. \[pf11.prop-apgraph\] If the subshift is right-special balanced then $\Gamma_\tau={\widetilde}\Gamma_\tau$. For all subshifts, $\Gamma_\tau$ and ${\widetilde}\Gamma_\tau$ have the same vertices. We need to show that for all $(u_1,u_2)\in{\widetilde}H^{(1)}$ there are $(v_1,v_2)\in H^{(1)}$ such that $\tau(u_i) = \tau(v_i), i=1,2,$ and vice versa. By Lemma \[lem-phi\], ${\varphi}^{(1)}$ induces a bijection between the two types of horizontal edges. By the second property of choice functions we have $(\tau\times\tau)\circ{\varphi}^{(1)} = \tau\times\tau$. We now introduce a weight function which will be used to define a metric on the graphs. \[def-weight\] A [*weight function*]{} is a strictly decreasing function $\delta:{{\mathbb Z}}\to{{\mathbb R}}^+$ which tends to $0$ at infinity and for which there exist constants $\overline{c},\underline{c}>0$ such that (i) $\delta(ab)\leq \overline{c}\delta(a)\delta(b)$, (ii) $\delta(2a)\geq \underline{c}\delta(a)$. Our characterization will not depend on the choice of weight function. 
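As a quick sanity check (ours, not part of the paper), the familiar weight $\delta(n)=1/(n+1)$ mentioned just below does satisfy both conditions of Definition \[def-weight\], e.g. with $\overline{c}=2$ and $\underline{c}=1/2$; this is easy to confirm numerically on a test range:

```python
# Numerical check (ours): delta(n) = 1/(n+1) is a weight function in the
# sense of Definition [def-weight], with c_bar = 2 and c_under = 1/2.

delta = lambda n: 1.0 / (n + 1)
c_bar, c_under = 2.0, 0.5

for a in range(1, 200):
    for b in range(1, 200):
        # condition (i): delta(ab) <= c_bar * delta(a) * delta(b)
        assert delta(a * b) <= c_bar * delta(a) * delta(b)
    # condition (ii): delta(2a) >= c_under * delta(a)
    assert delta(2 * a) >= c_under * delta(a)
```

Condition (i) with $\overline{c}=2$ amounts to $(a+1)(b+1)\le 2(ab+1)$, i.e. $(a-1)(b-1)\ge 0$, which holds for $a,b\ge 1$.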
So the reader may simply choose one so that $\delta(n) = \frac{1}{n+1}$ for $n\in{{\mathbb N}}$ to get the usual word metric below in Remark \[pf11.rem-approxgraph\] (ii). Given a weight function $\delta$ we associate the following length to the horizontal edges: $$l((u,v)) = \left\{ \begin{array}{ll} \delta(|u\wedge v|)\, & (u,v)\in H^{(1)} \,,\\ \delta(|u {\, {\widetilde}\wedge \,}v|)\, & (u,v)\in {\widetilde}H^{(1)}\,. \end{array} \right.$$ We have the following elementary inequalities, on $H^{(0)}$ and ${\widetilde}H^{(1)}$ respectively: $$\delta \circ {\varphi}^{(0)}\leq \delta\,, \qquad \text{\rm and } \qquad l \circ {\varphi}^{(1)} \leq l \,.$$ The length function allows us to define a graph metric on $\Gamma_\tau$ and ${\widetilde}\Gamma_\tau$: $$d_g (\xi,\eta) = \inf \sum_{j=1}^n l(e_j)\,, \ \xi, \eta \in V\,, \qquad {\widetilde}d_g (\xi,\eta) = \inf \sum_{j=1}^n l(e_j)\,, \ \xi, \eta \in {\widetilde}V\,,$$ the infimum running over all (finite) sequences $(e_j)_{1\leq j\leq n}$ of edges in $E$ or ${\widetilde}E$ such that $s(e_1)=\xi, \cdots r(e_j)=s(e_{j+1}), \cdots r(e_n)=\eta$. \[pf11.rem-approxgraph\] (i) We call $\Gamma_\tau$ and ${\widetilde}\Gamma_\tau$ approximation graphs because $V$ and ${\widetilde}V$ are dense in $\Xi$, and $E$ and ${\widetilde}E$ encode neighboring infinite words. Indeed, since $\tau$ picks an infinite word for each cylinder $[v]$, $v$ in $H^{(0)}$ or ${\widetilde}H^{(0)}$, [*i.e.*]{} for each basis clopen set for the topology of $\Xi$ by Lemma \[pf11.lem-infinitepaths\], we see that $V$ and ${\widetilde}V$ are dense in $\Xi$. Now given $e = (\xi, \eta)$ in $E$ or ${\widetilde}E$, both $\xi$ and $\eta$ belong to the cylinder $[\xi \wedge \eta]$ or $[\xi {\, {\widetilde}\wedge \,}\eta]$, and can thus be considered “neighbors” (see the next item). 
(ii) The function $\delta$ allows us to define metrics $d$ and ${\widetilde}d$ on $\Xi$ as follows: $$\label{pf11.eq-metric} d(\xi, \eta) = \left\{ \begin{array}{ll} \delta(|\xi\wedge \eta|) & \text{\rm if } \xi \neq\eta \,, \\ 0 & \text{\rm if } \xi = \eta \,. \end{array} \right. \qquad {\widetilde}d(\xi, \eta) = \left\{ \begin{array}{ll} \delta(|\xi{\, {\widetilde}\wedge \,}\eta|) & \text{\rm if } \xi \neq\eta \,, \\ 0 & \text{\rm if } \xi = \eta \,. \end{array} \right.$$ Notice that $d$ and ${\widetilde}d$ actually define ultrametrics on $\Xi$. Now $\xi{\, {\widetilde}\wedge \,}\eta$ is always a prefix of $\xi\wedge \eta$, so we have $$d(\xi, \eta) \le {\widetilde}d(\xi, \eta)\,,\quad \forall \xi,\eta \in \Xi\,,$$ and $$d(\xi, \eta) \le d_g(\xi, \eta)\,, \quad \text{\rm and } \quad {\widetilde}d(\xi, \eta) \le {\widetilde}d_g(\xi, \eta)\,,\qquad \xi,\eta \in V={\widetilde}V\,.$$

Metrics {#pf11.sect-metric}
=======

Metrics associated to the approximation graphs {#pf11.ssect-metrics}
----------------------------------------------

The construction given in [@KS10] of a metric on the subshift space followed the recipes of spectral triples. Indeed, the length function on the edges of the graph $\Gamma_\tau$ gives rise to a spectral triple, so that the famous Connes formula yields a metric (the spectral distance) which extends to $\Xi$. The situation is analogous with ${\widetilde}\Gamma_\tau$, as we now show.
\[pf11.def-metrics\] We define two metrics on $\Xi$: the metric $d_\tau$ given by: $$\label{def-m1} {d_\tau}(\xi, \eta) = \sup_{ f\in C(\Xi)} \bigl\{ |f(\xi) - f(\eta)| \, :\, \forall e\in E, \; |f(s(e))-f(r(e))| \le l(e) \bigr\}\,,$$ and the metric ${\widetilde}d_\tau$ given by: $$\label{def-m2} {{\widetilde}d_\tau}(\xi, \eta) = \sup_{ f\in C(\Xi)} \bigl\{ |f(\xi) - f(\eta)| \, :\, \forall e\in {\widetilde}E, \; |f(s(e))-f(r(e))| \le l(e) \bigr\}.$$ Given an infinite word $\xi \in \Xi$, we denote by $\xi_n$ its $n$-th right-special prefix, and by ${\widetilde}\xi_n$ its $n$-th order privileged prefix. We define $${b_\tau}(\xi_n) = \left\{ \begin{array}{cl} 1 & \text{\rm if } \tau(\xi_n) \wedge \xi = \xi_n \,,\\ 0 & \text{\rm else} \,, \end{array} \right. \qquad \text{\rm and } \qquad {{\widetilde}b_\tau}({\widetilde}\xi_n) = \left\{ \begin{array}{cl} 1 & \text{\rm if } \tau({\widetilde}\xi_n) {\, {\widetilde}\wedge \,}\xi = {\widetilde}\xi_n \,,\\ 0 & \text{\rm else} \,, \end{array} \right. $$ which we use to provide explicit formulas for ${d_\tau}$ and ${\widetilde}{d_\tau}$. \[pf11.lem-specdist\] The metrics ${d_\tau}$ and ${{\widetilde}d_\tau}$ are extensions of the graph metrics $d_g$ and ${\widetilde}d_g$, on $\Gamma_\tau$ and ${\widetilde}\Gamma_\tau$, respectively. 
For $\xi,\eta\in \mbox{\rm Im} (\tau)$ they are given by $$\label{eq-dtau} d_\tau(\xi, \eta) = \delta(|\xi\wedge\eta|) + \sum_{n>|\xi\wedge\eta|} {b_\tau}(\xi_n) \delta(|\xi_n|) + \sum_{n>|\xi\wedge\eta|} {b_\tau}(\eta_n) \delta(|\eta_n|)\,,$$ $$\label{eq-tdtau} {{\widetilde}d_\tau}(\xi, \eta) = \delta(|\xi{\, {\widetilde}\wedge \,}\eta|) + \sum_{n> {O}(\xi{\, {\widetilde}\wedge \,}\eta)} {{\widetilde}b_\tau}({\widetilde}\xi_n) \delta(|{\widetilde}\xi_n|) + \sum_{n>{O}(\xi{\, {\widetilde}\wedge \,}\eta)} {{\widetilde}b_\tau}({\widetilde}\eta_n) \delta(|{\widetilde}\eta_n|)\,,$$ where ${O}(\xi{\, {\widetilde}\wedge \,}\eta)$ is the order of $\xi{\, {\widetilde}\wedge \,}\eta$ ([*i.e.*]{} ${O}(\xi{\, {\widetilde}\wedge \,}\eta)=m \iff \xi_m=\eta_m=\xi{\, {\widetilde}\wedge \,}\eta$). If ${d_\tau}$ or ${{\widetilde}d_\tau}$ is continuous then the corresponding formula extends to any $\xi, \eta\in \Xi$. As in [@KS10], Lemma 4.1, with the obvious adaptation in the case of privileged horizontal edges. Notice that a sufficient condition for ${d_\tau}$ or ${{\widetilde}d_\tau}$ to be continuous is that $\sup_{\xi} \sum_{n} \delta(|\xi_n|) < +\infty$ or $\sup_{\xi} \sum_{n} \delta(|{\widetilde}\xi_n|) < +\infty$, respectively, (see [@KS10] Corollary 4.2). \[prop-old-new\] Suppose that the subshift is right-special balanced. (i) For all $\xi,\eta\in \mbox{\rm Im}(\tau)$, we have ${d_\tau}(\xi,\eta)\leq {{\widetilde}d_\tau}(\xi,\eta)$. (ii) Suppose that the function ${\widetilde}H^{(0)}\ni u\mapsto \frac{\delta(|u|)}{\delta(|{\varphi}^{(0)}(u)|)}\in {{\mathbb R}}^+$ is bounded. Then the restrictions of ${d_\tau}$ and ${{\widetilde}d_\tau}$ to the graph $\Gamma_\tau={\widetilde}\Gamma_\tau$ are Lipschitz equivalent. In particular, if ${d_\tau}$ and ${{\widetilde}d_\tau}$ are continuous then they are Lipschitz equivalent. 
We have ${{\widetilde}b_\tau}({\widetilde}\xi_n) = 1 \Leftrightarrow {b_\tau}( \xi_n)=1$, because ${\varphi}^{(0)}$ is a bijection and ${\varphi}^{(0)}({\widetilde}\xi_n) = \xi_n$. Furthermore ${\widetilde}\xi_n \preceq {\varphi}^{(0)}({\widetilde}\xi_n) =\xi_n$, so $\delta(|\xi_n|) \le \delta(|{\widetilde}\xi_n|)$. Hence equations (\[eq-dtau\]) and (\[eq-tdtau\]) imply that the restrictions to the graph satisfy ${d_\tau}\leq {{\widetilde}d_\tau}$. Since the subshift is right-special balanced we also must have $${\widetilde}\xi_n \preceq \xi_n \prec {\widetilde}\xi_{n+1}\,,$$ for all $n$ and all $\xi$. Furthermore, $ {b_\tau}(\xi_n) = {{\widetilde}b_\tau}({\widetilde}\xi_n), $ which directly implies that $${{\widetilde}d_\tau}(\xi,\eta) \leq C {d_\tau}(\xi,\eta)\,,$$ where $C = \sup_u \frac{\delta(|u|)}{\delta(|{\varphi}^{(0)}(u)|)}$. The above Proposition \[prop-old-new\] allows us to compare our present work with our previous results in [@KS10]. For right-special balanced subshifts with a weight function satisfying the condition given in (ii), both approaches are equivalent. Indeed we will prove in Section \[pf11.sect-powerfree\], Theorem \[pf11.thm-characterization\], that a subshift has bounded powers if and only if the infimum and supremum of ${{\widetilde}d_\tau}$ over $\tau$ are Lipschitz equivalent. An interesting question is to determine which right-special balanced subshifts fulfil condition (ii) in Proposition \[prop-old-new\]. We answer this for Sturmian subshifts. Sturmian subshifts have a unique right-special word per length, hence are right-special balanced. It is well known that for these subshifts having bounded powers is equivalent to linear recurrence, see for instance [@Dur00; @Len03; @KS10]. Here, linear recurrence means that there exists a constant $C$ such that the gap between two consecutive occurrences of a word is bounded by $C$ times its length. A Sturmian subshift satisfies condition (ii) in Proposition \[prop-old-new\] if and only if it is linearly recurrent.
We use the notations of e.g. [@DL03]: $a_n, n\ge 0$, is the $n$-th coefficient in the continued fraction expansion of the irrational number associated to the Sturmian subshift. As is well known, linear recurrence (or bounded powers) is equivalent to $\sup_n a_n < +\infty$ (see e.g. [@Len03] Theorem 1 or [@KS10] Lemma 4.9). We write the subshift over the alphabet $\{0,1\}$, and set $s_0=0, s_1=0^{a_1-1} 1$, $s_n=s_{n-1}^{a_n}s_{n-2}, n\ge 2$, and $q_n=|s_n|$. Consider $u_n=s_{n-1}s_n$. Words of this type have the longest possible first returns, and since $\delta$ is decreasing it is enough to consider these words to compute the supremum in condition (ii) of Proposition \[prop-old-new\]. The complete first returns to $u_n$ are $v_n = u_n s_n^{a_{n+1}-1}u_n$ and $v'_n= u_n s_n^{a_{n+1}}u_n$. The word $v_n \wedge v'_n = u_n s_n^{a_{n+1}-1}$ is right-special, and since the subshift is right-special balanced, one has $${\varphi}^{(0)}(u_n) = u_n s_n^{a_{n+1}-1}\,.$$ One therefore has: $$\frac{|{\varphi}^{(0)}(u_n)|}{|u_n|} = 1 + (a_{n+1}-1)\frac{q_n}{q_n+q_{n-1}}\,,$$ and gets the inequalities $$\delta(a_{n+1}|u_n|) \le \delta(|{\varphi}^{(0)}(u_n)|) \le \delta(\frac{a_{n+1}+1}{2}|u_n|)\,.$$ Let $m_n$ be the integer such that $2^{m_n-1} < a_{n+1} \le 2^{m_n}$. Using properties (ii) and (i) of the weight $\delta$ in Definition \[def-weight\], one respectively gets $$\underline{c}^{m_n} \delta(|u_n|) \le \delta(a_{n+1}|u_n|) \quad \text{\rm and } \quad \delta(\frac{a_{n+1}+1}{2}|u_n|) \le \overline{c}\delta(\frac{a_{n+1}+1}{2})\delta(|u_n|)\,,$$ (notice that $0<\underline{c}<1$) and substituting in the previous inequalities yields $$\frac{1}{\overline{c}\delta(\frac{a_{n+1}+1}{2})} \le \frac{\delta(|u_n|)}{\delta(|{\varphi}^{(0)}(u_n)|)} \le \frac{1}{\underline{c}^{m_n}} \,.$$ Now if the subshift is linearly recurrent, then $\sup_n a_n <+\infty$, thus $\sup_n m_n <+\infty$, and condition (ii) of Proposition \[prop-old-new\] follows from the above right inequality.
If condition (ii) of Proposition \[prop-old-new\] holds, then the above left inequality implies $\sup_n 1/\delta(a_n) <+\infty $; it follows that $\inf_n\delta(a_n) > 0$, and so $\sup_n a_n <+\infty$, which proves linear recurrence.

Criterion for Lipschitz equivalence {#pf11.ssect-Lipequiv}
-----------------------------------

We consider now the infimum and supremum of the metrics over all choice functions: $${d_{\text{\rm inf}}}:= \inf_\tau d_\tau\,, \qquad \qquad {d_{\text{\rm sup}}}:= \sup_\tau d_\tau\,,$$ and $${{\widetilde}{d}_{\text{\rm inf}}}= \inf_\tau {\widetilde}d_\tau\,, \qquad \qquad {{\widetilde}{d}_{\text{\rm sup}}}= \sup_\tau {\widetilde}d_\tau\,.$$ Lemma \[pf11.lem-specdist\] allows us to obtain explicit formulas. \[prop-inf\] We have $${d_{\text{\rm inf}}}(\xi, \eta) = \delta(|\xi\wedge\eta|)\,, \qquad \text{\rm and} \qquad {{\widetilde}{d}_{\text{\rm inf}}}(\xi, \eta) = \delta(|\xi{\, {\widetilde}\wedge \,}\eta|)\,.$$ In particular, both metrics induce the topology of $\Xi$. The formulas are proven as in [@KS10], Corollary 4.5, and the latter statement follows from Lemma \[pf11.lem-infinitepaths\].
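For concreteness (our illustration, not from the paper): with $\delta(n)=1/(n+1)$, the formula for ${d_{\text{\rm inf}}}$ is the familiar longest-common-prefix ultrametric. Using finite strings as stand-ins for infinite words, one can check the strong triangle inequality directly:

```python
# Sketch (ours): d_inf(xi, eta) = delta(|xi ^ eta|) as a longest-common-
# prefix ultrametric, with the sample weight delta(n) = 1/(n+1).

def lcp_len(x, y):
    """Length of the longest common prefix of two words."""
    n = 0
    while n < min(len(x), len(y)) and x[n] == y[n]:
        n += 1
    return n

delta = lambda n: 1.0 / (n + 1)
d_inf = lambda x, y: 0.0 if x == y else delta(lcp_len(x, y))

# strong triangle inequality d(x,z) <= max(d(x,y), d(y,z)) on samples
words = ["010010", "010100", "000101", "010011"]
for x in words:
    for y in words:
        for z in words:
            assert d_inf(x, z) <= max(d_inf(x, y), d_inf(y, z))
```

The same check with the privileged variant ${\, {\widetilde}\wedge \,}$ would only replace `lcp_len` by the longest common privileged prefix.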
\[pf11.prop-dinfsup\] For any $\xi, \eta\in \Xi$ we have $${d_{\text{\rm sup}}}(\xi, \eta) = \delta(|\xi\wedge\eta|) + \sum_{n>|\xi\wedge\eta|} \delta(|\xi_n|) + \sum_{n>|\xi\wedge\eta|} \delta(|\eta_n|)$$ and $${{\widetilde}{d}_{\text{\rm sup}}}(\xi, \eta) = \delta(|\xi{\, {\widetilde}\wedge \,}\eta|) + \sum_{n>{O}(\xi{\, {\widetilde}\wedge \,}\eta)} \delta(|{\widetilde}\xi_n|) + \sum_{n>{O}(\xi{\, {\widetilde}\wedge \,}\eta)} \delta(|{\widetilde}\eta_n|) \,.$$ In particular, ${{\widetilde}{d}_{\text{\rm inf}}}$ and ${{\widetilde}{d}_{\text{\rm sup}}}$ are Lipschitz equivalent if and only if there exists $C>0$ such that for all $\xi \in \Xi$ and all $m$ we have $$\label{pf11.eq-equivlip} \delta(|{\widetilde}\xi_m|)^{-1} \sum_{n>m} \delta(|{\widetilde}\xi_n|) \le C\,.$$ As in [@KS10], Corollary 4.4, with the added remark that by continuity of ${{\widetilde}{d}_{\text{\rm inf}}}$ (Proposition \[prop-inf\]) the inequality (\[pf11.eq-equivlip\]) implies the continuity of ${{\widetilde}{d}_{\text{\rm sup}}}$.

Characterization of bounded powers {#pf11.sect-powerfree}
==================================

As mentioned in the introduction, the characterization of power boundedness hinges on a comparison of ${{\widetilde}{d}_{\text{\rm inf}}}$ with ${{\widetilde}{d}_{\text{\rm sup}}}$. We again follow [@KS10] closely, replacing right-special horizontal edges by privileged horizontal edges. We state our main theorem. \[pf11.thm-characterization\] Let $\Xi$ be a minimal and aperiodic subshift over a finite alphabet. Then $\Xi$ has bounded powers if and only if ${{\widetilde}{d}_{\text{\rm sup}}}$ and ${{\widetilde}{d}_{\text{\rm inf}}}$ are Lipschitz equivalent. By Lemma \[pf11.lem-repulsive\] we can assume that $\Xi$ is a right-infinite subshift: if $\Xi$ is bi-infinite we consider its right-infinite restriction $\Xi^+$; if $\Xi$ is left-infinite we simply consider its right-infinite “mirror image”.
Up to rescaling the weight function $\delta$, we can assume that $\overline{c}=1$, and that $\delta(1) \le 1$. Assume that $\Xi$ has bounded powers, with index of repulsiveness $\ell>0$. Fix $\xi\in \Pi_\infty$ and $m \in {{\mathbb N}}$. By definition of privileged words, ${\widetilde}\xi_n$ is a prefix and suffix of ${\widetilde}\xi_{n+1}$, so we have $(|{\widetilde}\xi_{n+1}| - |{\widetilde}\xi_n|) / |{\widetilde}\xi_n| \ge \ell$, and therefore $ |{\widetilde}\xi_{m+k}| \ge (\ell+1)^k |{\widetilde}\xi_{m}|$ for all $k\ge 1$. The series in equation (\[pf11.eq-equivlip\]) in Proposition \[pf11.prop-dinfsup\] can then be bounded as follows: $$\delta(|{\widetilde}\xi_{m}|)^{-1} \sum_{n>m} \delta(|{\widetilde}\xi_n|) \le \frac{1}{\delta(|{\widetilde}\xi_{m}|)} \sum_{k\ge 1} \delta( (\ell+1)^k |{\widetilde}\xi_m| ) \le \sum_{k\ge 1} \delta(\ell + 1)^k \,,$$ where the last inequalities follow from condition (i) in Definition \[def-weight\] of a weight function. The right-hand side is a convergent geometric series ($\delta(\ell+1) < 1$) and gives a uniform constant to apply Proposition \[pf11.prop-dinfsup\] and conclude that ${{\widetilde}{d}_{\text{\rm sup}}}$ and ${{\widetilde}{d}_{\text{\rm inf}}}$ are Lipschitz equivalent. Assume now that $\Xi$ does [*not*]{} have bounded powers. Fix an odd integer $p=2q+1$ (large). By Remark \[pf11.rem-powers\] there exists a word $u\in{{\mathcal L}}$ such that $u^p\in{{\mathcal L}}$. By Lemma \[pf11.lem-powers\], there are $p$ (non-empty) privileged words $v_1, \cdots, v_p$, such that $v_1 \prec v_2 \prec \cdots v_{p-1} \preceq u^p \prec v_p$. Pick an infinite word $\xi$ with prefix $v_p$, and let $m$ be the integer such that ${\widetilde}\xi_m = v_q$. We have $$\delta(|{\widetilde}\xi_m|)^{-1} \sum_{n>m} \delta(|{\widetilde}\xi_n|) \ge \delta(|v_q|)^{-1} \sum_{j=q+1}^{2q} \delta(|v_j|) \ge \delta(|v_q|)^{-1} \; q \; \delta(|v_{2q}|) \ge \underline{c} \,q \,,$$ where the last inequalities follow from (ii) in Definition \[def-weight\] of a weight function.
Since $p$, hence $q$, was chosen arbitrarily large, the criterion for Lipschitz equivalence of Proposition \[pf11.prop-dinfsup\] cannot be satisfied, and we conclude that ${{\widetilde}{d}_{\text{\rm sup}}}$ and ${{\widetilde}{d}_{\text{\rm inf}}}$ are not Lipschitz equivalent.

Zeta-functions and complexity {#pf11.sect-Zeta}
=============================

We define the following zeta-functions, $k\in{{\mathbb N}}$: $$\zeta_k(s) := \sum_{v\in{{\mathcal T}}^{(0)}} a(v)^k \delta(|v|)^s\,, \qquad \text{\rm and} \qquad {\widetilde}\zeta_k(s) := \sum_{v\in{{\mathcal T}}^{(0)}} {\widetilde}a(v)^k \delta(|v|)^s\,,$$ where we use the convention $0^0=0$. One expects that the sums converge for $\Re(s)$ sufficiently large, and calls the smallest $s_0$ such that the series converges for $\Re(s)>s_0$ the abscissa of convergence of the series. The functions have the following interpretations:

- $\frac12(\zeta_2(s)+\zeta_1(s))$ and $\frac12({\widetilde}\zeta_2(s)+{\widetilde}\zeta_1(s))$ are the zeta-functions of the spectral triples defined by right-special and by privileged words, respectively, see Section \[pf11.sect-spectrip\], equations (\[pf11.eq-zeta\]) and (\[pf11.eq-tzeta\]).

- $\zeta_1$, which was denoted $\frac12\zeta_{low}$ in [@KS10] (see Section 5.1), is related to the word complexity of the subshift. Indeed, if we denote by $p(n)$ the number of words of length $n$ then $$\zeta_1(s) = \sum_n (p(n+1)-p(n))\delta(n)^s\,,$$ and if the complexity has a weak complexity exponent $\beta$ (which is the case if the upper and the lower box counting dimension of the subshift space exist and the complexity is polynomially bounded, see [@KS10] Section 1.2, and Lemma 5.4 in Section 5.1) then the abscissa of convergence of $ \zeta_1(s)$ equals $\beta$ (we assume that $\delta\in\ell^{1+\epsilon}\backslash\ell^{1-\epsilon}$ for all $\epsilon>0$, see [@KS10] Section 5.1).
- $\zeta_0$ and ${\widetilde}\zeta_0$ are related to the complexity ${p_{\text{\rm rs}}}$ of right-special words and the complexity ${p_{\text{\rm pr}}}$ of privileged words, respectively: $$\zeta_0(s) = \sum_n {p_{\text{\rm rs}}}(n) \delta(n)^s,\quad {\widetilde}\zeta_0(s) = \sum_n {p_{\text{\rm pr}}}(n) \delta(n)^s\,.$$ If these complexities have weak complexity exponents ${\beta_{\text{\rm rs}}}$ and ${\beta_{\text{\rm pr}}}$, then the abscissas of convergence of $\zeta_0$ and ${\widetilde}\zeta_0$ are ${\beta_{\text{\rm rs}}}+1$ and ${\beta_{\text{\rm pr}}}+1$, respectively.

Given that $a(v)$ is bounded we have $\zeta_0(s)\leq \zeta_k(s) \leq |{{\mathcal A}}|^k \zeta_0(s)$, and hence all $\zeta_k$ have the same abscissa of convergence. Thanks to Lemma \[lem-av\] we can compare $\zeta_k$ to ${\widetilde}\zeta_k$. \[pf11.prop-zeta\] We have ${\widetilde}\zeta_k\geq \zeta_k$ and $\zeta_1(s)\geq\frac12 {\widetilde}\zeta_0(s) -\frac12 \delta(0)^s$. In particular, if the subshift has almost finite rank and $\delta\in\ell^{1+\epsilon}\backslash\ell^{1-\epsilon}$ for all $\epsilon>0$, then all zeta-functions have the same abscissa of convergence. We start with the first inequality. For a privileged word $u$, we let $R(u)$ denote the set of its complete first returns, and let $S(u)$ denote the set of all right-special words $r$ for which there exists $u'\in R(u)$ such that $ u \preceq r \prec u'$. By Lemma \[lem-av\] we have ${\widetilde}a(u)^k \geq \sum_{r\in S(u)} a(r)^k$. Furthermore $\delta(|u|)\geq \delta(|r|)$ for any $r\in S(u)$. Hence $$\sum_{u\in {\widetilde}H^{(0)}} {\widetilde}a(u)^k\delta(|u|)^s \geq \sum_{u\in {\widetilde}H^{(0)}} \sum_{r\in S(u)} a(r)^k\delta(|r|)^s = \sum_r a(r)^k\delta(|r|)^s.$$ As for the second inequality, we first order the elements of $S(u)$ in such a way that a right-special word which is a prefix of another one comes later in the order. Let’s say we find $r_1$ up to $r_m$.
We now choose first the $a(r_1)$ shortest elements $u'_1,\cdots,u'_{a(r_1)}\in R(u)$ with $r_1\prec u'_k$, and we replace $a(r_1)\delta(|r_1|)^s$ in the sum for $\zeta_1$ by the smaller term $\sum_{i=1}^{a(r_1)}\delta(|u'_i|)^s$. We take out these chosen elements of $R(u)$ to obtain $R_1(u)$ and repeat the procedure with $r_2$, that is, choose the $a(r_2)$ shortest elements $u'_1,\cdots,u'_{a(r_2)}\in R_1(u)$ which satisfy $r_2 \prec u'_k$, and take out those chosen elements of $R_1(u)$ to obtain $R_2(u)$. Iterating this construction yields the inequality $$\sum_{r\in S(u)} a(r)^k\delta(|r|)^s \geq \sum_{u'\in R(u)\backslash R_m(u)} \delta(|u'|)^s\,.$$ $R_m(u)$ has exactly one element left (one of the longest returns to $u$), which we call $v'$. Then $$\delta(|v'|)^s \leq\frac12 \sum_{u'\in R(u)\backslash R_m(u)} \delta(|u'|)^s$$ and hence $$\sum_{r\in S(u)} a(r)^k\delta(|r|)^s \geq \frac12 \sum_{u'\in R(u)} \delta(|u'|)^s\,.$$ Summing up one obtains $\zeta_1(s) \geq \frac12 {\widetilde}\zeta_0(s) - \frac12\delta(0)^s$. Now if the subshift has almost finite rank (see Section \[pf11.ssect-spwords\]), then for any $v\in {\widetilde}H^{(0)}$, ${\widetilde}a(v)$ is bounded by $a \log(|v|)^b$ for some uniform constants $a,b>0$. Since the summability of $\sum_{v\in {\widetilde}H^{(0)}} \delta(|v|)^s $ implies the summability of $\sum_{v\in {\widetilde}H^{(0)}} \log(|v|)^b \delta(|v|)^{s+\epsilon} $ for any $\epsilon>0$, we see that ${\widetilde}\zeta_k$ has an abscissa of convergence which does not depend on $k$. It then follows from the first inequality of the proposition that all zeta-functions have the same abscissa of convergence. The last proposition immediately yields relations between the various weak exponents. \[pf11.cor-zeta\] Assume the existence of weak complexity exponents. Then $${\beta_{\text{\rm pr}}}\leq {\beta_{\text{\rm rs}}}= \beta-1$$ and there is equality if the subshift has almost finite rank: ${\beta_{\text{\rm pr}}}= {\beta_{\text{\rm rs}}}$.
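The counting identity behind the formula for $\zeta_1$ above, namely $\sum_{|v|=n} a(v) = p(n+1)-p(n)$, is easy to test on a finite sample of any language; here is a quick check of ours (names are hypothetical) on the Thue–Morse word, whose complexity is non-trivial:

```python
# Sanity check (ours, not from the paper): sum over length-n words v of
# a(v) equals p(n+1) - p(n), the identity used for zeta_1 above.

def thue_morse(n_iter=14):
    """Prefix of the Thue-Morse word via the substitution 0 -> 01, 1 -> 10."""
    w = "0"
    for _ in range(n_iter):
        w = "".join("01" if c == "0" else "10" for c in w)
    return w

def factors(w, n):
    return {w[i:i + n] for i in range(len(w) - n + 1)}

w = thue_morse()
p = {n: len(factors(w, n)) for n in range(1, 12)}
assert p[1] == 2 and p[2] == 4      # all four length-2 words occur

for n in range(1, 11):
    Fn, Fn1 = factors(w, n), factors(w, n + 1)
    # a(v) = number of one-letter right extensions of v, minus one
    a = {v: sum(v + c in Fn1 for c in "01") - 1 for v in Fn}
    assert sum(a.values()) == p[n + 1] - p[n]
```

The identity is a pure counting statement (each word of length $n+1$ has a unique length-$n$ prefix), so it holds for any factor set computed this way.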
The latter result can be seen as an asymptotic version of a much more precise equation between ${p_{\text{\rm pr}}}$ and ${p_{\text{\rm rs}}}$ which has been obtained for rich subshifts in [@GJWZ09], namely $${p_{\text{\rm pr}}}(n)+{p_{\text{\rm pr}}}(n+1) = {p_{\text{\rm rs}}}(n) + 2 \,,$$ as in this case privileged words exactly coincide with palindromes by Proposition \[pf11.prop-icr\].

Spectral triples {#pf11.sect-spectrip}
================

In this final section we provide the spectral triples which can be defined from the graphs $\Gamma_\tau$ and ${\widetilde}\Gamma_\tau$, yielding via Connes’ formula the metrics $d_\tau$ and ${\widetilde}d_\tau$, and having zeta-functions related to the ones we introduced above. The first spectral triple corresponds to the construction given in [@KS10]. Consider the C$^\ast$-algebra $C(\Xi)$ of continuous functions on $\Xi$. Both spectral triples are over $C(\Xi)$, which means that they are given by

- a representation $\pi_\tau$ resp. ${\widetilde}\pi_\tau$ of that algebra on a Hilbert space ${{\mathcal H}}$ and ${\widetilde}{{\mathcal H}}$ resp.,

- a self-adjoint (unbounded) operator $D$ resp. ${\widetilde}D$ of compact resolvent, such that the commutators $[D, \pi_\tau(f)]$ and $[{\widetilde}D, {\widetilde}\pi_\tau(f)]$, resp., are bounded for a dense sub-algebra of $C(\Xi)$.

Here the Hilbert spaces are given by ${{\mathcal H}}= \ell^2(E)$ and ${\widetilde}{{\mathcal H}}= \ell^2({\widetilde}E)$ (where $E$, ${\widetilde}E$ are the edges of the approximation graphs $\Gamma_\tau$ and ${\widetilde}\Gamma_\tau$ defined in Section \[pf11.ssect-approxgraph\]) and the corresponding representations $\pi_\tau$, ${\widetilde}\pi_\tau$, and Dirac operators $D$, ${\widetilde}D$, by $$\label{pf11.eq-reprDirac} \left\{ \begin{array}{lll} \pi_\tau(f){\varphi}(e) & = & f( s(e) )\, {\varphi}(e) \\ D{\varphi}(e) &=& l(e)^{-1}{\varphi}(e^{\text{\rm op}}) \end{array} \right.
\qquad \text{\rm and} \qquad \left\{ \begin{array}{lll} {\widetilde}\pi_\tau(f) \psi(e) & = & f( s(e) )\, \psi(e) \\ {\widetilde}D \psi(e) &=& l(e)^{-1}\psi(e^{\text{\rm op}}) \end{array} \right.$$ for $f\in C(\Xi)$, ${\varphi}\in {{\mathcal H}}$, $\psi\in {\widetilde}{{\mathcal H}}$, $e \in E$ or ${\widetilde}E$, and we recall that for an edge $e=(\xi,\eta)$ we write $e^{\text{\rm op}}= (\eta,\xi)$. Notice that the commutators of the Dirac operators with the representations read $$\label{pf11.eq-commutator} [D, \pi_\tau(f)] {\varphi}(e) = \frac{f(s(e)) - f(r(e)) }{l(e)} {\varphi}(e^{\text{\rm op}}) \,, $$ and $$\label{pf11.eq-tcommutator} [{\widetilde}D, {\widetilde}\pi_\tau(f)] \psi (e) = \frac{f(s(e)) - f(r(e)) }{l(e)} \psi(e^{\text{\rm op}}) \,, $$ and can be extended to bounded operators on the corresponding Hilbert spaces for all $f$ in the pre-C$^\ast$-algebra of Lipschitz continuous functions over $\Xi$. By definition [@Co94] the distances defined by these spectral triples are, resp., $$\label{pf11.eq-specdist} d_\tau (\xi, \eta) = \sup_{f\in C(\Xi)} \bigl\{ |f(\xi) - f(\eta)| \, : \, \| [D,\pi_\tau(f)] \|_{{{\mathcal B}}({{\mathcal H}})} \le 1 \bigr\} \,,$$ and $$\label{pf11.eq-tspecdist} {\widetilde}d_\tau (\xi, \eta) = \sup_{f\in C(\Xi)} \bigl\{ |f(\xi) - f(\eta)| \, : \, \| [{\widetilde}D,{\widetilde}\pi_\tau(f)] \|_{{{\mathcal B}}({\widetilde}{{\mathcal H}})} \le 1 \bigr\} \,,$$ where $\| \cdot \|_{{{\mathcal B}}({{\mathcal H}})}$ and $\| \cdot \|_{{{\mathcal B}}({\widetilde}{{\mathcal H}})}$ denote the operator norms on ${{\mathcal H}}$ and ${\widetilde}{{\mathcal H}}$, respectively. Now formulas (\[pf11.eq-specdist\]) and (\[pf11.eq-commutator\]) directly yield (\[def-m1\]), while (\[pf11.eq-tspecdist\]) and (\[pf11.eq-tcommutator\]) directly yield (\[def-m2\]). \[pf11.prop-spectrip\] Both $\bigl(C(\Xi), {{\mathcal H}}, D\bigr)$ and $\bigl(C(\Xi), {\widetilde}{{\mathcal H}}, {\widetilde}D\bigr)$ are even spectral triples. As in [@KS10], with simple adaptations for the second spectral triple.
The zeta-functions of the spectral triples are given by the traces $$\label{pf11.eq-zeta} \zeta_D(s) = {{\rm Tr\,}}_{{{\mathcal H}}} \bigl( |D|^{-s} \bigr) = \frac12 \sum_{v \in {{\mathcal T}}^{(0)}} a(v)(a(v)+1) \; \delta(|v|)^s \,,$$ and $$\label{pf11.eq-tzeta} \zeta_{{\widetilde}D}(s) = {{\rm Tr\,}}_{{\widetilde}{{\mathcal H}}} \bigl( |{\widetilde}D|^{-s} \bigr) = \frac12 \sum_{v \in {{\mathcal T}}^{(0)}} {\widetilde}a(v)({\widetilde}a(v)+1) \; \delta(|v|)^s \,,$$ where $a(v)$ and ${\widetilde}a(v)$ are given in Definition \[pf11.def-av\]. Again one expects convergence for sufficiently large real part of $s$. A direct comparison yields that, indeed, $\zeta_D(s) = \frac12(\zeta_2(s)+\zeta_1(s))$ and $\zeta_{{\widetilde}D}(s)= \frac12({\widetilde}\zeta_2(s)+{\widetilde}\zeta_1(s))$. [BEL]{} J. Bellissard, R. Benedetti, J.-M. Gambaudo. “Spaces of Tilings, Finite Telescopic Approximations and Gap-labelling”. [*Commun. Math. Phys.*]{} [**261**]{} (2006) 1–41. J. Bellissard, M. Marcolli, K. Reihani. “Dynamical Systems on Spectral Metric Spaces” [*arXiv:1008.4617*]{} M. Bucci, A. De Luca, A. Glen, L. Zamboni.“A new characteristic property of rich words”. [*Theoret. Comput. Sci.*]{} [**410**]{} (2009) 2860–2863. A. Connes. [*Noncommutative geometry*]{}. Academic Press Inc., San Diego, CA, 1994. E. Christensen, C. Ivan. “Sums of two-dimensional spectral triples”. [*Math. Scand.*]{} [**100**]{} (2007) 35–60. E. Christensen, C. Ivan, M.L. Lapidus. “Dirac operators and spectral triples for some fractal sets built on curves”. [*Adv. Math.*]{} [**217**]{} (2008), no. 1, 42–78. D. Damanik, D. Lenz. “Powers in Sturmian sequences”. [*European Journal of Combinatorics*]{} [**24**]{} (2003) 377–390. F. Durand. “Linearly recurrent subshifts have a finite number of non-periodic subshift factors”. [*Ergodic Theory Dynam. Systems*]{} [**20**]{} (2000) 1061–1078. F. Durand. Corrigendum and addendum to [@Dur00]. [*Ergodic Theory Dynam. Systems*]{} [**23**]{} (2003) 663–669. F. Durand. 
“Combinatorics on Bratteli diagrams and dynamical systems”. [*Combinatorics, Automata and Number Theory*]{}, 338–386, Encyclopedia Math. Appl. 135, Cambridge Univ. Press, 2010. A. Glen, J. Justin, S. Widmer, L.Q. Zamboni. “Palindromic richness”. [*European J. Combin.*]{} [**30**]{} (2009) 510–531. D. Guido, T. Isola. “Dimensions and singular traces for spectral triples, with applications to fractals”. [*J. Funct. Anal.*]{} [**203**]{} (2003) 362–400. D. Guido, T. Isola. “Dimension and spectral triples for fractals in ${{\mathbb R}}^n$”. In Advances in Operator Algebras and Mathematical Physics, [*Theta Ser. Adv. Math.*]{} [**5**]{}, Theta, Bucharest (2005), 89–108. M. Herman, I.F. Putnam, C. Skau. “Ordered Bratteli diagrams, dimension groups and topological dynamics”. [*Internat. J. Math.*]{} [**3**]{} (1992) 827–864. J. Kellendonk, J. Savinien. “Spectral triples and characterization of aperiodic order”. To appear in [*Proc. London Math. Soc.*]{}, eprint arXiv:1010.0156 (math.OA). J. Lagarias, P. Pleasants. “Repetitive Delone sets and quasicrystals”. [*Ergodic Theory Dynam. Systems*]{} [**23**]{} (2003) 831–867. D. Lenz. “Hierarchical structures in Sturmian dynamical systems. Tilings of the plane”. [*Theoret. Comput. Sci.*]{} [**303**]{} (2003) 463–490. J. Pearson. “Noncommutative Riemannian geometry and diffusion on ultrametric Cantor sets”. PhD Dissertation, Georgia Institute of Technology 2008. J. Pearson, J. Bellissard. “Noncommutative Riemannian geometry and diffusion on ultrametric Cantor sets”. [*J. Noncommut. Geom.*]{} [**3**]{} (2009) 447–481. M. Rieffel. “Metrics on state spaces”. [*Doc. Math.*]{} [**4**]{} (1999) 559–600. M. Rieffel. “Compact Quantum Metric Spaces”. [*Operator algebras, quantization, and noncommutative geometry*]{} 315–330, Contemp. Math. [**365**]{}, Amer. Math. Soc., Providence (2004). M. F. Whittaker. “Spectral triples for hyperbolic dynamical systems”. [*arXiv:1011.3292*]{}.
--- abstract: 'We study the thermalization of a strongly coupled quantum field theory in the presence of a chemical potential. More precisely, using the holographic prescription, we calculate non-local operators such as the two-point function, Wilson loop and entanglement entropy in a time-dependent background that interpolates between AdS$_{d+1}$ and AdS$_{d+1}$-Reissner-Nordström for $d=3,4$. We find that it is the entanglement entropy that thermalizes the latest and thus sets a time-scale for equilibration in the field theory. We study the dependence of the thermalization time on the probe length and the chemical potential. We find an interesting non-monotonic behavior. For a fixed small value of $T \ell$ and small values of $\mu/T$ the thermalization time decreases as we increase $\mu/T$, thus the plasma thermalizes faster. For large values of $\mu/T$ the dependence changes and the thermalization time increases with increasing $\mu/T$. On the other hand, if we increase the value of $T \ell$ this non-monotonic behavior becomes less pronounced and eventually disappears, indicating two different regimes for the physics of thermalization: non-monotonic dependence of the thermalization time on the chemical potential for $T\ell \ll 1$ and monotonic for $T \ell \gg 1$.' --- UTTG-05-12\ NSF-KITP-12-060 **Holographic Thermalization with Chemical Potential** [**Elena Caceres$^{1,2}$ and Arnab Kundu$^{2, 3}$**]{} $^1$ Facultad de Ciencias Universidad de Colima Bernal Diaz del Castillo 340, Colima, Mexico. $^2$ Theory Group, Department of Physics University of Texas at Austin Austin, TX 78712, USA. ${}^{3}$ Kavli Institute for Theoretical Physics University of California Santa Barbara CA 93106-4030, USA. elenac@zippy.ph.utexas.edu, arnab@physics.utexas.edu Introduction ============ A standard method to study near-equilibrium physics is to slightly perturb the system away from the equilibrium state and study the linear response of the system to these perturbations. 
One of the main virtues of this method is that one can compute observables by calculating correlation functions in the equilibrium ensemble; the response functions are given in terms of Green’s functions evaluated in the equilibrium state. However, calculating Green’s functions in a strongly coupled theory is not an easy task. Fortunately, the gauge/gravity duality is a powerful tool to compute correlation functions for a large class of strongly coupled theories. It maps the problem to a classical gravity calculation in a higher-dimensional curved spacetime that in the case of conformal theories is asymptotically anti-de Sitter (AdS). The gauge/gravity duality has proven to be very useful in the study of near-equilibrium physics (see [@Son:2007vk] and [@Hubeny:2010ry] and references therein). However, out-of-equilibrium processes cannot be understood using linear response theory. The study of out-of-equilibrium processes in strongly coupled field theories involves non-trivial temporal dynamics and is an extremely challenging problem. The gauge/gravity duality translates this problem into a less daunting though equally fascinating one: the study of time-dependent gravitational dynamics, and in particular, black hole formation. Non-equilibrium states in the gravity dual can be created by turning on time-dependent background fields at the boundary. For example, if a field is given a temporal dependence of compact support $(0, \delta t)$ at the boundary, it creates a wave that propagates into the bulk. It was shown in [@Bhattacharyya:2009uu] that, under suitable conditions, the gravitational collapse of this wave will form a black brane in the bulk. This process of black hole formation via gravitational collapse is expected to be the dual description of the thermalization process in a strongly coupled field theory. 
The goal of the present work is to use the holographic setup explained above to study the thermalization in a strongly coupled theory in the presence of a chemical potential. A practical motivation for this work comes from the fact that the equilibration time observed at the Relativistic Heavy Ion Collider (RHIC) is much shorter than predicted [*via*]{} perturbative approaches to thermalization. This indicates that strong coupling is crucial in understanding the thermalization process of the Quark Gluon Plasma (QGP). Hence, this is a natural arena where gauge/gravity duality can be useful. Indeed, in [@Balasubramanian:2010ce; @Balasubramanian:2011ur] the authors studied thermalization probes in a time-dependent background with vanishing chemical potential. They found that, unlike what is expected perturbatively, the thermalization proceeds top-down, [*i.e.*]{} UV modes thermalize first and IR modes thermalize later. They also found that among the probes studied it is the entanglement entropy that thermalizes the latest and thus sets the time-scale for thermalization. The thermalization time scales as $\tau_{\rm crit} \sim \ell/2$, where $\ell$ is the typical length of the probe. A numerical estimate yields $\tau_{\rm crit} \sim 0.3~{\rm fm}/c$. Since the RHIC and the Large Hadron Collider (LHC) processes occur at different values of the chemical potential, a natural question to ask is how these results change when we consider a theory with a non-vanishing chemical potential. However tempting, our intention is not to make precise numerical predictions based on a toy model but to learn qualitative features related to the effect of the chemical potential in the thermalization process of a strongly coupled theory. Beyond the possible connection to RHIC and LHC physics, the study of non-local probes in a time-dependent background is an interesting problem in its own right and it can teach us important lessons about out-of-equilibrium physics. 
To explore the thermalization of a strongly coupled field theory with chemical potential we consider a time-dependent spacetime that interpolates between AdS$_{d+1} $ and the AdS-Reissner-Nordström (AdS-RN) background of the same bulk dimensions. This $(d+1)$-dim interpolating background, also known as the AdS-RN-Vaidya spacetime, can be understood as describing the gravitational collapse of a thin shell of charged null dust. We probe the theory with several non-local observables: two point functions, Wilson loop and entanglement entropy. We find that, as in the case of vanishing chemical potential, the thermalization is top-down and it is the entanglement entropy that thermalizes the latest and thus sets the time scale for thermalization in the field theory. Undoubtedly, our most interesting result is that the behavior of the thermalization time as a function of the chemical potential seems to separate the physics in two regimes: $T \ell \ll 1 $ and $T \ell \gg 1 $. For a fixed small value of $T \ell$ and small values of $\mu/T$ the thermalization time decreases as we increase $\mu/T$, thus the plasma thermalizes faster. For large values of $\mu/T$ the dependence changes and the thermalization time increases with increasing $\mu/T$. On the other hand, if we increase the value of $T \ell$ this non-monotonic behavior becomes less pronounced and eventually disappears altogether. Hence, we observe two different regimes for the physics of thermalization: non-monotonic dependence of the thermalization time on the chemical potential for $T \ell \ll 1$ and monotonic for $T \ell \gg 1$. The characteristics of the non-monotonic behavior depend on the spacetime dimension, and the effect is suppressed in lower dimensions. The fact that the physics arranges itself[^1] to manifest different qualitative features in the above two regimes tempts us to label them as follows: one “classical" and the other “quantum". 
For a system in thermal equilibrium, $T\ell \gg 1$ corresponds to a classical regime and $T\ell \ll 1$ to a quantum regime. Somewhere in the middle these two behaviors connect smoothly, demonstrating a classical-to-quantum transition. We do observe qualitatively different physics as far as the behavior of the thermalization time is considered in these two regimes. However, we are dealing with a system truly away from equilibrium and it is not clear to us whether a “classical" or a “quantum" regime would stand as a precise notion for such processes, and if so, how the intricacies of such regimes interplay. It should be emphasized that the background we are working with is [*ab initio*]{} “phenomenologically motivated" rather than one obtained from rigorously solving the Einstein equations to study the formation of a black hole in AdS-spacetime. Another approach towards analyzing such questions is to start with an out-of-equilibrium initial field configuration and study the thermalization process by solving for the gravitational background itself. Valiant efforts have been made along such directions in [*e.g.*]{} [@Grumiller:2008va]-[@CaronHuot:2011dr] by considering colliding gravitational shock waves. It remains to be seen whether a generalization of [@Bhattacharyya:2009uu] in the presence of charged matter yields the desired form of the AdS-RN-Vaidya background that we work with. This paper is organized as follows: In Section 2 we introduce the holographic setup, [*i.e.*]{} we present the bulk action and the background metric of AdS-Reissner-Nordström in $(d+1)$ dimensions. Next, in Section 3, we study the equilibrium behavior of the non-local observables mentioned above in $d=3,4$. Sections 4 and 5 contain our main results. In section 4 we present the generalization of the Vaidya metric to AdS-RN backgrounds by using a charged null dust to form the black hole. 
We then explore the non-equilibrium behavior of the non-local probes and determine the thermalization time for each probe. In Section 5 we summarize our results and elaborate on possible future directions. A detailed analysis of minimal surfaces and apparent horizons in AdS-RN-Vaidya is found in Appendix A. Appendix B gathers some standard facts related to the embedding of 4- and 5-dimensional charged AdS black holes in 10- or 11-dimensional supergravity. [**Note Added**]{}: While this paper was close to completion, we became aware of ref. [@GalanteSchvellinger] which partially overlaps with our work. The bulk action and the background ================================== We will begin with the AdS-RN background and analyze equilibrium properties of the dual CFT by looking at two-point correlation functions, Wilson loops and entanglement entropy. The holographic prescription for computing such observables is to compute minimal surfaces of appropriate dimensions in AdS-space. For generality we will analyze these observables in an AdS$_{d+1}$-background, where $d=3, 4$. As far as the background is concerned, our approach is seemingly “phenomenological" or the so called “bottom up" approach, where we simply obtain these backgrounds as solutions to some effective gravity action in the corresponding dimension. However, for $d=3$ and $d=4$ — which are the ones that we will study here — one can embed the corresponding solutions within ($11$-dimensional or $10$-dimensional type IIB) supergravity. Also note that within supergravity one usually obtains a more general multi-charged black hole solution. The backgrounds that we consider here correspond to setting all these charges equal.[^2] It is also noteworthy that one can obtain such charged black hole solutions in $d=6$ as well, which come from the $S^4$-reduction of $11$-dimensional supergravity; we will, however, not consider this case. 
AdS-RN backgrounds are solutions of the Einstein-Hilbert action with a negative cosmological constant, coupled to a Maxwell field. Let us start with the effective action of the form $$\begin{aligned} \label{action1} S_0 = \frac{1}{8\pi G_N^{(d+1)}}\left(\frac{1}{2} \int d^{d+1} x \sqrt{-g} \left(R - 2 \Lambda \right) - \frac{1}{4} \int d^{d+1}x \sqrt{-g} F_{\mu\nu} F^{\mu\nu} \right) \ ,\end{aligned}$$ which gives the following equations of motion $$\begin{aligned} && R_{\mu\nu} - \frac{1}{2} \left(R- 2 \Lambda \right) g_{\mu\nu} = g^{\alpha\rho} F_{\rho\mu} F_{\alpha\nu} - \frac{1}{4} g_{\mu\nu} \left(F^{\alpha\beta} F_{\alpha \beta}\right) \ , \label{eom1} \\ && \partial_\rho \left[ \sqrt{-g} g^{\mu\rho} g^{\nu\sigma} F_{\mu\nu} \right] = 0 \ . \label{eom2}\end{aligned}$$ The solutions to (\[eom1\]) and (\[eom2\]) for general $d \ (\ge 3)$ are explicitly given below: $$\begin{aligned} \label{RN} && ds^2 = \frac{L^2}{z^2} \left(- f(z) dt^2 + \frac{dz^2}{f(z)} + d\vec{x}^2 \right) \ , \quad \Lambda = -\frac{d(d-1)}{2 L^2} \ , \nonumber\\ && f(z) = 1- M z^d + \frac{(d-2)}{(d-1) L^2}Q^2 z^{2(d-1)} \ , \quad A_t = Q (z_H^{d-2} - z^{d-2}) \ .\end{aligned}$$ Here $\vec{x}$ is a $(d-1)$-dimensional vector, $M$ is the mass of the black hole and $Q$ is the charge. The constant $z_H$ denotes the location of the horizon which is obtained by solving $f(z)=0$. In order for the one-form $A_t$ to be well-defined at the horizon, we have arranged $A_t(z_H) = 0$. The case of $d=2$ is special and is not captured by the solution given above. In this case, the background takes the following form $$\begin{aligned} \label{RN2} && ds^2 = \frac{L^2}{z^2} \left(- f(z) dt^2 + \frac{dz^2}{f(z)} + dx^2 \right) \ , \nonumber\\ && f(z) = 1- M z^2 + \frac{Q^2 z^2}{L^2} \log \left( z/ L\right) \ , \quad A_t = Q \log\left(z_H/z\right) \ , \quad \Lambda = -\frac{1}{L^2} \ .\end{aligned}$$ In the coordinate system used above the boundary is located at $z=0$. 
Having a non-zero electric field corresponds to having a chemical potential in the dual field theory and we simply read off this chemical potential from the vector field $A_t$ as $$\begin{aligned} \mu = \lim_{z\to 0} A_t(z) \ .\end{aligned}$$ Note that the above definition does not work for $d=2$.[^3] On the other hand, the temperature of the black hole is given by $$\begin{aligned} T = - \left. \frac{1}{4\pi} \frac{d}{dz} f(z) \right |_{z_H} \ .\end{aligned}$$ We will briefly investigate the behavior of the temperature as the chemical potential of the system is cranked up. This will be a numerical endeavor and for that purpose we will set $L=1$, which implies we will be measuring the temperature and the chemical potential in units of $L$. Typically, for a fixed value of $M$, $Q$ has an upper bound beyond which there is no real positive solution of $f(z)=0$.[^4] As this upper bound is reached, the background temperature approaches zero and we approach an extremal solution with zero temperature but finite horizon radius (giving rise to finite entropy). The corresponding supergravity solutions (in $d=3, 4$) are the BPS black holes discussed in [@Behrndt:1998ns]. These extremal solutions possess a naked singularity. The behavior of the temperature with the chemical potential (for fixed $M=1$) is shown in fig. \[Tvsq\]. It is noteworthy to comment here that the physics of AdS-RN background has many interesting properties investigated earlier in [*e.g.*]{} [@Cvetic:1999ne; @Chamblin:1999tk] and more recently from a more application-towards-condensed matter theory point of view reviewed in [*e.g.*]{} [@Hartnoll:2009sz]. 
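As a quick numerical companion to the formulas above, the following sketch (our own illustration, not part of the original text; the function name and the root-bracketing strategy are our choices, and a sub-extremal pair $(M,Q)$ is assumed) locates the horizon $z_H$ as the smallest positive root of $f(z)=0$ and evaluates $T$ and $\mu$ in units of $L=1$:

```python
import numpy as np
from scipy.optimize import brentq

def horizon_data(M, Q, d):
    """Event horizon z_H (smallest positive root of f), temperature T and
    chemical potential mu for AdS-RN in d+1 dimensions, with L = 1.
    Assumes a sub-extremal (M, Q) so that a real root exists."""
    f = lambda z: 1.0 - M * z**d + (d - 2) / (d - 1) * Q**2 * z**(2 * (d - 1))
    fp = lambda z: -d * M * z**(d - 1) + 2 * (d - 2) * Q**2 * z**(2 * d - 3)
    # scan outward from the boundary z = 0 for the first sign change of f
    zs = np.linspace(1e-6, (2.0 / M)**(1.0 / d), 4001)
    first_neg = np.argmax(f(zs) < 0)
    z_H = brentq(f, zs[first_neg - 1], zs[first_neg])
    T = -fp(z_H) / (4 * np.pi)      # T = -f'(z_H)/(4 pi)
    mu = Q * z_H**(d - 2)           # mu = lim_{z->0} A_t(z) = Q z_H^{d-2}
    return z_H, T, mu
```

For $M=1$, $Q=0$, $d=3$ this gives $z_H=1$ and $T=3/(4\pi)$, and increasing $Q$ at fixed $M=1$ lowers the temperature, consistent with the trend shown in fig. \[Tvsq\].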
Before we proceed further, note that with the following change of coordinate $$\begin{aligned} \label{EF} dt = dv + \frac{dz}{f} \ ,\end{aligned}$$ the above AdS-RN metrics given in equations (\[RN\]) can be brought in the following form[^5] $$\begin{aligned} \label{metvaidya} ds^2 = \frac{L^2}{z^2} \left( - f(z) dv^2 - 2 dv dz + d\vec{x}^2 \right) \ , \quad A_v = A_v(z) \ ,\end{aligned}$$ where $\vec{x}$ is a $(d-1)$-dimensional vector. The form of the background in (\[metvaidya\]) is suitable for expressing the Vaidya metric, which we will discuss later. Note that the form above is generic for any $d$, where the information of the dimensionality is entirely carried by the function $f(z)$ and the electric field $A_v(z)$. For our purposes, the explicit expression of $A_v(z)$ will not matter at all since ultimately we will study minimal surfaces which do not couple to this vector field. Non-local observables in equilibrium ==================================== Before proceeding to discuss aspects of the thermalization, we begin by exploring the equilibrium behavior of the non-local observables in the presence of a chemical potential. The non-local observables we will study here are the two-point functions (for operators with large dimensions), expectation values for Wilson loops and entanglement entropy. To study this we take the background in the form presented in (\[metvaidya\]). Two point function: the geodesic approximation ---------------------------------------------- The idea here is to study the thermal Wightman function of an operator with large conformal dimension. As shown in [@Balasubramanian:1999zv], in this limit the equal-time two point function is given by $$\begin{aligned} \langle \cO(t, \vec{x}) \cO(t, \vec{x}') \rangle \sim e^{- \Delta \cL_{\rm thermal}} \ ,\end{aligned}$$ where $\Delta$ is the conformal dimension and $\cL_{\rm thermal}$ is the renormalized geodesic length. The above expression makes use of a saddle point approximation. 
We want to consider a space-like geodesic connecting the two boundary points: $(t, x_1) = (t_0, - \ell/2)$ and $(t', x_1') = (t_0, \ell/2)$, where (whenever necessary) all other spatial directions are identical at the two end points. Such a geodesic is parametrized by $v= v(x)$ and $z=z(x)$ where $x_1 \equiv x$. The boundary conditions satisfied by this geodesic are $$\begin{aligned} z(-\ell/2) = z_0 = z(\ell/2) \ , \quad v(-\ell/2) = t_0 = v(\ell/2) \ .\end{aligned}$$ Here $z_0$ is the IR radial cut-off near the boundary. The geodesic length is $$\begin{aligned} \cL = \int_{-\ell/2}^{\ell/2} dx \frac{L}{z} \left[ 1 - f v'^2 - 2 v' z' \right]^{1/2} \ , \quad ' \equiv \frac{d}{dx} \ .\end{aligned}$$ There are two conservation equations: one because the Lagrangian is independent of $x$ and the other because the Lagrangian is independent of $v$. The first one gives $$\begin{aligned} \label{con} 1 - 2 z' v' - f(z) v'^2 = \frac{z_*^2}{z^2} \ ,\end{aligned}$$ where $z_*$ is the value of $z(x)$ at the midpoint. Using the definition of $v$ we get $$\begin{aligned} \label{vdef} v = t_0 - \int_{z_0}^z \frac{dz}{f(z)} \quad \implies \quad \frac{dv}{dx} = - \frac{1}{f(z)} \frac{dz}{dx} \ .\end{aligned}$$ We can then substitute $v'$ in favour of $z'$ in (\[con\]) and obtain $$\begin{aligned} \frac{dz}{dx} = \pm \sqrt{f(z)} \left[ \frac{z_*^2}{z^2} - 1 \right]^{1/2} \ ,\end{aligned}$$ where the positive sign is taken for $x>0$ and the negative sign is taken for $x<0$. 
The boundary separation length is then read off as $$\begin{aligned} \label{ell2} \frac{\ell}{2} = \int_{z_0}^{z_*} \frac{dz} {\sqrt{f(z)}} \left[ \frac{z_*^2}{z^2} - 1 \right]^{-1/2} \ ,\end{aligned}$$ where $z_0$ is the IR radial cut-off.[^6] The corresponding integral can be analytically carried out in the purely AdS-background to yield $$\begin{aligned} \label{ellads1} \ell_{\rm AdS} = 2 z_* \ .\end{aligned}$$ On the other hand, using the conservation equation, the geodesic length can be obtained to be $$\begin{aligned} \label{Lthermal} \cL = 2 L \int_{z_0}^{z_*} \frac{dz}{z} \frac{1}{\sqrt{f(z) \left( 1 - \frac{z^2}{z_*^2} \right)}} \ .\end{aligned}$$ Note that both formulae in (\[ell2\]) and (\[Lthermal\]) apply for any $d$. The geodesic length is a divergent quantity; we can regularize this length by subtracting off the divergent piece in pure AdS-space. In pure AdS-space, this length is given by $$\begin{aligned} \cL_{\rm AdS} & = & 2 L \int_{z_0}^{z_*} \frac{dz}{z} \frac{1}{\sqrt{ \left( 1 - \frac{z^2}{z_*^2} \right)}} = - 2 L \log \left[ \frac{z_0}{z_* + \sqrt{z_*^2 - z_0^2}} \right] \nonumber\\ & = & - 2 L \log \left(\frac{z_0}{L}\right) + 2 L \log\left(\frac{\ell_{\rm AdS}}{L}\right) \ ,\end{aligned}$$ where we have used (\[ellads1\]) to substitute for $z_*$ in the above expression. We are interested in the finite part of this geodesic length. We therefore consider the following renormalized length[^7] $$\begin{aligned} \cL_{\rm thermal} = 2 L \int_{z_0}^{z_*} \frac{dz}{z}\frac{1} { \sqrt{\left( 1 - \frac{z^2}{z_*^2} \right)}} \frac{1}{\sqrt{f(z)}} + 2 L \log\left(\frac{z_0}{L}\right) \ .\end{aligned}$$ Now we can numerically study how $\cL_{\rm thermal}$ behaves in various dimensions as we tune the chemical potential. Recall that our underlying theory is conformal; therefore, the only relevant parameter that we can vary is a dimensionless ratio constructed from the temperature and the chemical potential. 
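The integrals (\[ell2\]) and (\[Lthermal\]) can be evaluated numerically once the inverse square-root singularity at the turning point $z=z_*$ is absorbed by the substitution $z = z_*\sin\theta$. A minimal sketch (our own illustration; the helper name and the splitting-off of the pure-AdS piece, which integrates to $-\log\tan(\theta_0/2)$, are our choices):

```python
import numpy as np
from scipy.integrate import quad

def geodesic_observables(z_star, f, z0=1e-8, L=1.0):
    """Boundary separation ell and renormalized geodesic length L_thermal
    for a blackening function f(z), using z = z_* sin(theta) to absorb
    the integrable endpoint singularity at z = z_*."""
    th0 = np.arcsin(z0 / z_star)
    z = lambda th: z_star * np.sin(th)
    ell = 2 * quad(lambda th: z(th) / np.sqrt(f(z(th))), th0, np.pi / 2)[0]
    # split off the pure-AdS piece, which integrates to -log(tan(th0/2));
    # the remainder is regular at the boundary since f(0) = 1
    reg = quad(lambda th: (1 / np.sqrt(f(z(th))) - 1) / np.sin(th),
               th0, np.pi / 2)[0]
    L_th = 2 * L * (reg - np.log(np.tan(th0 / 2))) + 2 * L * np.log(z0 / L)
    return ell, L_th
```

With $f\equiv 1$ this reproduces $\ell_{\rm AdS} = 2z_*$ of (\[ellads1\]), and switching on a horizon ($f<1$ in the bulk) increases $\ell$ at fixed $z_*$.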
In $(d+1)$ bulk space-time dimensions (dual to a theory in $d$ boundary space-time dimensions) $$\begin{aligned} \left[ T \right] \sim \frac{1}{\rm length} \ , \quad \left[ \mu \right] \sim \frac{1}{\rm length} \ .\end{aligned}$$ Thus we can consider $$\begin{aligned} \label{ratio} \chi_{(d)} = \frac{1}{4\pi} \left(\frac{\mu}{T} \right)\end{aligned}$$ to be the relevant parameter that we will vary. In practice we can vary $\chi_{(d)}$ by first fixing $M=1$ and then just varying the parameter $Q$ till it reaches its maximum value beyond which naked singularities appear in the gravitational background. It is straightforward to check that for this range of values for $Q$, $\chi_{(d)} \in [0,\infty)$. For a given value of $\chi_{(d)}$, we can now study the behavior of $\cL_{\rm thermal}$ as a function of the boundary separation length. To generate these curves we do the following: To begin with we set $L=1$, which means the dimensionful quantity $\cL_{\rm thermal}$ (and later the area or the volume corresponding to the Wilson loop or the entanglement entropy calculations) is measured in units of the AdS-radius. Now we fix $M=1$ and for a given value of $Q$ start with a $z_*$ very close to the horizon. Next we keep changing $z_*$ till we reach $z_*\to 0$. Each choice of $z_*$ generates a unique value of $\cL_{\rm thermal}$ and $\ell$ through equations (\[Lthermal\]) and (\[ell2\]), respectively. Finally we plot this result. The behavior of $\cL_{\rm thermal}$ is shown in fig. \[figd34RN\]. The boundary length $\ell$ will be expressed in terms of the boundary temperature, $T$. It is clear from fig. \[figd34RN\] that $\cL_{\rm thermal} \sim (T \ell )$ for large $T \ell $. The general observation we see from these plots is that increasing $\chi_{(d)}$ monotonically increases the value of $\cL_{\rm thermal}$ for a given boundary separation $\ell$ in both $d=3, 4$. The dependence of $\cL_{\rm thermal}$ on the dimensionless ratio $\chi_{(d)}$ is shown in fig. 
\[figd34RN\](c): the monotonically increasing function is generally non-linear; however, for $\chi_{(d)}\gg 1$, $\cL_{\rm thermal} \sim \chi_{(d)}$ with a slope that depends on $d$. Non-linearities appear only for small values of $\chi_{(d)}$. Also, we have checked explicitly that in the linear regime the slope of the $\cL_{\rm thermal}$ vs $\chi_{(d)}$ curve depends on the fixed value of $(T\ell)$. This will be a generic feature in all the observables we will consider here. Space-like Wilson loops ----------------------- The Wilson loop is another gauge-invariant non-local observable that can probe thermal properties of a field theory, [*e.g.*]{} the expectation value of the Wilson loop can detect the confinement/deconfinement transition in theories like QCD. Here we will study the thermalization of space-like Wilson loops from a holographic approach. In a gauge theory, the Wilson loop operator is defined as a path-ordered contour integral over a closed loop $\cC$ of the gauge field $$\begin{aligned} W (\cC) = \frac{1}{N} {\rm Tr} \left( \cP e^{\oint_{\cC} A}\right) \ ,\end{aligned}$$ where $A$ is the gauge field, $N$ is the rank of the gauge group and $\cP$ denotes path ordering. In the AdS/CFT correspondence, the expectation value of the Wilson loop is related to the string partition function $$\begin{aligned} \langle W(\cC) \rangle = \int \cD \Sigma \, e^{- \cA (\Sigma)} \ ,\end{aligned}$$ where $\Sigma$ is the string world sheet which extends into the bulk with the boundary condition $\partial \Sigma = \cC$ and $\cA(\Sigma)$ corresponds to the Nambu-Goto action for the string. In the strongly coupled limit, we can simplify the computation by making a saddle point approximation and evaluating the minimal area surface of the classical string with the same boundary condition $\partial \Sigma_0 = \cC$ $$\begin{aligned} \langle W (\cC) \rangle = e^{- \cA (\Sigma_0)} \ ,\end{aligned}$$ where $\Sigma_0$ represents the minimal area surface. 
In the AdS/CFT correspondence, such computations of Wilson loop operators were first performed in [@Maldacena:1998im]. Here we will consider both rectangular and circular Wilson loops, which are schematically shown in fig. \[rec\_cir\_shape\]. ### Rectangular Wilson loop A rectangular strip Wilson loop can be parametrized by the boundary coordinates $\{x_1, x_2\}$ with the assumption that this infinite rectangular strip is invariant along the $x_2$-direction. Thus the corresponding minimal surface is parametrized by $z(x)$ and $v(x)$, where $x \equiv x_1$. As in the case of geodesics, the boundary conditions are $$\begin{aligned} z (- \ell/2) = z_0 = z(\ell/2) \ , \quad v(- \ell/2) = t_0 = v(\ell/2) \ ,\end{aligned}$$ where $\ell$ is the length of the rectangular Wilson loop along the $x^1$-direction. The Nambu-Goto action is given by $$\begin{aligned} \label{areafunc} \cA_{\rm NG} = \frac{RL^2}{2\pi\alpha'} \int_{-\ell/2}^{\ell/2} \frac{dx}{z^2} \left(1 - f v'^2 - 2 v' z' \right)^{1/2} \ ,\end{aligned}$$ where $R$ is the length along the $x_2$-direction. Since there is no explicit $x$-dependence in the Lagrangian, the corresponding conservation equation is given by $$\begin{aligned} 1 - f v'^2 - 2 v' z' = \left(\frac{z_*}{z}\right)^4 \ ,\end{aligned}$$ where $z_*$ is the midpoint of $z(x)$. Using the definition of $v$ from (\[vdef\]) we can obtain $$\begin{aligned} \label{eomwlrec} 1 + \frac{z'^2}{f} = \left(\frac{z_*}{z}\right)^4 \quad \implies \quad \frac{dx}{dz} = \pm \frac{1}{\sqrt{f}} \left[ \left(\frac{z_*}{z}\right)^4 - 1\right]^{-1/2} \ ,\end{aligned}$$ where the positive sign is taken for $x >0$ and the negative sign is taken for $x<0$. The boundary separation length is obtained by integrating the above equation from $z_0$ to $z_*$. 
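The same turning-point substitution $z=z_*\sin\theta$ makes this integral regular, since $(z_*/z)^4-1 = \cos^2\theta\,(1+\sin^2\theta)/\sin^4\theta$. A sketch (our own illustration; the function name is ours, and the pure-AdS value quoted below follows from the beta-function evaluation of the integral):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def wilson_strip_separation(z_star, f, z0=1e-8):
    """Strip width ell(z_star) for the rectangular Wilson loop, after the
    substitution z = z_* sin(theta) regularizes the turning point."""
    th0 = np.arcsin(z0 / z_star)
    integrand = lambda th: (np.sin(th)**2
                            / np.sqrt((1 + np.sin(th)**2) * f(z_star * np.sin(th))))
    return 2 * z_star * quad(integrand, th0, np.pi / 2)[0]

# pure AdS (f = 1): ell = 2 z_* sqrt(pi) Gamma(3/4)/Gamma(1/4) ~ 1.198 z_*
```

In a thermal background the same $z_*$ yields a larger separation than in pure AdS, since $f<1$ in the bulk enhances the integrand.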
Once again this length can be analytically computed for the pure AdS-case to give $$\begin{aligned} \label{ellads2} \ell _{\rm AdS} = 2 z_* \sqrt{\pi} \frac{\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)} \ .\end{aligned}$$ Using the solution in (\[eomwlrec\]) in (\[areafunc\]), the area of the minimal surface in the purely AdS-background is given by $$\begin{aligned} \cA_{\rm AdS} = \alpha' \cA_{\rm NG} = \frac{RL^2}{\pi z_0} - \frac{2 RL^2}{\ell_{\rm AdS}} \left(\frac{\Gamma(3/4)}{\Gamma(1/4)}\right)^2 \ ,\end{aligned}$$ where the first term in the above expression is divergent. Subtracting this diverging piece from the AdS-RN case, we get the following area of the minimal surface $$\begin{aligned} \label{Athermeq} \cA_{\rm thermal} = \alpha' \cA_{\rm NG} = \frac{RL^2}{\pi} \int_{z_0}^{z_*} \frac{dz}{z^2} \frac{1}{\sqrt{f \left(1 - (z/z_*)^4\right)}} - \frac{1}{z_0} \frac{RL^2}{\pi} \ .\end{aligned}$$ Note also that the formula in (\[Athermeq\]) is valid for any $d$ since the information of the dimensionality is encoded in the function $f$. Here also we use a similar technique as outlined before to generate a curve of $\cA_{\rm thermal}$ (measured in units of appropriate powers of $L$) vs $T \ell$. The dependences are shown in fig. \[figd34RNwl\]. The general observation is once again that, for increasing $\chi_{(d)}$, $\cA_{\rm thermal}$ increases monotonically: for large values of $T\ell$, we find $\cA_{\rm thermal} \sim (T\ell)$ with a slope that depends on $\chi_{(d)}$. In fig. \[figd34RNwl\](c) we have shown how the area functional scales with the dimensionless ratio $\chi_{(d)}$ for fixed values of $(T\ell)$. Here also we find that the generic behavior is non-linear, but it becomes dominantly linear for large values of $\chi_{(d)}$ with a dimension-dependent slope. Non-linearities only show up for small values of $\chi_{(d)}$. ### Circular Wilson loop Let us now discuss the circular Wilson loop. 
At the boundary we choose a $2$-dimensional plane $\{x_1, x_2\}$ and rewrite it in the polar coordinate $\{\rho, \phi\}$ $$\begin{aligned} dx_1^2 + dx_2^2 = d\rho^2 + \rho^2 d \phi^2 \ .\end{aligned}$$ Using the azimuthal symmetry in the $\phi$-direction, the minimal area surface can be represented by $z(\rho)$ and $v(\rho)$. The Nambu-Goto action for this Wilson loop is given by $$\begin{aligned} \cA_{\rm NG} = \frac{L^2}{\alpha'} \int_0^R d\rho \frac{\rho}{z^2} \left(1 - f(z) v'^2 - 2 v' z' \right)^{1/2} \ ,\end{aligned}$$ where $' \equiv d/d\rho$ and we have integrated over the azimuthal angular direction that yields a factor of $(2 \pi)$. Using the definition of $v$ once again we can write this as $$\begin{aligned} \label{wleq} \cA_{\rm NG} = \frac{L^2}{\alpha'} \int_0^R d\rho \frac{\rho}{z^2} \sqrt{1 + \frac{z'^2}{f(z)}} \ .\end{aligned}$$ Note that in this case we do not have any conservation equation. The equation of motion for $z(\rho)$ obtained from (\[wleq\]) is given by $$\begin{aligned} \label{wleqn} z'' + z'^3 \frac{1}{\rho f} + z'^2 \left( \frac{2}{z} - \frac{1}{2f} \frac{df}{dz} \right) + \frac{z'}{\rho} + \frac{2f}{z} = 0 \ .\end{aligned}$$ The boundary conditions we should use are given by $$\begin{aligned} \label{bcwlcir1} z(0) = z_* \ , \quad z'(0) = 0 \ .\end{aligned}$$ By expanding near the $\rho=0$ point, we get $$\begin{aligned} \label{bcwlcir2} z_{\rm middle}(\rho) = z_* - \left(\frac{1 - M z_*^3}{2 z_*} + \frac{Q^2}{4} z_*^3 \right) \rho^2 + \ldots \ .\end{aligned}$$ We use the above expansion at $\rho=\epsilon$ to impose the boundary conditions and solve the equation (\[wleqn\]) using Mathematica’s NDSolve, where $\epsilon$ is some small number.[^8] To generate the desired curve we now have to carry out the following steps: First we fix $M$ (set to unity) and $Q$. Then we choose some $z_*$ which is numerically close to the event-horizon. 
For these choices of the input parameters and the above boundary conditions we first solve the differential equation numerically. To read off $R$, we now impose the (radial) IR boundary condition: $z(R) = z_0$, where $z_0$ is the IR cut-off. Using this numerical solution we can generate a unique value of $\cA_{\rm thermal}$ for a given $R$. Now we can vary $z_*$ and keep repeating the same process to generate the curve we want. Before presenting the results, let us recall again that this area is formally a divergent quantity. To extract the divergent piece, we can again just focus on the pure AdS-case. In this case (which corresponds to setting $M=0$ and $Q=0$), a simple solution of (\[wleqn\]) is given by[^9] $$\begin{aligned} z(\rho) = \sqrt{R^2 - \rho^2} \ ,\end{aligned}$$ which ultimately gives $$\begin{aligned} \cA_{\rm AdS} = \frac{L^2}{\alpha'} \frac{R}{z_0} - \frac{L^2}{\alpha'} \ ,\end{aligned}$$ where $z_0$ is the radial IR cut-off. Thus the finite contribution to the thermal Wilson loop is obtained to be $$\begin{aligned} \cA_{\rm thermal} = \alpha' \cA_{\rm NG} - \frac{RL^2}{z_0} \ .\end{aligned}$$ The numerical results are shown in fig. \[figd34RNwlcir\]. It is clear that the expectation value of the Wilson loop depends on the dimensionless parameter $\chi_{(d)}$. Our general observation again seems to hold here, [*i.e.*]{} increasing $\chi_{(d)}$ increases the area of the minimal surface for a given value of $(TR)$. Note that in this case, for large values of $(TR)$, $\cA_{\rm thermal} \sim (TR)^2$. Entanglement Entropy -------------------- Consider a quantum field theory with many degrees of freedom and assume that the zero temperature ground state of the system is described by the pure ground state $| \Psi \rangle$, which does not have any degeneracy. The von Neumann entropy of this system, defined as $S_{\rm total} = - {\rm tr}\, \rho \log \rho$ with $\rho = | \Psi \rangle \langle \Psi |$, vanishes. 
Now we can imagine dividing the system into two subsystems $A$ and $B$. The total Hilbert space now factorizes as $\cH_{\rm total} = \cH_{A} \otimes \cH_{B}$. Now let us imagine an observer who has access only to the subsystem $A$; consequently the relevant density matrix for this observer is the reduced density matrix defined as $$\begin{aligned} \rho_A = {\rm tr}_B \, \rho \ .\end{aligned}$$ The entanglement entropy is now defined as the von Neumann entropy computed using this reduced density matrix $$S_A = - {\rm tr}_A \, \rho_A \log \rho_A \ .$$ Entanglement entropy measures the quantum entanglement between the two subsystems $A$ and $B$, and it is non-zero even at zero temperature. Entanglement entropy is also a useful order parameter in quantum phase transitions. Here we will eventually use this observable to probe the physics of thermalization in the presence of a chemical potential, and we will explicitly demonstrate that this is the observable that sets the time-scale of equilibration since, of all the non-local operators considered, it thermalizes last. A prescription for computing entanglement entropy using the AdS/CFT correspondence was suggested in [@Ryu:2006bv] and has been analyzed in detail in the literature since then. For time-dependent backgrounds, the covariant proposal for entanglement entropy was suggested in [@Hubeny:2007xt]. According to this proposal one needs to consider extremal surfaces instead of minimal area surfaces. For a recent review, see [*e.g.*]{} [@Nishioka:2009un]. Let us consider bulk AdS$_{d+1}$ space-time. Then the formula for the entanglement entropy is given by $$\begin{aligned} S_A = \frac{{\rm Area} \left(\gamma_A\right)}{4 G_N^{(d+1)}} \ ,\end{aligned}$$ where $G_N^{(d+1)}$ is the $(d+1)$-dimensional Newton’s constant; $\gamma_A$ denotes the $(d-1)$-dimensional minimal surface whose boundary coincides with the boundary of the region $A$: $\partial \gamma_A = \partial A$.
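As a down-to-earth illustration of the field-theory definition $S_A = -{\rm tr}_A\, \rho_A \log \rho_A$ (not part of the holographic computation), one can trace out subsystem $B$ of a two-qubit pure state; the sketch below is illustrative Python.

```python
import numpy as np

def entanglement_entropy(psi, dA, dB):
    """S_A = -tr rho_A log rho_A for a pure state psi in H_A (x) H_B."""
    m = psi.reshape(dA, dB)
    rho_A = np.einsum('ab,cb->ac', m, m.conj())   # rho_A = tr_B |psi><psi|
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]                  # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
product = np.array([1.0, 0.0, 0.0, 0.0])             # |00>, unentangled
```

The maximally entangled Bell state gives $S_A = \log 2$, while a product state gives $S_A = 0$, matching the statement that $S_A$ measures the entanglement between $A$ and $B$ even in a pure (zero-temperature) state.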
In $d=3$, the computation of the entanglement entropy is identical to the Wilson loop computation. Thus here we will only consider the case $d=4$. As before, we will consider two geometric shapes: the infinite rectangular strip (or straight belt) and the spherical region. Fig. \[rec\_cir\_shape\] again serves as a pictorial representation of the particular shape considered and the corresponding minimal surface. Thermalization of the entanglement entropy in $d=2$ was first studied in [@AbajoArrastia:2010yt] for vanishing chemical potential. ### The straight belt The straight belt can be parametrized by the boundary coordinates $\{x_1, x_2, x_3\}$ with the assumption that this infinite strip is translationally invariant along both the $x_2$- and $x_3$-directions. The corresponding minimal surface can be represented by $z(x)$ and $v(x)$ where $x\equiv x_1$. As before, the boundary conditions imposed are $$\begin{aligned} z(\ell/2) = z_0 = z(- \ell/2) \ , \quad v(\ell/2) = t_0 = v(- \ell/2) \ . \end{aligned}$$ The volume functional is given by $$\begin{aligned} \cV = A \int_{-\ell/2}^{\ell/2} \frac{L^3dx}{z^3} \left( 1 - f v'^2 - 2 v' z' \right)^{1/2} \ , \quad {\rm where} \quad \cV \equiv {\rm Area} (\gamma_A) \ .\end{aligned}$$ Here $A$ is the area that results from integrating over the $x_2$- and the $x_3$-directions. The conservation equation is given by $$\begin{aligned} 1 - fv'^2 - 2 v' z' = \left(\frac{z_*}{z}\right)^6 \ .\end{aligned}$$ Using the definition of $v$ from (\[vdef\]) we can obtain $$\begin{aligned} 1 + \frac{z'^2}{f} = \left(\frac{z_*}{z}\right)^6 \quad \implies \frac{dz}{dx} = \pm \sqrt{f(z)} \left[ \left(\frac{z_*}{z} \right)^6 - 1 \right]^{1/2} \ ,\end{aligned}$$ where the negative sign is taken for $x>0$ and the positive sign is taken for $x<0$, since $z$ decreases from the turning point $z_*$ at $x=0$ towards the boundary.
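The strip width $\ell(z_*)$ follows from a single quadrature of the first-order equation above, summed over the two branches $x \gtrless 0$. The sketch below is illustrative Python (not the paper's Mathematica code), assuming the $d=4$ equilibrium blackening factor $f(z) = 1 - M z^4 + 2Q^2 z^6/3$ with $L=1$ and hypothetical $M$, $Q$; the substitution $z = z_* \sin^{1/3}\theta$ removes the integrable turning-point singularity.

```python
import numpy as np
from scipy.integrate import quad

def strip_width(zstar, M=1.0, Q=0.5):
    """l(z_*) = 2 * int_0^{z_*} dz [ f(z) ((z_*/z)^6 - 1) ]^{-1/2},
    evaluated (with z0 -> 0) via z = z_* sin^{1/3}(theta)."""
    f = lambda z: 1.0 - M*z**4 + (2.0/3.0)*Q**2*z**6
    integrand = lambda th: (np.sin(th)**(1.0/3.0)
                            / np.sqrt(f(zstar*np.sin(th)**(1.0/3.0), )))
    # smooth integrand: u^3 du / sqrt(1-u^6) = (1/3) sin^{1/3}(theta) dtheta
    integrand = lambda th: (np.sin(th)**(1.0/3.0)
                            / np.sqrt(f(zstar*np.sin(th)**(1.0/3.0))))
    val, _ = quad(integrand, 0.0, np.pi/2)
    return 2.0*zstar*val/3.0
```

In the pure AdS limit ($M=Q=0$) the quadrature reduces to a Beta function and reproduces $\ell_{\rm AdS} = 2 z_* \sqrt{\pi}\,\Gamma(2/3)/\Gamma(1/6) \approx 0.862\, z_*$; since $f<1$ in the black-brane background, the same $z_*$ yields a larger $\ell$ there.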
We can analytically obtain the length for the pure AdS-background to give $$\begin{aligned} \ell_{\rm AdS} = 2 z_* \sqrt{\pi} \frac{\Gamma(2/3)}{\Gamma(1/6)} \ ,\end{aligned}$$ where the factor of $2$ accounts for the two branches $x>0$ and $x<0$. Finally the volume functional is given by $$\begin{aligned} \cV = 2 A L^3 \int_{z_0}^{z_*} \frac{dz}{z^3} \frac{1}{\sqrt{f}} \frac{1}{\sqrt{1- (z/z_*)^6}} \ ,\end{aligned}$$ which is again a formally divergent quantity. In pure AdS, this volume functional is obtained to be $$\begin{aligned} \cV_{\rm AdS} = 2 A L^3 \int_{z_0}^{z_*} \frac{dz}{z^3} \frac{1}{\sqrt{1- (z/z_*)^6}} & = & \frac{A L^3}{z_0^2} + A L^3 \frac{\sqrt{\pi}}{3 z_*^2} \frac{\Gamma(-1/3)}{\Gamma(1/6)} \nonumber\\ & = & \frac{A L^3}{z_0^2} + \frac{A L^3}{\ell_{\rm AdS}^2} \frac{4 \pi\sqrt{\pi}}{3} \frac{\Gamma(-1/3) \left(\Gamma(2/3)\right)^2}{\left( \Gamma(1/6) \right)^3} \ .\end{aligned}$$ Thus the finite part of the volume functional can be obtained to be $$\begin{aligned} \cV_{\rm thermal} = 2 A L^3 \int_{z_0}^{z_*} \frac{dz}{z^3} \frac{1}{\sqrt{f}} \frac{1}{\sqrt{1- (z/z_*)^6}} - \frac{A L^3}{z_0^2} \ .\end{aligned}$$ The behaviour of the entanglement entropy is demonstrated in fig. \[figd4RNeerec\]. Here we also notice that for large values of $(T\ell)$, $\cV_{\rm thermal} \sim (T\ell)$ with a slope that depends on $\chi_{(4)}$. On the other hand, the scaling of $\cV_{\rm thermal}$ with $\chi_{(4)}$ displays a non-linear, monotonically increasing behavior, where the non-linearities are washed out for large values of $\chi_{(4)}$. Although we have not displayed it in the figure, the slope of the linear behavior of $\cV_{\rm thermal}$ with $\chi_{(4)}$ for large $\chi_{(4)}$ depends on the value of $(T\ell)$. ### The spherical region Now we consider the case where one of the subsystems is a circular disc.
To parametrize this circular disc in polar coordinates, we rewrite $$\begin{aligned} \label{polarcir} \sum_{i=1}^{d-1} dx_i^2 = d\rho^2 + \rho^2 d\Omega_{d-2}^2 \ .\end{aligned}$$ The minimal area surface can now be parametrized by $z(\rho)$ and $v(\rho)$. The volume element in this case is given by $$\begin{aligned} \label{ee5} \cV & = & 4 \pi \int_0^R d\rho \frac{L^3 \rho^2}{z^3} \left( 1 - f v'^2 - 2 z' v' \right)^{1/2} \nonumber\\ & = & 4 \pi \int_0^R d\rho \frac{L^3 \rho^2}{z^3} \left( 1 + \frac{z'^2}{f} \right)^{1/2} \ .\end{aligned}$$ We do not have the integral of motion anymore, and the full equations of motion are obtained by varying the functional in (\[ee5\]). To avoid clutter, we do not present the specific form of the equation of motion here. The boundary conditions are once again $$\begin{aligned} z(0) = z_* \ , \quad z'(0) = 0 \ .\end{aligned}$$ As before, solving the equation near the $\rho=0$ region we get $$\begin{aligned} z(\rho) = z_* - \frac{3 - 3 M z_*^4 + 2 Q^2 z_*^6}{6 z_*} \rho^2 + \ldots \ .\end{aligned}$$ In practice we use the above expansion at $\rho = \epsilon$ to impose the boundary conditions. Here $\epsilon$ is a small number, typically of the order of $10^{-3}$. To determine the divergent piece in the volume, let us evaluate it in the pure AdS-case. The solution for the minimal surface is simple: $z^2 = R^2 - \rho^2$. So we get $$\begin{aligned} \cV_{\rm AdS} = 4 \pi R L^3 \int_0^{\rho(z_0)} d \rho \frac{\rho^2}{\left(R^2 - \rho^2 \right)^2} = 2\pi L^3 \left(\frac{R^2}{z_0^2} + \log \frac{z_0}{\sqrt{2} R}\right) + {\rm finite} \ .\end{aligned}$$ Hence the finite part we are interested in is given by $$\begin{aligned} \cV_{\rm thermal} = 4 \pi L^3 \int_0^R d\rho \frac{\rho^2}{z^3} \left( 1 + \frac{z'^2}{f} \right)^{1/2} - 2\pi L^3 \left(\frac{R^2}{z_0^2} + \log \frac{z_0}{\sqrt{2} R}\right) \ .\end{aligned}$$ The dependence is shown in fig. \[figd4RNeecir\]. ![The case $d=4$.
The blue curve corresponds to $\chi_{(4)} \approx 0.002$ and the red curve corresponds to $\chi_{(4)} \approx 0.24$. The dimensionful quantity $\cV_{\rm thermal}$ is measured in units of the AdS-radius.[]{data-label="figd4RNeecir"}](d=4RN_EEcir.pdf){width="8.5cm"} Once again we observe the behavior that increasing $\chi_{(4)}$ increases the volume functional for a spherical subsystem of a given radius. For large values of $(TR)$, we recover a quadratic behavior of the volume functional, which is expected from the general area-law scaling of entanglement entropy. Non-equilibrium physics ======================= The bulk action and the background ---------------------------------- Our goal here is to study the same set of non-local observables in a time-dependent background. This time-dependent background should capture the physics of the formation of a black hole — in the present context, a black hole with a definite mass and charge. To that end, we will use a generalized version of the AdS-Vaidya background, including a charge for the black hole. For obvious reasons, we will call this the AdS-RN-Vaidya background. To find the corresponding Vaidya background, we have to couple the above action in (\[action1\]) with an external source $$\begin{aligned} S = S_0 + \kappa S_{\rm ext} \ ,\end{aligned}$$ where $\kappa$ is a constant and we do not specify the form of $S_{\rm ext}$.
The equations of motion in this case will take the following form $$\begin{aligned} && R_{\mu\nu} - \frac{1}{2} \left(R- 2 \Lambda \right) g_{\mu\nu} - g^{\alpha\rho} F_{\rho\mu} F_{\alpha\nu} + \frac{1}{4} g_{\mu\nu} \left(F^{\alpha\beta} F_{\alpha \beta}\right) = 2 \left( 8\pi G_N^{(d+1)} \kappa \right) T_{\mu\nu}^{\rm ext} \ , \\ && \partial_\rho \left[ \sqrt{-g} g^{\mu\rho} g^{\nu\sigma} F_{\mu\nu} \right] = \left( 8\pi G_N^{(d+1)}\kappa\right) J_{\rm ext}^{\sigma} \ .\end{aligned}$$ Let us start with the case where the matter field (corresponding to $S_{\rm ext}$) is neutral, so that the black hole formed will be AdS-Schwarzschild. The metric is the $(d+1)$-dimensional infalling shell geometry described in the Poincaré patch by $$\begin{aligned} \label{vaid1} ds^2 = \frac{L^2}{z^2} \left( - f(z,v) dv^2 - 2 dv dz + d\vec{x}^2 \right) \ , \quad f(z,v) = 1 - m(v) z^d \end{aligned}$$ and is known as the AdS-Vaidya background. Here $m(v)$ is a function that captures the information of the black hole formation. On physical grounds, $m(v)$ should interpolate between zero (in the limit $v\to -\infty$ corresponding to pure AdS) and a constant value (in the limit $v\to \infty$ corresponding to AdS-Sch). A [*choice*]{} of such a function is $$\begin{aligned} \label{mchange} m(v) = \frac{M}{2} \left(1 + \tanh \frac{v}{v_0} \right) \ .\end{aligned}$$ Here $v_0$ is a parameter that denotes the thickness of the shell. With this choice, the external source must yield the following energy-momentum tensor $$\begin{aligned} 2 \left( 8\pi G_N^{(d+1)} \kappa \right) T_{\mu\nu}^{\rm ext} = \frac{d-1}{2} z^{d-1} \frac{dm}{dv} \delta_{\mu v} \delta_{\nu v} \ .\end{aligned}$$ If we identify $k_\mu = \delta_{\mu v}$, then we get [@AbajoArrastia:2010yt] $$\begin{aligned} T_{\mu\nu}^{\rm ext} \sim k_\mu k_\nu \ , \quad {\rm with} \quad k^2 = 0 \ ,\end{aligned}$$ which is characteristic of null dust.
Thus the formation of the black hole is realized by a shell of infalling null dust. In [@Balasubramanian:2010ce; @Balasubramanian:2011ur], this metric has been used to study aspects of thermalization. Our first goal here is to generalize the Vaidya metric for the AdS-RN background in $(d+1)$-dimensions. It is clear that now we need a charged null dust for the formation of the black hole. As before, we will present the corresponding Vaidya metric for specific values of $d$.[^10] In $d=2$ we get $$\begin{aligned} \label{RNv2} && ds^2 = \frac{L^2}{z^2} \left( - f(z,v) dv^2 - 2 dv dz + dx^2 \right) \ , \quad A_v = q(v) \log z \ , \\ && f(z,v) = 1 - m(v) z^2 + \frac{q(v)^2}{L^2} z^2 \log z \ , \quad \Lambda = - \frac{1}{L^2} \ , \\ && 2 \kappa T_{\mu\nu}^{\rm ext} = \frac{1}{2} z \left( \frac{dm}{dv} - \frac{2}{L^2} \log (z) q(v) \frac{dq}{dv} \right) \delta_{\mu v } \delta_{\nu v} \ , \quad \kappa J_{\rm ext}^\mu = \frac{1}{L} \frac{dq}{dv} \delta^{\mu z} \ .\end{aligned}$$ In $d=3$ we get $$\begin{aligned} \label{RNv3} && ds^2 = \frac{L^2}{z^2} \left( - f(z,v) dv^2 - 2 dv dz + d\vec{x}^2 \right) \ , \quad A_v = q(v) z \ , \\ && f(z,v) = 1 - m(v) z^3 + \frac{q(v)^2}{2 L^2} z^4 \ , \quad \Lambda = - \frac{3}{L^2} \ , \\ && \kappa J_{\rm ext}^\mu = \frac{dq}{dv} \delta^{\mu z} \ , \quad 2 \kappa T_{\mu\nu}^{\rm ext} = z^2 \left[ \frac{dm}{dv} - \frac{z}{L^2} q(v) \frac{dq}{dv} \right] \delta_{\mu v} \delta_{\nu v} \ .\end{aligned}$$ And, finally in $d=4$ we get $$\begin{aligned} \label{RNv4} && ds^2 = \frac{L^2}{z^2} \left( - f(z,v) dv^2 - 2 dv dz + d\vec{x}^2 \right) \ , \quad A_v = q(v) z^2 \ , \\ && f(z,v) = 1 - m(v) z^4 + \frac{2 q(v)^2}{3 L^2} z^6 \ , \quad \Lambda = - \frac{6}{L^2} \ , \\ && \kappa J_{\rm ext}^\mu = 2 L \frac{dq}{dv} \delta^{\mu z} \ , \quad 2 \kappa T_{\mu\nu}^{\rm ext} = \left[ \frac{3}{2} z^3 \frac{dm}{dv} - \frac{2 z^5}{L^2} q(v) \frac{dq}{dv} \right] \delta_{\mu v} \delta_{\nu v} \ .\end{aligned}$$ As before, we will only consider the cases $d=3, 4$.[^11] For
all these metrics, we can choose the profile for $m(v)$ as given in (\[mchange\]) and we are also free to pick an analogous profile for $q(v)$. For future purposes, we will set the radius of the AdS-space, $L=1$, and thus all dimensionful quantities (the length, area or the volume functional as the case may be) will be measured in units of this scale. Now we will work with the backgrounds given in (\[RNv3\]) and (\[RNv4\]). As previously stated we will work with the following mass function $$\begin{aligned} \label{mv} m(v) = \frac{M}{2} \left(1 + \tanh\frac{v}{v_0}\right) \ .\end{aligned}$$ To pick a charge function, we note the following [@Albash:2010mv]: For $d=3$ we can rewrite the function $f(z,v)$ as follows $$\begin{aligned} f = 1 - \left(\frac{z}{z_H(v)}\right)^3 + \frac{Q^2}{2} \left(\frac{z}{z_H(v)}\right)^4 \ ,\end{aligned}$$ which gives $$\begin{aligned} \label{qv3} z_H^3 = \frac{1}{m(v)} \ , \quad q(v)^2 = Q^2 m(v)^{4/3} \ .\end{aligned}$$ For $d=4$ we can rewrite the function $f(z,v)$ as follows $$\begin{aligned} f = 1 - \left(\frac{z}{z_H(v)}\right)^4 + \frac{2 Q^2}{3} \left(\frac{z}{z_H(v)}\right)^6 \ ,\end{aligned}$$ which gives $$\begin{aligned} \label{qv4} z_H^4 = \frac{1}{m(v)} \ , \quad q(v)^2 = Q^2 m(v)^{3/2} \ .\end{aligned}$$ From now on, we also set $M=1$. To appeal to the visual cortex, let us plot how the interpolating functions $m(v)$ and $q(v)$ behave in various dimensions in fig. \[d3mqv\] and fig. \[d4mqv\]. It is clear that in the thin shell limit, characterized by $v_0 \to 0$, the mass and the charge functions change sharply around $v=0$ and approximate a step function. In the remainder of this paper, we will analyze the observables in these dynamic (time-dependent) backgrounds in the thin-shell approximation.
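The relations (\[qv4\]) can be checked in a few lines. The sketch below (illustrative Python, with $L=1$ and hypothetical parameter values) verifies that the $d=4$ blackening factor written in terms of $m(v)$ and $q(v)$ agrees identically with its horizon form in terms of $z_H(v)$.

```python
import numpy as np

M, Q, v0 = 1.0, 0.5, 0.1                  # hypothetical parameters

def m(v):
    """Interpolating mass function, eq. (mv)."""
    return 0.5*M*(1.0 + np.tanh(v/v0))

def f_d4(z, v):
    """d=4 AdS-RN-Vaidya blackening factor in terms of m(v), q(v)."""
    q2 = Q**2 * m(v)**1.5                 # q(v)^2 = Q^2 m(v)^{3/2}, eq. (qv4)
    return 1.0 - m(v)*z**4 + (2.0/3.0)*q2*z**6

def f_d4_horizon_form(z, v):
    """The same factor written in terms of z_H(v), with z_H^4 = 1/m(v)."""
    zH = m(v)**(-0.25)
    return 1.0 - (z/zH)**4 + (2.0/3.0)*Q**2*(z/zH)**6
```

Since $m(v)^{3/2} = z_H(v)^{-6}$, the two expressions agree term by term for any $v$, confirming that the constant $Q$ (and hence $\chi_{(4)}$) is held fixed while the shell falls in.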
Before proceeding to the analysis of the probes of thermalization, a notational comment is in order: In the previous sections, we have denoted the regularized length, area or volume functional with a subscript “thermal". From here on, we will keep the subscript “thermal" for direct comparison with the equilibrium cases, but the quantities are not to be interpreted as thermal ones; rather, they have explicit time-evolution. Spacelike geodesics ------------------- We start by analyzing the two-point function. We consider geodesics with a boundary separation along the $x_1=x$-direction (all other spatial directions at both the end-points are the same). The profile is described by two functions $z(x)$ and $v(x)$. The length element is $$\begin{aligned} \cL = \int_{-\ell/2}^{\ell/2} \frac{dx}{z} \left( 1 - f v'^2 - 2 v' z' \right)^{1/2} \ .\end{aligned}$$ The conservation equation is given by $$\begin{aligned} 1- f v'^2 - 2 v' z' = \left(\frac{z_*}{z}\right)^{2} \ .\end{aligned}$$ Using this conservation equation, the two equations of motion corresponding to the variation of $z(x)$ and $v(x)$ are obtained to be $$\begin{aligned} && z v'' + 2 z' v' -1 + v'^2 \left(f - \frac{1}{2}z \frac{\partial f}{\partial z}\right) = 0 \ , \\ && z'' + f v'' + \frac{\partial f}{\partial z} z' v' + \frac{1}{2} \frac{\partial f}{\partial v} v'^2 = 0 \ .\end{aligned}$$ We solve the above two differential equations subject to the following boundary conditions $$\begin{aligned} \label{bc} z(\epsilon) = z_* \ , \quad z'(\epsilon) = 0 + {\rm corrections} \ , \quad v(\epsilon) = v_* \ , \quad v'(\epsilon) = 0 + {\rm corrections} \ ,\end{aligned}$$ where $\epsilon$ is a small number. In practice, we fix the slopes $z'$ and $v'$ at $x = \epsilon$ from the equations of motion themselves, which gives the “corrections" terms above. So far $z_*$ and $v_*$ are two free parameters that generate the numerical solutions for $z(x)$ and $v(x)$.
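Concretely, the coupled system above can be integrated by shooting from $x=\epsilon$. The sketch below is illustrative Python (scipy in place of NDSolve), assuming the $d=3$ AdS-RN-Vaidya factor $f(z,v) = 1 - m(v) z^3 + q(v)^2 z^4/2$ with $m(v) = \tfrac{M}{2}(1+\tanh(v/v_0))$, $q(v)^2 = Q^2 m(v)^{4/3}$ and $L=1$; the parameter values are hypothetical. The near-$x=0$ expansion of the equations of motion gives the “corrections" slopes $v'(\epsilon) \approx \epsilon/z_*$ and $z'(\epsilon) \approx -f(z_*,v_*)\,\epsilon/z_*$.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, Q, v0 = 1.0, 0.5, 0.01

def m(v):  return 0.5*M*(1.0 + np.tanh(v/v0))
def dm(v): return 0.5*M/(v0*np.cosh(v/v0)**2)
def q2(v):  return Q**2 * m(v)**(4.0/3.0)
def dq2(v): return Q**2 * (4.0/3.0)*m(v)**(1.0/3.0)*dm(v)

def f(z, v):   return 1.0 - m(v)*z**3 + 0.5*q2(v)*z**4
def f_z(z, v): return -3.0*m(v)*z**2 + 2.0*q2(v)*z**3
def f_v(z, v): return -dm(v)*z**3 + 0.5*dq2(v)*z**4

def rhs(x, y):
    z, zp, v, vp = y
    F = f(z, v)
    # z v'' + 2 z' v' - 1 + v'^2 (f - z f_z/2) = 0
    vpp = (1.0 - 2.0*zp*vp - vp**2*(F - 0.5*z*f_z(z, v))) / z
    # z'' + f v'' + f_z z' v' + (1/2) f_v v'^2 = 0
    zpp = -(F*vpp + f_z(z, v)*zp*vp + 0.5*f_v(z, v)*vp**2)
    return [zp, zpp, vp, vpp]

def shoot(zstar, vstar, z0=1e-2, eps=1e-4):
    """Shoot from x = eps with the 'corrected' slopes; stop at z = z0.
    Returns (l/2, boundary time t, full solution)."""
    y0 = [zstar, -f(zstar, vstar)*eps/zstar, vstar, eps/zstar]
    hit = lambda x, y: y[0] - z0
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, [eps, 20.0], y0, events=hit,
                    rtol=1e-9, atol=1e-11, dense_output=True)
    return sol.t_events[0][0], sol.y_events[0][0][2], sol
```

As a check, for $v_*$ deep in the AdS region ($m\approx 0$, so $f\approx 1$) the geodesic must reduce to the static-AdS semicircle $z^2 + x^2 = z_*^2$ with $v + z$ constant along it.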
The boundary data can be obtained from this numerical solution $$\begin{aligned} z(\pm \ell/2) = z_0 \ , \quad v (\pm \ell/2) = t \ .\end{aligned}$$ Here $z_0$ is the radial IR cut-off and $t$ is the boundary time. To generate the corresponding thermalization curves, we do the following: for a fixed value of $z_*$ we keep varying $v_*$ till the read-off value of $z_0$ is sufficiently small. This generates one desired profile for $z(x)$ and $v(x)$, which we can use to compute the length functional. Now we vary $z_*$ and repeat the process again. Using the data obtained as explained above we show how the thermalization occurs in fig. \[rnv2pt\]. It should be emphasized at this point that we are not being careful about the units in which we measure the boundary time in this figure. In what we have shown in fig. \[rnv2pt\], the boundary time is measured in units of the black hole mass, $M$, which strictly speaking does not have a simple interpretation in terms of any physical quantity of the boundary theory. Moreover, the different curves corresponding to different values of $\chi_{(d)}$ are not obtained for the same equilibrium temperature. Thus fig. \[rnv2pt\] and all such subsequent ones in the later sections should be viewed as schematic representations of the physics that is going on. We need to define a “thermalization time" in order to investigate how it depends on the length scale for various values of $\chi_{(d)}$. Following [@Balasubramanian:2010ce], we define two different time scales denoted by $\tau_{1/2}$ and $\tau_{\rm crit}$ respectively: \(i) $\tau_{1/2}$ is defined as the time required for the curves to reach half of their equilibrium value. \(ii) $\tau_{\rm crit}$ is defined as the critical time at which the geodesic grazes the middle of the shell at $v=0$.
This is simply given by the following formula: $$\begin{aligned} \tau_{\rm crit} = \int_{z_0}^{z_*} \frac{dz}{f(z)} \ ,\end{aligned}$$ where, as before, $z_0$ is the IR radial cut-off and $z_*$ determines the value of the boundary separation. Once again, we need to specify the scale in which we measure these thermalization times. From the point of view of the boundary theory the only relevant dimensionful quantity is the equilibrium temperature. Thus we will always study the behavior of the dimensionless quantities $(T\tau_{1/2})$ or $(T\tau_{\rm crit})$. Now we can investigate how these thermalization times depend on the length-scale $\ell$ for various fixed values of $\chi_{(d)}$. The results for $\tau_{1/2}$ are shown in fig. \[thalf\_2pt\]. For small $\ell$, $\tau_{1/2}$ grows linearly with the property that $\tau_{1/2} < \ell/2$. The deviation from linearity for large values of $\ell$ is clear. For increasing $\chi_{(d)}$ this deviation either occurs in the opposite direction or sets in more slowly, and for large values of $\ell$ increasing $\chi_{(d)}$ increases the value of $\tau_{1/2}$. For small values of $\ell$, $\tau_{1/2}$ can decrease with increasing $\chi_{(d)}$. Thus we already observe possibly interesting and different physics dominating two different regimes: $(T\ell) \ll 1$ and $(T\ell)\gg 1$ for increasing values of $\chi_{(d)}$. It should be noted however that for very small values of $\ell$, $\tau_{1/2}$ is not very sensitive to the value of $\chi_{(d)}$. On the other hand, the dependence of $\tau_{\rm crit}$ on the boundary separation length is shown in fig. \[tcrit\_2pt\]. The qualitative behavior of this thermalization time is much like the one observed for $\tau_{1/2}$: For small boundary separation, the thermalization time is not very sensitive to the parameter $\chi_{(d)}$ — although there is a range within which increasing $\chi_{(d)}$ actually decreases $\tau_{\rm crit}$.
However, for large values of $\ell$, $\tau_{\rm crit}$ clearly increases with increasing $\chi_{(d)}$. The deviation from linearity for large values of $\ell$ is also quite interesting. For small values of $\chi_{(d)}$ this deviation from linearity results in $\tau_{\rm crit} < \ell/2$; whereas for larger values of $\chi_{(d)}$ this deviation results in $\tau_{\rm crit} > \ell/2$, which is seen from fig. \[tcrit\_2pt\] as the blue and the red curves bending away from each other as $\ell$ increases. This bending-away effect is more pronounced for $d=3$ as compared to $d=4$. We can also investigate the dependence of $\tau_{\rm crit}$ on $\chi_{(d)}$. To this end, let us define $$\begin{aligned} \tau_{\rm crit}^0 = \lim_{\chi_{(d)} \to 0} \tau_{\rm crit} \left(\chi_{(d)}\right) \end{aligned}$$ and normalize the measured thermalization time in units of $\tau_{\rm crit}^0$. In fig. \[tcrit\_chi\_2pt\] we have shown the results. Once again the results display an interesting interplay of physics for small and large values of $\chi_{(d)}$. This effect is relatively milder in $d=3$ compared to $d=4$. From fig. \[tcrit\_chi\_2pt\](b) we observe that for fixed values of $(T\ell)$, different physics dominates two different regimes: small $\chi_{(4)}$ and large $\chi_{(4)}$. Initially the curve decreases before turning back and increasing monotonically. This monotonically increasing behavior seems to be a linear one, with the slope depending on the fixed value of $(T\ell)$ and also the dimension we are in. This growth seems to be unbounded, implying that for infinitely strong chemical potential, it takes infinitely long to reach thermalization. On the other hand, $\tau_{\rm crit}$ — measured in units of $\tau_{\rm crit}^0$ — has a minimum, and both its location and value depend on the fixed value of $(T\ell)$. From what we have shown in fig. \[tcrit\_chi\_2pt\](b), we have about a $4\%$ reduction in the thermalization time for $(T\ell) = 0.7/(4\pi)$.
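The critical time $\tau_{\rm crit} = \int_{z_0}^{z_*} dz/f(z)$ defined earlier is a one-dimensional quadrature once the equilibrium background is fixed. A minimal illustrative sketch in Python, assuming the $d=3$ equilibrium blackening factor $f(z) = 1 - M z^3 + Q^2 z^4/2$ with $L=1$ and hypothetical parameters:

```python
from scipy.integrate import quad

def tau_crit(zstar, M=1.0, Q=0.5, z0=1e-3):
    """tau_crit = int_{z0}^{z_*} dz / f(z), for z_* inside the horizon
    (d=3 equilibrium factor f(z) = 1 - M z^3 + Q^2 z^4 / 2 assumed)."""
    f = lambda z: 1.0 - M*z**3 + 0.5*Q**2*z**4
    val, _ = quad(lambda z: 1.0/f(z), z0, zstar)
    return val
```

In the pure AdS limit ($M=Q=0$) this reduces to $\tau_{\rm crit} = z_* - z_0$, and since $f(z) < 1$ in the black-brane background for the relevant range of $z$, the critical time at fixed $z_*$ always exceeds that value.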
We will observe that all these features are very generic for all the non-local probes that we consider in the subsequent sections. Wilson loops ------------ Once again we will consider two geometric shapes: the rectangular and the circular ones, and we appeal to fig. \[rec\_cir\_shape\] for a schematic representation. ### Rectangular strip As before, the area functional for the rectangular strip is given by $$\begin{aligned} \cA = \frac{R}{2\pi} \int_{-\ell/2}^{\ell/2} \frac{dx}{z^2} \left( 1 - f v'^2 - 2 v' z' \right)^{1/2} \ .\end{aligned}$$ The conservation equation gives $$\begin{aligned} 1 - f v'^2 - 2 v' z' = \left(\frac{z_*}{z}\right)^4 \ .\end{aligned}$$ Using the conservation equation and extremizing the area functional we get the following two equations of motion $$\begin{aligned} && z'' + v'' f + z' v' \frac{\partial f}{\partial z} + \frac{1}{2} v'^2 \frac{\partial f}{\partial v} = 0 \ , \\ && z v'' + 4 z' v' -2 + v'^2 \left( 2 f - \frac{1}{2} z \frac{\partial f}{\partial z }\right) = 0 \ .\end{aligned}$$ Note that compared to the computation of the geodesics, only the second equation changes. We again use the same boundary conditions as outlined in (\[bc\]). Some of the resulting plots are shown in fig. \[rnvwlrec\]. The behavior of the thermalization time with the length of the Wilson loop operator has been displayed in fig. \[thalfWLrec\]. The general behavior of $\tau_{1/2}$ is sub-linear. The deviation from linearity of $\tau_{1/2}$ decreases with increasing $\chi_{(d)}$, and for large enough $\ell$ the thermalization time increases with increasing value of $\chi_{(d)}$. From fig. \[thalfWLrec\](b), we can clearly identify a regime of $(T\ell)$ where increasing $\chi_{(4)}$ actually decreases $\tau_{1/2}$. On the other hand, the behavior of $\tau_{\rm crit}$ with the length of the Wilson loop operator is shown in fig. \[tcritWLrec\].
It turns out that $\tau_{\rm crit}$ exceeds the linear relation in $\ell/2$ and for large enough $\ell$, the red and the blue curves bend away from each other. So increasing $\chi_{(d)}$ enhances the departure from the linear behavior for $\tau_{\rm crit}$. From fig. \[tcritWLrec\](b) we can also identify a regime where $\tau_{\rm crit}$ decreases with increasing chemical potential and hence the thermalization happens faster. In fig. \[tcrit\_chi\_WLrec\], we demonstrate how the thermalization time depends on $\chi_{(d)}$ for both $d=3$ and $d=4$. As in the case of the 2-pt function, this response consists of two different regimes: for small values of $\chi_{(d)}$, thermalization seems faster, and for larger values of $\chi_{(d)}$ thermalization becomes slower and approaches infinitely large values for infinitely strong chemical potential. The percentage reduction in thermalization time for small chemical potential is about $4\%$ for $d=4$, where this effect is more pronounced. ### Circular Wilson loop {#wlcir} Let us now consider circular Wilson loops in the AdS-RN-Vaidya background. The corresponding minimal surface is parametrized as before and the area functional has the following form $$\begin{aligned} \cA (t, R) = \int_0^R d \rho \frac{\rho}{z^2} \sqrt{1- f v'^2 - 2 z' v'} \ ,\end{aligned}$$ where $' \equiv d/d\rho$. Here the mass and the charge functions are time-dependent and are given by (\[mv\]) and (\[qv3\]) or (\[qv4\]) for $d=3$ or $d=4$ respectively. As before, the equations of motion resulting from the area functional are messy and therefore we do not present them explicitly. To impose the boundary conditions, we can use the rotational symmetry along the $\phi$-direction and impose $$\begin{aligned} z(\epsilon) = z_* + {\rm corrections}\ , \quad v(\epsilon) = v_* + {\rm corrections} \ ,\end{aligned}$$ where $z_*$ and $v_*$ are the two free parameters, which we will describe how to fix.
The “corrections" in the above is obtained in the following manner: we expand the equations motion near $\rho=\epsilon$, where $\epsilon$ is a small number typically of the order of $10^{-3}$. Using these expansions we can fix the “corrections" as we did in, [*e.g.*]{} (\[bcwlcir1\]) and (\[bcwlcir2\]).[^12] Note that this expansion also allows us to set $z'(\epsilon)$ and $v'(\epsilon)$. Once these boundary conditions are fixed, we can obtain a numerical solution for $z(\rho)$ and $v(\rho)$ for every value of $z_*$ and $v_*$. From this solution we can read off the boundary data $$\begin{aligned} z(R) = z_0 \ , \quad v(R) = t \ ,\end{aligned}$$ where $R$ is the radius of the circular Wilson loop, $z_0$ is the radial IR cut-off and $t$ is the boundary time. In practice we do the following: for a given $R$, we fix $z_*$ and keep varying $v_*$ till the obtained value of the radial IR cut-off, denoted by $z_0$, is small enough. Then we vary $z_*$ and repeat the process. Ultimately this generates the required data to produce the thermalization curves for the Wilson loop operator. Some of these curves have been shown in fig. \[rnvwlcir\]. To extract the behavior of the thermalization time $\tau_{1/2}$, we repeat this process for various values of the radius and finally obtain the fig. \[thalfWLcir\]. For small values of $\chi_{(d)}$, the behavior of $\tau_{1/2}$ is primarily sub-linear. The linear regime for small $D$ has a smaller slope in $d=4$ as compared to the one in $d=3$. Increasing $\chi_{(d)}$ suppresses the sub-linear behavior of $\tau_{1/2}$ and for large enough $D$, $\tau_{1/2}$ increases for increasing $\chi_{(d)}$. For small values of $D$, from fig. \[thalfWLcir\](b) we can definitely identify the regime where increasing chemical potential leads to faster thermalization. Also, it is clear from the $d=3$ case that for moderate values of $\chi_{(3)}$, $\tau_{1/2}$ does not change much unless we go to very high values of $D$. 
On the other hand, the thermalization time, denoted by $\tau_{\rm crit}$, is shown in fig. \[tcritWLcir\]. For small values of $\chi_{(d)}$, for a relatively large regime of values for $D$, $\tau_{\rm crit}$ behaves linearly with a slope very close to $1/2$. Increasing $\chi_{(d)}$ promotes a deviation from this linearity for larger values of $D$ and the corresponding curve bends away. This means that for large enough $D$, increasing $\chi_{(d)}$ increases $\tau_{\rm crit}$. Entanglement entropy -------------------- ### The rectangular belt Let us begin with the rectangular strip geometry. The minimal area surface can be parametrized by $z(x)$ and $v(x)$, where $x\equiv x_1$, with the assumption that this minimal area surface is invariant along the other two planar directions. The corresponding volume functional is now given by $$\begin{aligned} \cV = \int_{-\ell/2}^{\ell/2} \frac{dx}{z^3} \left(1 - f v'^2 - 2 v' z' \right)^{1/2} \ ,\end{aligned}$$ where $'\equiv d/dx$. This gives the conservation equation $$\begin{aligned} 1 - f v'^2 - 2 v' z' = \left(\frac{z_*}{z}\right)^6 \ .\end{aligned}$$ The two equations of motion resulting from the variation of the volume functional can be written as $$\begin{aligned} && z'' + f v'' + \frac{\partial f}{\partial z} z' v' + \frac{1}{2} v'^2 \frac{\partial f}{\partial v} = 0 \ , \\ && z v'' + 6 v' z' -3 + v'^2 \left( 3 f - \frac{1}{2} z \frac{\partial f}{\partial z} \right) = 0 \ .\end{aligned}$$ Once again only the second equation above changes. We will again solve these two equations subject to the boundary conditions outlined in (\[bc\]), using a similar method as outlined earlier. Some of the curves obtained in this way are shown in fig. \[rnveerec\]. In fig. \[td4eerec\] we have shown how the two thermalization times $\tau_{1/2}$ and $\tau_{\rm crit}$ depend on the length $\ell$ of the rectangular region.
For small values of $\chi_{(4)}$, $\tau_{1/2}$ behaves linearly with $\ell/2$ for small values of $\ell$: the corresponding slope of this linear behavior is unity. For larger length, $\tau_{1/2}$ becomes sub-linear. On the other hand, for larger values of $\chi_{(4)}$ the deviation from the linear behavior of $\tau_{1/2}$ is suppressed, and for larger length, increasing $\chi_{(4)}$ increases $\tau_{1/2}$. The qualitative behavior of $\tau_{\rm crit}$ is similar to that of $\tau_{1/2}$. However, the slope of the linear regime is quite different. For $\tau_{\rm crit}$, in the linear regime, $\tau_{\rm crit} \sim \ell$ and therefore has almost twice the slope compared to the curves for $\tau_{1/2}$. Once again we observe that for large enough $\ell$, larger $\chi_{(4)}$ leads to larger $\tau_{\rm crit}$. Finally, in fig. \[td4eerec\](c) we have shown the dependence of $\tau_{\rm crit}$ on the dimensionless ratio $\chi_{(4)}$, which gives a result similar to what we have observed earlier: specifically, we do seem to have two different regimes for $\chi_{(4)}$, distinguished by a faster or a slower thermalization corresponding to small and large values of $\chi_{(d)}$. A similar behavior is observed as $(T\ell)$ is varied for fixed values of $\chi_{(4)}$. ### Spherical region Now we will investigate the case of a spherical region. To this end, we parametrize the spherical disc in polar coordinates as in (\[polarcir\]) and represent the corresponding minimal area surface by $z(\rho)$ and $v(\rho)$. The corresponding volume functional is given by $$\begin{aligned} \cV = 4 \pi \int_0^R d\rho \frac{\rho^2}{z^3} \sqrt{1 - f v'^2 - 2 v' z'} \ ,\end{aligned}$$ where $'\equiv d/d\rho$. We follow methods similar to those outlined in section \[wlcir\] in obtaining the thermalization curves, and a few representative curves are shown in fig. \[eecir\]. We have demonstrated the behavior of the thermalization time with the diameter of the spherical region in fig. \[td4eecir\].
In this case, $\tau_{1/2}$ has a sub-linear behavior for the range of the diameter we have explored, for both small and relatively large values of $\chi_{(4)}$. The linear regime grows more slowly than $D/2$. Also, for the range of the diameter that we have explored, increasing $\chi_{(4)}$ decreases $\tau_{\rm 1/2}$. The behavior of $\tau_{\rm crit}$, on the other hand, is in agreement with the previously obtained qualitative features. For small values of $\chi_{(4)}$, $\tau_{\rm crit} \sim D/2$ and this linear behavior persists till larger values of $D$. However, as $\chi_{(4)}$ is increased, $\tau_{\rm crit}$ exceeds this linear behavior and the corresponding curve bends away. Summary and Discussion ====================== We have explored, in detail, the behavior of the thermalization time as the chemical potential is varied, by probing various non-local observables: the two-point function, the Wilson loop and the entanglement entropy. In this section, we briefly summarize and review some crucial features of the observations we made. For simplicity, we will only comment on the behavior of $\tau_{\rm crit}$, but the behavior of $\tau_{1/2}$ is qualitatively similar. At vanishing chemical potential, which in our notation would be denoted by $\chi_{(d)} = 0$, we reproduce the results obtained in [@Balasubramanian:2010ce; @Balasubramanian:2011ur]. The generic features in the absence of a chemical potential are the following: for the two-point function, the thermalization time $\tau_{\rm crit} = \ell/2$ in AdS$_3$ but $\tau_{\rm crit} < \ell/2$ in AdS$_{4,5}$ and deviates from linearity. The $\tau_{\rm crit} = \ell/2$ behavior is understood from the dual $(1+1)$-dim CFT. On the other hand, the deviation from linearity in higher dimensions is interpreted as an indicator of a faster-than-causal thermalization, possibly resulting from the homogeneity of the initial configuration [@Balasubramanian:2010ce].
In higher dimensions, the Wilson loop and the entanglement entropy also behave in a similar fashion. Before proceeding further, let us offer some comments on the operators that we considered. We have studied three different types of non-local operators as probes of thermalization. It is not [*a priori*]{} clear which of these operators provides the correct time-scale for thermalization. It can be explicitly checked (as in the case of vanishing chemical potential) that it is actually the entanglement entropy which thermalizes latest and thereby sets the equilibration time-scale. This is demonstrated in fig. \[eerocks\]. In the presence of a chemical potential the physics becomes richer. First, the relevant quantities to consider here are the following dimensionless combinations: $(T\tau_{\rm crit})$, $(T\ell)$ and $(\mu/T)$ — where $T$ is the temperature of the thermal background, $\ell$ is the length of the non-local operator, $\mu$ is the chemical potential and $\tau_{\rm crit}$ is the thermalization time. Thus, in general, $(T\tau_{\rm crit})$ is a function of the two independent variables $(T\ell)$ and $(\mu/T)$, and our goal is to explore this behavior. Based on general physics intuition, once thermalization has set in, we can identify the following different regimes $$\begin{aligned} T\ell \quad {\rm fixed, \, small:} \quad && \underbrace{\mu/T \gg 1} \quad {\rm and} \quad \underbrace{\mu/T \ll 1} \ , \\ &&{\rm quantum} \quad \quad \quad \quad {\rm classical} \nonumber\end{aligned}$$ and $$\begin{aligned} \mu/T \quad {\rm fixed, \, small:} \quad && \underbrace{T \ell \ll 1} \quad {\rm and} \quad \underbrace{T \ell \gg 1} \ . \\ &&{\rm quantum} \quad \quad \quad {\rm classical} \nonumber\end{aligned}$$ In the previous subsections, by analyzing the various probes of thermalization, we have seen qualitatively different features in precisely these “classical" and “quantum" regimes as defined above. 
The ubiquitous property that reveals itself through this exercise is that thermalization becomes faster for small values of $\mu/T$, whereas for large values of $\mu/T$ the thermalization time increases without any upper bound. These two regimes of the non-monotonic behavior are smoothly connected through a minimum, as depicted in [*e.g.*]{} figs. \[tcrit\_chi\_WLrec\](b), \[td4eerec\](c). This non-monotonicity is enhanced in higher dimensions.[^13] We should also note that the identification of the “classical" or the “quantum" regime is only meaningful when the system has completely thermalized. Although it seems that the thermalization process is sensitive to this equilibrium feature, it is not clear to us whether such a “classical" or a “quantum" regime actually exists during the non-equilibrium period. Let us also briefly comment on the functional dependence of $T\tau_{\rm crit}$ on the dimensionless combinations $T\ell$ and $\mu/T$. If we take constant $T\ell$-slices, our generic observation suggests a linear relation $$\begin{aligned} T\tau_{\rm crit} = A\left(d, T\ell \right) \frac{\mu}{T} \quad {\rm for} \quad \frac{\mu}{T} \gg 1 \ ,\end{aligned}$$ where $A(d, T\ell)$ is the slope of the curve, which generally depends on the dimensionality of the problem and the value of $T\ell$. On the other hand, if we take constant $(\mu/T)$-slices we get the following polynomial relation[^14] $$\begin{aligned} T\tau_{\rm crit} = B\left(d, \mu/T \right) \left(T\ell\right)^2 + C \left(d, \mu/T\right) \left(T \ell\right) \ ,\end{aligned}$$ where $B$ and $C$ are constants which depend on the dimensionality of the problem and the value of the chemical potential. Furthermore, it can also be checked explicitly that $B(d, \mu/T)<0$ for $\mu/T \ll 1$ and $B(d, \mu/T) > 0$ for $\mu/T \gg 1$, whereas $C(d, \mu/T)$ is always positive. For small values of the length of the operator, the thermalization time behaves linearly with the length, and the slope is given by $C$. 
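The coefficients in the quadratic relation above can be extracted by a least-squares fit on a constant-$(\mu/T)$ slice. The sketch below is purely illustrative: the data are synthetic and the coefficient values ($B=0.15$, $C=0.48$) are our own assumptions, not results from this work.

```python
import numpy as np

# Fit T*tau_crit = B (T*ell)^2 + C (T*ell) on a constant-(mu/T) slice.
# Synthetic, noise-free data with assumed coefficients B_true, C_true.
B_true, C_true = 0.15, 0.48
Tl = np.linspace(0.05, 1.0, 30)
Ttau = B_true * Tl**2 + C_true * Tl

# There is no constant term, so fit against the design matrix
# [Tl^2, Tl] rather than using an ordinary polynomial fit.
design = np.vstack([Tl**2, Tl]).T
(B_fit, C_fit), *_ = np.linalg.lstsq(design, Ttau, rcond=None)
```

On real thermalization data, a negative fitted $B$ at small $\mu/T$ and a positive one at large $\mu/T$ would reproduce the sub-/super-linear large-$\ell$ behavior described in the text.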
For large values of the length, $\tau_{\rm crit}$ is either sub-linear or super-linear depending on the sign of the constant $B$. We have shown that the presence of a chemical potential makes the physics of holographic thermalization richer. Some important qualitative differences appear, namely the non-monotonic behavior of the thermalization time as a function of $\mu/T$. Some other characteristics, like top-down thermalization (UV modes thermalize first) and the fact that it is the entanglement entropy that sets the thermalization scale, are common to the $\mu=0$ case. This is perhaps the tip of the iceberg, and there is a richer story waiting to be unfolded. Let us point out some open problems and future directions which would improve the understanding of holographic thermalization of theories with non-zero chemical potential: - [*Fortune favors the brave, or does it:*]{} The time-dependent background we have worked with is more “phenomenologically motivated" than one obtained from a “first principle" calculation, even from the point of view of classical gravity. One might therefore wonder about the validity or the applicability of the physics we observe within this framework. In this work we have implicitly assumed the existence of at least some approximation in which the physics we see holds true. In [@Bhattacharyya:2009uu] the authors showed that a dilaton source of small amplitude and finite duration at the boundary produces a wave that propagates in the bulk and collapses to form an AdS-Schwarzschild geometry. The spacetime for this collapse process was constructed as an expansion in the amplitude of the dilaton source and it was shown that it takes the Vaidya form at leading order. A fascinating question in its own right is the study of the gravitational collapse process that may give rise to the AdS-RN-Vaidya metric. Perhaps a natural starting point is to consider a generalization of [@Bhattacharyya:2009uu] with a charged dilaton field. 
Such an exercise will either establish the AdS-RN-Vaidya background on a firm footing, or we will learn some other interesting lesson. - The fact that the thermalization time increases with increasing chemical potential (at least for large enough values of the chemical potential, as we have observed) can be reconciled with weak-coupling field theory intuition. For a bosonic system, increasing the chemical potential “enhances" the Bose-Einstein condensation and therefore hinders the process of thermalization, which must populate excited states. For a system with fermions, increasing the chemical potential increases the available states by increasing the Fermi energy. Thus it is intuitive to conclude that a system with bosonic or fermionic degrees of freedom will thermalize more slowly if the chemical potential is increased.[^15] However, to the best of our knowledge, no field theory calculation (even in a toy model) demonstrates this physics explicitly. On the other hand, for small chemical potential the faster thermalization that we observed does not fit this naïve intuition. Thus a field theory computation at weak coupling would shed much light on the underlying physics, and we may be able to isolate the features governed by strong coupling from the physics governed by the presence of a chemical potential. - It would be interesting to study the spread and evolution of correlations in an out-of-equilibrium system which eventually thermalizes. In [@Balasubramanian:2011at] the authors pointed out that mutual and tripartite information are adequate probes of this problem and calculated these quantities holographically for a three-dimensional bulk theory dual to a $(1+1)$-dim CFT. Field theory computations of such observables are so far available only for the $(1+1)$-dim case. It would be an interesting exercise to generalize these computations to higher dimensions and analyze the corresponding physics. 
Furthermore, we have shown that a non-zero chemical potential introduces non-trivial new behavior in the thermalization of entanglement entropy. Thus, we expect that the mutual and tripartite information results will also be substantially modified when $\mu\ne 0$. - More phenomenological quantities can also be studied. For example, the stopping distance of a massless particle — related at weak coupling to the jet quenching parameter for QCD-like theories — was calculated in [@Arnold:2011qi] for an AdS$_5$-Schwarzschild black hole. An interesting question is how the stopping distance is modified in a time-dependent background undergoing thermalization, with and without a chemical potential [@wip]. We hope to address some of these problems in the near future. Acknowledgements ================ We would like to thank Willy Fischler and Berndt Müller for conversations and encouragement about this work. E.C. acknowledges support of CONACyT grant CB-2008-01-104649 and CONACyT's High Energy Physics Network. AK would like to thank the KITP, Santa Barbara for its hospitality during the workshop “Novel Numerical Methods for Strongly Coupled Quantum Field Theory and Quantum Gravity" and financial support in part by the National Science Foundation under Grant No. PHY11-25915. AK is supported by the Simons postdoctoral fellowship awarded by the Simons Foundation. This material is based upon work supported by the National Science Foundation under Grant PHY-0969020. Appendix A. 
Geodesics in the background {#appendix-a.-geodesics-in-the-background .unnumbered} ======================================= For the sake of completeness, let us first write down the AdS-RN-Vaidya background in general $(d+1)$-bulk dimensions (with $d>2$) $$\begin{aligned} \label{genvaidya} && ds^2 = \frac{L^2}{z^2} \left( - f(z,v) dv^2 - 2 dv dz + d\vec{x}^2 \right) \ , \quad A_v = q(v) z^{d-2} \ , \\ && f(z,v) = 1 - m(v) z^d + \frac{(d-2)q(v)^2}{(d-1) L^2} z^{2(d-1)} \ , \quad \Lambda = - \frac{d(d-1)}{2 L^2} \ ,\end{aligned}$$ which gives $$\begin{aligned} 2 \kappa T_{\mu\nu}^{\rm ext} = \left( \frac{d-1}{2} z^{d-1} \frac{dm}{dv} - \frac{d-2}{L^2} z^{2d-3} q \frac{dq}{dv} \right) \delta_{\mu v } \delta_{\nu v} \ , \quad \kappa J_{\rm ext}^\mu = (d-2) L^{d-3} \frac{dq}{dv} \delta^{\mu z} \ .\end{aligned}$$ Analyzing the general behavior of the geodesics or the minimal surfaces of various dimensions in the AdS-RN-Vaidya background is an interesting problem.[^16] For the rectangular cases we have considered in the main text, the equations of motion of such minimal surfaces take the following form $$\begin{aligned} \label{genmin} z'' + v'' f + z' v' \frac{\partial f}{\partial z} + \frac{1}{2} v'^2 \frac{\partial f}{\partial v} & = & 0 \ , \nonumber\\ z v'' + \left(2 p \right) z' v' - p + v'^2 \left(p f - \frac{1}{2} z \frac{\partial f}{\partial z} \right) & = & 0 \ , \end{aligned}$$ where $$\begin{aligned} p & = & 1 \quad \implies \quad {\rm geodesic} \ , \\ p & = & 2 \quad \implies \quad {\rm Wilson \, loop} \ , \\ p & = & 3 \quad \implies \quad {\rm Entanglement \, entropy} \ .\end{aligned}$$ On the other hand, the action functional for the minimal surface for the circular region is $$\begin{aligned} \label{gencir} \cS = \int_0^R d\rho \frac{\rho^{p-1}}{z^{p}} \sqrt{1 - f v'^2 - 2 z' v' } \ .\end{aligned}$$ The equations of motion obtained for the circular region are involved, hence we do not present their explicit form here. 
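For the rectangular cases, the coupled system (\[genmin\]) is simple enough to integrate directly by shooting from the tip of the surface: solve the second equation for $v''$, substitute into the first for $z''$, and integrate outward. The sketch below is our own minimal illustration, not the code used for the figures; for simplicity it sets the charge $q(v)=0$ (pure AdS-Vaidya), and the mass profile $m(v)$ and all parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative shooting integration of the strip equations (genmin)
# with q(v) = 0, d = 4 and an assumed smooth quench profile m(v).
d, M, v0 = 4, 1.0, 0.1

def m(v):
    return M * (1.0 + np.tanh(v / v0)) / 2.0

def dm(v):
    return M / (2.0 * v0 * np.cosh(v / v0)**2)

def f(z, v):
    return 1.0 - m(v) * z**d

def rhs(x, y, p):
    z, v, zp, vp = y
    F = f(z, v)
    Fz = -d * m(v) * z**(d - 1)        # df/dz
    Fv = -dm(v) * z**d                 # df/dv
    # second equation of (genmin) solved for v'':
    vpp = (p - 2.0 * p * zp * vp - vp**2 * (p * F - 0.5 * z * Fz)) / z
    # first equation of (genmin) solved for z'':
    zpp = -vpp * F - zp * vp * Fz - 0.5 * vp**2 * Fv
    return [zp, vp, zpp, vpp]

def boundary(x, y, p):                 # stop near the boundary z -> 0
    return y[0] - 1e-2
boundary.terminal = True

# Shoot from the tip: z(0) = z_*, v(0) = v_*, z'(0) = v'(0) = 0.
z_star, v_star, p = 0.5, 0.2, 1        # p = 1: spacelike geodesic
sol = solve_ivp(rhs, [0.0, 2.0], [z_star, v_star, 0.0, 0.0],
                args=(p,), rtol=1e-6, atol=1e-9, events=boundary)
x_end = sol.t_events[0][0]             # half of the boundary separation ell
```

In the static pure-AdS limit this reproduces the semicircular geodesic $z^2+x^2=z_*^2$, so $x_{\rm end}\approx z_*$ up to horizon-formation corrections; repeating the shot over a grid of $(z_*, v_*)$ and reading off $(\ell, t)$ at the endpoint builds the thermalization curves.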
We will briefly comment on a few general properties of the geodesics or the minimal surfaces obtained as solutions of (\[genmin\]). Before doing so, let us introduce the notion of an apparent horizon, which is relevant for the time-dependent backgrounds we have considered in the main text. A trapped surface $T$ is defined as a co-dimension two spacelike submanifold with the property that the expansion of both “ingoing" and “outgoing" future directed null geodesics orthogonal to $T$ is everywhere negative. The boundary of the trapped surfaces associated with a given foliation is defined as the apparent horizon. In what follows, we will closely follow [@Figueras:2009iu]. For the background in (\[genvaidya\]) the vectors tangent to the ingoing and outgoing null geodesics are given by $$\begin{aligned} l_- = - \partial_z \ , \quad l_+ = - \frac{z^2}{L^2} \partial_v + \frac{z^2}{2 L^2} f \partial_z \ \end{aligned}$$ such that $$\begin{aligned} l_- \cdot l_- = 0 \ , \quad l_+ \cdot l_+ = 0 \ , \quad l_- \cdot l_+ = -1 \ .\end{aligned}$$ Now the volume element of the co-dimension two spacelike surface (orthogonal to the above null geodesics) is given by $$\begin{aligned} \Sigma = \left(\frac{L}{z}\right)^{d-1} \ .\end{aligned}$$ The expansions are defined to be $$\begin{aligned} \theta_{\pm} = \cL_{\pm} \log \Sigma = l_{\pm}^\mu \partial_\mu \left( \log\Sigma \right) \ ,\end{aligned}$$ where $\cL_{\pm}$ denotes the Lie derivative along the null vectors $l_{\pm}$. The apparent horizon is then obtained by solving the equation $\Theta = 0$, where $\Theta = \theta_+ \theta_-$ is the invariant quantity. In this case we find that $$\begin{aligned} \label{apphori} \Theta = \frac{(d-1)^2}{2 L^2} f = 0 \end{aligned}$$ gives the location of the apparent horizon, which we can determine numerically. Now we are ready to present a few representative figures demonstrating what a geodesic or a minimal area surface looks like in our time-dependent backgrounds. 
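Before turning to the figures, note that the horizon condition (\[apphori\]) reduces to root-finding on $f(z,v)=0$ at each shell time $v$. The sketch below is our own illustration of this step; the profiles $m(v)$, $q(v)$ and all parameter values are assumptions, not the ones used for the figures.

```python
import numpy as np
from scipy.optimize import brentq

# Locate the apparent horizon of AdS-RN-Vaidya by solving f(z, v) = 0
# at each v; m(v), q(v) and all parameters are illustrative.
d, L, M, Q, v0 = 4, 1.0, 1.0, 0.5, 0.1

def m(v):
    return M * (1.0 + np.tanh(v / v0)) / 2.0

def q(v):
    return Q * (1.0 + np.tanh(v / v0)) / 2.0

def f(z, v):
    return (1.0 - m(v) * z**d
            + (d - 2) * q(v)**2 / ((d - 1) * L**2) * z**(2 * (d - 1)))

def apparent_horizon(v, z_max=5.0):
    """Smallest positive root of f(z, v) = 0, i.e. the horizon depth."""
    zs = np.linspace(1e-6, z_max, 2000)
    vals = f(zs, v)
    crossings = np.where(np.diff(np.sign(vals)) < 0)[0]
    if len(crossings) == 0:
        return None                    # no horizon yet at this v
    i = crossings[0]
    return brentq(f, zs[i], zs[i + 1], args=(v,))

z_h_late = apparent_horizon(10.0)      # settles to the static AdS-RN value
```

At early times ($v\ll -v_0$) the function returns `None`, reflecting that the horizon only forms as the shell falls in; tracing `apparent_horizon(v)` over a grid of $v$ gives the horizon curve overlaid on the geodesic plots.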
Such representative profiles are shown in fig. \[geotype\]. It is clear that depending on the boundary separation length $\ell$, the corresponding minimal surface may or may not penetrate the apparent horizon. If the corresponding geodesic or minimal surface penetrates the apparent horizon, the derivative of the curve becomes discontinuous. The curve consists of two parts, one inside the apparent horizon and the other outside of it. These two sections of the curve are connected according to an analogue of Snell’s law of refraction [@Balasubramanian:2011ur]. This discontinuity is visible from the plots in the $\{z(x) - x\}$ and the $\{z(x)- v(x)\}$ planes. An interesting feature that we have observed in [*e.g.*]{} fig. \[rnveerec\], and that was also pointed out in [@Albash:2010mv; @Balasubramanian:2011ur], is the swallow-tail behavior of the corresponding evolution function. This is a generic feature observed when computing the various probes of thermalization at large enough boundary separation, using solutions of the equations in (\[genmin\]) or of those obtained by minimizing the action functional in equation (\[gencir\]). This swallow-tail behavior results from a multi-valuedness in $z_*$ as a function of the boundary time $t$, as demonstrated in fig. \[multi\_demo\]. As has been pointed out in [@Balasubramanian:2011ur], this feature does not imply anything unphysical; we just need to be careful and follow the steepest descent procedure. Appendix B. Charged AdS black holes {#appendix-b.-charged-ads-black-holes .unnumbered} =================================== Here we will briefly comment on some properties of charged AdS black holes in four and five bulk dimensions and their embedding in $10$- or $11$-dimensional supergravity. Let us begin with the case $d=4$, [*i.e.*]{} five bulk dimensions. 
The relevant theory here is the $S^5$ reduction of type IIB supergravity [@Kim:1985ez; @Gunaydin:1984fk] truncated to the $\cN =2$ sector with a $U(1)^3$ symmetry. The corresponding five-dimensional theory [@Gunaydin:1984ak] includes three $U(1)$ gauge fields, denoted by $A_I$ ($I = 1, 2, 3$), and scalar fields $X_I$ (two of which are independent) subject to the constraint $X_1 X_2 X_3 =1$. This theory has a family of black hole solutions parametrized by three charges (corresponding to the three $U(1)$s) and a non-extremality parameter usually denoted by $\mu$ [@Behrndt:1998ns; @Behrndt:1998jd].[^17] Such black hole solutions take the following general form $$\begin{aligned} ds^2 & = & - \left(H_1 H_2 H_3\right)^{-2/3} f dt^2 + \left(H_1 H_2 H_3 \right)^{1/3} \left( r^2 d\Omega_{3,k}^2 + \frac{dr^2}{f} \right) \ , \\ H_I & = & 1 + \frac{q_I}{r^2} \ , \quad f = k - \frac{\mu}{r^2} + \frac{r^2}{L^2} \left(H_1 H_2 H_3 \right) \ , \quad A_I = \left(H_I^{-1} -1 \right) dt \ , \end{aligned}$$ where $k$ takes the value $0$ or $1$ depending on whether we work in the Poincaré patch or the global patch; the line element denoted by $d\Omega_{3,0}^2$ corresponds to that of $\mathbb{R}^3$. The three charges $q_I$, when uplifted to $10$-dim type IIB supergravity, correspond to angular momenta along three Cartan directions on the five-sphere. We have considered only the Poincaré-patch solution in the main text. Furthermore, the AdS-RN solution that we have considered in [*e.g.*]{} equation (\[RN\]) corresponds to the case $q_1 = q_2 = q_3 =q$. 
In this case, the scalars become trivial and the background metric takes a more familiar form $$\begin{aligned} ds^2 & = & - \frac{\rho^2}{L^2} g(\rho) dt^2 + \frac{\rho^2}{L^2} d\vec{x}^2 + \frac{L^2}{\rho^2} \frac{d\rho^2}{g(\rho)} \ , \quad g(\rho) = 1 - \frac{\mu L^2}{\rho^4} + \frac{\mu q L^2}{\rho^6} \ , \\ A & = & \left( \frac{q}{\rho_H^2} - \frac{q}{\rho^2} \right) dt \ ,\end{aligned}$$ where we have defined $\rho^2 = r^2 + q$ and $\rho_H$ is the location of the event horizon. We can obtain the background written in equation (\[RN\]) from the above expression by the identifications $$\begin{aligned} z = \frac{L}{\rho} \ , \quad M = \frac{\mu}{L^2} \ , \quad Q^2 = \frac{3 \mu q}{2 L^2} \ .\end{aligned}$$ It was noted in [@Behrndt:1998jd] that, in order to have a horizon, the non-extremality parameter $\mu$ must satisfy a lower bound which can be obtained analytically. This means that for a given $\mu$, there is an upper bound on the charge beyond which there is no horizon. This fact is reflected in the main text in [*e.g.*]{} fig. \[Tvsq\]. If we violate this bound, then a naked singularity appears [@Behrndt:1998jd]. As shown in [@Myers:2001aq], this naked singularity has a natural meaning within string theory as an ensemble of giant gravitons distributed over the compact $S^5$. For more details, we refer the reader to [@Myers:2001aq]. Note, however, that even within this bound the relevant dimensionless quantity $\chi_{(d)}$, defined in equation (\[ratio\]), has the range $\chi_{(d)} \in [0, \infty)$. Thus the existence of an upper bound on the charge $Q$ does not imply any corresponding bound on the chemical potential measured in units of an appropriate power of the temperature. It is clear that if the charge violates this upper bound, then the corresponding physics does not describe a thermal state in the dual field theory. 
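The identification above can be verified symbolically: substituting $z=L/\rho$, $M=\mu/L^2$ and $Q^2=3\mu q/(2L^2)$ into $g(\rho)$ should reproduce $f(z)=1-Mz^4+\frac{2Q^2}{3L^2}z^6$, the $d=4$ case of (\[genvaidya\]) with static $m$ and $q$. A quick check:

```python
import sympy as sp

# Verify that g(rho) of the equal-charge STU black hole equals the
# d = 4 blackening factor f(z) under z = L/rho, M = mu/L^2,
# Q^2 = 3*mu*q/(2*L^2).
L, mu, q, z = sp.symbols('L mu q z', positive=True)

rho = L / z
g = 1 - mu * L**2 / rho**4 + mu * q * L**2 / rho**6

M = mu / L**2
Q2 = 3 * mu * q / (2 * L**2)
f = 1 - M * z**4 + sp.Rational(2, 3) * Q2 / L**2 * z**6

assert sp.simplify(g - f) == 0
```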
Our primary focus in this article was to study the thermalization process by analyzing various probes of thermalization. It is intriguing to ask what happens to these probes if we fine-tune the mass parameter $M$ and the charge parameter $Q$ such that the background cannot form a horizon. In this case, the probes should never thermalize. Following the procedure explained in the main text, we can obtain the behavior of the probes of thermalization in such a case, and one representative plot is shown in fig. \[notherm\]. It is clear from fig. \[notherm\] that $\cL_{\rm therm}$ initially follows a behaviour similar to the cases that do eventually thermalize; however, after a certain amount of time has elapsed, the curve turns back and never achieves thermal equilibrium. For larger values of $Q$ (with $M$ fixed), this “turn over" behavior is further enhanced. Note that in the absence of a horizon it is not entirely clear to us what the corresponding non-local observables represent; hence we present just one representative plot and refrain from any further claims. Similarly, it is also possible to consider the $S^7$-reduction of $11$-dimensional supergravity, which gives rise to $SO(8)$ gauged $\cN =8$ supergravity in four dimensions. As in the five-dimensional case, it is possible to take an $\cN=2$ consistent truncation whose bosonic sector consists of a graviton, four Abelian gauge potentials, three axions and three dilatons. 
This theory also has charged black hole solutions parametrized by four charge parameters and is given by $$\begin{aligned} ds^2 & = & - \left(H_1H_2H_3H_4\right)^{-1/2} f dt^2 + \left(H_1H_2H_3H_4\right)^{1/2} \left(r^2 d\Omega_{2,k}^2 + \frac{dr^2}{f} \right) \ , \\ H_I & = & 1 + \frac{q_I}{r} \ , \quad f = k - \frac{\mu}{r} + \frac{r^2}{L^2} \left(H_1 H_2 H_3 H_4\right) \ , \quad A_I = \left(H_I^{-1} -1 \right) dt \ , \\ I & = & 1, \ldots, 4 \ .\end{aligned}$$ This class of charged black holes can be embedded in M-theory in terms of rotating M2-branes. It is again straightforward to check that the usual AdS-RN background is recovered with four charges set equal. [99]{} D. T. Son and A. O. Starinets, “Viscosity, Black Holes, and Quantum Field Theory,” Ann. Rev. Nucl. Part. Sci.  [**57**]{}, 95 (2007) \[arXiv:0704.0240 \[hep-th\]\]. V. E. Hubeny and M. Rangamani, “A Holographic view on physics out of equilibrium,” Adv. High Energy Phys.  [**2010**]{}, 297916 (2010) \[arXiv:1006.3675 \[hep-th\]\]. S. Bhattacharyya and S. Minwalla, “Weak Field Black Hole Formation in Asymptotically AdS Spacetimes,” JHEP [**0909**]{}, 034 (2009) \[arXiv:0904.0464 \[hep-th\]\]. V. Balasubramanian [*et al.*]{}, “Thermalization of Strongly Coupled Field Theories,” Phys. Rev. Lett.  [**106**]{}, 191601 (2011) \[arXiv:1012.4753 \[hep-th\]\]. V. Balasubramanian [*et al.*]{}, “Holographic Thermalization,” Phys. Rev.  D [**84**]{}, 026010 (2011) \[arXiv:1103.2683 \[hep-th\]\]. D. Grumiller and P. Romatschke, “On the collision of two shock waves in AdS(5),” JHEP [**0808**]{}, 027 (2008) \[arXiv:0803.3226 \[hep-th\]\]. S. S. Gubser, S. S. Pufu and A. Yarom, “Entropy production in collisions of gravitational shock waves and of heavy ions,” Phys. Rev. D [**78**]{}, 066014 (2008) \[arXiv:0805.1551 \[hep-th\]\]. J. L. Albacete, Y. V. Kovchegov and A. Taliotis, “Modeling Heavy Ion Collisions in AdS/CFT,” JHEP [**0807**]{}, 100 (2008) \[arXiv:0805.2927 \[hep-th\]\]. L. Alvarez-Gaume, C. Gomez, A. 
Sabio Vera, A. Tavanfar and M. A. Vazquez-Mozo, “Critical formation of trapped surfaces in the collision of gravitational shock waves,” JHEP [**0902**]{}, 009 (2009) \[arXiv:0811.3969 \[hep-th\]\]. S. Lin and E. Shuryak, “Grazing Collisions of Gravitational Shock Waves and Entropy Production in Heavy Ion Collision,” Phys. Rev. D [**79**]{}, 124015 (2009) \[arXiv:0902.1508 \[hep-th\]\]. J. L. Albacete, Y. V. Kovchegov and A. Taliotis, “Asymmetric Collision of Two Shock Waves in AdS(5),” JHEP [**0905**]{}, 060 (2009) \[arXiv:0902.3046 \[hep-th\]\]. S. S. Gubser, S. S. Pufu and A. Yarom, “Off-center collisions in AdS(5) with applications to multiplicity estimates in heavy-ion collisions,” JHEP [**0911**]{}, 050 (2009) \[arXiv:0902.4062 \[hep-th\]\]. Y. V. Kovchegov and S. Lin, “Toward Thermalization in Heavy Ion Collisions at Strong Coupling,” JHEP [**1003**]{}, 057 (2010) \[arXiv:0911.4707 \[hep-th\]\]. Y. V. Kovchegov, “Shock Wave Collisions and Thermalization in AdS$_5$,” Prog. Theor. Phys. Suppl.  [**187**]{}, 96 (2011) \[arXiv:1011.0711 \[hep-th\]\]. P. M. Chesler and L. G. Yaffe, “Holography and colliding gravitational shock waves in asymptotically AdS$_5$ spacetime,” Phys. Rev. Lett.  [**106**]{}, 021601 (2011) \[arXiv:1011.3562 \[hep-th\]\]. I. Y. Aref’eva, A. A. Bagrov and L. V. Joukovskaya, “Critical Trapped Surfaces Formation in the Collision of Ultrarelativistic Charges in (A)dS,” JHEP [**1003**]{} (2010) 002 \[arXiv:0909.1294 \[hep-th\]\]. I. Y. Aref’eva, A. A. Bagrov and E. O. Pozdeeva, “Holographic phase diagram of quark-gluon plasma formed in heavy-ions collisions,” arXiv:1201.6542 \[hep-th\]. S. Caron-Huot, P. M. Chesler and D. Teaney, “Fluctuation, dissipation, and thermalization in non-equilibrium AdS$_5$ black hole geometries,” Phys. Rev. D [**84**]{}, 026012 (2011) \[arXiv:1102.1073 \[hep-th\]\]. D. Galante and M. Schvellinger, “Thermalization with a chemical potential from AdS spaces,” arXiv:1205.1548 \[hep-th\]. K. Behrndt, M. Cvetic and W. A. 
Sabra, “Nonextreme black holes of five-dimensional N=2 AdS supergravity,” Nucl. Phys.  B [**553**]{}, 317 (1999) \[arXiv:hep-th/9810227\]. K. Jensen, “Chiral anomalies and AdS/CMT in two dimensions,” JHEP [**1101**]{}, 109 (2011) \[arXiv:1012.4831 \[hep-th\]\]. K. Behrndt, A. H. Chamseddine, W. A. Sabra, “BPS black holes in N=2 five-dimensional AdS supergravity,” Phys. Lett.  [**B442**]{}, 97-101 (1998). \[hep-th/9807187\]. M. Cvetic and S. S. Gubser, “Phases of R charged black holes, spinning branes and strongly coupled gauge theories,” JHEP [**9904**]{}, 024 (1999) \[hep-th/9902195\]. A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, “Charged AdS black holes and catastrophic holography,” Phys. Rev. D [**60**]{}, 064018 (1999) \[hep-th/9902170\]. S. A. Hartnoll, “Lectures on holographic methods for condensed matter physics,” Class. Quant. Grav.  [**26**]{}, 224002 (2009) \[arXiv:0903.3246 \[hep-th\]\]. V. Balasubramanian and S. F. Ross, “Holographic particle detection,” Phys. Rev.  D [**61**]{}, 044007 (2000) \[arXiv:hep-th/9906226\]. J. M. Maldacena, “Wilson loops in large N field theories,” Phys. Rev. Lett.  [**80**]{}, 4859 (1998) \[arXiv:hep-th/9803002\]. D. E. Berenstein, R. Corrado, W. Fischler and J. M. Maldacena, “The Operator product expansion for Wilson loops and surfaces in the large N limit,” Phys. Rev. D [**59**]{}, 105023 (1999) \[hep-th/9809188\]. S. Ryu and T. Takayanagi, “Holographic derivation of entanglement entropy from AdS/CFT,” Phys. Rev. Lett.  [**96**]{}, 181602 (2006) \[hep-th/0603001\]. V. E. Hubeny, M. Rangamani and T. Takayanagi, “A Covariant holographic entanglement entropy proposal,” JHEP [**0707**]{}, 062 (2007) \[arXiv:0705.0016 \[hep-th\]\]. T. Nishioka, S. Ryu and T. Takayanagi, “Holographic Entanglement Entropy: An Overview,” J. Phys. A A [**42**]{}, 504008 (2009) \[arXiv:0905.0932 \[hep-th\]\]. J. Abajo-Arrastia, J. Aparicio and E. 
Lopez, “Holographic Evolution of Entanglement Entropy,” JHEP [**1011**]{}, 149 (2010) \[arXiv:1006.4090 \[hep-th\]\]. T. Albash and C. V. Johnson, “Evolution of Holographic Entanglement Entropy after Thermal and Electromagnetic Quenches,” New J. Phys.  [**13**]{}, 045017 (2011) \[arXiv:1008.3027 \[hep-th\]\]. V. Balasubramanian, A. Bernamonti, N. Copland, B. Craps and F. Galli, “Thermalization of mutual and tripartite information in strongly coupled two dimensional conformal field theories,” Phys. Rev. D [**84**]{}, 105017 (2011) \[arXiv:1110.0488 \[hep-th\]\]. P. Arnold and D. Vaman, “Jet quenching in hot strongly coupled gauge theories simplified,” JHEP [**1104**]{}, 027 (2011) \[arXiv:1101.2689 \[hep-th\]\]. Work in progress. V. E. Hubeny, “Extremal surfaces as bulk probes in AdS/CFT,” arXiv:1203.1044 \[hep-th\]. P. Figueras, V. E. Hubeny, M. Rangamani and S. F. Ross, “Dynamical black holes and expanding plasmas,” JHEP [**0904**]{}, 137 (2009) \[arXiv:0902.4696 \[hep-th\]\]. H. J. Kim, L. J. Romans and P. van Nieuwenhuizen, “The Mass Spectrum of Chiral N=2 D=10 Supergravity on S\*\*5,” Phys. Rev. D [**32**]{}, 389 (1985). M. Gunaydin and N. Marcus, “The Spectrum of the s\*\*5 Compactification of the Chiral N=2, D=10 Supergravity and the Unitary Supermultiplets of U(2, 2/4),” Class. Quant. Grav.  [**2**]{}, L11 (1985). M. Gunaydin, G. Sierra and P. K. Townsend, “Gauging the d = 5 Maxwell-Einstein Supergravity Theories: More on Jordan Algebras,” Nucl. Phys. B [**253**]{}, 573 (1985). R. C. Myers and O. Tafjord, “Superstars and giant gravitons,” JHEP [**0111**]{}, 009 (2001) \[hep-th/0109127\]. [^1]: Or, should it be [*herself*]{}? [^2]: The $S^5$ reduction of type IIB supergravity yields an $\cN=8$ gauged supergravity in $d=4$ with $SO(6)$ gauge group. This admits a consistent $\cN=2$ truncation coupled to two Abelian vector multiplets with an $U(1)^3$ Cartan subgroup of the full $SO(6)$. In general this admits a three charge black hole solution. 
Similarly, the $S^7$-reduction of $11$-dimensional supergravity admits an $\cN=2$ consistent truncation with $U(1)^4$ gauge group, which in general admits a four-charge black hole solution. See [*e.g.*]{} [@Behrndt:1998jd] for a discussion of such solutions. Some features of such solutions are also discussed in Appendix B. [^3]: The case of $d=2$ is rather special. In this case, the identification of the source and the VEV is subtle, and the chemical potential should be identified with the sub-leading term as $z\to 0$: $\mu \equiv Q \log \left( z_H / L\right)$. For more details on this issue, see [@Jensen:2010em]. [^4]: This is true for all cases except $d=2$, where $Q$ can be as large as possible. [^5]: Note that introducing the Eddington-Finkelstein coordinate in (\[EF\]) yields a gauge field of the form: $A_v = A(z)$, $A_z = A(z)/f(z)$, where $A(z)$ is given in (\[RN\]). We can make a gauge choice such that all physical information is contained in $A_v(z)$ only. [^6]: The length integral is perfectly convergent, therefore we can take $z_0 = 0$ without any issue. [^7]: Note that in [@Balasubramanian:2011ur], the authors include a finite piece $2\log 2$ in the subtracted part. This does not change the qualitative behaviour since it is a constant. [^8]: Typically we have used $\epsilon \sim 10^{-3}$. [^9]: This solution was obtained in [@Berenstein:1998ij] by taking an infinite straight Wilson line and then applying a conformal transformation to map the line to a circle. [^10]: Note that here we are not careful about the regularity of the one-form $A_v$ at any particular point, since the horizon is created only in the $v\to \infty$ limit. Perhaps a better way to write the background is in terms of the field strength instead of the gauge field itself. For all our purposes, though, this gauge field does not play any role. [^11]: A general form of this background in $(d+1)$ dimensions is given in equation (\[genvaidya\]). 
[^12]: The only difference is that we now have to set the boundary conditions for two functions, $z(\rho)$ and $v(\rho)$. [^13]: Although we have not considered the case of $d=2$, which corresponds to a $(1+1)$-dim dual CFT, we have explicitly checked that the non-monotonic behavior is further suppressed for $d=2$. For small values of $\mu/T$, the variation in $T\tau_{\rm crit}$ is negligible, and within the numerical accuracy it is difficult to conclude whether the thermalization time increases or decreases. For larger values of $\mu/T$, the dependence is similar to what we see in the higher-dimensional examples, and $T\tau_{\rm crit}$ increases linearly with $\mu/T$. [^14]: This can be verified by explicitly fitting the data. [^15]: We would like to thank Berndt Müller for suggesting this possibility. [^16]: For a recent exhaustive study of such $n$-dimensional extremal surfaces in general static asymptotically AdS backgrounds of various dimensions, see [*e.g.*]{} [@Hubeny:2012ry]. [^17]: This non-extremality parameter $\mu$ is not to be confused with the chemical potential that we have defined in the main text.
--- abstract: 'Recent measurements of the small-$x$ deep-inelastic regime at HERA translate to new expectations for the neutrino-nucleon cross section at ultrahigh energies. We present event rates for large underground neutrino telescopes based on the new cross section for a variety of models of neutrino production in Active Galactic Nuclei, and we compare these rates with earlier cross section calculations.' address: - 'Mehta Research Institute, 10, Kasturba Gandhi Marg, Allahabad 211002, India' - 'Fermi National Accelerator Laboratory, Batavia, IL 60510 USA' - 'Department of Physics and Astronomy, University of Iowa, Iowa City, IA 52242 USA' - 'Department of Physics, University of Arizona, Tucson, AZ 85721 USA' author: - 'Raj Gandhi, Chris Quigg, M. H. Reno, and Ina Sarcevic[^1]' title: New Predictions for Neutrino Telescope Event Rates --- \#1\#2 \#1\#2[\#1\#2]{} \#1\#2\#3 Neutrino telescopes such as AMANDA, BAIKAL, DUMAND II and NESTOR [@Detect] have the potential to extend the particle physics frontier beyond the standard model, as well as probe stars and galaxies. At ultrahigh energies ($E_\nu > 1$ TeV), neutrinos are decay products of pions produced in cosmic ray interactions. Undeflected by magnetic fields and with long interaction lengths, neutrinos can reveal information about astrophysical sources. Gamma-rays, on the other hand, get absorbed by a few hundred gm of material. Active Galactic Nuclei (AGNs) [@review] may be prodigious sources of high energy neutrinos as well as gamma-rays. Neutrino telescopes span a significant fraction of the sky at all times, making the observation of neutrino interactions in or near the detector feasible. If the most optimistic flux predictions are accurate, observations of AGNs via neutrino telescopes may be imminent. Here we present predictions of event rates for several models of the AGN neutrino flux [@stecker; @mannheim; @sp]. 
We also compare the predicted rates with the atmospheric neutrino background (ATM), [*i.e.*]{}, neutrinos produced by cosmic ray interactions in the atmosphere [@volkova]. These rates reflect a new calculation of the neutrino-nucleon cross section which follows from recent results from the HERA $ep$ collider [@He]. To reduce the background from muons produced in the atmosphere, it is sufficient to consider the upward-going muons produced below the detector via $\nu_\mu$ ($\bar\nu_\mu$)-N interactions. We also give predictions for downward-moving (contained) muon event rates due to $\bar\nu_e e$ interactions in the PeV range. The importance of the HERA experimental results for neutrino-nucleon cross sections is in measurements of structure functions at small parton momentum fractions $x$ and large momentum transfers $Q$. At ultrahigh energies, the $\nu N$ cross section is dominated by $Q\sim M_W$, the mass of the $W$-boson. Consequently, $x\sim M_W^2/(2ME_\nu\langle y \rangle)$, in terms of the nucleon mass $M$, and $x$ decreases as the incident neutrino energy $E_\nu$ increases. HERA results cover the interval $10^{-4}\lesssim x\lesssim 10^{-2}$ and $8.5~{\rm GeV}^2 \lesssim Q^2\lesssim 15~{\rm GeV}^2$ [@He], and guide theoretical small-$x$ extrapolations of the parton distributions at even smaller values of $x$. Compared to pre-HERA cross section calculations [@Rq], the new cross section is approximately a factor of four to ten larger at the highest energies ($E_\nu=10^9$ TeV) [@Gqrs]. The range of values reflects different parton distribution function parameterizations and extrapolations below $x=10^{-5}$, all consistent with the data at higher $x$. Since the larger cross section also implies greater attenuation of neutrinos in the Earth, upward-muon event rates for neutrino energy thresholds in the 1-10 TeV range are only $15\%$ larger than previous results based on old cross sections [@Rq].
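The scaling $x\sim M_W^2/(2ME_\nu\langle y \rangle)$ can be made concrete with a short numerical sketch. The mean inelasticity $\langle y\rangle\approx 0.5$ and the mass values below are illustrative inputs of our own, not numbers taken from the text:

```python
# Typical parton momentum fraction probed in UHE neutrino-nucleon scattering,
# x ~ M_W^2 / (2 M E_nu <y>).  The mean inelasticity <y> ~ 0.5 and the mass
# values are illustrative inputs, not values quoted in the text.
M_W = 80.4     # W-boson mass [GeV]
M = 0.938      # nucleon mass [GeV]
y_avg = 0.5    # assumed mean inelasticity

def typical_x(E_nu_GeV):
    return M_W ** 2 / (2.0 * M * E_nu_GeV * y_avg)

for E_TeV in (1e3, 1e6, 1e9):
    print(f"E_nu = {E_TeV:.0e} TeV  ->  x ~ {typical_x(E_TeV * 1e3):.1e}")
```

With these inputs, $E_\nu=10^9$ TeV probes $x\sim 7\times 10^{-9}$, far below the HERA-measured region, which is why the extrapolations below $x=10^{-5}$ dominate the spread in the cross section.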
The attenuation of neutrinos in the Earth is described by a shadow factor $S(E_\nu)$, equivalent to the effective solid angle for upward muons, normalized to $2\pi$: $${d S(E_\nu)\over d\Omega}={1\over 2\pi} \exp \Bigl( -z(\theta )N_A \sigma_{\nu N}(E_\nu) \Bigr),$$ where $N_A=6.022\times 10^{23}$ mol$^{-1}=6.022\times 10^{23}$ cm$^{-3}$ (water equivalent) is Avogadro’s number, and $z(\theta)$ is the column depth of the earth, in water equivalent units, which depends on zenith angle [@prem]. The probability that the neutrino converts to a muon that arrives at the detector with $E_\mu$ larger than the threshold energy $E_\mu^{\rm min}$ is proportional to the cross section: $$P_\mu(E_\nu,E_\mu^{\rm min}) = \sigma_{\rm CC}(E_\nu) N_A \langle R(E_\nu,E_\mu^{\rm min} )\rangle ,$$ where the average muon range in rock is $\langle R\rangle$ [@slip]. A more detailed discussion appears in Ref. [@Gqrs]. The diffuse flux of AGN neutrinos, summed over all AGN sources, is isotropic, so the event rate is $$\begin{aligned} {\rm Rate} = A \int dE_\nu P_\mu(E_\nu,E_\mu^{\rm min}) S(E_\nu){dN_\nu\over dE_\nu} ,\end{aligned}$$ given a neutrino spectrum $dN_\nu/dE_\nu$ and detector cross sectional area $A$. As the cross section increases, $P_\mu$ increases, but the effective solid angle decreases. In Tables 1 and 2, we show the event rates for a detector with $A=0.1$ km$^2$ for $E_\mu^{\rm min}=1$ TeV and $E_\mu^{\rm min}=10$ TeV, respectively. These event rates are for upward muons and antimuons with two choices of parton distribution functions: EHLQ parton distribution functions [@Ehlq] used in Ref. [@Rq], and the parton distributions parameterized by the CTEQ Collaboration [@CTEQ], coming from a global fit that includes the HERA data. The muon range is that of Ref. [@slip]. 
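The structure of the rate integral can be sketched numerically. Every parameterization below ($P_\mu$, $S$, and the flux) is a toy placeholder of our own, not the fitted cross section, attenuation, or AGN flux used in the text; only the shape of the computation is meant to match:

```python
import math

# Toy numerical sketch of Rate = A * \int dE P_mu(E) S(E) dN/dE.
# P_mu, S and flux below are illustrative placeholders, not fitted inputs.
def P_mu(E):                      # conversion probability, toy growth law
    return 1e-6 * (E / 1e3) ** 0.4

def S(E):                         # angle-averaged shadow factor, toy form
    return math.exp(-E / 1e7)     # E in GeV

def flux(E):                      # generic E^-2 AGN-like spectrum, toy norm
    return 1e-8 * E ** -2

def rate(A_cm2, E_min, E_max, n=2000):
    # midpoint sum on a logarithmic energy grid, dE = E d(ln E)
    lo, hi = math.log(E_min), math.log(E_max)
    total = 0.0
    for i in range(n):
        E = math.exp(lo + (hi - lo) * (i + 0.5) / n)
        total += P_mu(E) * S(E) * flux(E) * E * (hi - lo) / n
    return A_cm2 * total

print(rate(1e9, 1e3, 1e9))        # A = 0.1 km^2 = 1e9 cm^2
```

The competition described in the text is visible here: a larger cross section raises $P_\mu$ but steepens the attenuation in $S$, so the two effects partially cancel in the integrand.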
Fluxes EHLQ CTEQ-DIS --------------------- ------ ---------- AGN-SS [@stecker] 82 92 AGN-NMB [@mannheim] 100 111 AGN-SP [@sp] 2660 2960 ATM [@volkova] 126 141 : Number of upward $\mu+\bar{\mu}$ per year per steradian for $A=0.1$ km$^2$ and $E_\mu^{\rm min}= 1$ TeV. Fluxes EHLQ CTEQ-DIS --------------------- ------ ---------- AGN-SS [@stecker] 46 51 AGN-NMB [@mannheim] 31 34 AGN-SP [@sp] 760 843 ATM [@volkova] 3 3 : As in Table 1, but for $E_\mu^{\rm min}=10$ TeV. The representative fluxes in Tables 1 and 2 can be approximated by a simple power law behavior for $E_\nu<100$ TeV. For $dN/dE_\nu\propto E^{-\gamma}$, the fluxes can be approximated by $\gamma=0$ (AGN-SS), $\gamma = 2$ (AGN-NMB and AGN-SP) and $\gamma = 3.6$ (ATM). The AGN-SP rate is large compared to the AGN-NMB rate because additional mechanisms are included. Flux limits from the Fréjus experiment are inconsistent with the SP flux for 1 TeV$< E_\nu <$ 10 TeV [@frejus]. The flatter neutrino spectra have larger contributions to the event rate for muon energies away from the threshold muon energy than the steep atmospheric flux. For the 10 TeV muon energy threshold, the atmospheric neutrino background is significantly reduced. Finally we consider event rates from electron neutrino and antineutrino interactions. For $\nu_eN$ (and $\bar{\nu}_eN$) interactions, the cross sections are identical to the muon neutrino (antineutrino) nucleon cross sections. Because of the rapid energy loss or annihilation of electrons and positrons, it is generally true that only contained events can be observed. Since electron neutrino fluxes are small, unrealistically large effective volume is needed to get measurable event rates. The exception is at $E_\nu=6.3$ PeV, precisely the energy for resonant $W$-boson production in $\bar{\nu}_e e$ collisions. 
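The 6.3 PeV figure follows from kinematics alone: the Glashow resonance requires $s = 2m_e E_{\bar\nu} = M_W^2$ for an antineutrino striking an electron at rest. A quick check, using standard PDG-like mass values (our inputs, not numbers quoted above):

```python
# Resonance energy for anti-nu_e + e -> W: s = 2 m_e E_nu = M_W^2.
# Mass values are standard PDG-like inputs.
M_W = 80.4e9     # eV
m_e = 0.511e6    # eV
E_res = M_W ** 2 / (2.0 * m_e)
print(E_res / 1e15, "PeV")   # ~ 6.3
```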
The contained event rate for resonant $W$ production is $${\rm Rate} = {10\over 18} V_{\rm eff} N_A \int dE_{\bar{\nu}} \sigma_{\bar{\nu} e}(E_{\bar{\nu}}) S(E_{\bar{\nu}}){dN\over dE_{\bar{\nu}}} .$$ We show event rates for resonant $W$-boson production in Table 3.

  Mode                               AGN-SS [@stecker]   AGN-SP [@sp]
  ---------------------------------- ------------------- --------------
  $W\rightarrow \bar{\nu}_\mu \mu$   6                   3
  $W\rightarrow {\rm hadrons}$       41                  19
  $(\nu_\mu,\bar\nu_\mu)$ N CC       33 (7)              19 (4)
  $(\nu_\mu,\bar\nu_\mu)$ N NC       13 (3)              7 (1)

  : Downward resonance $\bar\nu_e e\rightarrow W^-$ events per year per steradian for a detector with effective volume $V_{\rm eff}=1$ km$^3$, together with the potential downward (upward) background from $\nu_\mu$ and $\bar\nu_\mu$ interactions above 3 PeV.

From Table 3 we note that a 1 km$^3$ detector with an energy threshold in the PeV range would be suitable for detecting resonant $\bar\nu_e e\rightarrow W^-$ events; however, the $\nu_\mu N$ background may be difficult to overcome. By placing the detector a few km underground, one can reduce the atmospheric-muon background, which is 5 events per year per steradian at the surface of the Earth. In summary, we find that detectors such as DUMAND II, AMANDA, BAIKAL and NESTOR have a very good chance of being able to test different models for neutrino production in the AGNs [@review]. For $E_{\mu}^{\rm min}=1$ TeV, we find that the range of theoretical fluxes leads to event rates of 900-29,600 upward-moving muons/yr/km$^2$/sr originating from the diffuse AGN neutrinos, with an atmospheric background of 1400 events/yr/km$^2$/sr. For $E_{\mu}^{\rm min}=10$ TeV, the signal to background ratio becomes even better, with signals on the order of 500-8,400 events/yr/km$^2$/sr, a factor $\sim$20-300 higher than the background rate. For neutrino energies above $3$ PeV there is a significant contribution to the muon rate from $\bar \nu_e$ interactions with electrons, due to the $W$-resonance contribution.
We find that acoustic detectors with a 3 PeV threshold and an effective volume of 0.2 km$^3$, such as DUMAND, would detect 48 hadronic cascades per year from $W \rightarrow$ hadrons, 7 events from $W \rightarrow \mu \bar\nu_{\mu}$ and 36 events from $\nu_{\mu}$ and $\bar \nu_{\mu}$ interactions, with virtually no background from ATM neutrinos.

[99]{}

J. Babson [*et al.*]{} (DUMAND Collaboration), [ Phys. Rev. ]{} [D42]{} (1990) 3613; [ Proceedings of the NESTOR workshop at Pylos, Greece]{}, ed. L. K. Resvanis (University of Athens, 1993); D. Lowder [*et al.*]{}, [ Nature]{} [ 353]{} (1991) 331.

See §7 of T. K. Gaisser, F. Halzen and T. Stanev, [ Phys. Rep.]{} [ 258]{} (1995) 173 for a review of several models.

F. W. Stecker, C. Done, M. H. Salamon and P. Sommers, [ Phys. Rev. Lett.]{} [ 66]{} (1991) 2697; Erratum: [ Phys. Rev. Lett.]{} [ 69]{} (1992) 2738. Revised estimates of the neutrino flux appear in F. W. Stecker and M. H. Salamon, astro-ph/9501064, submitted to [Space Sci. Rev.]{}

L. Nellen, K. Mannheim and P. L. Biermann, [Phys. Rev.]{} [ D47]{} (1993) 5270.

A. P. Szabo and R. J. Protheroe, [ Astropart. Phys.]{} [ 2]{} (1994) 375.

L. V. Volkova, [ Yad. Fiz.]{} [ 31]{} (1980) 1510 ([ Sov. J. Nucl. Phys.]{} [ 31]{} (1980) 784).

ZEUS Collaboration, M. Derrick [*et al.*]{}, [ Phys. Lett.]{} [ B316]{} (1993) 412; H1 Collaboration, I. Abt [*et al.*]{}, [ Nucl. Phys.]{} [ B407]{} (1993) 515.

C. Quigg, M. H. Reno and T. Walker, [Phys. Rev. Lett.]{} [ 57]{} (1986) 774; M. H. Reno and C. Quigg, [Phys. Rev.]{} [ D37]{} (1988) 657.

R. Gandhi, C. Quigg, M. H. Reno and I. Sarcevic, hep-ph/9512364, submitted to [ Astropart. Phys.]{}

A. Dziewonski, “Earth Structure, Global,” in [ The Encyclopedia of Solid Earth Geophysics]{}, ed. D. E. James (Van Nostrand Reinhold, New York, 1989), p. 331.

P. Lipari and T. Stanev, [ Phys. Rev.]{} [ D44]{} (1991) 3543.

E. Eichten, I. Hinchliffe, K. Lane and C. Quigg, [ Rev. Mod. Phys.]{} [ 56]{} (1984) 579.

H. Lai [*et al.*]{}, [ Phys. Rev.]{} [ D51]{} (1995) 4763.

W. H. Rhode [*et al.*]{} (Fréjus Collaboration), Wuppertal preprint WUB-95-26, to appear in [ Astropart. Phys.]{}

[^1]: Talk presented by I. Sarcevic.
--- abstract: 'For an irreducible orientable compact $3$-manifold $N$ with empty or incompressible toral boundary, the full $L^2$–Alexander torsion $\tau^{(2)}(N,\phi)(t)$ associated to any real first cohomology class $\phi$ of $N$ is represented by a function of a positive real variable $t$. The paper shows that $\tau^{(2)}(N,\phi)$ is continuous, everywhere positive, and asymptotically monomial in both ends. Moreover, the degree of $\tau^{(2)}(N,\phi)$ equals the Thurston norm of $\phi$. The result confirms a conjecture of J. Dubois, S. Friedl, and W. Lück and addresses a question of W. Li and W. Zhang. Associated to any admissible homomorphism $\gamma:\pi_1(N)\to G$, the $L^2$–Alexander torsion $\tau^{(2)}(N,\gamma,\phi)$ is shown to be continuous and everywhere positive provided that $G$ is residually finite and $(N,\gamma)$ is weakly acyclic. In this case, a generalized degree can be assigned to $\tau^{(2)}(N,\gamma,\phi)$. Moreover, the generalized degree is bounded by the Thurston norm of $\phi$.' address: | Beijing International Center for Mathematical Research\ No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China P.R. author: - Yi Liu date: title: 'Degree of $L^2$–Alexander torsion for 3–manifolds' --- Introduction ============ Let $N$ be an irreducible orientable compact $3$-manifold with empty or incompressible toral boundary. Given a homomorphism $\gamma:\pi_1(N)\to G$ to a countable target group $G$ and a cohomology class $\phi\in H^1(N;\,{{\mathbb R}})$, the triple $(\pi_1(N),\gamma,\phi)$ is said to be *admissible* if the homomorphism $\pi_1(N)\to {{\mathbb R}}$ induced by $\phi$ factors through $\gamma$. Associated to any given admissible triple, the *$L^2$–Alexander torsion* has been introduced by Jérôme Dubois, Stefan Friedl, and Wolfgang Lück [@DFL-torsion]. 
It is a function $$\tau^{(2)}(N,\gamma,\phi):\,{{\mathbb R}}_+\to[0,+\infty),$$ uniquely defined up to multiplication by a function of the form $t\mapsto t^r$ where $r\in{{\mathbb R}}$. In this paper, we use a dotted equal symbol to mean that two functions are equal to each other up to such a monic power function factor. When $\gamma$ is taken to be $\mathrm{id}_{\pi_1(N)}:\pi_1(N)\to\pi_1(N)$, the corresponding function is called the *full $L^2$–Alexander torsion* with respect to $\phi$, denoted by $\tau^{(2)}(N,\phi)(t)$. In [@DFL-torsion; @DFL-symmetric], the following properties of the full $L^2$–Alexander torsion are proved:

1. For all $c\in{{\mathbb R}}$, $$\tau^{(2)}(N,\,c\phi)(t)\,\doteq\,\tau^{(2)}(N,\phi)(t^c).$$

2. $$\tau^{(2)}(N,-\phi)(t)\,\doteq\,\tau^{(2)}(N,\phi)(t).$$

3. For any fibered class $\phi\in H^1(N;{{\mathbb Z}})$, $$\tau^{(2)}(N,\phi)(t)\,\doteq\,\begin{cases}1&t\in(0,e^{-h(\phi)})\\t^{x_N(\phi)}&t\in(e^{h(\phi)},+\infty)\end{cases}$$ where $h(\phi)$ denotes the entropy of the monodromy, and $x_N(\phi)$ denotes the Thurston norm.

4. Denoting by $\mathrm{Vol}(N)$ the simplicial volume of $N$, $$\tau^{(2)}(N,\phi)(1)\,=\,e^{\frac{\mathrm{Vol}(N)}{6\pi}}.$$

5. If $\mathrm{Vol}(N)$ equals $0$, $$\tau^{(2)}(N,\phi)(t)\,\doteq\,\begin{cases}1&t\in(0,1]\\t^{x_N(\phi)}&t\in[1,+\infty)\end{cases}$$

For knot complements, the full $L^2$–Alexander torsion recovers the $L^2$–Alexander invariant introduced earlier by Weiping Li and Weiping Zhang [@LZ-Alexander; @LZ-AlexanderConway]. If $\gamma$ is virtually abelian, the $L^2$–Alexander torsion is closely related to the twisted Alexander polynomial through a certain function associated to the Mahler measure [@DFL-torsion]. We refer the reader to the survey [@DFL-flavors] for more relations between the $L^2$–Alexander torsion and other flavors of Alexander-type invariants.
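The fibered-class formula already pins down the expected degree behavior: reading off the log–log slopes of such a piecewise monomial function recovers $x_N(\phi)$, and substituting $t\mapsto t^c$ multiplies the degree by $c$, consistent with $x_N(c\phi)=|c|\,x_N(\phi)$. A toy sketch (the values of $x$ and $h$ and the interpolation across the middle window are our own illustrative choices, not part of the stated formula):

```python
import math

def tau_model(t, x=3.0, h=0.7):
    # Toy stand-in for the fibered-class formula: 1 below e^{-h}, t^x above e^{h}.
    # The interpolation on (e^{-h}, e^{h}) is ours, chosen only so that the
    # function is defined for all t; x and h are arbitrary test values.
    if t <= math.exp(-h):
        return 1.0
    if t >= math.exp(h):
        return t ** x
    s = (math.log(t) + h) / (2.0 * h)   # 0 at t = e^{-h}, 1 at t = e^{h}
    return math.exp(s * x * h)

def asymptote_degree(f, t_big=1e8, t_small=1e-8):
    # d_{+inf} and d_{0+} read off as log-log slopes far out in each end
    d_plus = math.log(f(t_big)) / math.log(t_big)
    d_zero = math.log(f(t_small)) / math.log(t_small)
    return d_plus - d_zero

print(asymptote_degree(tau_model))                      # ~ 3, the model x
print(asymptote_degree(lambda t: tau_model(t ** 2.0)))  # ~ 6, scales with c
```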
It is generally anticipated that the degree of Alexander-type invariants conveys topological information about the cohomology class $\phi$ of $N$. For example, the degree of twisted Alexander polynomials can be used to detect the Thurston norm of $\phi$, due to Stefan Friedl and Stefano Vidussi [@FV-detectThurstonNorm]. Various comparison results are also known, cf. [@Cochran; @FK-norm; @Harvey-degree; @McMullen; @Turaev; @Vidussi-norm]. For the $L^2$–Alexander torsion, a fundamental problem is to define the degree in the first place. The following version has been proposed by Dubois–Friedl–Lück [@DFL-torsion Section 1.2] (there simply called the degree): \[degree-a\] Let $f:{{\mathbb R}}_+\to [0,+\infty)$ be a function. Suppose that $f$ is asymptotically monomial in both ends, namely, as $t\to+\infty$, the following asymptotic formula holds for some constants $C_{+\infty}\in {{\mathbb R}}_+$ and $d_{+\infty}\in{{\mathbb R}}$: $$f\sim C_{+\infty}\cdot t^{d_{+\infty}},$$ and the same property holds with $+\infty$ replaced by $0+$. Here the notation $f\sim g$ means that the ratio between the functions on both sides tends to $1$. For such $f$, the *asymptote degree* of $f$ is defined to be the value: $$\mathrm{deg}^{\mathtt{a}}(f)\,=\,d_{+\infty}-d_{0+}\,\in\,{{\mathbb R}}.$$ The main goal of this paper is to establish the existence of the asymptote degree for the full $L^2$–Alexander torsion of $3$-manifolds, and to confirm that in this case the degree equals the Thurston norm: \[main-torsion\] Let $N$ be an irreducible orientable compact $3$-manifold with empty or incompressible toral boundary. Given any cohomology class $\phi\in H^1(N;\,{{\mathbb R}})$, the following properties hold true for any representative of the full $L^2$–Alexander torsion $\tau^{(2)}(N,\phi)$. 1. The function $\tau^{(2)}(N,\phi)(t)$ is continuous and everywhere positive, defined for all $t\in{{\mathbb R}}_+$.
In fact, the function $\tau^{(2)}(N,\phi)(t)\cdot\max\{1,t\}^m$ is multiplicatively convex for any sufficiently large positive constant $m$, where the bound depends on $N$ and $\phi$. 2. As the parameter $t$ tends to $+\infty$, $$\tau^{(2)}(N,\phi)(t)\,\sim\,C(N,\phi)\cdot t^{d_{+\infty}}$$ for some constant $d_{+\infty}\in{{\mathbb R}}$ and some constant $$C(N,\phi)\,\in\left[1,\,e^{\mathrm{Vol}(N)/6\pi}\right].$$ The same asymptotic formula holds true with $+\infty$ replaced by $0+$. 3. Hence the asymptote degree of $\tau^{(2)}(N,\phi)$ is valid. Furthermore, $$\mathrm{deg}^{\mathtt{a}}\left(\tau^{(2)}(N,\phi)\right)\,=\,x_N(\phi).$$ 4. The leading coefficient function $$\begin{aligned} H^1(N;{{\mathbb R}})&\to& \left[1,\,e^{\mathrm{Vol}(N)/6\pi}\right]\\ \phi&\mapsto & C(N,\phi) \end{aligned}$$ is upper semicontinuous. In particular, Theorem \[main-torsion\] confirms Conjecture 1.1 (1) of Dubois–Friedl–Lück [@DFL-flavors]. In fact, many aspects of Theorem \[main-torsion\] have also been conjectured, at least for knot complements, cf. [@DFL-flavors Subsection 5.8]. In particular, the first part of Theorem \[main-torsion\] addresses the question (Q2) of Li–Zhang [@LZ-AlexanderConway]. The full $L^2$–Alexander torsion apparently loses information about the fiberedness of cohomology classes in general. In fact, it has already been observed that the full $L^2$–Alexander torsion of graph manifolds is completely determined by the Thurston norm, [@DFL-torsion Theorem 1.2], [@Herrmann]. However, we exhibit an example at the end of this paper to indicate that nontrivial leading coefficients can occur (Section \[Sec-example\]). The example might suggest that the leading coefficient $C(N,\phi)$ retains some information about the cohomology class $\phi$ which is volume (of the 3-dimensional hyperbolic type) in nature.
For a primitive class $\phi\in H^1(N;{{\mathbb Z}})$, we hence wonder if $C(N,\phi)$ measures a certain volume of the guts if one decomposes $N$ along a taut subsurface dual to $\phi$. It is possible to prove an analogous comparison theorem for more general $L^2$–Alexander torsions. To this end, we introduce another degree under less strict requirements: \[degree-b\] Let $f:{{\mathbb R}}_+\to [0,+\infty)$ be a function. Suppose that the following infimum and supremum exist in ${{\mathbb R}}$: $$\mathrm{deg}^{\mathtt{b}}_{+\infty}(f)\,=\,\inf\left\{D_{+\infty}\in{{\mathbb R}}\,:\,\lim_{t\to+\infty}f(t)\cdot t^{-D_{+\infty}}\,=\,0\right\},$$ and $$\mathrm{deg}^{\mathtt{b}}_{0+}(f)\,=\,\sup\left\{D_{0+}\in{{\mathbb R}}\,:\,\lim_{t\to0+}f(t)\cdot t^{-D_{0+}}\,=\,0\right\}.$$ For such $f$, the *growth bound degree* of $f$ is defined to be the value: $$\mathrm{deg}^{\mathtt{b}}(f)\,=\,\mathrm{deg}^{\mathtt{b}}_{+\infty}(f)-\mathrm{deg}^{\mathtt{b}}_{0+}(f)\,\in\,{{\mathbb R}}.$$ By saying that a pair $(N,\gamma)$ is *weakly acyclic*, we mean that there are no non-vanishing $L^2$–Betti numbers for the covering space of $N$ that corresponds to $\mathrm{Ker}(\gamma)$, regarded as an $\mathrm{Im}(\gamma)$–space, cf. [@Lueck-book Section 6.5]. \[main-torsion-weak\] Let $N$ be an irreducible orientable compact $3$-manifold with empty or incompressible toral boundary, and $\gamma:\pi_1(N)\to G$ be a homomorphism. Suppose that $G$ is finitely generated and residually finite, and $(N,\gamma)$ is weakly acyclic. Then the following properties hold true for any representative of the $L^2$–Alexander torsion $\tau^{(2)}(N,\gamma,\phi)$ of any admissible triple $(N,\gamma,\phi)$ over ${{\mathbb R}}$. 1. The function $\tau^{(2)}(N,\gamma,\phi)(t)$ is continuous and everywhere positive, defined for all $t\in{{\mathbb R}}_+$.
In fact, the function $\tau^{(2)}(N,\gamma,\phi)(t)\cdot\max\{1,t\}^m$ is multiplicatively convex for any sufficiently large positive constant $m$, where the bound depends on $(N,\gamma,\phi)$. 2. The growth bound degree of $\tau^{(2)}(N,\gamma,\phi)$ is valid. Furthermore, $$\mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\gamma,\phi)\right)\,\leq\,x_N(\phi).$$ 3. The degree function $$\begin{aligned} H^1(G;{{\mathbb R}})&\to& {{\mathbb R}}\\ \xi&\mapsto & \mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\gamma,\phi+\gamma^*\xi)\right) \end{aligned}$$ is Lipschitz continuous. In a weaker form, Theorem \[main-torsion-weak\] generalizes the virtually abelian case, which has been done in [@DFL-torsion]. For example, if $N$ is a compact orientable surface bundle over the circle and $\gamma$ is a homomorphism of $\pi_1(N)$ onto a residually finite group $G$ such that $\gamma^*:H^1(G;{{\mathbb R}})\to H^1(N;{{\mathbb R}})$ is onto, then the assumptions of Theorem \[main-torsion-weak\] are satisfied. Completely independently of the work of this paper, Friedl and Lück have also proved the equality between the (growth bound) degree of the full $L^2$-Alexander torsion and the Thurston norm [@FriedlLueck-new]. In fact, their work implies Theorem \[main-torsion-weak\] (2) as well. Moreover, their work relies on a systematic study of twisting $L^2$-invariants by Lück [@Lueck-new]. We point out that both [@Lueck-new] and [@FriedlLueck-new] keep track of the Euler structures more closely than this paper does, which should be important for potential applications. For example, with a fixed Euler structure, the $L^2$–Alexander torsion becomes a genuine function in the pair $(\phi,t)$, so it would make sense to study its continuity and other properties. In the rest of the introduction, we discuss some ingredients involved in the proof of Theorem \[main-torsion\]. Theorem \[main-torsion-weak\] can be proved along the way.
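The growth bound degree genuinely asks for less than the asymptote degree: a function can oscillate forever on a log–log plot, so that $f(t)/t^d$ has no limit, while the growth bounds still single out a unique exponent. A toy example of our own illustrating the distinction:

```python
import math

# f(t) = t^2 * (2 + sin(log(log(t)))) is not asymptotically monomial at
# +infinity (f(t)/t^2 keeps oscillating), yet f(t)*t^(-D) -> 0 for every
# D > 2 and for no D <= 2, so deg^b at +infinity equals 2.
def f(t):
    return t ** 2 * (2.0 + math.sin(math.log(math.log(t))))

print(f(1e12) * 1e12 ** -2.5)   # strongly suppressed for D = 2.5
print(f(1e12) * 1e12 ** -2.0)   # stays of order 1 for D = 2
```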
After choosing a CW complex structure of $N$ convenient for calculation, as used in [@DFL-torsion], we may manipulate $\tau^{(2)}(N,\phi)(t)$ into an alternating product, whose factors are regular Fuglede–Kadison determinants of the $L^2$–Alexander twists of square matrices over ${{\mathbb Z}}\pi_1(N)$. Except for the one coming from the boundary homomorphism between dimension 2 and dimension 1, the factors are all very simple and well understood. Therefore, the proof of Theorem \[main-torsion\] can be reduced to the study of the regular Fuglede–Kadison determinant for an $L^2$–Alexander twist of a single matrix $A$. Associated to the admissible triple $(\pi_1(N),\mathrm{id}_{\pi_1(N)},\phi)$, the factor corresponding to $A$ is a non-negative function defined for $t\in{{\mathbb R}}_+$ of the form $$V(t)\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(\kappa(\phi,\mathrm{id}_{\pi_1(N)},t)(A)\right),$$ where $A$ is a square matrix over ${{\mathbb Z}}\pi_1(N)$, (cf. Section \[Sec-prelim\] for the notations). The first ingredient is to show that $V(t)$ is a multiplicatively convex function with bounded exponent. See Section \[Sec-mConvexFunction\] for the terminology. In fact, we show in Theorem \[mConvex-eBounded\] that the asserted property holds true for general admissible triples $(\pi,\gamma,\phi)$ over ${{\mathbb R}}$ and square matrices $A$ over ${{\mathbb C}}\pi$, as long as the target group $G$ of $\gamma$ is residually finite. The exponent bound can be easily perceived, and can be easily proved once the multiplicative convexity is available. When $G$ is finitely generated and virtually abelian, the multiplicative convexity can be verified by computation using the Mahler measure of multivariable Laurent polynomials.
Therefore, to approach the residually finite case, it is natural to consider a cofinal tower of virtually abelian quotients of $G$, denoted as $$G\to\cdots\to\Gamma_n\to\cdots\to\Gamma_2\to\Gamma_1,$$ which gives rise to a sequence of $L^2$–Alexander twist homomorphisms $\kappa(\gamma_n,\phi,t)$, where $\gamma_n:\pi\to\Gamma_n$ is the induced homomorphism. For any given $t\in{{\mathbb R}}_+$, the spectra of the matrices $A_n(t)=\kappa(\gamma_n,\phi,t)(A)$ could become increasingly dense near $0$ as $n$ tends to $\infty$, so it should not be expected in general that the sequence of functions $\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(A_n(t))$ converges pointwise to $V_G(t)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A(t))$. By introducing a positive $\epsilon$-perturbation of the positive operator $A_n(t)^*A_n(t)$, namely, $$H_{n,\epsilon}(t)\,=\,A_n(t)^*A_n(t)+\epsilon\cdot\mathbf{1},$$ the issue of small spectrum values can be bypassed. However, one has to be careful because the $L^2$–Alexander twist does not commute with the operation of taking adjoints. For example, $H_{n,\epsilon}(t)$ is in general not a family of $L^2$–Alexander twisted operators, so the regular determinant of $H_{n,\epsilon}(t)$ need not be multiplicatively convex in the parameter $t\in{{\mathbb R}}_+$. Instead of arguing that way, for any fixed $T\in{{\mathbb R}}_+$, we look at the functions $$W_{n,\epsilon}(s,T)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}\left(\kappa(\gamma_{n*}\phi,\mathrm{id}_{\Gamma_n},s)(H_{n,\epsilon}(T))\right)$$ in a new parameter $s\in{{\mathbb R}}_+$. As $n\to\infty$ and then $\epsilon\to0+$, we show that $W_{n,\epsilon}(1,T)$ converges to $W_{\infty,0}(1,T)$, while the limit superior of $W_{n,\epsilon}(s,T)$ does not exceed $W_{\infty,0}(s,T)$. Using the fact that $W_{n,\epsilon}(s,T)$ are multiplicatively convex in $s\in{{\mathbb R}}_+$, it can be implied that $V_G(t)$ is multiplicatively convex as well.
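Multiplicative convexity can be seen concretely in the simplest instance. For $G={{\mathbb Z}}$ and the $1\times 1$ matrix $A=g-2$ over ${{\mathbb Z}}G$, the twist turns $A$ into $tz-2$, whose Fuglede–Kadison determinant over $\mathcal{N}({{\mathbb Z}})$ is, by the standard identification with the Mahler measure, $\max(t,2)$. A quick numerical check of the midpoint form of multiplicative convexity (the matrix choice is our own illustration):

```python
import math, random

# V(t) = max(t, 2) is the determinant function of the twisted 1x1 matrix
# t*z - 2 over N(Z) (Mahler measure of t*z - 2).  Multiplicative midpoint
# convexity: V(sqrt(t1*t2)) <= sqrt(V(t1)*V(t2)) for all t1, t2 > 0.
def V(t):
    return max(t, 2.0)

random.seed(0)
pairs = [(10 ** random.uniform(-3, 3), 10 ** random.uniform(-3, 3))
         for _ in range(1000)]
ok = all(V(math.sqrt(t1 * t2)) <= math.sqrt(V(t1) * V(t2)) + 1e-12
         for t1, t2 in pairs)
print(ok)   # True
```

Equivalently, $s\mapsto \log V(e^s)=\max(s,\log 2)$ is convex, which is the log–log reformulation used throughout the paper.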
The growth bound degree is applicable to any (nowhere zero) multiplicatively convex function with bounded exponent. It can be equivalently characterized as the width of the range of all possible exponents (or ‘multiplicative slopes’) between pairs of points. As a consequence of Theorem \[mConvex-eBounded\], we are able to show that the growth bound degree $\mathrm{deg}^{\mathtt{b}}(V)$ depends Lipschitz-continuously on the cohomology class $\phi\in H^1(N;{{\mathbb R}})$, (Theorem \[continuityOfDegree\]). The second ingredient is a criterion to confirm that $V(t)$ is asymptotically monomial as $t$ tends to $+\infty$ or $0+$, or in other words, that $\mathrm{deg}^{\mathtt{a}}(V)$ equals $\mathrm{deg}^{\mathtt{b}}(V)$. To motivate the conditions, consider the sequence of determinant functions $$V_n(t)\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\kappa(\gamma_n,\phi,t)(A))$$ associated to the cofinal tower of virtually abelian quotients $\Gamma_n$ above. Using techniques of [@Lueck-approximating], what one can show is that for every $t\in{{\mathbb R}}_+$, as $n\to\infty$, the limit superior of $V_n(t)$ does not exceed $V(t)$. On the other hand, the functions $V_n(t)$ are all multiplicatively convex and asymptotically monomial in both ends. As $t\to+\infty$, suppose $$V_n(t)\sim C_{+\infty,n}\cdot t^{d_{+\infty,n}},$$ and similarly we introduce the notations $C_{0+,n}$ and $d_{0+,n}$ for $t\to0+$. As $n\to\infty$, if the degrees $\mathrm{deg}^{\mathrm{b}}(V_n)=\mathrm{deg}^{\mathrm{a}}(V_n)=d_{+\infty,n}-d_{0+,n}$ converge to the growth bound degree $\mathrm{deg}^{\mathrm{b}}(V)$, and if the coefficients $C_{+\infty,n}$ and $C_{0+,n}$ are uniformly bounded below by some constant $L\in{{\mathbb R}}_+$, then the geometry of the log–log plots of the functions implies that $V(t)$ must be asymptotically monomial in both ends as well (Lemma \[mConvexVersion\]).
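The uniform lower bound on the coefficients ultimately rests on the fact that the Mahler measure of a nonzero integer polynomial is at least $1$ (Kronecker). For illustration, the Mahler measure can be estimated directly from its defining integral; the test polynomials below are our own examples:

```python
import cmath, math

# Midpoint-rule estimate of the (multiplicative) Mahler measure
#   M(p) = exp( \int_0^1 log|p(e^{2*pi*i*s})| ds ).
# For nonzero integer polynomials, M(p) >= 1, which is the kind of uniform
# lower bound invoked for the leading coefficients.
def mahler(coeffs, n=100000):
    total = 0.0
    for k in range(n):
        z = cmath.exp(2j * math.pi * (k + 0.5) / n)
        p = 0j
        for c in coeffs:                 # Horner's scheme, highest degree first
            p = p * z + c
        total += math.log(abs(p))
    return math.exp(total / n)

print(mahler([1, -1, -1]))   # z^2 - z - 1: golden ratio, ~1.618
print(mahler([1, 1, 1]))     # z^2 + z + 1: cyclotomic, M = 1
```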
For our proof of Theorem \[main-torsion\], the convergence of growth bound degrees can be guaranteed by the virtual RFRS property of $3$-manifold groups, at least after excluding the case of graph manifolds, which has been treated by [@DFL-torsion Theorem 1.2], [@Herrmann]. In fact, combined with the continuity of degree that we have already mentioned, the method of [@DFL-torsion Theorem 9.1] can be applied to produce a cofinal tower of virtually abelian quotients such that the growth bound degree of each $V_n(t)$ and $V(t)$ is equal to the Thurston norm of $\phi$. On the other hand, based on the fact that $A$ is a square matrix over ${{\mathbb Z}}\pi_1(N)$, computation shows that the coefficients $C_{+\infty,n}$ and $C_{0+,n}$ are all radicals of Mahler measures of certain multivariable Laurent polynomials over ${{\mathbb Z}}$. This yields a uniform lower bound $1$ for all the coefficients. Therefore, the criterion is applicable to our situation, and we can complete the proof of Theorem \[main-torsion\]. In Section \[Sec-prelim\], we recall some terminology that is used in this paper. In Section \[Sec-rFKdet\], we introduce regular Fuglede–Kadison determinants and discuss their limiting behavior. In Section \[Sec-mConvexFunction\], we introduce multiplicatively convex functions and mention some basic properties. After these preparatory sections, we study the regular Fuglede–Kadison determinants of matrices under $L^2$–Alexander twists in Sections \[Sec-mConvex-eBounded\], \[Sec-continuityOfDegree\], and \[Sec-asymptotics\]: the multiplicative convexity and the existence of the growth bound degree are shown in Section \[Sec-mConvex-eBounded\]; the continuity of degree is derived in Section \[Sec-continuityOfDegree\]; the criterion for monomial asymptotics is introduced in Section \[Sec-asymptotics\]. In Section \[Sec-mainProofs\], we apply the ingredients to $L^2$–Alexander torsions of $3$-manifolds, and prove Theorems \[main-torsion\] and \[main-torsion-weak\].
In Section \[Sec-example\], we give an example regarding nontrivial leading coefficients.

Acknowledgements {#acknowledgements .unnumbered}
----------------

The author would like to thank Stefan Friedl and Wolfgang Lück for letting him learn about their independent work and for subsequent valuable communications. The author also thanks Weiping Li for interesting conversations.

Preliminaries {#Sec-prelim}
=============

In this section, we recall some terminology of Dubois–Friedl–Lück [@DFL-torsion]. We also briefly recall some fundamental facts in 3-manifold topology. For background in $L^2$-invariants, including group von Neumann algebras and Fuglede–Kadison determinants, we refer the reader to the book of W. Lück [@Lueck-book].

Admissible triples
------------------

Admissibility conditions have been introduced by S. Harvey for the study of higher-order Alexander polynomials [@Harvey-monotonicity Definition 1.4]. In this paper, we adopt the following notations, according to [@DFL-torsion]. Let $L\subset {{\mathbb R}}$ be any additive group of real numbers, for example, ${{\mathbb Z}}$, ${{\mathbb Q}}$, or ${{\mathbb R}}$. Given a countable group $\pi$, a homomorphism $\phi\in \mathrm{Hom}(\pi,L)$, and a homomorphism $\gamma:\pi\to G$ to any countable group $G$, we say that $(\pi,\phi,\gamma)$ forms *an admissible triple over $L$* if $\phi$ factors through $\gamma$. That is, for some homomorphism $G\to L$, the following diagram commutes: $$\xymatrix{ \pi \ar[r]^\gamma \ar[rd]_\phi &G \ar[d] \\& L}$$ Given any positive real parameter $t\in {{\mathbb R}}_+$, there is a homomorphism of rings: $$\kappa(\phi,\gamma,t):\,{{\mathbb Z}}\pi\longrightarrow {{\mathbb R}}G$$ defined uniquely by $$\kappa(\phi,\gamma,t)(g)\,=\,t^{\phi(g)}\gamma(g)$$ for all $g\in\pi$ via linear extension over ${{\mathbb Z}}$.
Then for any positive integer $p$, $\kappa(\phi,\gamma,t)$ naturally extends to a homomorphism of algebras: $$\kappa(\phi,\gamma,t):\,{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)\to {{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}G)$$ obtained by applying $\kappa(\phi,\gamma,t)$ to the entries. Note that $\kappa(\phi,\gamma,t)$ is not a homomorphism of $*$-algebras in general. In fact, $$\kappa(\phi,\gamma,t)(A)^*=\kappa(\phi,\gamma,t^{-1})(A^*).$$ Recall that for any square matrix $A=(a_{ij})_{p\times p}$ over ${{\mathbb C}}G$, regarded as an operator on $\ell^{2}(G)^{\oplus p}$, the adjoint operator is given by $A^*=(a^*_{ji})_{p\times p}$, where the involution of an element $a=\sum_{k} a_kg_k\in{{\mathbb C}}G$ is given by $a^*=\sum_k \bar{a}_kg_k^{-1}\in{{\mathbb C}}G$. Every admissible triple $(\pi,\phi,\gamma)$ over $L$ sits naturally in an affine family of admissible triples parametrized by $\mathrm{Hom}(G,L)$. Specifically, for any homomorphism $$\xi\in\mathrm{Hom}(G,L),$$ we have a new admissible triple $(\pi,\phi+\gamma^*\xi,\gamma)$, where $\phi+\gamma^*\xi:\pi\to L$ is the homomorphism defined by $$(\phi+\gamma^*\xi)(g)\,=\,\phi(g)+\xi(\gamma(g))$$ for all $g\in\pi$. To speak of continuity, we consider the space $\mathrm{Hom}(G,L)$ to be equipped with the compact-open topology, regarding $G$ as a discrete group and $L$ as having the subspace topology of ${{\mathbb R}}$. \[homologicallyIsomorphic\] If $\gamma:\pi\to G$ induces an isomorphism $\gamma_*:H_1(\pi;\,{{\mathbb R}})\to\,H_1(G;\,{{\mathbb R}})$, then $(\pi,\gamma,\phi)$ is admissible for every homomorphism $\phi:\pi\to{{\mathbb R}}$. In this case, the composition $$\pi\stackrel{\gamma}\longrightarrow G\longrightarrow H_1(G;\,{{\mathbb R}}) \stackrel{\gamma_*^{-1}}\longrightarrow H_1(\pi;\,{{\mathbb R}})\stackrel{\phi_*}\longrightarrow {{\mathbb R}}$$ recovers the homomorphism $\phi$.

$L^2$–Alexander torsion
-----------------------

Let $X$ be a connected finite CW complex.
The universal cover $\widehat{X}$ of $X$ is a CW complex equipped with a free action of the deck transformation group $\pi_1(X)$. We equip the chain complex $C_*(\widehat{X})$ with a left ${{\mathbb Z}}\pi_1(X)$ action induced by the deck transformation. On the other hand, given any admissible triple $(\pi_1(X),\phi,\gamma)$ over ${{\mathbb R}}$, and given a parameter value $t\in{{\mathbb R}}_+$, we may equip the Hilbert space $\ell^2(G)$ with a right ${{\mathbb Z}}\pi_1(X)$–module structure via the representation: $$\kappa(\phi,\gamma,t):\,{{\mathbb Z}}\pi_1(X)\longrightarrow {{\mathbb R}}G.$$ In this paper, we treat $\ell^2(G)$ as a right ${{\mathbb R}}G$–module and a left Hilbert $\mathcal{N}(G)$–module. Here we denote by $$\mathcal{N}(G)\,=\,\mathcal{B}\left(\ell^2(G)\right)^{G}$$ the group von Neumann algebra of $G$, which consists of all the bounded operators that commute with the right multiplication by elements of $G$. Twisting the chain complex of $\widehat{X}$ by the module $\ell^2(G)$ via the representation $\kappa(\phi,\gamma,t)$ gives rise to a (left) Hilbert $\mathcal{N}(G)$–chain complex: $$\ell^2(G)\otimes_{{{\mathbb Z}}\pi_1(X)}C_*(\widehat{X})$$ and the twisted boundary homomorphism is defined by $\mathbf{1}\otimes \partial_*$. In fact, the twisted complex is finitely generated and free over $\mathcal{N}(G)$. In other words, by choosing a lift of each cell of $X$ in $\widehat{X}$, each chain module of the complex can be identified with a direct sum of the regular Hilbert $\mathcal{N}(G)$-modules: $$\ell^2(G)\otimes_{{{\mathbb Z}}\pi_1(X)}C_k(\widehat{X})\,\cong\,\ell^2(G)^{\oplus p_k}.$$ In this paper, we restrict ourselves to finitely generated, free Hilbert $\mathcal{N}(G)$–chain complexes which are *weakly acyclic and of determinant class*. This means that the $\ell^2$-Betti numbers are all trivial and all the Fuglede–Kadison determinants of the boundary homomorphisms take values in $(0,+\infty)$.
In this case, the *$L^2$–Alexander torsion* of $X$ at $t$ with respect to $\gamma$ and $\phi$ is defined to be the multiplicatively alternating product of the Fuglede–Kadison determinants of the boundary homomorphisms: $$\tau^{(2)}(X,\gamma,\phi)(t)\,\doteq\,\prod_{k\in{{\mathbb Z}}} \mathrm{det}_{\mathcal{N}(G)}(\mathbf{1}\otimes\partial_k)^{(-1)^k}.$$ Here the dotted equality sign means that we treat the $L^2$–Alexander torsion as a function in the parameter $t\in{{\mathbb R}}_+$. In fact, choosing another collection of lifts may result in a change of the value of the right-hand side by a multiplicative factor $t^r$, for some exponent $r\in{{\mathbb R}}$ independent of $t$, so the function $\tau^{(2)}(X,\gamma,\phi)$ is well defined only up to a monic power function factor. We remark that our notational convention follows [@DFL-torsion], and the exponential of the $L^2$-torsion according to [@Lueck-book Definition 3.29] is the multiplicative inverse of the $\tau^{(2)}$ above. For convenience, the value $0$ is artificially assigned to $\tau^{(2)}(X,\gamma,\phi)(t)$ if the twisted complex fails to be weakly acyclic or of determinant class. With this convention, the $L^2$–Alexander torsion associated to $(X,\gamma,\phi)$ is a function determined up to a monic power function factor: $$\tau^{(2)}(X,\gamma,\phi):\,{{\mathbb R}}_+\to[0,+\infty).$$ Let $N$ be a compact smooth manifold, possibly with boundary, and $\gamma:\pi_1(N)\to G$ be a homomorphism. The *$L^2$–Alexander torsion* of $N$ with respect to any admissible triple $(\pi_1(N),\phi,\gamma)$, denoted by $\tau^{(2)}(N,\gamma,\phi)$, is understood to be the $L^2$–Alexander torsion of any finite CW complex structure of $N$. This notion does not depend on the choice of the CW structure [@DFL-torsion Section 4.2]. When $\gamma$ is taken to be $\mathrm{id}_{\pi_1(N)}:\pi_1(N)\to\pi_1(N)$, the triple $(\pi_1(N),\phi,\gamma)$ is admissible for every class $\phi\in H^1(N;{{\mathbb R}})$.
The corresponding $L^2$–Alexander torsion is called the *full $L^2$–Alexander torsion* with respect to $\phi$, denoted by $\tau^{(2)}(N,\phi)$. Thurston norm and virtual fibering ---------------------------------- Let $N$ be an irreducible compact orientable $3$-manifold with empty or incompressible toral boundary. The *Thurston norm*, named after William P. Thurston who discovered it in [@Thurston-norm], is a seminorm on the vector space of second relative homology: $$x_N:\,H_2(N,\partial N;\,{{\mathbb R}})\to[0,+\infty),$$ which takes integer values on the integral lattice $H_2(N,\partial N;\,{{\mathbb Z}})$. The Thurston norm measures a certain complexity of second relative homology classes, and it is known to be non-degenerate if the $3$-manifold $N$ supports a complete hyperbolic structure in its interior. The unit ball $B_x(N)$ of $x_N$ is a convex polyhedron, symmetric about the origin, and supported by finitely many linear faces carried by rational affine hyperplanes. If $N$ fibers over the circle via a map $N\to S^1$, any fiber of the fibration represents a homology class $[\Sigma]\in H_2(N,\partial N;{{\mathbb Z}})$, which depends only on the fibration. We can canonically identify $[\Sigma]$ with a cohomology class $\phi\in H^1(N;{{\mathbb Z}})\cong[N,S^1]$, by Poincaré Duality (after fixing an orientation of $N$). As we have assumed $N$ to be irreducible with incompressible boundary, $x_N(\phi)$ equals $-\chi(\Sigma)$. Any such $\phi$ is called a *fibered class*. Thurston has shown that every fibered class is contained in the open cone over a top-dimensional face of $\partial B_x(N)$, and every integral class of that cone is a fibered class. Such open cones are hence called the *fibered cones* of $x_N$. In general, $N$ may possess no fibered cones at all.
However, given any class $\phi\in H^1(N;{{\mathbb R}})$, we can usually pass to a finite cover $p:\,\tilde{N}\to N$, so that $p^*\phi\in H^1(\tilde{N};{{\mathbb R}})$ is *quasi-fibered*, namely, $p^*\phi$ lies on the (point-set theoretic) boundary of a fibered cone possessed by $\tilde{N}$. To be precise, the virtual quasi-fibering property holds true for every class $\phi\in H^1(N;{{\mathbb R}})$ if $\pi_1(N)$ is virtually residually finite rationally solvable (or RFRS), due to a theorem of Ian Agol [@Agol-RFRS]. Based on the confirmations of the Virtual Haken Conjecture and the Virtual Fibering Conjecture due to the works of Ian Agol [@Agol-VHC], Daniel Wise [@Wise-book], and many other authors, it is now known that $\pi_1(N)$ is virtually RFRS if and only if $N$ supports a complete Riemannian metric of nonpositive curvature in its interior [@Liu; @PW-graph; @PW-mixed]. For example, if the simplicial volume $\mathrm{Vol}(N)$ is positive, or in other words, if $N$ contains at least one hyperbolic piece in its geometric decomposition, then $N$ possesses the virtual quasi-fibering property. We refer the reader to the survey [@AFW-group] for more background about virtual properties of $3$-manifolds. Regular Fuglede–Kadison determinant {#Sec-rFKdet} =================================== Let $G$ be a countable discrete group. For any $p\times p$ matrix $A$ over $\mathcal{N}(G)$, the *regular Fuglede–Kadison determinant* of $A$ is defined to be $$\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A) =\begin{cases} \mathrm{det}_{\mathcal{N}(G)}(A)&\textrm{if }A\textrm{ is of full rank and of determinant class} \\0&\textrm{otherwise}\end{cases}$$ This gives rise to a function: $$\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}:\,{{\mathrm{Mat}}}_{p\times p}(\mathcal{N}(G))\to [0,+\infty).$$ Regular Fuglede–Kadison determinants have been used in [@DFL-torsion]. In the rest of the section we study the semicontinuity of this quantity under two kinds of limiting processes.
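For illustration only: in the finite-dimensional model (say, $G$ trivial, so that ${{\mathrm{Mat}}}_{p\times p}(\mathcal{N}(G))$ is just ${{\mathrm{Mat}}}_{p\times p}({{\mathbb C}})$), the regular Fuglede–Kadison determinant reduces to $|\det A|$ for full-rank $A$ and to $0$ otherwise, and the perturbation estimate $0\leq \mathrm{det}^{\mathtt{r}}(A+\epsilon\cdot\mathbf{1})\leq \epsilon^b(\|A\|+\epsilon)^{p-b}$ used below can be checked numerically. A minimal Python sketch, assuming numpy; the function name `det_r` is ad hoc:

```python
import numpy as np

def det_r(A, tol=1e-12):
    """Finite-dimensional model of the regular Fuglede-Kadison determinant:
    the product of the singular values (= |det A|) if A has full rank,
    and 0 otherwise; determinant class is automatic here."""
    s = np.linalg.svd(A, compute_uv=False)
    return 0.0 if s.min() <= tol else float(np.prod(s))

full_rank = det_r(np.diag([2.0, 3.0]))    # |det| = 6
degenerate = det_r(np.diag([1.0, 0.0]))   # rank 1, so the value is 0

# for positive A with kernel of dimension b = 1 (here p = 3, ||A|| = 3),
# det_r(A + eps * 1) lies below eps^b * (||A|| + eps)^(p - b), so it
# tends to det_r(A) = 0 as eps tends to 0+
A = np.diag([0.0, 1.0, 3.0])
perturbed = [det_r(A + eps * np.eye(3)) for eps in (1e-2, 1e-4, 1e-6)]
bounds = [eps * (3.0 + eps) ** 2 for eps in (1e-2, 1e-4, 1e-6)]
```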
\[norm-semicontinuous\] If a sequence of $p\times p$ matrices $\{A_n\}_{n\in{{\mathbb{N}}}}$ over $\mathcal{N}(G)$ converges to $A\in{{\mathrm{Mat}}}_{p\times p}(\mathcal{N}(G))$ with respect to the norm topology, then $$\limsup_{n\to\infty} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_n)\,\leq\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A).$$ Moreover, if $A$ is a positive operator, then $$\lim_{\epsilon\to0+}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A+\epsilon\cdot\mathbf{1})\,=\, \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A).$$ Since $\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A^*A)$ equals $\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A)^2$, it suffices to show the inequality for positive operators $\{A_n\}_{n\in{{\mathbb{N}}}}$ and $A$. For any arbitrary constant $\epsilon>0$, the positive operators $(A_n+\epsilon\cdot\mathbf{1})$ and $(A+\epsilon\cdot\mathbf{1})$ are invertible, so the regular Fuglede–Kadison determinant agrees with the Fuglede–Kadison determinant. Since the Fuglede–Kadison determinant is continuous on the subgroup of invertible matrices $\mathrm{GL}(p,\mathcal{N}(G))$ with respect to the norm topology [@CFM Theorem 1.10 (d)], $$\lim_{n\to\infty} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_n+\epsilon\cdot\mathbf{1}) \,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A+\epsilon\cdot\mathbf{1}).$$ On the other hand, by [@Lueck-book Lemma 3.15 (6)], or as a trivial fact if $A_n$ fails to be injective, $$\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_n)\leq \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_n+\epsilon\cdot\mathbf{1}).$$ Therefore, $$\begin{aligned} \limsup_{n\to\infty} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_n) &\leq&\limsup_{n\to\infty} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_n+\epsilon\cdot\mathbf{1})\\ &=&\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A+\epsilon\cdot\mathbf{1}). 
\end{aligned}$$ As $\epsilon>0$ is arbitrary, it suffices to prove $$\lim_{\epsilon\to0+}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A+\epsilon\cdot\mathbf{1})\,=\, \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A).$$ In fact, if $A$ is injective, the last limit follows from [@Lueck-book Lemma 3.15 (5)]. Otherwise, $\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A)$ equals $0$. Denoting by $b\in (0,p]$ the von Neumann dimension $\dim_{\mathcal{N}(G)}\mathrm{Ker}(A)$, it is easy to estimate $$0\leq \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A+\epsilon\cdot\mathbf{1})\leq \epsilon^b(\|A\|+\epsilon)^{p-b}.$$ We again have: $$\lim_{\epsilon\to0+}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)} (A+\epsilon\cdot\mathbf{1})\,=\,0\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A).$$ This completes the proof. \[stable-semicontinuous\] Let $$G\to\cdots\to \Gamma_n\to\cdots\to \Gamma_2\to\Gamma_1,$$ be a cofinal tower of quotients of $G$, and denote by $\psi_n:G\to\Gamma_n$ the quotient homomorphisms. Suppose that all the target groups $\Gamma_n$ are finitely generated and residually finite. Let $A_G$ be a square matrix over ${{\mathbb C}}G$. Then $$\limsup_{n\to\infty}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\psi_{n*}A_G)\leq\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_G).$$ Moreover, for any constant $\epsilon>0$, $$\lim_{n\to\infty}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\psi_{n*}(A_G^*A_G+\epsilon\cdot\mathbf{1})) =\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_G^*A_G+\epsilon\cdot\mathbf{1}).$$ Here the tower being cofinal means that $$\bigcap_{n\in{{\mathbb{N}}}} \mathrm{Ker}{\psi_n}\,=\,\{\,\mathrm{id}_G\,\}.$$ Assuming that the ‘moreover’ part has been proved, we can derive the first inequality as follows. 
For any constant $\epsilon>0$, $$\begin{aligned} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\psi_{n*}A_G) &=&\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\psi_{n*}(A^*_GA_G))^{1/2}\\ &\leq&\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\psi_{n*}(A^*_GA_G+\epsilon\cdot\mathbf{1}))^{1/2}. \end{aligned}$$ The last expression tends to the regular Fuglede–Kadison determinant of $A_G$ as $\epsilon$ tends to $0+$, by Lemma \[norm-semicontinuous\]. This implies the asserted inequality $$\limsup_{n\to\infty}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\psi_{n*}A_G)\leq\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_G).$$ It remains to prove the asserted limit in the ‘moreover’ part. For simplicity, given any constant $\epsilon>0$, we rewrite the matrices as $$H_\infty\,=\,A_G^*A_G+\epsilon\cdot\mathbf{1}\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}G)$$ and $$H_n\,=\,\psi_{n*} H_\infty\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\Gamma_n).$$ Note that the self-adjoint operators $H_n$ acting on $\ell^2(\Gamma_n)^{\oplus p}$ are positive with spectra uniformly bounded away from $0$ by $\epsilon$, and the same holds for $H_\infty$. In this case, approximation of determinants should follow from well-known techniques. In the rest of the proof, we derive the approximation $$\lim_{n\to\infty}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(H_n) =\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(H_\infty)$$ from a theorem of W. Lück [@Lueck-approximating Theorem 3.4 (3)], which was originally proved for cofinal towers of finite quotients. It is convenient to argue by contradiction, assuming that the limit of the left-hand side did not exist or did not equal the right-hand side.
In either case, possibly after passing to a subsequence, we assume that there exists a constant $\delta>0$ such that the following gap estimate holds for all $n\in{{\mathbb{N}}}$: $$\left|\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(H_n)-\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(H_\infty)\right|\,\geq\,2\delta.$$ By induction, we show that there exists a cofinal tower of finite quotients of $G$ $$G\to\cdots\to \Gamma'_n\to\cdots\to \Gamma'_2\to\Gamma'_1,$$ with the following properties: For all $n\in{{\mathbb{N}}}$, we have that $\Gamma'_n$ is a further quotient of $\Gamma_n$, and moreover, $$\left|\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma'_n)}\left(H'_{n}\right)- \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}\left(H_n\right)\right|\,<\,\delta,$$ where $H'_n$ is the induced matrix of $H_n$ over ${{\mathbb C}}\Gamma'_n$. For $n$ equal to $1$, take a cofinal tower of finite quotients of $\Gamma_1$: $$\Gamma_1\to\cdots\to\Gamma_{1,j}\to\cdots\to\Gamma_{1,2}\to\Gamma_{1,1}.$$ Denote the induced matrix of $H_1$ over ${{\mathbb C}}\Gamma_{1,j}$ by $H_{1,j}$. Since $H_{1,j}$ is positive with spectrum bounded away from $0$ by at least $\epsilon$, Lück’s theorem implies $$\lim_{j\to\infty} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_{1,j})}\left(H_{1,j}\right) \,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_1)}\left(H_{1}\right),$$ so we choose $\Gamma'_1$ to be the quotient $\Gamma_{1,j}$ for a sufficiently large $j$. Suppose by induction that $\Gamma'_n$ has been constructed for some $n\in{{\mathbb{N}}}$. To construct $\Gamma'_{n+1}$, we take a tower of finite quotients $$\Gamma_{n+1}\to\cdots\to\Gamma_{n+1,j}\to\cdots\to\Gamma_{n+1,2}\to\Gamma_{n+1,1}$$ in the same fashion as above, but also require the first term $\Gamma_{n+1,1}$ to be $\Gamma'_n$. The same construction thus yields some sufficiently large $j$ such that $\Gamma_{n+1,j}$ can be chosen as $\Gamma'_{n+1}$. This completes the induction.
Provided with the new tower, Lück’s theorem again implies $$\lim_{n\to\infty} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma'_n)}\left(H'_n\right)\,=\, \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(H_{\infty}\right).$$ Therefore, for sufficiently large $n$, $$\left|\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}\left(H_n\right) -\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(H_{\infty}\right)\right|\,<\,2\delta.$$ This contradicts the assumed gap estimate, and hence completes the proof. Multiplicatively convex functions {#Sec-mConvexFunction} ================================ In this section, we give an introduction to multiplicatively convex functions. In subsequent sections, such functions arise naturally as we take the regular Fuglede–Kadison determinants of matrices under $L^2$–Alexander twists. Let $(a,b)\subset{{\mathbb R}}_+$ be an interval of positive real numbers. A function $f:(a,b)\to[0,+\infty)$ is said to be *multiplicatively convex* if for all points $t_0,t_1\in(a,b)$ and every constant $\lambda\in(0,1)$, $$f(t_0^\lambda\cdot t_1^{1-\lambda})\,\leq\, f(t_0)^\lambda\cdot f(t_1)^{1-\lambda}.$$ The product of two multiplicatively convex functions is again multiplicatively convex. Furthermore, if $f(t)$ is multiplicatively convex, then for any constant $r\in{{\mathbb R}}_+$, both $f(t^{\pm r})$ and $f(t)^r$ are multiplicatively convex as well. \[zeroOrNot\] If a function $f:{{\mathbb R}}_+\to [0,+\infty)$ is multiplicatively convex, then $f$ is continuous. Moreover, $f$ is either the constant function $0$ or nowhere zero. If $f$ equals zero at some point $c$, it is clear from the definition that $f$ has to be the constant function $0$. When $f$ is nowhere zero, $\log\circ f\circ \exp$ is a convex function on ${{\mathbb R}}$. In either case, $f$ is continuous.
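For illustration only: the defining inequality can be checked numerically on a sample function. The Python sketch below (standard library only; the sample function $f(t)=2\,t\max(t,1.5)$ and the tolerances are ad hoc) samples random points; note that $\log f(e^s)=\log 2 + s + \max(s,\log 1.5)$ is convex in $s$, so no violations should occur:

```python
import random

def f(t):
    # f(t) = 2 * t * max(t, 1.5): nowhere zero, and its logarithm in
    # log t is piecewise linear with nondecreasing slopes 1 and 2
    return 2.0 * t * max(t, 1.5)

random.seed(0)
violations = 0
for _ in range(1000):
    t0 = random.uniform(0.1, 10.0)
    t1 = random.uniform(0.1, 10.0)
    lam = random.random()
    lhs = f(t0 ** lam * t1 ** (1.0 - lam))
    rhs = f(t0) ** lam * f(t1) ** (1.0 - lam)
    if lhs > rhs * (1.0 + 1e-9):   # small relative slack for rounding
        violations += 1
```

Since this $f$ is nowhere zero, the proposition also predicts that it is continuous, as is visible from the formula.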
\[mMidpointConvex\] If $f:{{\mathbb R}}_+\to [0,+\infty)$ is multiplicatively mid-point convex and upper semi-continuous, namely, - for every pair of points $t_0,t_1\in{{\mathbb R}}_+$, $f(\,\sqrt{t_0t_1}\,)\leq \sqrt{f(t_0)\cdot f(t_1)}$, and - for every point $t_0\in {{\mathbb R}}_+$, $\limsup_{t\to t_0} f(t)\leq f(t_0)$, then $f$ is multiplicatively convex. Given any $t_0\in{{\mathbb R}}_+$, let $\{t_n\in{{\mathbb R}}_+\}_{n\in{{\mathbb{N}}}}$ be a sequence of points such that $t_n$ converges to $t_0$ and $f(t_n)$ converges to $\liminf_{t\to t_0} f(t)$. We have $$f(t_0)^2\leq \limsup_{n\to\infty} f(t_n)f(t_0^2/t_n)\leq\liminf_{t\to t_0}f(t)\cdot\limsup_{t\to t_0} f(t)\leq f(t_0)^2.$$ Then $\liminf_{t\to t_0}f(t)=\limsup_{t\to t_0}f(t)=f(t_0)$. It follows that $f$ is continuous. It is clear that $f$ is everywhere positive unless $f$ is constantly zero. When $f$ is everywhere positive, we may take $F=\log\circ f\circ \exp$ which is mid-point convex and continuous, so it is well known that $F$ is convex, or equivalently, that $f$ is multiplicatively convex. \[degree-mConvex-eBounded\] Let $(a,b)\subset{{\mathbb R}}_+$ be an interval of positive real numbers. A nowhere zero multiplicatively convex function $f:(a,b)\to(0,+\infty)$ is said to have *bounded exponent* if there exists some positive constant $R$ such that for all pairs of distinct points $t_0,t_1\in(a,b)$, $$\left|\frac{\log f(t_1)-\log f(t_0)}{\log t_1-\log t_0}\right|\,\leq\, R.$$ For multiplicatively convex functions, the growth bound degree (Definition \[degree-b\]) can be characterized by the limit exponents: \[degree-bCharacterization\] Suppose that $f:{{\mathbb R}}_+\to[0,+\infty)$ is a nowhere zero multiplicatively convex function. Then the growth bound degree $\mathrm{deg}^{\mathtt{b}}(f)\in{{\mathbb R}}$ exists if and only if $f$ has bounded exponent. 
Moreover, in this case, the following equalities hold true: $$\mathrm{deg}^{\mathtt{b}}_{+\infty}(f)\,=\,\lim_{t_0,t_1\to +\infty} \frac{\log f(t_0)-\log f(t_1)}{\log t_0-\log t_1},$$ and $$\mathrm{deg}^{\mathtt{b}}_{0+}(f)\,=\,\lim_{t_0,t_1\to 0+} \frac{\log f(t_0)-\log f(t_1)}{\log t_0-\log t_1}.$$ We show that the equalities hold if $\mathrm{deg}^{\mathtt{b}}(f)\in{{\mathbb R}}$ exists. If there exists $D_{+\infty}\in{{\mathbb R}}$ such that $\lim_{t\to+\infty}f(t)\cdot t^{-D_{+\infty}}\,=\,0$, then $\log f(t)$ is less than or equal to $D_{+\infty}\log t$ for all sufficiently large $t\in{{\mathbb R}}_+$. For all $t_0,t_1\in {{\mathbb R}}_+$, by the multiplicative convexity of $f$, $$\begin{aligned} \frac{\log f(t_0)-\log f(t_1)}{\log t_0-\log t_1} &\leq&\limsup_{t\to +\infty}\frac{\log f(t)-\log f(t_1)}{\log t-\log t_1}\\ &\leq&\limsup_{t\to +\infty}\frac{D_{+\infty}\log t-\log f(t_1)}{\log t-\log t_1}\\ &=& D_{+\infty}. \end{aligned}$$ Set $$d_{+\infty}\,=\,\lim_{t_0,t_1\to +\infty} \frac{\log f(t_0)-\log f(t_1)}{\log t_0-\log t_1}\,\in\,{{\mathbb R}}.$$ It is easy to see that for any constant $\delta>0$, $$\lim_{t\to+\infty} f(t)\cdot t^{-(d_{+\infty}+\delta)}=0.$$ Consequently, $$\mathrm{deg}^{\mathtt{b}}_{+\infty}(f)\,=\,d_{+\infty}.$$ The equality for $0+$ can be proved in a similar way. We have shown the ‘only-if’ direction. The existence of an exponent bound implies the existence of $d_{+\infty}$ and $d_{0+}$ in ${{\mathbb R}}$, so $$d_{0+}-1\,<\mathrm{deg}^{\mathtt{b}}_{0+}(f)\,\leq\, \mathrm{deg}^{\mathtt{b}}_{+\infty}(f)\,<\, d_{+\infty}+1.$$ This shows the ‘if’ direction.   1. A *monomial function* on an interval $(a,b)$ is a function of the form $f(t)=Ct^r$ for some constants $C\in {{\mathbb R}}_+$ and $r\in {{\mathbb R}}$.
Such a function is *multiplicatively linear* in the sense that for all points $t_0,t_1\in(a,b)$ and for every constant $\lambda\in(0,1)$, $$f(t_0^{1-\lambda}\cdot t_1^{\lambda})\,=\, f(t_0)^{1-\lambda}\cdot f(t_1)^{\lambda}.$$ 2. A *piecewise monomial function* on an interval $(a,b)$ is a continuous function $f:(a,b)\to(0,+\infty)$ such that for finitely many points $a=c_0<c_1<\cdots<c_{n-1}<c_n=b$, the function is a monomial $C_it^{r_i}$ on the subinterval $(c_{i-1},c_i)$ where $i$ runs over $1,\cdots,n$. Such a continuous function is multiplicatively convex if and only if $r_1\leq r_2\leq\cdots\leq r_n$. 3. Given any Laurent polynomial $$p(z)\,=\,D\cdot z^n\cdot \prod_{i=1}^{l} (z-b_i)\in {{\mathbb C}}[z,z^{-1}],$$ with a leading coefficient $D\in{{\mathbb C}}^\times$ and nontrivial zeros $b_i\in {{\mathbb C}}^\times$, the function $$M(p(z);\,t)\,=\,|D|\cdot t^{n}\cdot\prod_{i=1}^{l}\max(t,|b_i|),$$ of the variable $t\in{{\mathbb R}}_+$, is piecewise monomial and multiplicatively convex. Multiplicative convexity and exponent bound {#Sec-mConvex-eBounded} =========================================== In this section, we show that residually finite $L^2$–Alexander twists result in multiplicatively convex determinant functions with bounded exponents. \[mConvex-eBounded\] Given any admissible triple $(\pi,\phi,\gamma)$ over ${{\mathbb R}}$ and any square matrix $A$ over ${{\mathbb C}}\pi$, denote by $V:\,{{\mathbb R}}_+\to[0,+\infty)$ the regular Fuglede–Kadison determinant function $$V(t)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(\kappa(\phi,\gamma,t)(A)\right),$$ where $G$ is the target group of $\gamma$ and $\kappa(\phi,\gamma,t)$ is the induced change of coefficients. Suppose that $G$ is finitely generated and residually finite. Then $V(t)$ is either constantly zero or multiplicatively convex with bounded exponent.
Moreover, there exists a constant $R(A,\phi)\in[0,+\infty)$ depending only on $A$ and $\phi$ so that $$\mathrm{deg}^{\mathtt{b}}(V)\,\leq\,R(A,\phi).$$ The rest of this section is devoted to the proof of Theorem \[mConvex-eBounded\]. The degree bound ---------------- For any $p\times p$ matrix $A$ over ${{\mathbb C}}\pi$, we can decompose $A$ uniquely as a sum: $$A\,=\,\sum_{g\in\pi} g\cdot A_g$$ where $A_g$ are $p\times p$ matrices over ${{\mathbb C}}$ and only finitely many $A_g$ are nonzero. Given any homomorphism $\phi\in \mathrm{Hom}(\pi,{{\mathbb R}})$, we define $$R(A,\phi)\,=\,p\cdot\left(\max_{A_g\neq\mathbf{0}} \phi(g)-\min_{A_g\neq\mathbf{0}} \phi(g)\right).$$ The quantity $R(A,\phi)$ behaves well under operations on the matrix and on the cohomology class. In fact, we observe the following elementary properties. The proof is straightforward, so we omit it. 1. For all $A\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}})\subset{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)$, $R(A,\phi)=0$. 2. For all $A,B\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)$, $$R(AB,\phi)\leq R(A,\phi)+R(B,\phi)$$ and $$R(A+B,\phi)\leq \max(R(A,\phi),R(B,\phi)).$$ 3. For all $A\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)$, $c\in {{\mathbb R}}$, and $\phi,\psi\in \mathrm{Hom}(\pi,{{\mathbb R}})$, $$R(A,c\phi)\,=\,|c|\cdot R(A,\phi)$$ and $$R(A,\phi+\psi)\leq R(A,\phi)+R(A,\psi).$$ 4. Let $\gamma:\pi\to G$ be a group homomorphism. For all $A\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)$ and $\xi\in H^1(G;{{\mathbb R}})$, $$R(A,\gamma^*\xi)\geq R(\gamma_*A,\xi).$$ The following lemma can be combined with Lemma \[degree-bCharacterization\] to yield the degree bound, once we have shown that $V(t)$ is multiplicatively convex.
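For illustration only: when $\pi={{\mathbb Z}}$ and $\phi(n)=n$, the quantity $R(A,\phi)$ is just $p$ times the spread of the exponents appearing in $A$, and property 2 can be checked on a small example. A Python sketch, assuming numpy; the encoding of matrices over ${{\mathbb C}}\pi$ as exponent-indexed blocks and the helper names are ad hoc:

```python
import numpy as np

def mat_mul(X, Y):
    """Multiply matrices over C[Z], stored as {exponent: p x p block},
    i.e. A = sum_n g^n * A_n with finitely many nonzero blocks A_n."""
    out = {}
    for m, a in X.items():
        for n, b in Y.items():
            out[m + n] = out.get(m + n, np.zeros_like(a)) + a @ b
    return {k: v for k, v in out.items() if np.any(v != 0)}

def R(X, p):
    """R(A, phi) for pi = Z and phi(n) = n: p times the difference of the
    largest and smallest exponents with nonzero block."""
    exps = list(X.keys())
    return p * (max(exps) - min(exps))

p = 2
A = {0: np.eye(p), 2: np.array([[0.0, 1.0], [0.0, 0.0]])}
B = {-1: np.eye(p), 1: np.array([[1.0, 0.0], [2.0, 0.0]])}

# property 2: R(AB, phi) <= R(A, phi) + R(B, phi)
subadditive = R(mat_mul(A, B), p) <= R(A, p) + R(B, p)
```

In this example equality holds, since the extreme exponents of $A$ and $B$ do not cancel in the product.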
\[eBoundEstimate\] Given any admissible triple $(\pi,\phi,\gamma)$ over ${{\mathbb R}}$ and any square matrix $A$ over ${{\mathbb C}}\pi$, write $$V(t)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(\kappa(\phi,\gamma,t)(A)\right)$$ where $G$ is the target group of $\gamma$. Then the following statement holds true. For every constant $R'>R(A,\phi)$, there exist constants $D_{+\infty},D_{0+}\in{{\mathbb R}}$ such that $D_{+\infty}-D_{0+}< R'$. Moreover, $$\lim_{t\to+\infty}V(t)\cdot t^{-D_{+\infty}}\,=\,0,$$ and $$\lim_{t\to0+} V(t)\cdot t^{-D_{0+}}\,=\,0.$$ We adopt the notation introduced at the beginning of this subsection. Given $R'>R(A,\phi)$, we denote by $5\delta$ the difference $R'-R(A,\phi)$. Take $$D_{+\infty}\,=\,2\delta+p\cdot\max_{A_g\neq\mathbf{0}}\,\phi(g),$$ and $$D_{0+}\,=\,-2\delta+p\cdot\min_{A_g\neq\mathbf{0}}\,\phi(g).$$ For sufficiently large $t\in{{\mathbb R}}_+$, the operator norm of $t^{-D_{+\infty}+\delta}\cdot \kappa(\phi,\gamma,t)(A)$ is bounded by $1$. Therefore, $$0\,\leq\,\limsup_{t\to+\infty}V(t)\cdot t^{-D_{+\infty}} \,\leq\, 1^p\cdot \lim_{t\to+\infty} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(t^{-\delta}\cdot\mathbf{1}) \,=\,\lim_{t\to+\infty}t^{-p\delta}\,=\,0.$$ This yields the asserted limit for $t\to+\infty$. The limit for $t\to0+$ can be proved in a similar way. Multiplicative convexity for virtually abelian twists ----------------------------------------------------- In this subsection, we prove Theorem \[mConvex-eBounded\] under the assumption that $G$ is finitely generated and virtually abelian. Given an admissible triple $(\pi,\phi,\gamma)$ over ${{\mathbb R}}$ and a parameter value $t\in{{\mathbb R}}_+$, for any $p\times p$ matrix $A$ over ${{\mathbb C}}\pi$, we define $$A_G(t)\,=\,\kappa(\phi,\gamma,t)(A)\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}G)$$ and write $$V(t)\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(A_G(t)).$$ \[mConvex-eBounded-VA\] Let $(\pi,\phi,\gamma)$ be an admissible triple over ${{\mathbb R}}$.
Suppose that $G$ is finitely generated and virtually abelian. Then for every matrix $A\in {{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)$, the function $V(t)$ is multiplicatively convex. The following lemma treats the essential case where $G$ is finitely generated and free abelian. \[mConvex-eBounded-FA\] Let $(\pi,\phi,\gamma)$ be an admissible triple over ${{\mathbb R}}$. Suppose that $\gamma$ is an isomorphism onto a finitely generated free abelian group $G$. Then for every $A\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)$, the function $V(t)$ is multiplicatively convex. For any admissible triple $(\pi,\phi,\gamma)$ over ${{\mathbb R}}$, the image $\phi(\pi)$ is finitely generated as $G$ is finitely generated and free abelian. Take a basis $r_1,\cdots,r_d\in{{\mathbb R}}_+$ of the ${{\mathbb Q}}$-vector space spanned by $\phi(\pi)\subset{{\mathbb R}}$. Possibly after dividing each $r_i$ by a positive integer, we can decompose $\phi$ as a sum: $$\phi\,=\,r_1\phi_1+\cdots+r_d\phi_d$$ where $\phi_i$ are homomorphisms in $\mathrm{Hom}(\pi,{{\mathbb Z}})$. We fix such a basis for the rest of the proof. Consider a multivariable version of twist as follows. Given any vector $\vec{t}=(t_1,\cdots,t_d)\in {{\mathbb R}}_+^d$, there is a homomorphism of rings: $$\kappa(\phi,\gamma,\vec{t}):\,{{\mathbb Z}}\pi\longrightarrow {{\mathbb R}}G$$ defined uniquely by $$\kappa(\phi,\gamma,\vec{t})(g)\,=\,t_1^{\phi_1(g)}\cdots t_d^{\phi_d(g)}\gamma(g)$$ for all $g\in\pi$ via linear extension over ${{\mathbb Z}}$. There are induced homomorphisms between matrix algebras over ${{\mathbb C}}\pi$ and ${{\mathbb C}}G$ as before. 
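For illustration only: the multivariable twist just defined simply rescales each monomial coefficient, so twisting and then evaluating agrees with substituting scaled variables, as is exploited below. A minimal Python sketch for $\pi=G={{\mathbb Z}}^2$, $\gamma=\mathrm{id}$, with $\phi_1,\phi_2$ the coordinate projections (numpy assumed; helper names ad hoc):

```python
import numpy as np

# a Laurent polynomial in z1, z2, stored as {(m, n): coefficient}; this
# plays the role of an entry of A_G for pi = G = Z^2 and gamma = id
p = {(0, 0): 1.0, (1, 0): 2.0, (-1, 2): -3.0}

def evaluate(q, z1, z2):
    return sum(c * z1 ** m * z2 ** n for (m, n), c in q.items())

def twist(q, t1, t2):
    """kappa(phi, gamma, (t1, t2)): scale the (m, n) coefficient
    by t1**m * t2**n."""
    return {(m, n): c * t1 ** m * t2 ** n for (m, n), c in q.items()}

t1, t2 = 1.3, 0.7
z1, z2 = np.exp(0.4j), np.exp(1.1j)   # sample points on the unit torus

# twisting, then evaluating, agrees with substituting t_j * z_j
ev_twisted = evaluate(twist(p, t1, t2), z1, z2)
ev_substituted = evaluate(p, t1 * z1, t2 * z2)
```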
We define $$A_G(\vec{t})\,=\,\kappa(\phi,\gamma,\vec{t})(A)\,\in\, {{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}G).$$ Denote $$W(\vec{t})\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(A_G(\vec{t})\right).$$ Then $$V(t)\,=\,W((t^{r_1},\cdots,t^{r_d})).$$ On the other hand, we identify $A_G(\vec{t})$ as a family of $p\times p$ matrices over the multivariable Laurent polynomial ring ${{\mathbb C}}[z_1^{\pm1},\cdots,z_l^{\pm1}]$, where $l$ is the rank of $G$. Denote by $\vec{1}$ the diagonal vector $(1,\cdots,1)\in{{\mathbb R}}_+^d$. If we write the Laurent polynomial matrix at $\vec{1}$ as: $$A_G(\vec{1})\,=\,A_G(\vec{1})\,(z_1,\cdots,z_l),$$ then at $\vec{t}$ the Laurent polynomial matrix can be computed by: $$A_{G}(\vec{t})\,=\,A_{G}(\vec{1})\, (\tilde{t}_1z_1,\cdots,\tilde{t}_lz_l)$$ where, for $j$ running over $1,\cdots,l$, $$\tilde{t}_j=t_1^{\phi_1(z_j)}\cdot\cdots\cdot t_d^{\phi_d(z_j)}.$$ In fact, the relation can be checked by looking at the monomials in each entry of $A_{G}(\vec{1})$. The effect of the twist is that in any monomial, each $z_j$ that appears contributes an exponent $\phi_i(z_j)$ to the associated coefficient $t_i$. The value of $W(\vec{t})$ can be computed by the (multiplicative) Mahler measure of the usual determinant of the Laurent polynomial matrix $A_G(\vec{t})$. Precisely, the usual determinant gives rise to a Laurent polynomial for the square matrix at $\vec{1}$: $$p_A(z_1,\cdots,z_l)\,=\mathrm{Det}_{{{\mathbb C}}[z_1^{\pm1},\cdots,z_l^{\pm1}]}\left(A_G(\vec{1})\right),$$ so $$p_A(\tilde{t}_1z_1,\cdots,\tilde{t}_lz_l)\,=\mathrm{Det}_{{{\mathbb C}}[z_1^{\pm1},\cdots,z_l^{\pm1}]}\left(A_G(\vec{t})\right).$$ By [@DFL-torsion Lemma 2.6] (cf.
[@Lueck-book Exercise 3.8] and [@Raimbault Section 1.2]), if $p_A$ is not the zero polynomial, $$\begin{aligned} W(\vec{t})&=&M(p_A(\tilde{t}_1z_1,\cdots,\tilde{t}_lz_l))\\ &=&\exp\left[\frac{1}{(2\pi)^l}\cdot\int_0^{2\pi}\cdots\int_0^{2\pi} \log\left|p_A(\tilde{t}_1e^{\mathbf{i}\theta_1},\cdots,\tilde{t}_l e^{\mathbf{i}\theta_l})\right|{{\mathrm{d}}}\theta_1\cdots{{\mathrm{d}}}\theta_l\right]. \end{aligned}$$ Note that if $p_A$ is the zero polynomial, then $W(\vec{t})$ and $V(t)$ are constantly zero, so the multiplicative convexity of $V(t)$ holds in this trivial case. We assume in the rest of the proof that $p_A$ is not the zero polynomial. First consider the case when $(\pi,\phi,\gamma)$ is an admissible triple over ${{\mathbb Q}}$. In this case, $d$ is at most $1$. We can assume that $d$ equals $1$, since otherwise $V(t)$ is a constant function. There is a split short exact sequence of free abelian groups: $$1\longrightarrow \gamma(\mathrm{Ker}(\phi))\longrightarrow G\stackrel{\phi\circ\gamma^{-1}}{\longrightarrow} \phi(\pi)\longrightarrow 1.$$ We may choose a basis of the free abelian group $G$ such that $\phi(z_l)=mr_1$ for some nonzero integer $m$ and $\phi(z_i)=0$ for all other $z_i$. For any given values $\theta_1,\cdots,\theta_{l-1}\in[0,2\pi]$, we introduce the notations $$q_{\theta_1,\cdots,\theta_{l-1}}(z)\,=\,p_A(e^{\mathbf{i}\theta_1},\cdots,e^{\mathbf{i}\theta_{l-1}},z)\in{{\mathbb C}}[z,z^{-1}],$$ and $$v_{\theta_1,\cdots,\theta_{l-1}}(t)\,=\,\log M(q_{\theta_1,\cdots,\theta_{l-1}}(t^{mr_1}z)).$$ Then $$\begin{aligned} \log V(t)&=& \log W(t^{r_1})\\ &=& \frac{1}{(2\pi)^l}\cdot\int_0^{2\pi}\cdots\int_0^{2\pi} \log\left|p_A(e^{\mathbf{i}\theta_1},\cdots,e^{\mathbf{i}\theta_{l-1}},t^{mr_1}e^{\mathbf{i}\theta_l})\right|\,{{\mathrm{d}}}\theta_1\cdots{{\mathrm{d}}}\theta_l\\ &=&\frac{1}{(2\pi)^{l-1}}\cdot\int_0^{2\pi}\cdots\int_0^{2\pi} v_{\theta_1,\cdots,\theta_{l-1}}(t)\,{{\mathrm{d}}}\theta_1\cdots{{\mathrm{d}}}\theta_{l-1}.
\end{aligned}$$ For any one-variable Laurent polynomial $q\in{{\mathbb C}}[z,z^{-1}]$, the Mahler measure can be computed using Jensen’s formula: $$M(q(z))=|D|\cdot\prod_{i=1}^{l}\max(1,|b_i|),$$ where the constants $D\in{{\mathbb C}}$ and $n\in{{\mathbb Z}}$ and $b_i\in{{\mathbb C}}$ are given by any factorization $$q(z)=D\cdot z^n\cdot \prod_{i=1}^{l} (z-b_i)\in {{\mathbb C}}[z,z^{-1}].$$ It is evident that for any such $q$, the following function in $t\in{{\mathbb R}}_+$ is multiplicatively convex: $$M(q(t^{mr_1}z))\,=\, |D|\cdot t^{nmr_1}\cdot\prod_{i=1}^{l}\max(t^{mr_1},|b_i|),$$ possibly constantly zero if $q$ is $0$. Therefore, for all pairs of distinct points $T_0,T_1\in{{\mathbb R}}_+$, and all constants $0<\lambda<1$, we have the comparison: $$(1-\lambda)\cdot v_{\theta_1,\cdots,\theta_{l-1}}(T_0)+\lambda\cdot v_{\theta_1,\cdots,\theta_{l-1}}(T_1)\, \geq\,v_{\theta_1,\cdots,\theta_{l-1}}(T_0^{1-\lambda}\cdot T_1^\lambda).$$ Integrating both sides and taking the exponential yields $$V(T_0)^{1-\lambda}\cdot V(T_1)^\lambda\,\geq\,V(T_0^{1-\lambda}\cdot T_1^\lambda).$$ In other words, $V(t)$ is multiplicatively convex. For the general case over ${{\mathbb R}}$, denote by $\vec{r}$ the vector $(r_1,\cdots,r_d)\in{{\mathbb R}}_+^d.$ Take a sequence of rational vectors $\{\,\vec{r}^{(n)}\in{{\mathbb Q}}_+^d\,\}$ which converges to $\vec{r}$ in ${{\mathbb R}}^d_+$ as $n$ tends to infinity. Observe that for each $\vec{r}^{(n)}$, the function $$V_n(t)\,=\, W((t^{r^{(n)}_1},\cdots,t^{r^{(n)}_d}))$$ is equal to the regular Fuglede-Kadison determinant of the matrix $$\kappa(\phi^{(n)},\gamma,t)(A)\,\in\,{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}G),$$ where $$\phi^{(n)}=r_1^{(n)}\phi_1+\cdots+r_d^{(n)}\phi_d$$ is a homomorphism in $\mathrm{Hom}(\pi,{{\mathbb Q}})$. Then $V_n(t)$ are multiplicatively convex by the rational case that we have proved. 
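For illustration only: Jensen's formula can be checked numerically against the defining circle integral of the Mahler measure. A short Python sketch (numpy assumed; the polynomial $q(z)=(z-0.5)(z-3)$, with one root inside and one outside the unit circle, is an ad-hoc example):

```python
import numpy as np

# q(z) = (z - 0.5)(z - 3) = z^2 - 3.5 z + 1.5; Jensen's formula predicts
# M(q) = |1| * max(1, 0.5) * max(1, 3) = 3
coeffs = [1.0, -3.5, 1.5]
roots = np.roots(coeffs)
jensen = abs(coeffs[0]) * float(np.prod([max(1.0, abs(r)) for r in roots]))

# the integral (1 / 2 pi) * int_0^{2 pi} log|q(e^{i theta})| d theta,
# approximated by an equispaced sum (the integrand is smooth here, since
# q has no zeros on the unit circle)
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
vals = np.polyval(coeffs, np.exp(1j * theta))
mahler = float(np.exp(np.mean(np.log(np.abs(vals)))))
```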
On the other hand, as $\vec{t}$ varies over ${{\mathbb R}}_+^d$, the coefficients of the Laurent polynomials $p_A(\tilde{t}_1z_1,\cdots,\tilde{t}_lz_l)$ vary continuously, so the Mahler measure of the Laurent polynomials varies continuously, by a result of D. Boyd [@Boyd]. In particular, for every $t\in{{\mathbb R}}_+$, $$\lim_{n\to\infty} V_n(t)\,=\,V(t).$$ Given any constants $T_0,T_1\in{{\mathbb R}}_+$ and $0<\lambda<1$, we have shown the multiplicative convexity for the rational case: $$V_n(T_0)^{1-\lambda}\cdot V_n(T_1)^{\lambda}\geq V_n(T_0^{1-\lambda}\cdot T_1^{\lambda}).$$ Taking the limit as $n\to\infty$, $$V(T_0)^{1-\lambda}\cdot V(T_1)^{\lambda}\geq V(T_0^{1-\lambda}\cdot T_1^{\lambda}).$$ In other words, the function $V(t)$ is multiplicatively convex. This completes the proof. Take a free abelian subgroup $\tilde{G}$ of $\gamma(\pi)$ of finite index, which is hence finitely generated. Denote by $\tilde{\pi}$ the preimage $\gamma^{-1}(\tilde{G})$. Take the restrictions $\tilde{\phi}$, $\tilde{\gamma}$ of the given homomorphisms to $\tilde{\pi}$ accordingly. The restriction of $A$ to ${{\mathbb C}}\tilde{\pi}$, denoted as $\mathrm{res}^{\tilde\pi}_\pi A$, is a square matrix over ${{\mathbb C}}\tilde{\pi}$ of size $p\cdot[\pi:\tilde{\pi}]$. We observe that the operation of restriction commutes with $\kappa(\gamma,\phi,t)$ and $*$. Denote by $\tilde{V}(t)$ the corresponding determinant function for the admissible triple $(\tilde{\pi},\tilde{\phi},\tilde{\gamma})$ and the matrix $\mathrm{res}^{\tilde\pi}_\pi A$.
By basic properties of regular Fuglede–Kadison determinants, $$\begin{aligned} V(t)&=&\mathrm{det}^\mathtt{r}_{\mathcal{N}(G)}(A_G(t))\\ &=&\mathrm{det}^\mathtt{r}_{\mathcal{N}(\gamma(\pi))}\left(\mathrm{res}^{\gamma(\pi)}_G\,(A_G(t))\right)\\ &=&\mathrm{det}^\mathtt{r}_{\mathcal{N}(\tilde{G})}\left(\mathrm{res}^{\tilde{G}}_G\,(A_G(t))\right)^{1/[\gamma(\pi):\tilde{G}]}\\ &=&\mathrm{det}^\mathtt{r}_{\mathcal{N}(\tilde{G})}\left((\mathrm{res}^{\tilde\pi}_\pi A)_{\tilde{G}}(t)\right)^{1/[\pi:\tilde{\pi}]}\\ &=&\tilde{V}(t)^{1/[\pi:\tilde{\pi}]}. \end{aligned}$$ Note that $\tilde{V}(t)$ is constantly zero if and only if $V(t)$ is constantly zero. Suppose that $\tilde{V}(t)$ is not constantly zero. By Lemma \[mConvex-eBounded-VA\], the function $\tilde{V}(t)$ is multiplicatively convex, so $V(t)$ is multiplicatively convex as well. This completes the proof. Multiplicative convexity for residually finite twists ----------------------------------------------------- Let $(\pi,\phi,\gamma)$ be an admissible triple over ${{\mathbb R}}$. Suppose that the target group $G$ of $\gamma$ is finitely generated and residually finite. Take a cofinal tower of normal finite index subgroups of $G$: $$G\geq N_1\geq N_2\geq \cdots\geq N_n\geq\cdots.$$ Here the tower being cofinal means that $$\bigcap_{n=1}^\infty N_n\,=\,\{\,\mathrm{id}_G\,\}.$$ Fix a homomorphism $G\to {{\mathbb R}}$ via which $\phi$ factors through $\gamma$. Denote by $K_n$ the kernel of $N_n\to H_1(N_n;{{\mathbb Q}})$, which remains normal in $G$. Let $$\Gamma_n\,=\,G\,/\,K_n.$$ Composing $\gamma$ with the quotient maps $G\to\Gamma_n$ induces homomorphisms, denoted as $$\gamma_n:\pi\to\Gamma_n.$$ It is clear that the groups $\Gamma_n$ are all finitely generated and virtually abelian. Therefore, we obtain a tower of admissible triples over ${{\mathbb R}}$: $$\{(\pi,\phi,\gamma_n)\}_{n\in{{\mathbb{N}}}}$$ with finitely generated virtually abelian targets.
Given any $p\times p$ matrix $A$ over ${{\mathbb C}}\pi$, any parameter value $T\in{{\mathbb R}}_+$, and any constant $\epsilon\in[0,+\infty)$, we introduce a positive operator on $\ell^2(\Gamma_n)^{\oplus p}$: $$H_{n,\epsilon}(T)\,=\,\left(\kappa(\phi,\gamma_n,T)(A)\right)^*\left(\kappa(\phi,\gamma_n,T)(A)\right) +\epsilon\cdot\mathbf{1}$$ which is expressed as a $p\times p$ matrix over ${{\mathbb C}}\Gamma_n$. When the subscript $n$ is replaced with the symbol $\infty$, we adopt the convention that $\Gamma_\infty=G$ and $\gamma_\infty=\gamma$. Let an admissible triple $(\pi,\phi,\gamma)$ over ${{\mathbb R}}$ and a square matrix $A$ over ${{\mathbb C}}\pi$ be given. We adopt the assumptions and notations of this subsection. Possibly after replacing $G$ with the image of $\gamma$, which does not affect the value of the determinant, we may further assume that $\gamma$ is surjective. Then there are uniquely induced homomorphisms $\gamma_{n*}\phi\in\mathrm{Hom}(\Gamma_n,{{\mathbb R}})$ whose pull-backs through $\gamma_n$ are $\phi$, and $(\Gamma_n,\gamma_{n*}\phi,\mathrm{id}_{\Gamma_n})$ are admissible triples. For parameters $s,T,t\in {{\mathbb R}}_+$, we write $$W_{n,\epsilon}(s,T)\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}\left(\,\kappa(\gamma_{n*}\phi,\mathrm{id}_{\Gamma_n},s)(H_{n,\epsilon}(T))\,\right),$$ and $$V_n(t)\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\,\kappa(\phi,\gamma_n,t)(A)\,).$$ Observe that $\kappa(\gamma_{n*}\phi,\mathrm{id}_{\Gamma_n},s)\circ\kappa(\phi,\gamma_n,t)$ equals $\kappa(\phi,\gamma_n,st)$. Therefore, for any given $T_0,T_1\in{{\mathbb R}}_+$, we have the relations: $$W_{n,0}(1,\sqrt{T_0T_1})\,=\,V_n(\sqrt{T_0T_1})^2$$ and $$W_{n,0}(\sqrt{T_1/T_0},\sqrt{T_0T_1})\,=\,V_n(T_0)V_n(T_1),$$ which hold for both $n\in{{\mathbb{N}}}$ and $\infty$.
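These two relations can be sanity-checked numerically in the simplest scalar model, assuming $\pi=G={{\mathbb Z}}$, $\gamma=\mathrm{id}$, $\phi=\mathrm{id}$, and $A$ the $1\times1$ matrix $p(z)$ (all names below are ours, for illustration only). In this model the regular Fuglede–Kadison determinant is the Mahler measure over the unit circle, and $H_0(T)$ corresponds to the Laurent polynomial $p(T/z)\,p(Tz)$ with real coefficients:

```python
import numpy as np

# Scalar toy model: V(t) = M(p(t z)), and W(s, T) is the Mahler measure of the
# s-twisted positive element h_T(s z), where h_T(z) = p(T / z) * p(T z).
# We verify numerically:
#   W(1, sqrt(T0 T1))              == V(sqrt(T0 T1))^2
#   W(sqrt(T1 / T0), sqrt(T0 T1)) == V(T0) V(T1)

coeffs = np.array([3.0, 1.0, 2.0])          # p(z) = 3 + z + 2 z^2
theta = (np.arange(4096) + 0.5) * 2 * np.pi / 4096
w = np.exp(1j * theta)

def p(z):
    return coeffs[0] + coeffs[1] * z + coeffs[2] * z ** 2

def V(t):
    # Mahler measure of p(t z), via the circle integral
    return np.exp(np.mean(np.log(np.abs(p(t * w)))))

def W(s, T):
    # Mahler measure of h_T(s z) = p(T / (s z)) * p(T s z)
    return np.exp(np.mean(np.log(np.abs(p(T / (s * w)) * p(T * s * w)))))

T0, T1 = 0.7, 2.3
g = np.sqrt(T0 * T1)
print(W(1.0, g), V(g) ** 2)
print(W(np.sqrt(T1 / T0), g), V(T0) * V(T1))
```

The agreement reflects the factorization $W(s,T)=V(Ts)\cdot V(T/s)$ in this commutative model, which specializes to the two displayed relations.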
Note that $W_{n,\epsilon}(1,T)$ is always the regular Fuglede–Kadison determinant for a positive operator, but the twisted matrix in the expression of $W_{n,\epsilon}(s,T)$ is not self-adjoint in general. We claim that the following comparison holds for all $s,T\in{{\mathbb R}}_+$: $$W_{\infty,0}(1,T)\,\leq\,W_{\infty,0}(s,T).$$ In fact, by Lemma \[mConvex-eBounded-VA\], the function $W_{n,\epsilon}(s,T)$ is multiplicatively convex in $s\in{{\mathbb R}}_+$ for all $n\in{{\mathbb{N}}}$ and $\epsilon\in[0,+\infty)$. Observe that $H_{n,\epsilon}(T)$ is self-adjoint, so the anti-commutativity of $\kappa(\phi,\gamma_n,s)$ and $*$ yields $W_{n,\epsilon}(s,T)=W_{n,\epsilon}(s^{-1},T)$. Since a multiplicatively convex function that is symmetric under $s\mapsto s^{-1}$ attains its minimum at $s=1$, it follows that for all $\epsilon\in[0,+\infty)$ and $n\in{{\mathbb{N}}}$, $$W_{n,\epsilon}(1,T)\,\leq\,W_{n,\epsilon}(s,T).$$ Given an arbitrary $\epsilon>0$, Lemma \[stable-semicontinuous\] and the above imply $$\begin{aligned} W_{\infty,\epsilon}(1,T)&=&\lim_{n\to\infty}\,W_{n,\epsilon}(1,T)\\ &\leq& \limsup_{n\to\infty}\, W_{n,\epsilon}(s,T)\\ &\leq& W_{\infty,\epsilon}(s,T). \end{aligned}$$ As $\epsilon$ tends to $0+$, Lemma \[norm-semicontinuous\] and the above imply $$\begin{aligned} W_{\infty,0}(1,T)&=&\lim_{\epsilon\to0+}\,W_{\infty,\epsilon}(1,T)\\ &\leq& \limsup_{\epsilon\to0+}\, W_{\infty,\epsilon}(s,T)\\ &\leq& W_{\infty,0}(s,T). \end{aligned}$$ This proves the claim. Note that the family of operators $\kappa(\phi,\gamma,t)(A)$ is continuous in $t\in{{\mathbb R}}_+$ with respect to the norm topology. Lemma \[norm-semicontinuous\] implies that $V_\infty(t)$ is upper semicontinuous in $t\in{{\mathbb R}}_+$. On the other hand, the claim implies that $V_\infty(t)$ is multiplicatively mid-point convex in $t\in{{\mathbb R}}_+$. By Lemma \[mMidpointConvex\], the function $V_\infty(t)$, or $V(t)$ as in the statement of Theorem \[mConvex-eBounded\], is multiplicatively convex.
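The passage from the symmetry $W_{n,\epsilon}(s,T)=W_{n,\epsilon}(s^{-1},T)$ to the bound $W_{n,\epsilon}(1,T)\leq W_{n,\epsilon}(s,T)$ uses only that a multiplicatively convex function symmetric under $s\mapsto s^{-1}$ is minimized at $s=1$: the function $g(x)=\log W(e^x)$ is convex and even, so $g(0)\leq\tfrac{1}{2}(g(-x)+g(x))=g(x)$. A toy numerical check of this elementary fact (the sample function is ours):

```python
import numpy as np

# W_toy(s) = max(s, c) * max(1/s, c) is multiplicatively convex and satisfies
# W_toy(s) = W_toy(1/s); we check that its minimum over s is attained at s = 1.
c = 2.0

def w_toy(s):
    return max(s, c) * max(1.0 / s, c)

samples = np.exp(np.linspace(-3, 3, 601))
ok_sym = all(abs(w_toy(s) - w_toy(1.0 / s)) < 1e-9 * w_toy(s) for s in samples)
ok_min = all(w_toy(1.0) <= w_toy(s) + 1e-12 for s in samples)
print(ok_sym, ok_min)
```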
Provided the multiplicative convexity holds and $V(t)$ is nowhere zero, the exponent bound and the degree estimate $$\mathrm{deg}^{\mathtt{b}}(V)\leq R(A,\phi)$$ follow from Lemma \[eBoundEstimate\] and Lemma \[degree-bCharacterization\]. This completes the proof of Theorem \[mConvex-eBounded\]. Continuity of degree {#Sec-continuityOfDegree} ==================== In this section, we show that the growth bound degree of the regular Fuglede–Kadison determinant of $L^2$–Alexander twists varies continuously as we deform the cohomology class. \[continuityOfDegree\] Given any admissible triple $(\pi,\phi,\gamma)$ over ${{\mathbb R}}$ and any square matrix $A$ over ${{\mathbb C}}\pi$, denote by $G$ the target group of $\gamma$. For any vector $\xi\in H^1(G;\,{{\mathbb R}})$, denote by $$V_\xi(t)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(\kappa(\phi+\gamma^*\xi,\gamma,t)(A)\right)$$ the determinant function of $A$ associated with the deformed admissible triple $(\pi,\phi+\gamma^*\xi,\gamma)$. Suppose that $G$ is finitely generated and residually finite. Then the function $V_\xi(t)$ is either constantly zero for every vector $\xi\in H^1(G;\,{{\mathbb R}})$, or constantly zero for no vector. In the latter case, for all pairs of vectors $\xi,\eta\in H^1(G;{{\mathbb R}})$, $$|\mathrm{deg}^{\mathtt{b}}(V_\xi)-\mathrm{deg}^{\mathtt{b}}(V_\eta)|\,\leq\,2R(A,\gamma^*(\xi-\eta)).$$ In particular, the assignment $\xi\,\mapsto\,\mathrm{deg}^{\mathtt{b}}(V_\xi)$ defines a Lipschitz continuous function on $H^1(G;\,{{\mathbb R}})$ valued in $[0,+\infty)$. The continuity of the degree will be deduced from Theorem \[mConvex-eBounded\]. The rest of this section is devoted to the proof of Theorem \[continuityOfDegree\]. We may assume without loss of generality that $\eta\in H^1(G;\,{{\mathbb R}})$ is trivial. In fact, otherwise we can replace the reference class $\phi$ by $\phi+\gamma^*\eta$.
Hence $\xi$ and $\eta$ are replaced by $\xi-\eta$ and $0$ respectively. We adopt the following notations. Given any matrix $A\in{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}\pi)$, denote $$A_G(t)\,=\,\kappa(\phi,\gamma,t)(A)\,\in\,{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}G).$$ For any vector $\xi\in H^1(G;{{\mathbb R}})\cong\mathrm{Hom}(G;\,{{\mathbb R}})$, we consider the canonical admissible triple $(G,\xi,\mathrm{id}_G)$, so for every constant $s\in {{\mathbb R}}_+$, there is a matrix deformed from $A_G(t)$, namely: $$A_G(t,s)\,=\,\kappa(\xi,\mathrm{id}_G,s)(A_G(t))\,\in\,{{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}G).$$ We introduce $$W(t,s)\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(A_G(t,s)\right).$$ Note that $$W(t,1)=V_{0}(t)$$ and $$W(t,t)=V_\xi(t).$$ \[alwaysConstantlyZero\] If the function $V_0(t)$ is constantly zero, then for all vectors $\xi\in H^1(G;\,{{\mathbb R}})$, the function $V_\xi(t)$ is constantly zero as well. Suppose $V_0(t)$ is constantly zero. Given any constant $T_0\in{{\mathbb R}}_+$, applying Theorem \[mConvex-eBounded\] to the family of matrices $A_G(T_0,s)$, we see that $W(T_0,s)$ is multiplicatively convex in the parameter $s\in{{\mathbb R}}_+$. At $s=1$, we have $W(T_0,1)=V_0(T_0)=0$. This implies that $W(T_0,s)$ is constantly zero in $s$ by Lemma \[zeroOrNot\]. In particular, $V_\xi(T_0)=W(T_0,T_0)=0$. As $T_0\in{{\mathbb R}}_+$ is arbitrary, it follows that $V_\xi(t)$ is constantly zero. It remains to treat the case where the functions $V_\xi(t)$ are nowhere zero for all $\xi\in H^1(G;\,{{\mathbb R}})$. By Theorem \[mConvex-eBounded\], $V_\xi(t)$ are multiplicatively convex and have bounded exponent. \[topAndBottomEstimates\]  1. $|\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_\xi)-\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_0)|\,\leq\,R(A,\,\gamma^*\xi)$; 2. $|\mathrm{deg}^{\mathtt{b}}_{0+}(V_\xi)-\mathrm{deg}^{\mathtt{b}}_{0+}(V_0)|\,\leq\,R(A,\,\gamma^*\xi)$. We prove the first estimate; the second can be proved in the same way.
Given any constant $T_0\in{{\mathbb R}}_+$ and $K>0$, it follows from the multiplicative convexity of $W(T_0^{1+K},s)$ in the parameter $s\in{{\mathbb R}}_+$ that $$\left|\frac{\log W(T_0^{1+K},T_0^{1+K})-\log W(T_0^{1+K},1)}{\log T_0^{1+K} -\log 1}\right| \,\leq\, R(A_G(T_0^{1+K}),\xi)\leq R(A,\gamma^*\xi),$$ so $$\left|\log W(T_0^{1+K},T_0^{1+K})-\log W(T_0^{1+K},1)\right| \,\leq\, R(A,\gamma^*\xi)\cdot(1+K)\log T_0.$$ Similarly, $$\left|\log W(T_0,T_0)-\log W(T_0,1)\right| \,\leq\, R(A,\gamma^*\xi)\cdot\log T_0.$$ By the multiplicative convexity of $W(t,1)=V_0(t)$, for any arbitrary $\delta>0$, the following estimate holds for sufficiently large $T_0>1$ and any arbitrary $K>0$: $$\left|\frac{\log W(T_0^{1+K},1)-\log W(T_0,1)}{\log T_0^{1+K} -\log T_0}-\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_0)\right| \,<\, \delta,$$ so $$\left|\log W(T_0^{1+K},1)-\log W(T_0,1)-\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_0)K\log T_0\right|<\delta\cdot K\log T_0.$$ Therefore, for any arbitrary $\delta>0$, the following estimate holds for sufficiently large $T_0>1$ and any arbitrary $K>0$: $$\begin{aligned} &&\left|\log W(T_0^{1+K},T_0^{1+K})-\log W(T_0,T_0)-\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_0)K\log T_0\right|\\ &<&R(A,\gamma^*\xi)\cdot(2+K)\log T_0+\delta\cdot K\log T_0, \end{aligned}$$ or equivalently, $$\left|\frac{\log V_\xi(T_0^{1+K})-\log V_\xi(T_0)}{\log T_0^{1+K}-\log T_0}-\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_0)\right| \,<\,R(A,\gamma^*\xi)\cdot(1+\frac{2}{K})+\delta.$$ Take the limit as $T_0\to+\infty$, and then take the limit as $K\to +\infty$: $$\left|\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_\xi)-\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_0)\right|\,\leq\,R(A,\gamma^*\xi)+\delta.$$ As $\delta>0$ is an arbitrary constant, the estimate $$|\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_\xi)-\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_0)|\,\leq\,R(A,\,\gamma^*\xi)$$ follows. The second estimate can be done similarly using $1/T_0$ instead of $T_0$.
Combining the estimates of Lemma \[topAndBottomEstimates\], we obtain $$|\mathrm{deg}^{\mathtt{b}}(V_\xi)-\mathrm{deg}^{\mathtt{b}}(V_0)|\,\leq\,2R(A,\gamma^*\xi).$$ This completes the proof of Theorem \[continuityOfDegree\]. Asymptotics for integral matrices {#Sec-asymptotics} ================================= In this section, we give a criterion, applicable under special circumstances, for verifying that the regular Fuglede–Kadison determinant of $L^2$–Alexander twists is asymptotically monomial. Let $(\pi,\gamma,\phi)$ be an admissible triple with a countable target group $G$, and let $$G\to\cdots\to\Gamma_n\to\cdots\to\Gamma_2\to\Gamma_1$$ be a cofinal tower of quotients of $G$. Denote by $\psi_n:G\to\Gamma_n$ the quotient homomorphisms. A sequence of admissible triples $$\{(\pi,\gamma_n,\phi)\}_{n\in\mathbb{N}}$$ with target groups $\{\Gamma_n\}_{n\in\mathbb{N}}$ is said to form a *cofinal tower of quotients* of $(\pi,\gamma,\phi)$ if $\gamma_n=\psi_n\circ\gamma$ holds for every $n\in{{\mathbb{N}}}$. For simplicity, we often speak of cofinal towers of admissible triples without explicitly mentioning the cofinal tower of quotients of $G$. In the statement of the theorem below, we adopt the notation $$V_n(t)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}\left(\kappa(\phi,\gamma_n,t)(A)\right).$$ The notation $V_G(t)$ is understood similarly. \[rationalAsymptotic\] Let $(\pi,\gamma_G,\phi)$ be an admissible triple over ${{\mathbb R}}$ with a finitely generated target group $G$. Let $A$ be a square matrix over ${{\mathbb Z}}\pi$. Suppose that there exists a sequence of admissible triples $\{(\pi,\gamma_n,\phi)\}_{n\in{{\mathbb{N}}}}$ over ${{\mathbb R}}$ satisfying all the following conditions: - The target groups $\Gamma_n$ of $\gamma_n$ are finitely generated and virtually abelian. - The sequence of admissible triples $\{(\pi,\gamma_n,\phi)\}_{n\in{{\mathbb{N}}}}$ forms a cofinal tower of quotients of $(\pi,\gamma_G,\phi)$.
- The sequence of degrees $\{\mathrm{deg}^{\mathtt{b}}(V_n)\}_{n\in{{\mathbb{N}}}}$ converges to $\mathrm{deg}^{\mathtt{b}}(V_G)$ in $[0,+\infty)$. In particular, this condition requires that $V_G(t)$ is not constantly zero. Then, as $t\to+\infty$, $$V_G(t)\,\sim\,C_{+\infty}\cdot t^{\mathrm{deg}^{\mathtt{b}}_{+\infty}(V_G)}$$ for some constant $$C_{+\infty}\in[1,\,V_G(1)].$$ The same statement holds true with $+\infty$ replaced by $0+$. We point out that among the three conditions the convergence of degrees is usually the hardest to satisfy or to verify. The ${{\mathbb Z}}\pi$–matrix assumption is responsible for the lower bound $1$ of the coefficients $C_{+\infty}$ and $C_{0+}$ in an essential way. In particular, the argument does not apply to matrices over ${{\mathbb C}}\pi$ to yield similar monomial asymptotics. The rest of this section is devoted to the proof of Theorem \[rationalAsymptotic\]. \[mConvexVersion\] Let $\hat{f}$ be a nowhere zero multiplicatively convex function on ${{\mathbb R}}_+$ with bounded exponent. Suppose that there exists a sequence $\{f_n\}_{n\in{{\mathbb{N}}}}$ of nowhere zero multiplicatively convex functions on ${{\mathbb R}}_+$ with bounded exponent satisfying all the following conditions: - There exists a uniform constant $L\in{{\mathbb R}}$ such that for all $n\in{{\mathbb{N}}}$ and for all pairs of distinct points $t_0,t_1\in{{\mathbb R}}_+$, $$\frac{\log f_n(t_0)\log t_1-\log f_n(t_1)\log t_0}{\log t_1-\log t_0}\,\geq\, L.$$ - For every point $t\in{{\mathbb R}}_+$, $$\limsup_{n\to\infty} f_n(t)\leq\hat{f}(t).$$ - $$\lim_{n\to\infty}\mathrm{deg}^{\mathtt{b}}(f_n)\,=\,\mathrm{deg}^{\mathtt{b}}(\hat{f}).$$ Then as $t\to+\infty$, $$\hat{f}(t)\,\sim\,C_{+\infty}\cdot t^{\mathrm{deg}^{\mathtt{b}}_{+\infty}(\hat{f})}$$ for some constant $$C_{+\infty}\in [e^L,\,\hat{f}(1)].$$ The same statement holds true with $+\infty$ replaced by $0+$.
To understand the geometric meaning of the terms in presence, consider the log–log plot of a function $f:\,{{\mathbb R}}_+\to{{\mathbb R}}_+$, namely, the parametrized curve $$\mathcal{P}_f(t)\,=\,(\log t,\,\log f(t)),\,t\in{{\mathbb R}}_+$$ on the Cartesian XY plane. The line through a pair of distinct points $\mathcal{P}_f(t_0)$ and $\mathcal{P}_f(t_1)$ has the slope $$\alpha_f(t_0,t_1)\,=\,\frac{\log f(t_1)-\log f(t_0)}{\log t_1-\log t_0},$$ and it has the Y-intercept $$\beta_f(t_0,t_1)\,=\,\frac{\log f(t_0)\log t_1-\log f(t_1)\log t_0}{\log t_1-\log t_0}.$$ If $f$ is multiplicatively convex with bounded exponent, then $\mathcal{P}_f$ is a convex graph. The constants $\mathrm{deg}^{\mathtt{b}}_{+\infty}(f)$ and $\mathrm{deg}^{\mathtt{b}}_{0+}(f)$ are exactly the supremum and the infimum for the slopes of chords of $\mathcal{P}_f$, respectively (Lemma \[degree-bCharacterization\]). For any such $f$, it is easy to see that, as $t\to+\infty$, the asymptotic formula $$f(t)\sim C_{+\infty}\cdot t^{\mathrm{deg}^{\mathtt{b}}_{+\infty}(f)}$$ holds for some constant $C_{+\infty}\in{{\mathbb R}}_+$ if and only if the following limit exists in ${{\mathbb R}}$: $$\beta_{+\infty}(f)\,=\,\lim_{t_0,t_1\to+\infty}\beta_f(t_0,t_1)$$ (otherwise the limit diverges to $-\infty$). Moreover, $\log C_{+\infty}$ must be $\beta_{+\infty}(f)$ if the asymptotic formula holds. The same criterion holds for $0+$ in place of $+\infty$. We also observe that if $\beta_f(t_0,t_1)$ is uniformly bounded below by some constant $L\in{{\mathbb R}}$ for all pairs of distinct parameters $t_0,t_1\in{{\mathbb R}}_+$, then equivalently, the curve $\mathcal{P}_f$ is contained entirely in the wedge region $\mathcal{V}(L,f)$ supported on the two rays emanating from the point $(0,L)$ along the directions $(-1,-\mathrm{deg}^{\mathtt{b}}_{0+}(f))$ and $(1,\mathrm{deg}^{\mathtt{b}}_{+\infty}(f))$.
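The quantities $\alpha_f$ and $\beta_f$ can be made concrete on a sample function of the Jensen-formula shape $f(t)=C\cdot\prod_i\max(t,b_i)$ (a hypothetical example of ours, not from the text): for $t$ above all $b_i$ one has $f(t)=Ct^3$ exactly, so chords between large parameters have slope $3=\mathrm{deg}^{\mathtt{b}}_{+\infty}(f)$ and Y-intercept $\log C$, while chords between small parameters have slope $0=\mathrm{deg}^{\mathtt{b}}_{0+}(f)$:

```python
import numpy as np

# Slopes and Y-intercepts of chords on the log-log plot of
# f(t) = C * prod_i max(t, b_i), with three factors.
C, b = 1.7, np.array([0.5, 2.0, 4.0])

def f(t):
    return C * np.prod(np.maximum(t, b))

def alpha(t0, t1):
    return (np.log(f(t1)) - np.log(f(t0))) / (np.log(t1) - np.log(t0))

def beta(t0, t1):
    return (np.log(f(t0)) * np.log(t1) - np.log(f(t1)) * np.log(t0)) / (np.log(t1) - np.log(t0))

print(alpha(10.0, 100.0), beta(10.0, 100.0))   # slope 3, intercept log C
print(alpha(0.01, 0.1), beta(0.01, 0.1))       # slope 0 near 0+
```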
To prove Lemma \[mConvexVersion\], we observe from the geometric meaning that the limit Y-intercept satisfies $C_{+\infty}\leq\hat{f}(1)$. It remains to bound $C_{+\infty}$ from below by $e^L$, or equivalently, to show that the log–log plot of the function $\hat{f}$ is contained in the wedge region $\mathcal{V}(L,\hat{f})$. We argue by contradiction: suppose that there is a point $P=\mathcal{P}_{\hat{f}}(T_0)$ lying outside $\mathcal{V}(L,\hat{f})$. By the first condition, the curves $\mathcal{P}_n$ of $f_n$ are all contained in their own wedge regions $\mathcal{V}(L,f_n)$. In particular, the second condition implies that $T_0\neq1$. Let $3\delta\cdot |\log T_0|$ be the vertical distance of $P$ from $\mathcal{V}(L,\hat{f})$. For all sufficiently large $n$, the second condition implies that the right side of $\mathcal{V}(L,f_n)$ is at most $\delta \cdot|\log T_0|$ above $P$. Then the third condition forces the slope of the left side of $\mathcal{V}(L,f_n)$ to be at least $\delta$ less than that of $\mathcal{V}(L,\hat{f})$ for all sufficiently large $n$. Consequently, for some parameter value $T_1\in{{\mathbb R}}_+$ that is sufficiently close to $0+$, the curve point $Q=\mathcal{P}_{\hat{f}}(T_1)$ must stay uniformly below the left sides of all those $\mathcal{V}(L,f_n)$, say, at distance at least $1$. However, the second condition is then violated at the point $Q$: we have shown that the curves $\mathcal{P}_{n}$ stay at least distance $1$ above $Q$ for all sufficiently large $n$. The contradiction completes the proof. \[coefficientVA\] Let $(\pi,\phi,\gamma)$ be an admissible triple over ${{\mathbb R}}$ with target group $G$. Let $A$ be a square matrix over ${{\mathbb Z}}\pi$. Suppose that $G$ is finitely generated and virtually abelian. Then for all pairs of distinct points $t_0,t_1\in{{\mathbb R}}_+$, $$\frac{\log V_G(t_0)\log t_1-\log V_G(t_1)\log t_0}{\log t_1-\log t_0}\,\geq\, 0,$$ unless $V_G(t)$ is constantly zero.
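Before the proof, the content of Lemma \[coefficientVA\] can be illustrated in the simplest one-variable situation (a toy model of ours, not part of the argument): $\pi=G={{\mathbb Z}}$, $\gamma=\mathrm{id}$, $\phi=\mathrm{id}$, and $A$ the $1\times1$ matrix $p(z)\in{{\mathbb Z}}[z]$, so that $V_G(t)=M(p(tz))=|a_n|\prod_i\max(t,|b_i|)$ by Jensen's formula. The chord Y-intercepts $\beta$ are then nonnegative, and the asymptotic coefficient is $|a_n|\geq1$:

```python
import numpy as np

# p(z) = 2 z^3 - 3 z^2 + 5, an integer polynomial; V(t) = M(p(t z)).
p = np.array([2.0, -3.0, 0.0, 5.0])
roots = np.roots(p)

def V(t):
    return abs(p[0]) * np.prod(np.maximum(t, np.abs(roots)))

def beta(t0, t1):
    return (np.log(V(t0)) * np.log(t1) - np.log(V(t1)) * np.log(t0)) / (np.log(t1) - np.log(t0))

# chord intercepts on the log-log plot stay >= 0 ...
rng = np.random.default_rng(1)
pairs = rng.uniform(0.05, 20.0, size=(200, 2))
betas = [beta(t0, t1) for t0, t1 in pairs if abs(np.log(t1) - np.log(t0)) > 1e-6]
print(min(betas))

# ... and V(t) ~ |a_n| t^3 as t -> +inf, with coefficient |a_n| = 2 >= 1
print(V(100.0) / (abs(p[0]) * 100.0 ** 3))
```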
By Theorem \[mConvex-eBounded\], the function $V_G(t)$ is either constantly zero or multiplicatively convex with bounded exponent. It suffices to consider the latter case. By the geometric meaning of the expression explained in the proof of Lemma \[mConvexVersion\], we can equivalently prove that $V_G(t)$ is asymptotically monomial at both ends with coefficients no less than $1$. We begin with a few reductions. Observe that whether or not the asserted inequality holds true does not change under passage from $G$ to any finite index subgroup $\tilde{G}$ of $\gamma(\pi)$. Indeed, by basic properties of regular Fuglede–Kadison determinants, $$\begin{aligned} V_G(t)&=&\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi,t)(A))\\ &=&\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\gamma(\pi))}(\kappa(\gamma,\phi,t)(A))\\ &=&\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\tilde{G})} \left(\kappa(\gamma,\phi,t)(\mathrm{res}_{\gamma(\pi)}^{\tilde{G}}(A))\right)^{\frac{1}{[\gamma(\pi):\tilde{G}]}}\\ &=& V_{\tilde{G}}(t)^{\frac{1}{[\gamma(\pi):\tilde{G}]}}. \end{aligned}$$ Therefore, possibly after replacing $G$ with a finite index subgroup $\tilde{G}$ of $\gamma(\pi)$, and replacing $\pi$ with $\gamma(\pi)$, we may assume without loss of generality that $\gamma$ is an isomorphism, and $G$ is a finitely generated free abelian group. After these reductions, we denote by $l$ the rank of $G$ and identify ${{\mathbb C}}G$ with the Laurent polynomial ring ${{\mathbb C}}[z_1^{\pm1},\cdots,z_l^{\pm1}]$. Choose a basis $r_1,\cdots,r_d\in{{\mathbb R}}_+$ of the ${{\mathbb Q}}$-vector space spanned by $\phi(\pi)$ such that elements of $\phi(\pi)$ are ${{\mathbb Z}}$-linear combinations of the $r_i$. Then we can uniquely decompose $\phi$ as a sum: $$\phi\,=\,r_1\phi_1+\cdots+r_d\phi_d$$ where $\phi_i$ are homomorphisms in $\mathrm{Hom}(\pi,{{\mathbb Z}})$.
As in the proof of Lemma \[mConvex-eBounded-FA\], the function $V_G(t)$ can be expressed in terms of a multivariable determinant function: $$V_G(t)\,=\,W((t^{r_1},\cdots,t^{r_d})),$$ where for any vector $\vec{t}=(t_1,\cdots,t_d)\in {{\mathbb R}}_+^d$, $$\begin{aligned} W(\vec{t})&=&\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(A_G(\vec{t})\right)\\ &=&M(p_A(\tilde{t}_1z_1,\cdots,\tilde{t}_lz_l))\\ &=&\exp\left[\frac{1}{(2\pi)^l}\cdot\int_0^{2\pi}\cdots\int_0^{2\pi} \log\left|p_A(\tilde{t}_1e^{\mathbf{i}\theta_1},\cdots,\tilde{t}_le^{\mathbf{i}\theta_l})\right|{{\mathrm{d}}}\theta_1\cdots{{\mathrm{d}}}\theta_l\right], \end{aligned}$$ and for each $j$, $$\tilde{t}_j=t_1^{\phi_1(z_j)}\cdot\cdots\cdot t_d^{\phi_d(z_j)}.$$ Recall from there that the Laurent polynomial matrix $$A_G(\vec{t})\,=\,\kappa(\phi,\gamma,\vec{t})(A)\,\in\, {{\mathrm{Mat}}}_{p\times p}({{\mathbb C}}[z_1^{\pm1},\cdots,z_l^{\pm1}])$$ is defined using the homomorphism of matrix algebras $\kappa(\phi,\gamma,\vec{t})$ determined by the formula $$\kappa(\phi,\gamma,\vec{t})(g)\,=\,t_1^{\phi_1(g)}\cdots t_d^{\phi_d(g)}\gamma(g)$$ for all $g\in\pi$. The usual determinant of the Laurent polynomial matrix $A_G(\vec{t})$ at the diagonal vector $\vec{1}=(1,\cdots,1)\in{{\mathbb R}}_+^d$ gives rise to the Laurent polynomial $$p_A(z_1,\cdots,z_l)\,=\mathrm{Det}_{{{\mathbb C}}[z_1^{\pm1},\cdots,z_l^{\pm1}]}\left(A_G(\vec{1})\right).$$ The idea is to control the asymptotics of $V_G(t)$ by the fact that $p_A$ is a Laurent polynomial over ${{\mathbb Z}}$, since $A$ is assumed to be over ${{\mathbb Z}}\pi$. To this end, expand the Laurent polynomial $p_A$ as $$p_A(z_1,\cdots,z_l)\,=\,\sum_{\vec{v}\in{{\mathbb Z}}^l}a_{\vec{v}}z_1^{v_1}\cdots z_l^{v_l}$$ where $v_i$ are the entries of $\vec{v}\in{{\mathbb Z}}^l$. Only finitely many coefficients $a_{\vec{v}}$ in the summation are nonzero.
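The multivariable Mahler-measure integral above can be spot-checked numerically (an illustration of ours, not from the text). For the sample polynomial $p(z_1,z_2)=2+z_1+z_2$, applying Jensen's formula in each variable gives $m(p)=\log 2$ exactly, since $|2+e^{\mathbf{i}\theta}|\geq1$ for all $\theta$; a midpoint-rule double integral reproduces this:

```python
import numpy as np

# Numerical logarithmic Mahler measure of p(z1, z2) = 2 + z1 + z2:
#   m(p) = (1/(2 pi)^2) * double integral of log|p(e^{i a}, e^{i b})|,
# which equals log 2 exactly.  Midpoints avoid the lone zero of p at (pi, pi).
N = 400
theta = (np.arange(N) + 0.5) * 2 * np.pi / N
a, b = np.meshgrid(theta, theta)
vals = np.log(np.abs(2 + np.exp(1j * a) + np.exp(1j * b)))
m_numeric = vals.mean()  # midpoint rule = plain average over the grid
print(m_numeric, np.log(2))
```

The integrand has an integrable logarithmic singularity at $(\pi,\pi)$, so the midpoint rule converges, if somewhat slowly near that point.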
For any vector $\vec{v}\in{{\mathbb Z}}^l$, denote $$\Phi\vec{v}\,=\,(\phi_1(z_1^{v_1}\cdots z_l^{v_l}),\cdots,\phi_d(z_1^{v_1}\cdots z_l^{v_l}))\,\in\,{{\mathbb Z}}^d.$$ Denote by $\vec{r}\in{{\mathbb R}}_+^d$ the vector $(r_1,\cdots,r_d)$. Let $\vec{w}_{\mathtt{top}}\in{{\mathbb Z}}^d$ be the unique vector at which the maximum of the following set is achieved: $$\left\{\,\langle\, \vec{r},\,\vec{w}\,\rangle\in{{\mathbb R}}\,:\sum_{\Phi\vec{v}=\vec{w}}\,a_{\vec{v}}\neq0\,\right\}.$$ The uniqueness is a consequence of the linear independence of $r_1,\cdots,r_d$ over ${{\mathbb Q}}$. The integrand for $\log V_G(t)$, denoted as $\omega(t,\vec{\theta})$, can be calculated as follows: $$\begin{aligned} \omega(t,\vec{\theta})&=& \log\left|p_A(t^{r_1\phi_1(z_1)+\cdots+r_d\phi_d(z_1)}e^{\mathbf{i}\theta_1},\cdots,t^{r_1\phi_1(z_l)+\cdots+r_d\phi_d(z_l)}e^{\mathbf{i}\theta_l})\right|\\ &=& \log\left|\sum_{\vec{w}\in{{\mathbb Z}}^d}\sum_{\Phi\vec{v}=\vec{w}}a_{\vec{v}} \,t^{\langle \vec{r},\,\Phi\vec{v}\rangle}e^{\mathbf{i}\langle\vec{\theta},\vec{v}\rangle}\right|\\ &=& \log\left| \sum_{\Phi\vec{v}=\vec{w}_{\mathtt{top}} }a_{\vec{v}} \,t^{\langle \vec{r},\,\vec{w}_{\mathtt{top}}\rangle}e^{\mathbf{i}\langle\vec{\theta},\vec{v}\rangle} + \sum_{\Phi\vec{v}\neq\vec{w}_{\mathtt{top}}}a_{\vec{v}} \,t^{\langle \vec{r},\,\Phi\vec{v}\rangle}e^{\mathbf{i}\langle\vec{\theta},\vec{v}\rangle}\right|\\ &=& \log\left| \sum_{\Phi\vec{v}=\vec{w}_{\mathtt{top}} }a_{\vec{v}} e^{\mathbf{i}\langle\vec{\theta},\vec{v}\rangle} + \sum_{\Phi\vec{v}\neq\vec{w}_{\mathtt{top}}}a_{\vec{v}} \,t^{\langle \vec{r},\,\Phi\vec{v}-\vec{w}_{\mathtt{top}}\rangle}e^{\mathbf{i}\langle\vec{\theta},\vec{v}\rangle}\right| +\langle \vec{r},\vec{w}_{\mathtt{top}}\rangle\cdot\log t. \end{aligned}$$ Accordingly, the integral $$\log V_G(t)\,=\,\frac{1}{(2\pi)^l}\int_0^{2\pi}\cdots\int_0^{2\pi} \omega(t,\vec{\theta})\,{{\mathrm{d}}}\theta_1\cdots{{\mathrm{d}}}\theta_l$$ breaks into the sum of two terms.
The first term gives rise to the logarithmic Mahler measure of the Laurent polynomial $$q_t(z_1,\cdots,z_l)\,=\, \sum_{\Phi\vec{v}=\vec{w}_{\mathtt{top}}}a_{\vec{v}} z_1^{v_1}\cdots z_l^{v_l} + \sum_{\Phi\vec{v}\neq\vec{w}_{\mathtt{top}}}a_{\vec{v}} \,t^{\langle \vec{r},\,\Phi\vec{v}-\vec{w}_{\mathtt{top}}\rangle}z_1^{v_1}\cdots z_l^{v_l}.$$ By the way $\vec{w}_{\mathtt{top}}$ is selected, as $t$ tends to $+\infty$, the coefficients of $q_t$ converge to those of its leading part $$q_{+\infty}(z_1,\cdots,z_l)\,=\,\sum_{\Phi\vec{v}=\vec{w}_{\mathtt{top}}}a_{\vec{v}} z_1^{v_1}\cdots z_l^{v_l}.$$ Thus, by the continuity of Mahler measure [@Boyd], the first term of $\log V_G(t)$ approximates the logarithmic Mahler measure of $q_{+\infty}$ as $t\to+\infty$. The second term is just the integral of $\langle \vec{r},\vec{w}_{\mathtt{top}}\rangle\cdot\log t$, which is constant with respect to $\vec{\theta}$. Putting these together, as $t\to +\infty$, $$\log V_G(t)\,=\,\log M(q_{+\infty})+\langle \vec{r},\vec{w}_{\mathtt{top}}\rangle\cdot\log t+o(1).$$ The calculation yields the asymptotic formula: $$V_G(t)\,\sim\, C_{+\infty}\cdot t^{\langle \vec{r},\vec{w}_{\mathtt{top}}\rangle}$$ as $t\to +\infty$. The coefficient satisfies the asserted estimate $$C_{+\infty}\,=\,M(q_{+\infty})\,\geq\,1,$$ because $q_{+\infty}$ is a Laurent polynomial over ${{\mathbb Z}}$, cf. [@Everest--Ward Lemma 3.7]. The same argument works for $V_G(t^{-1})$ as well, which proves the $0+$ direction. We conclude that $V_G(t)$ is asymptotically monomial at both ends with coefficients greater than or equal to $1$. This completes the proof. We adopt the notations of the statement of Theorem \[rationalAsymptotic\]. By Theorem \[mConvex-eBounded\] and Lemma \[zeroOrNot\], the third assumption implies that the function $V_G(t)$ is positive for all $t\in{{\mathbb R}}_+$. By Lemma \[stable-semicontinuous\], the second condition of Lemma \[mConvexVersion\] is satisfied for $V_G(t)$ and $\{V_n(t)\}_{n\in{{\mathbb{N}}}}$.
By Lemma \[coefficientVA\], the functions $\{V_n(t)\}_{n\in{{\mathbb{N}}}}$ satisfy the first condition of Lemma \[mConvexVersion\]. The third condition of Lemma \[mConvexVersion\] has been guaranteed by the assumption of Theorem \[rationalAsymptotic\]. Therefore, Lemma \[mConvexVersion\] implies that $V_G(t)$ is asymptotically monomial at both ends with the coefficient lying in the interval $[1,V_G(1)]$. This completes the proof of Theorem \[rationalAsymptotic\]. $L^2$–Alexander torsion of $3$-manifolds {#Sec-mainProofs} ======================================== In this section, we study the $L^2$–Alexander torsion of $3$-manifolds using the tools that we have developed. In subsection \[Subsec-efficientCellularPresentation\], we recall a computational formula used in [@DFL-torsion]. We prove Theorem \[main-torsion-weak\] in Subsection \[Subsec-degreeRFTwist\], and Theorem \[main-torsion\] in Subsection \[Subsec-degreeFullTwist\]. Efficient cellular presentation {#Subsec-efficientCellularPresentation} ------------------------------- To calculate the $L^2$–Alexander torsion of 3-manifolds, the following formula has been used by [@DFL-torsion Proposition 9.1], and we state it in some more detail. \[torsionToMatrix\] Suppose that $N$ is an irreducible orientable compact $3$-manifold with empty or incompressible toral boundary. There exist elements $u_1,v_1,\cdots, u_l,v_l\in \pi_1(N)$ and a square matrix $A$ over ${{\mathbb Z}}\pi_1(N)$ such that the following holds. The homology classes $[u_i]-[v_i]$ are nontrivial in $H_1(N;{{\mathbb Q}})$.
Furthermore, for every homomorphism $\gamma:\,\pi_1(N)\to G$ which induces an isomorphism under $H_1(-;{{\mathbb R}})$, and for every cohomology class $\phi\in H^1(N;\,{{\mathbb R}})$, $$\begin{aligned} \tau^{(2)}(N,\gamma,\phi)(t)&\doteq& \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi,t)(A)) \cdot\prod_{i=1}^l\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi,t)(u_i-v_i))^{-1}\\ &=& \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi,t)(A)) \cdot\prod_{i=1}^l\max\{t^{\phi(u_i)},t^{\phi(v_i)}\}^{-1}. \end{aligned}$$ Moreover, if a primitive cohomology class $\phi_0\in H^1(N;\,{{\mathbb Z}})\cong \mathrm{Hom}(\pi_1(N),{{\mathbb Z}})$ is given in the first place, we may require in addition that $\phi_0(u_i)\neq\phi_0(v_i)$ for $i=1,\cdots,l$, and that $A$ has the form: $$A_0+\mu\cdot\left(\begin{matrix}\mathbf{1}_{k\times k}&0\\0&0\end{matrix}\right),$$ where $A_0$ is a square matrix over ${{\mathbb Z}}\mathrm{Ker}(\phi_0)$, and $\phi_0(\mu)=1$, and $$k-l\,=\,x_N(\phi_0).$$ We may assume that $H_1(N;\,{{\mathbb R}})$ is nontrivial since otherwise the $L^2$–Alexander torsion is constant. Take any primitive cohomology class $\phi_0\in H^1(N;\,{{\mathbb Z}})$, for example, as specified in the moreover part. We employ the construction of S. Friedl in [@Friedl Section 4] to produce a $\pi_1(N)$–equivariant CW complex structure on the universal cover of $N$. To be precise, there exist finitely many properly embedded oriented compact subsurfaces $\Sigma_1,\cdots,\Sigma_s$ and accordingly $r_1,\cdots,r_s\in{{\mathbb{N}}}$, satisfying the following properties: - $r_1[\Sigma_1]+\cdots+r_s[\Sigma_s]\in H_2(N,\partial N;\,{{\mathbb Z}})$ is dual to $\phi_0$. - $-r_1\chi(\Sigma_1)-\cdots-r_s\chi(\Sigma_s)=x_N(\phi_0)$. - $\Sigma_i$ are mutually disjoint and the complement of their union in $N$ is connected.
The calculation here is the same as [@DFL-torsion Proposition 9.1] except that instead of computing square matrices induced by $\kappa(\gamma,\phi_0,t)$ there, we compute those induced by $\kappa(\gamma,\phi,t)$ for any class $\phi\in H^1(N;\,{{\mathbb R}})$. For example, for $i$ running over $1,\cdots,s$, the determinant contribution from a block $$\left[\begin{matrix}1&-\nu_i\\1&-z_i\end{matrix}\right]\in{{\mathrm{Mat}}}_{2\times 2}({{\mathbb Z}}\pi_1(N))$$ becomes: $$\begin{aligned} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(\kappa(\gamma,\phi,t) \left[\begin{matrix}1&-\nu_i\\1&-z_i\end{matrix}\right]\right)&=& \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)} {\left[\begin{matrix}1&-t^{\phi(\nu_i)}\gamma(\nu_i)\\1&-t^{\phi(z_i)}\gamma(z_i)\end{matrix}\right]}\\ &=& \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)} {\left[\begin{matrix}1-t^{\phi(\nu_iz_i^{-1})}\gamma(\nu_iz_i^{-1})&-t^{\phi(\nu_i)}\gamma(\nu_i)\\0&-t^{\phi(z_i)}\gamma(z_i)\end{matrix}\right]}\\ &=& \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(\kappa(\gamma,\phi,t)(z_i-\nu_i)\right). \end{aligned}$$ The elements $\nu_i$ and $z_i$ arising from Friedl’s construction satisfy $\phi_0(\nu_i)=r_i$ and $\phi_0(z_i)=0$. Since $(\pi,\gamma,\phi_0)$ is an admissible triple and $\phi_0(\nu_i)-\phi_0(z_i)=r_i\neq0$, the element $\gamma(\nu_iz_i^{-1})$ must have infinite order in $G$. Then [@DFL-torsion Lemma 2.8] yields $$\begin{aligned} \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(\kappa(\gamma,\phi,t)(z_i-\nu_i)\right)&=& \mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}\left(t^{\phi(z_i)}\gamma(z_i)-t^{\phi(\nu_i)}\gamma(\nu_i)\right)\\ &=& t^{\phi(z_i)}\cdot\max\{1,t^{\phi(\nu_iz_i^{-1})}\}\\ &=& \max\{t^{\phi(z_i)},t^{\phi(\nu_i)}\}. \end{aligned}$$ The point here is that we do not need to require $\phi(\nu_i)-\phi(z_i)\neq0$ for all $\phi$. With the modification above, we see that $$u_1,v_1,\cdots,u_s,v_s\in \pi_1(N)$$ can be taken to be $z_1,\nu_1,\cdots,z_s,\nu_s$.
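The evaluation $\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi,t)(z_i-\nu_i))=\max\{t^{\phi(z_i)},t^{\phi(\nu_i)}\}$ can be checked numerically in the abelian special case $G={{\mathbb Z}}^2$ with $\gamma(z_i),\gamma(\nu_i)$ a basis (a toy case of ours; the general case rests on [@DFL-torsion Lemma 2.8]). There the determinant is the Mahler measure of the binomial $az_1-bz_2$, which equals $\max(a,b)$:

```python
import numpy as np

# Mahler measure of a*z1 - b*z2 over the 2-torus; a, b are stand-ins for
# t^{phi(z_i)} and t^{phi(nu_i)}.  The exact value is max(a, b).
a, b = 2.0, 7.0
N = 512
theta = (np.arange(N) + 0.5) * 2 * np.pi / N
t1, t2 = np.meshgrid(np.exp(1j * theta), np.exp(1j * theta))
m = np.mean(np.log(np.abs(a * t1 - b * t2)))
print(np.exp(m), max(a, b))
```

Since $|az_1-bz_2|$ is bounded away from zero on the torus when $a\neq b$, the midpoint rule here converges essentially to machine precision.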
Similarly, we take $$u_{s+1},v_{s+1},\cdots,u_{2s},v_{2s}\in \pi_1(N)$$ to be $x_1,\nu_1,\cdots,x_s,\nu_s$ in the notation of [@DFL-torsion Proposition 9.1], where $\phi_0(x_i)=0$ for all $i=1,\cdots,s$. This gives a total of $l=2s$ pairs $u_i$, $v_i$. The matrix $A$ is a square matrix over ${{\mathbb Z}}\pi_1(N)$ of the form $$\left[\begin{matrix} \mathbf{1}_{n_1\times n_1}&-\nu_1\cdot\mathbf{1}_{n_1\times n_1}&0&0&\cdots&0&0\\ 0&0&\ddots&\ddots&0&0&0\\ 0&\cdots&0&0&\mathbf{1}_{n_s\times n_s}&-\nu_s\cdot\mathbf{1}_{n_s\times n_s}&0\\ *&\cdots&\cdots&*&*&*&* \end{matrix}\right],$$ where $n_i=-\chi(\Sigma_i)+2$, each $*$ stands for a (not necessarily square) block with entries in ${{\mathbb Z}}\mathrm{Ker}(\phi_0)$, and $\phi_0(\nu_i)=r_i$. One can further manipulate the matrix $A$ into the asserted form without affecting the regular Fuglede–Kadison determinant under $\kappa(\phi,\gamma,t)$. This can be done by adding diagonal $\mathbf{1}_{1\times 1}$ blocks and performing elementary transformations using well-known tricks, so we omit the details, cf. [@DFL-torsion Proposition 9.3]. Degree for residually finite twists {#Subsec-degreeRFTwist} ----------------------------------- In this subsection, we prove Theorem \[main-torsion-weak\]. Throughout this subsection, let $N$ be an irreducible orientable compact $3$-manifold with empty or incompressible toral boundary, and $\gamma:\pi_1(N)\to G$ be a homomorphism. Suppose that $G$ is finitely generated and residually finite, and that $(N,\gamma)$ is weakly acyclic. For any admissible triple $(\pi_1(N),\gamma,\phi)$ over ${{\mathbb R}}$, denote by $$\tau^{(2)}(N,\gamma,\phi):\,{{\mathbb R}}_+\to[0,+\infty)$$ any representative of the associated $L^2$–Alexander torsion. 
\[nonzeroTorsion\] Given any admissible triple $(\pi_1(N),\gamma,\phi)$ over ${{\mathbb R}}$, $$\tau^{(2)}(N,\gamma,\phi)(1)\,>\,0.$$ As $(N,\gamma)$ is weakly acyclic, it follows from the definition that $\tau^{(2)}(N,\gamma,\phi)(1)$ is the $L^2$–torsion of the pair $(N,\gamma)$, namely, the $L^2$–torsion of the covering space of $N$ which corresponds to $\mathrm{Ker}(\gamma)$, equipped with the action of $\mathrm{Im}(\gamma)$. The latter can be computed through a weakly acyclic Hilbert chain complex whose boundary operators are represented by matrices over ${{\mathbb Z}}\mathrm{Im}(\gamma)$. As $G$ is residually finite, [@Lueck-approximating Theorem 3.4 (2)] implies that $\tau^{(2)}(N,\gamma)$ is a multiplicatively alternating product of positive constants which are no smaller than $1$, hence must be nonzero. \[degree-bTorsion\] Let $u_1,v_1,\cdots, u_l,v_l\in \pi_1(N)$ be a collection of elements and $A$ be a square matrix over ${{\mathbb Z}}\pi_1(N)$ as asserted by Lemma \[torsionToMatrix\]. Given any admissible triple $(\pi_1(N),\gamma,\phi)$ over ${{\mathbb R}}$, the following formula holds: $$\mathrm{deg}^{\mathtt{b}}(\tau^{(2)}(N,\gamma,\phi))\,=\, \mathrm{deg}^{\mathtt{b}}\left(\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\phi,\gamma,t)(A))\right)-\sum_{i=1}^l|\phi(u_i)-\phi(v_i)|.$$ The function $\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\phi,\gamma,t)(A))$ of $t\in{{\mathbb R}}_+$ is multiplicatively convex by Theorem \[mConvex-eBounded\]. In fact, it is nowhere zero, and hence has bounded exponent, by Lemmas \[nonzeroTorsion\], \[torsionToMatrix\], and \[zeroOrNot\]. Thus it is valid to speak of $\mathrm{deg}^{\mathtt{b}}(\tau^{(2)}(N,\gamma,\phi))$ and the formula follows immediately from Lemma \[torsionToMatrix\]. We continue to adopt the assumptions of this subsection. 
It follows from Lemmas \[torsionToMatrix\], \[nonzeroTorsion\], and Theorem \[mConvex-eBounded\] that $\tau^{(2)}(N,\gamma,\phi)$ is everywhere positive and continuous in $t\in{{\mathbb R}}_+$. For any constants $a,b\in{{\mathbb R}}$, note that the function $\max\{t^a,t^b\}^{-1}$ can always be turned into a multiplicatively convex function by multiplying it by a sufficiently high power of $\max\{1,t\}$, for example, by making the power at least $|a-b|$. It further follows that $\tau^{(2)}(N,\gamma,\phi)\cdot\max\{1,t\}^m$ is multiplicatively convex with bounded exponent for any sufficiently large positive constant $m$. The Lipschitz continuity of $\mathrm{deg}^{\mathtt{b}}(\tau^{(2)}(N,\gamma,\phi+\gamma^*\xi))$ as a function of $\xi\in H^1(G;{{\mathbb R}})$ is a consequence of Theorem \[continuityOfDegree\]. Therefore, it remains to show that for every admissible triple $(N,\gamma,\phi)$, the following comparison holds: $$\mathrm{deg}^{\mathtt{b}}(\tau^{(2)}(N,\gamma,\phi))\,\leq\,x_N(\phi).$$ To this end, we first prove the comparison for any admissible triple $(N,\gamma,\phi_0)$ where $\phi_0$ is a primitive class in $H^1(N;{{\mathbb Z}})$. Let $u_1,v_1,\cdots, u_l,v_l\in \pi_1(N)$ be a collection of elements and $A$ be a square matrix over ${{\mathbb Z}}\pi_1(N)$ as guaranteed by the ‘moreover’ part of Lemma \[torsionToMatrix\]. 
It is clear that for any $\delta>0$, $$\lim_{t\to0+}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi_0,t)(A))\cdot t^{\delta}\,=\,0,$$ and $$\lim_{t\to+\infty}\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi_0,t)(A))\cdot t^{-k-\delta}\,=\,0,$$ so $$\mathrm{deg}^{\mathtt{b}}\left(\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(G)}(\kappa(\gamma,\phi_0,t)(A))\right)\,\leq\,k.$$ On the other hand, the integrality of $\phi_0$ and the property that $\phi_0(u_i)\neq\phi_0(v_i)$ imply $$\sum_{i=1}^l|\phi_0(u_i)-\phi_0(v_i)|\,\geq\,l.$$ Then Lemma \[degree-bTorsion\] yields the comparison $$\mathrm{deg}^{\mathtt{b}}(\tau^{(2)}(N,\gamma,\phi_0))\,\leq\,k-l\,=\,x_N(\phi_0).$$ For admissible triples over ${{\mathbb Q}}$, the comparison follows immediately from the integral case by considering an integral multiple of $\phi$. For admissible triples over ${{\mathbb R}}$, the comparison follows from the continuity of degree together with the continuity of the Thurston norm. This completes the proof of Theorem \[main-torsion-weak\]. Degree for the full twist {#Subsec-degreeFullTwist} ------------------------- In this subsection, we prove Theorem \[main-torsion\]. Suppose that $N$ is an irreducible orientable compact $3$-manifold with empty or incompressible toral boundary. When $N$ contains no hyperbolic piece in its geometric decomposition, $N$ is a graph manifold, possibly a Seifert fibered space. Theorem \[main-torsion\] in this case is an immediate consequence of [@DFL-torsion Theorem 1.2] and [@Herrmann]. Therefore, throughout this section, we assume that $N$ contains at least one hyperbolic piece, or in other words, $N$ is either hyperbolic or so-called mixed. Note that $N$ is aspherical, so the $\ell^2$–Betti numbers of $N$ all vanish, by Lott–Lück [@Lott-Lueck]. 
For any class $\phi\in H^1(\pi_1(N);{{\mathbb R}})$, any representative of the associated full $L^2$–Alexander torsion $$\tau^{(2)}(N,\phi):\,{{\mathbb R}}_+\to [0,+\infty)$$ is everywhere positive and continuous, and $\mathrm{deg}^{\mathtt{b}}(\tau^{(2)}(N,\phi))\in{{\mathbb R}}$ is at most $x_N(\phi)$ (Theorem \[main-torsion-weak\]). It remains to determine the asymptotics as the parameter $t$ tends to $+\infty$ or $0+$. Recall that a class $\phi\in H^1(N;{{\mathbb R}})$ is said to be *quasi-fibered* if $\phi$ is the limit of a sequence of fibered classes in $H^1(N;{{\mathbb Q}})$. \[quasifiberedClasses\] Let $G$ be a finitely generated, residually finite group. For every homomorphism $\gamma:\pi_1(N)\to G$ which induces an isomorphism under $H_1(-;{{\mathbb R}})$, and for every quasi-fibered class $\phi\in H^1(N;{{\mathbb R}})$, $$\mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\gamma,\phi)\right)\,=\,x_N(\phi).$$ Note that $(\pi_1(N),\gamma,\phi)$ is always admissible regardless of $\phi$ by Lemma \[homologicallyIsomorphic\]. If $\phi\in H^1(N;{{\mathbb Q}})$ is a rational, fibered class, the conclusion follows from [@DFL-torsion Theorem 1.3]. In fact, for such $\phi$, the $L^2$–Alexander torsion $\tau^{(2)}(N,\gamma,\phi)$ is known to be asymptotically monomial (indeed, eventually monomial, by [@DFL-torsion Theorem 1.3]), so in this case, $$\mathrm{deg}^{\mathrm{b}}\left(\tau^{(2)}(N,\gamma,\phi)\right)\,=\,\mathrm{deg}^{\mathrm{a}}\left(\tau^{(2)}(N,\gamma,\phi)\right)\,=\,x_N(\phi),$$ cf. Definitions \[degree-a\] and \[degree-b\]. For any quasi-fibered class $\phi\in H^1(N;{{\mathbb R}})$, we take a sequence of rational, fibered classes $\{\phi_n\}_{n\in{{\mathbb{N}}}}$ which converges to $\phi$. 
Then by the continuity of degree (Theorem \[main-torsion-weak\] (3)) and the formula of Lemma \[torsionToMatrix\], we see that $$\begin{aligned} \mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\gamma,\phi)\right) &=&\lim_{n\to\infty}\mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\gamma,\phi_n)\right)\\ &=&\lim_{n\to\infty} x_N(\phi_n)\\ &=&x_N(\phi). \end{aligned}$$ This completes the proof. Let $u_1,v_1,\cdots, u_l,v_l\in \pi_1(N)$ be a collection of elements and $A$ be a square matrix over ${{\mathbb Z}}\pi_1(N)$ as asserted by Lemma \[torsionToMatrix\]. \[quasifiberedTower\] Given any class $\phi\in H^1(N;{{\mathbb R}})$, there exists a tower of quotients of $\pi_1(N)$ $$\pi_1(N)\to\cdots\to \Gamma_n\to \cdots \to\Gamma_2\to\Gamma_1$$ with all the following properties: - The quotients $\Gamma_n$ are finitely generated and virtually abelian. - The homomorphisms $\gamma_n:\pi_1(N)\to \Gamma_n$ induce isomorphisms under $H_1(-;{{\mathbb R}})$. - The sequence of admissible triples $\{(\pi_1(N),\gamma_n,\phi)\}_{n\in{{\mathbb{N}}}}$ forms a cofinal tower of quotients of $(\pi_1(N),\gamma_\infty,\phi)$, where $\gamma_\infty$ denotes $\mathrm{id}_{\pi_1(N)}:\pi_1(N)\to\pi_1(N)$. Furthermore, the tower can be required to satisfy: $$\mathrm{deg}^{\mathtt{b}}(V_n)=\mathrm{deg}^{\mathtt{b}}(V_\infty)$$ for all $n\in{{\mathbb{N}}}$, where $$V_n(t)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\Gamma_n)}(\kappa(\phi,\gamma_n,t)(A)),$$ and the notation $V_\infty(t)$ is understood similarly. As we have assumed for this section that $N$ is either hyperbolic or mixed, there exists a regular finite cover $p:\tilde{N}\to N$ which corresponds to a finite index subgroup $\tilde{\pi}$ of $\pi_1(N)$, such that $p^*\phi\in H^1(\tilde{N};{{\mathbb R}})$ is quasi-fibered. This follows from a combination of Agol’s RFRS criterion for virtual fibering [@Agol-RFRS] and the virtual specialness of hyperbolic and mixed $3$-manifolds [@Agol-VHC; @Wise-book; @PW-mixed], cf. [@DFL-torsion Subsection 10.1]. 
Observe that for any further subgroup of finite index in $\tilde{\pi}$ which is normal in $\pi_1(N)$, the corresponding finite cover again carries the pull-back of $\phi$ as a quasi-fibered class. Take a cofinal tower of normal finite-index subgroups of $\pi_1(N)$, $$\pi_1(N)\geq \Pi_1\geq \Pi_2\geq\cdots\geq\Pi_n\geq\cdots.$$ Possibly after intersecting the terms with $\tilde\pi$, we may require that the $\Pi_n$ are all contained in $\tilde{\pi}$. For all $n\in{{\mathbb{N}}}$, define $$\Gamma_n\,=\,\pi_1(N)\,/\,\mathrm{Ker}(\Pi_n\to H_1(\Pi_n;{{\mathbb Q}})).$$ All the asserted properties of Lemma \[quasifiberedTower\] obviously hold for the tower of quotients $\{\Gamma_n\}$, except maybe the ‘furthermore’ part. To check the equality of degree, denote by $$p_n:\,\tilde{N}_n\to N$$ the finite cover corresponding to the image of $\Pi_n$ in $\Gamma_n$. Taking restriction to $\pi_1(\tilde{N}_n)$ gives rise to new admissible triples $(\pi_1(\tilde{N}_n),\tilde\gamma_n,p^*_n\phi)$. By the dotted equality of Lemma \[torsionToMatrix\], basic properties of regular Fuglede–Kadison determinants, and Lemma \[quasifiberedClasses\], for all $n\in{{\mathbb{N}}}$, $$\begin{aligned} \mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\gamma_n,\phi)\right) &=&\frac{1}{[\tilde{N}_n:N]}\cdot\mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(\tilde{N}_n,\tilde{\gamma}_n,p_n^*\phi)\right)\\ &=&\frac{1}{[\tilde{N}_n:N]}\cdot x_{\tilde{N}_n}(p_n^*\phi)\\ &=&x_{N}(\phi). \end{aligned}$$ Note that the calculation above does not require the target group to be virtually abelian. Therefore, the same calculation for $\tau^{(2)}(N,\gamma_\infty,\phi)$ yields the equality $$\mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\gamma_\infty,\phi)\right)\,=\,x_{N}(\phi).$$ It follows from Lemma \[degree-bTorsion\] that $$\mathrm{deg}^{\mathtt{b}}(V_n)=\mathrm{deg}^{\mathtt{b}}(V_\infty)$$ for all $n\in{{\mathbb{N}}}$. We continue to adopt the assumptions of this subsection. 
It suffices to prove the statements (2), (3), and (4). Given $N$ hyperbolic or mixed and any $\phi\in H^1(N;{{\mathbb R}})$, we take a tower of quotients as guaranteed by Lemma \[quasifiberedTower\]. By Theorem \[rationalAsymptotic\], we see that the function (now dropping the subscript $\infty$) $$V(t)=\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\pi_1(N))}(\kappa(\phi,\mathrm{id}_{\pi_1(N)},t)(A))$$ is asymptotically monomial at both ends. In fact, as $t\to+\infty$, $$V(t)\sim C_{+\infty}\cdot t^{\mathrm{deg}^{\mathtt{b}}_{+\infty}(V)}$$ for some constant $$C_{+\infty}\in\left[1,e^{\mathrm{Vol}(N)/6\pi}\right],$$ and the same statement holds true with $+\infty$ replaced by $0+$. Here the upper bound comes from $$V(1)\,=\,\tau^{(2)}(N,\phi)(1)\,=\,\tau^{(2)}(N)\,=\,e^{\mathrm{Vol}(N)/6\pi}.$$ Therefore, $\tau^{(2)}(N,\phi)$ is also asymptotically monomial at both ends with the same estimates for the coefficients. In particular, the asymptotic degree of $\tau^{(2)}(N,\phi)$ is well defined, and $$\mathrm{deg}^{\mathtt{a}}\left(\tau^{(2)}(N,\phi)\right)\,=\,\mathrm{deg}^{\mathtt{b}}\left(\tau^{(2)}(N,\phi)\right)\,=\,x_N(\phi).$$ By the symmetry of $L^2$–Alexander torsion for $3$-manifolds [@DFL-symmetric], we further deduce $$C_{+\infty}\,=\,C_{0+}.$$ This allows us to refer to both of them by one notation: $$C(N,\phi)\in\left[1,e^{\mathrm{Vol}(N)/6\pi}\right].$$ It remains to argue that $C(N,\phi)$ depends upper semi-continuously on $\phi\in H^1(N;{{\mathbb R}})$. In fact, suppose that $\{\phi_n\in H^1(N;{{\mathbb R}})\}_{n\in{{\mathbb{N}}}}$ is a sequence of cohomology classes which converges to $\phi$. 
We write $$V(\phi_n,t)\,=\,\mathrm{det}^{\mathtt{r}}_{\mathcal{N}(\pi_1(N))}(\kappa(\phi_n,\mathrm{id}_{\pi_1(N)},t)(A)).$$ By Lemma \[norm-semicontinuous\], for all $t\in{{\mathbb R}}_+$, $$\limsup_{n\to\infty} V(\phi_n,t)\leq V(t).$$ By the continuity of degree (Theorem \[continuityOfDegree\]), $$\lim_{n\to\infty} \mathrm{deg}^{\mathtt{b}}\left(V(\phi_n,t)\right)=\mathrm{deg}^{\mathtt{b}}\left(V(t)\right).$$ Then it follows from Lemma \[mConvexVersion\] that $$C(N,\phi)\,\geq\limsup_{n\to\infty} C(N,\phi_n).$$ In other words, the leading coefficient $C(N,\phi)$ is upper semicontinuous as a function of $\phi\in H^1(N;{{\mathbb R}})$. This completes the proof of Theorem \[main-torsion\]. Example {#Sec-example} ======= We conclude our discussion with an example regarding nontrivial leading coefficients. Specifically, we construct an oriented closed $3$-manifold $N$ such that the leading coefficient $C(N,\phi)$ of the full $L^2$–Alexander torsion $\tau^{(2)}(N,\phi)$ takes values other than the asserted bounds, as $\phi$ varies over $H^1(N;{{\mathbb R}})$. The oriented closed $3$-manifold $$N\,=\,K\cup\bigcup_{i\in{{\mathbb Z}}/3{{\mathbb Z}}} J_i$$ is constructed by gluing a product piece $K$ and three figure-eight knot complements $J_i$ as follows. Let $$K\cong \Sigma_{0,3}\times S^1$$ be the product of the thrice-holed sphere and the circle. We mark the boundary components of $\Sigma_{0,3}$ in cyclic order. For each $i\in{{\mathbb Z}}/3{{\mathbb Z}}$, denote by $\partial_i K\cong \partial_i \Sigma_{0,3}\times S^1$ the $i$-th boundary component of $K$ accordingly. 
For each $i\in{{\mathbb Z}}/3{{\mathbb Z}}$, take a copy of a figure-eight knot complement $$J_i\cong S^3\setminus\mathrm{Nhd}^\circ(\mathbf{4}_1).$$ We remind the reader that the interior of the figure-eight complement $J_i$ is a punctured torus bundle over the circle with a pseudo-Anosov monodromy, and it has a unique complete hyperbolic structure of volume $\mathrm{Vol}(J_i)\,=\,2v_3$, where $v_3\approx1.01494$ is the volume of the regular ideal hyperbolic tetrahedron. Denote by $\mu_i$ and $\lambda_i$ the longitude and the meridian of $J_i$, respectively, so that the boundary of $J_i$ has a canonical product structure $\partial J_i\cong \lambda_i\times \mu_i$. Endow $K$ and $J_i$ with canonical orientations so that the boundary is oriented accordingly. The oriented closed $3$-manifold $N$ is obtained by gluing $K$ and $J_i$ along the boundary in such a way that $\partial_i K$ is identified with $-\partial J_i$ via an isomorphism that takes the factor $\partial_i \Sigma_{0,3}$ to $\lambda_i$ and the factor $S^1$ to $-\mu_i$. Note that the inclusion maps induce an embedding $$\begin{aligned} H^1(N;{{\mathbb R}})&\to& H^1(J_0;{{\mathbb R}})\oplus H^1(J_1;{{\mathbb R}})\oplus H^1(J_2;{{\mathbb R}})\\ \phi&\mapsto&(\phi_0,\phi_1,\phi_2). \end{aligned}$$ By identifying $H^1(J_i;{{\mathbb R}})$ with ${{\mathbb R}}$, we can identify $H^1(N;{{\mathbb R}})$ with the $2$-dimensional subspace of the $3$-dimensional space given by the linear equation: $$\phi_0+\phi_1+\phi_2\,=\,0.$$ By the fibration structure of the figure-eight complement, it is easy to argue topologically that the Thurston norm of any cohomology class $\phi$ in $H^1(N;{{\mathbb R}})$ is given by the formula: $$x_N(\phi)=|\phi_0|+|\phi_1|+|\phi_2|.$$ The unit ball $B_x(N)$ of $x_N$ is hence the region bounded by the regular hexagon whose vertices are $(\pm\frac12,\mp\frac12,0)$, $(0,\pm\frac12,\mp\frac12)$, and $(\mp\frac12,0,\pm\frac12)$. 
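As a quick numerical sanity check (not part of the original argument), one can verify that the six listed vertices lie on the constraint plane $\phi_0+\phi_1+\phi_2=0$ and have Thurston norm exactly $1$, so that the unit ball is indeed the stated hexagon. The short Python sketch below is purely illustrative.

```python
from itertools import permutations

# Thurston norm of a class phi = (phi_0, phi_1, phi_2), per the
# formula x_N(phi) = |phi_0| + |phi_1| + |phi_2| in the text.
def thurston_norm(phi):
    return sum(abs(c) for c in phi)

# The six hexagon vertices: the distinct permutations of (1/2, -1/2, 0).
vertices = set(permutations((0.5, -0.5, 0.0)))
assert len(vertices) == 6

for v in vertices:
    assert abs(sum(v)) < 1e-12                 # lies on the constraint plane
    assert abs(thurston_norm(v) - 1.0) < 1e-12  # has norm exactly 1
```

The check confirms that the vertices exhaust all signed permutations of $(\tfrac12,-\tfrac12,0)$ compatible with the constraint.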
There are no fibered cones because the restriction of every primitive class $\phi\in H^1(N;{{\mathbb Z}})$ to $K$ vanishes on the Seifert fiber $[S^1]\in H_1(K;{{\mathbb Z}})$, which means that no subsurface dual to $\phi$ could be everywhere transverse to the Seifert fibration (or so-called horizontal) when restricted to $K$. The full $L^2$–Alexander torsion of $N$ associated with any cohomology class $\phi\in H^1(N;{{\mathbb R}})$ can be calculated by the formula: $$\tau^{(2)}(N,\phi)\doteq\tau^{(2)}(J_0,\phi_0)\cdot\tau^{(2)}(J_1,\phi_1)\cdot\tau^{(2)}(J_2,\phi_2).$$ This follows from [@Lueck-book Theorem 3.35 (1)] (see [@Lueck-book Theorem 3.93 (2)] for a similar calculation). Note that in our case, the pieces $K$ and $J_i$ are weakly acyclic and glued along tori, which contribute nothing to the $L^2$–torsion of the twisted chain complex. There ought to be a factor $\tau^{(2)}(K,\phi_K)$ corresponding to the restriction of $\phi$ to $K$ on the right-hand side, but that factor is represented by $1$ according to [@Herrmann], cf. [@DFL-torsion Theorem 1.2]. For each $i\in{{\mathbb Z}}/3{{\mathbb Z}}$, it follows from the fiberedness of the figure-eight knot complement that the leading coefficient is $$C(J_i,\phi_i)\,=\,\begin{cases}e^{v_3/3\pi}&\phi_i=0\\1&\phi_i\neq0.\end{cases}$$ Therefore, for any cohomology class $\phi=(\phi_0,\phi_1,\phi_2)\in H^1(N;{{\mathbb R}})$, the leading coefficient of $\tau^{(2)}(N,\phi)$ is given by the formula: $$C(N,\phi)\,=\,e^{\frac{\delta(\phi)\cdot v_3}{3\pi}},$$ where $\delta(\phi)$ denotes the number of zero coordinates in $(\phi_0,\phi_1,\phi_2)$ subject to the constraint $\phi_0+\phi_1+\phi_2=0$. To summarize, the leading coefficient $C(N,\phi)$ equals $e^{\mathrm{Vol}(N)/6\pi}$ at the origin, and $e^{\mathrm{Vol}(N)/18\pi}$ along the six radial rays through the vertices of $B_x(N)$ (except at the origin), and $1$ elsewhere in $H^1(N;{{\mathbb R}})$. I. Agol, *Criteria for virtual fibering*, J. Topol. **1** (2008), 269–284. I. 
Agol, *The virtual Haken conjecture*, with an appendix by I. Agol, D. Groves, and J. Manning, Documenta Math. **18** (2013), 1045–1087. M. Aschenbrenner, S. Friedl, and H. Wilton. *3-Manifold Groups*, EMS Series of Lectures in Mathematics, 2015. D. Boyd, *Uniform approximation to Mahler’s measure in several variables*, Canad. Math. Bull. **41** (1998), 125–128. A. Carey, M. Farber, and V. Mathai, *Determinant lines, von Neumann algebras and $L^2$ torsion*, J. Reine Angew. Math. **484** (1997), 153–181. T. Cochran, *Noncommutative knot theory*, Algebr. Geom. Topol. **4** (2004), 347–398. J. Dubois, S. Friedl, and W. Lück, *The $L^2$–Alexander torsion of 3-manifolds*, J. Topol., to appear. Preprint available at `arXiv:1410.6918v3`. , *Three flavors of twisted invariants of knots*, Introduction to Modern Mathematics, Advanced Lectures in Mathematics 33 (2015), pp. 143–170. , *The $L^2$–Alexander torsion is symmetric*, Algebr. Geom. Topol., to appear. Preprint available at `arXiv:1411.2292v1`. G. Everest and T. Ward, *Heights of Polynomials and Entropy in Algebraic Dynamics*. Springer, London, 1999. S. Friedl, *Twisted Reidemeister torsion, the Thurston norm and fibered manifolds*, Geom. Dedicata **172**, (2014), 135–145. S. Friedl and T. Kim, *Twisted Alexander norms give lower bounds on the Thurston norm*, Trans. Amer. Math. Soc. **360** (2008), 4597–4618. , S. Friedl and W. Lück, *The $L^2$–torsion function and the Thurston norm of 3-manifolds*, preprint (2015), 22 pages, `arXiv:1510.00264v1`. S. Friedl and S. Vidussi, *The Thurston norm and twisted Alexander polynomials*, preprint, (2012), 17 pages, `arXiv:1204.6456v2`. S. Harvey, *Higher-order polynomial invariants of 3-manifolds giving lower bounds for the Thurston norm*. Topol. **44** (2005), 895–945. , *Monotonicity of degrees of generalized Alexander polynomials of groups and 3-manifolds*. Math. Proc. Camb. Philos. Soc. **140** (2006), 431–450. G. 
Herrmann, The $L^2$–Alexander torsion of Seifert fibered spaces, Masters thesis (2015), University of Regensburg. W. Li and W. Zhang, *An $L^2$–Alexander invariant for knots*, Commun. Contemp. Math. **8** (2006), 167–187. , *An $L^2$–Alexander–Conway invariant for knots and the volume conjecture* Differential Geometry and Physics, pp. 303–312, Nankai Tracts Math., 10, World Sci. Publ., Hackensack, NJ, 2006. Y. Liu, *Virtual cubulation of nonpositively curved graph manifolds*, J. Topol. **6** (2013), 793–822. J. Lott and W. Lück, *$L^2$-topological invariants of 3-manifolds*, Invent. Math. **120** (1995), 15–60. W. Lück, *Approximating $L^2$-invariants by their finite-dimensional analogues*, Geom. Funct. Analysis **4** (1994), 455–481. W. Lück, *$L^2$-Invariants: Theory and Applications to Geometry and K-Theory*, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Springer-Verlag, Berlin, 2002. W. Lück, *Twisting $L^2$–invariants with finite-dimensional representations*, preprint (2015), 66 pages, `arXiv:1510.00057v1`. C. T. McMullen, *The Alexander polynomial of a 3-manifold and the Thurston norm on cohomology*, Ann. Sci. Ec. Norm. Super. (4) **35** (2002), 153–171. P. Przytycki and D. T. Wise, *Graph manifolds with boundary are virtually special*, J. Topol. **7** 2014, 419–435. , *Mixed 3-manifolds are virtually special*, preprint, 2012, 29 pages, `arXiv:1205.6742`. J. Raimbault, *Exponential growth of torsion in abelian coverings*, Algebr. Geom. Topol. **12** (2012), 1331–1372. W. P. Thurston, *A norm for the homology of 3-manifolds.* Mem. Amer. Math. Soc. **59** (1986), no. 339, pp. 99–130. V. Turaev, *A homological estimate for the Thurston norm*, preprint, 2002, 32 pages, `arXiv:math.GT/0207267v1`. S. Vidussi, *Norms on the cohomology of a 3-manifold and SW theory*, Pac. J. Math. **208** (2003), 169–186. D. T. Wise, *From Riches to RAAGs: 3-Manifolds, Right–Angled Artin Groups, and Cubical Geometry*, CBMS Regional Conference Series in Mathematics, 2012.
--- abstract: 'We discuss the concept of the width-to-spacing ratio, which plays the central role in the description of local spectral statistics of evolution operators in multiplicative and additive stochastic processes for random matrices. We show that the local spectral properties are highly universal and depend on a single parameter, the width-to-spacing ratio. We discuss duality between the kernel for Dysonian Brownian motion and the kernel for the Lyapunov matrix for the product of Ginibre matrices.' address: 'AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, al. Mickiewicza 30, 30-059 Kraków, Poland' author: - Zdzislaw Burda title: 'Universality of random matrix dynamics [^1]' --- Introduction ============ Local spectral properties of invariant random matrix ensembles are highly universal [@m; @dg; @k; @ey]. This means that these properties depend only on the symmetry class of the ensemble, or equivalently on the type of invariance of the probability measure. Here we search for an analogous principle for stochastic processes in the matrix space. We consider prototypes of additive and multiplicative stochastic processes in the space of Hermitian matrices. We show that the local spectral statistics of evolution operators for these processes are described by a determinantal point process with a kernel that interpolates between the picket-fence kernel and the sine kernel in a universal way controlled by a single parameter, the width-to-spacing ratio [@abk1; @abk2; @abk3]. The paper is organised as follows. In Section \[sec:DBM\] we recall Dyson Brownian motion [@d1]. In Section \[sec:DBMlocal\] we recall an analytic formula for the kernel of Dyson Brownian motion with the initial condition given by equidistant eigenvalues [@j]. This result is used in comparative studies towards the end of the paper. In Section \[sec:M\] we introduce a multiplicative stochastic evolution in the matrix space. 
In Section \[sec:ML\] we investigate local statistics of the Lyapunov spectrum associated with this evolution. In Section \[sec:Duality\] we discuss duality and universality of the kernels of evolution operators for additive and multiplicative stochastic processes. The material presented in Sections \[sec:M\],\[sec:ML\],\[sec:Duality\] is based on a joint work with Gernot Akemann and Mario Kieburg [@abk1; @abk2; @abk3]. The paper is concluded in Section \[sec:Conclusions\]. Additive matrix evolution - Dysonian random walk \[sec:DBM\] ============================================================ We first recall the Dyson construction of additive random walk in the space of matrices [@d1]. Let $X_m$ be $N\times N$ complex matrices. The random walk $X_0 \rightarrow X_1 \rightarrow \ldots \rightarrow X_M$ is defined by the recursive formula $$X_m = X_{m-1} + \sigma G_m \,, \label{additive}$$ which describes incremental random changes of matrices $X_m$ at discrete times $m=1,2,\ldots, M$. The increments $G_m$ are independent identically distributed $N \times N$ Ginibre matrices whose entries are themselves independent identically distributed standard complex Gaussian variables ${\mathcal C} \mathcal{N}(0,1)$ [@g]. $\sigma$ is a scale parameter. One is interested in the evolution of eigenvalues of the Hermitian matrix $A_m$ associated with $X_m$, which is obtained by the Hermitian projection $A_m = (X_m + X_m^\dagger)/\sqrt{2}$. The evolution equation for this matrix, $$A_m = A_{m-1} + \sigma H_m \,, \label{AM}$$ is analogous to Eq. (\[additive\]), except that the increments $H_m=(G_m + G_m^\dagger)/\sqrt{2}$ are GUE matrices in this case. The matrix $A_M$ at time $M$ is a sum of the initial matrix and of i.i.d. Gaussian increments $$A_M = A_0 + \sigma (H_1+H_2+\ldots+H_M) \ . \label{A0_sum}$$ The matrix $A_M$ has $N$ real eigenvalues $a_{Mj}$, $j=1,\ldots, N$. The process of evolution of these eigenvalues is known as Dysonian random walk. 
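The additive recursion and the Hermitian projection are straightforward to simulate. The following Python/NumPy sketch (illustrative; the values of $N$, $M$ and $\sigma$ are arbitrary choices, not taken from the text) generates one realisation of the walk started from $A_0=0$ and checks that the projected matrix is Hermitian with $N$ real eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, sigma = 50, 100, 0.1   # illustrative parameters

def ginibre(n):
    # Entries are i.i.d. standard complex Gaussians CN(0,1):
    # real and imaginary parts ~ N(0, 1/2), so E|z|^2 = 1.
    return (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

X = np.zeros((N, N), dtype=complex)   # initial condition X_0 = 0
for _ in range(M):
    X = X + sigma * ginibre(N)        # X_m = X_{m-1} + sigma * G_m

A = (X + X.conj().T) / np.sqrt(2)     # Hermitian projection A_M
assert np.allclose(A, A.conj().T)

eigvals = np.linalg.eigvalsh(A)       # N real eigenvalues of A_M
assert eigvals.shape == (N,) and np.isrealobj(eigvals)
```

Since the sum of the increments is itself Gaussian, the same ensemble could equivalently be sampled in a single step as $\sigma\sqrt{M}$ times one GUE matrix, as the text notes.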
One can define physical time $t = M \Delta t$ where $\Delta t$ is the time interval between two consecutive instances of the discrete process. If the scale parameter scales as $\sigma = \sigma_c \sqrt{\Delta t}$, where $\sigma_c$ is a positive constant, one can take the limit $\Delta t \rightarrow 0$ to obtain a continuous Dyson random walk which is commonly known as Dyson Brownian motion. It follows from the stability of GUE matrices [@v; @bp] that the sum of i.i.d. increments in Eq. (\[A0\_sum\]) has for $N\rightarrow \infty$ the same limiting eigenvalue density[^2] as a single GUE matrix $\sqrt{M} H$ with the scale parameter $\sqrt{M}$. Eigenvalues of $A_M$ at time $M$ have the same distribution as eigenvalues of the matrix $$\tilde{A}_M = A_0 + \sigma \sqrt{M} H = A_0 + \sigma_c \sqrt{t} H\ . \label{free_sum}$$ Local spectral properties of Dyson Brownian motion \[sec:DBMlocal\] =================================================================== Using the Dyson Coulomb gas representation [@m] of Eq. (\[AM\]) one can derive the following equations for eigenvalues [@d1] $$a_{m,j} - a_{m-1,j} = \sum_{k\ne j}\frac{1}{a_{m-1,j}-a_{m-1,k}} + \sigma g_{m,j} \label{discrete}$$ where $g_{m,j}$, $m=1,2,\ldots,M$, $j=1,2,\ldots,N$, is a set of independent standard real normal random variables ${\mathcal N}(0,1)$. The corresponding equations in the continuous time formalism read $$d a_j(t) = \sum_{k\ne j}\frac{1}{a_j(t)-a_k(t)} dt + \sigma_{c} dW_j(t) \label{coulomb_gas}$$ where $W_j(t)$, $j=1,\ldots,N$ are independent Wiener processes. If one interprets the eigenvalues $a_j(t)$, $j=1,\ldots,N$, as positions of $N$ particles in one dimension at time $t$, then the equations (\[coulomb\_gas\]) describe the Brownian motion of these particles which interact with each other. The potential of the interactions is logarithmic $\ln |a_j - a_i|$. One calls the system “Coulomb gas” since the logarithmic potential is the Coulomb potential in two dimensions. 
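The discrete eigenvalue equations can be iterated directly. The Python sketch below (illustrative parameters of my own choosing: five equidistant eigenvalues, a small noise amplitude, a seeded generator) evolves the particle positions for a few steps and checks that the trajectories keep their ordering while the gas expands under the Coulomb repulsion.

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.arange(-2.0, 3.0)     # five equidistant eigenvalues -2, -1, 0, 1, 2
sigma = 0.05                  # small noise amplitude (illustrative)

for _ in range(10):
    # drift term: sum over k != j of 1/(a_j - a_k)
    diff = a[:, None] - a[None, :]
    np.fill_diagonal(diff, np.inf)           # excludes k = j (1/inf = 0)
    drift = (1.0 / diff).sum(axis=1)
    a = a + drift + sigma * rng.standard_normal(a.size)

assert np.all(np.diff(a) > 0)    # trajectories did not cross
assert a[-1] - a[0] > 4.0        # the gas has expanded beyond its initial range
```

With the repulsive drift dominating the weak noise, the ordering of the particles is preserved, mirroring the non-intersection property of the continuous trajectories discussed below.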
Even if this is a slight abuse of terminology, as the system in question is one-dimensional, the term “Coulomb gas” perfectly reflects the behaviour of the system, which imitates the thermal behaviour of a gas of repelling particles. Particles’ trajectories generated by Eq. (\[coulomb\_gas\]) are continuous. The repulsion potential $\ln|a_j - a_i|$ prevents the trajectories from intersecting each other, so if $a_1(t)< a_2(t)< \ldots < a_N(t)$ at some $t$ then $a_1(t')< a_2(t')< \ldots < a_N(t')$ at any later time $t'>t$. To solve the stochastic differential equation (\[coulomb\_gas\]) means to determine the probability density function $P_N(x_1,x_2,\ldots, x_N;t)$, such that $P_N(x_1,x_2,\ldots, x_N;t) dx_1\ldots dx_N$ is the probability of finding eigenvalues $a_1, a_2, \ldots, a_N$ at time $t$ in the infinitesimal neighbourhood of $x_1$, $x_2$, $\ldots$, $x_N$. The standard way of solving the problem is to write down the Fokker-Planck equation associated with the stochastic differential equations (\[coulomb\_gas\]) and to solve it for $P_N$. One can then calculate correlation functions [@gmw] $$R_{k}(x_1,x_2,\ldots,x_k;t) = \frac{N!}{(N-k)!} \int \ldots \int dx_{k+1} \ldots dx_N P_N(x_1,x_2,\ldots,x_N;t) \label{Rk}$$ which are just appropriately normalised marginal distributions of $P_N$. They can be interpreted as probability densities that $k$ eigenvalues lie in the infinitesimal neighbourhood of $x_1,\ldots,x_k$, except that the total integral of $R_k$ is not one but $N!/(N-k)!$. In particular, the first correlation function $R_1(x)$ gives the distribution of eigenvalues normalised to the number of eigenvalues: $\int R_1(x) dx = N$. Generally it is difficult to find a closed-form solution to the stochastic differential equations (\[coulomb\_gas\]) since the evolution of the system is very complex and non-stationary. The repulsion makes the gas continuously expand. 
Details of this expansion are sensitive to the initial positions of particles and the statistical noise. An exception is the situation when the gas is uniformly distributed on the whole real axis (for $N=\infty$), since in this case the effect of expansion is eliminated and the average distance between particles stays constant over time. An explicit solution can be found in this case [@j]. This situation can be imitated by a finite-$N$ system with the initial condition $a_j(0) = (j - K) s$, $j=1,2,\ldots, N$ with $N=2K-1$, which describes $N$ equidistant eigenvalues (particles) uniformly distributed on the real axis within the boundaries $-s(K-1)$ and $s(K-1)$. This can be realised by choosing a diagonal matrix $A_0 = \mbox{diag}\left(-s(K-1),\ldots, -s,0,s,\ldots, s(K-1)\right)$ in Eq. (\[A0\_sum\]). During the evolution (\[coulomb\_gas\]) eigenvalues drift away from each other. The peripheral eigenvalues move away the fastest. The further an eigenvalue is from the gas boundary, the slower it moves, since it is confined by eigenvalues on both sides which have to drift away first. When $N$ is large the mean spacing between internal eigenvalues is almost constant and equal to the initial spacing $s$ for a long time $t$, or more precisely for time $t \ll N s^2/\sigma_c^2$. This can be seen from the following argument. The width of the eigenvalue distribution (radius of gyration) is equal to the square root of the second cumulant of the eigenvalue distribution of the matrix $\tilde{A}_t=A_0 + \sigma_c \sqrt{t} H$ (\[free\_sum\]). For large $N$ the second cumulant of the eigenvalue distribution of $\tilde{A}_t$ can be approximated as a sum of the second cumulant of $A_0$, which is $(sN)^2/12$, and of $\sigma_c \sqrt{t} H$, which is $\sigma_{c}^2 N t$, since for large $N$ the addition of these matrices is almost free [@v]. This gives $(sN)^2/12 + \sigma_c^2 N t$. 
Let $S$ be the spacing of a hypothetical distribution of $N$ equidistant particles with the same radius of gyration: $(S N)^2/12 = (sN)^2/12 + \sigma_{c}^2 N t$. This hypothetical spacing is related to the initial spacing $s$ as $S = s\sqrt{1 + 12 \sigma_c^2 t/(s^2 N)}$. Clearly $S$ gives an upper bound on the spacing between eigenvalues of the matrix (\[free\_sum\]) in the center of the spectrum, while the initial spacing $s$ gives a lower bound (since eigenvalues repel). For fixed $t$, the upper bound $S$ approaches $s$ for $N\rightarrow \infty$. This means that the mean spacing between eigenvalues in the center of the spectrum is equal to $s$ in this limit. The same holds in the double scaling limit $t=t(N)$, $N\rightarrow \infty$, as long as $t=t(N)$ grows slower than $N$, that is $t = o(N)$. More generally, for $N\rightarrow \infty$ one can assume that the spacing between eigenvalues lying in any compact interval is constant and equal to $s$. This is an enormous simplification. In effect one can give an explicit closed-form solution of the evolution equations (\[coulomb\_gas\]) for eigenvalues in the bulk in the limit $N\rightarrow\infty$. The solution was given in [@j], where it was shown that the correlation functions (\[Rk\]) have the determinantal form $$R_k(x_1,\ldots,x_k; t) = \det K_t(x_i,x_j)_{i,j=1,\ldots,k}$$ with the kernel $$K_t(x,y) = \frac{1}{\pi s} \mbox{Re} \sum_{k=-\infty}^{\infty} \exp\left[-2 \pi^2 w^2 k(k-1) \right] \frac{\exp[i\pi \left((2k-1) x/s + y/s\right)]}{2 \pi w^2 k + i (y-x)/s} \ . \label{kernel_t}$$ The corresponding eigenvalue distribution is $R_1(x)=K_t(x,x)$. The evolution of the eigenvalue distribution with time is shown in Fig.\[Fig:random\_walk\], where we plot the limiting density for $N=\infty$ derived analytically, $R_1(x) = K_t(x,x)$ from Eq. (\[kernel\_t\]), and the corresponding histograms for $N=255$ obtained by Monte-Carlo simulations of Eq. (\[free\_sum\]). 
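The limiting density $R_1(x)=K_t(x,x)$ is easy to evaluate numerically by truncating the sum over $k$. A sketch with $s=1$ (positions measured in units of the spacing), where the singular $k=0$ term is replaced by its finite coincident-point limit $\mbox{Re}[e^{i\pi(y-x)}/(i(y-x))]\rightarrow \pi$:

```python
import numpy as np

def R1(x, w, kmax=60):
    """Density R_1(x) = K_t(x, x) from the kernel (kernel_t) with s = 1.

    At y = x the k = 0 term has the finite limit pi, which after the 1/pi
    prefactor contributes the constant 1; every other term reduces to
    exp(-2 pi^2 w^2 k(k-1)) cos(2 pi k x) / (2 pi^2 w^2 k)."""
    r = np.ones_like(x)
    for k in range(-kmax, kmax + 1):
        if k == 0:
            continue
        r += (np.exp(-2 * np.pi**2 * w**2 * k * (k - 1))
              * np.cos(2 * np.pi * k * x) / (2 * np.pi**2 * w**2 * k))
    return r

x = np.arange(4000) / 4000.0
assert abs(np.mean(R1(x, 0.3)) - 1) < 1e-9           # unit mass per period
assert np.max(np.abs(R1(x, 2.0) - 1)) < 0.02         # nearly flat for large w
assert abs(R1(np.array([0.5]), 0.125)[0]) < 0.02     # deep gap between peaks for small w
```

The three checks reproduce the qualitative behaviour described below: one eigenvalue per unit cell for every $w$, a flat density for large $w$, and well-separated peaks for small $w$.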
One can see that the histograms for $N=255$ coincide with the limiting density. This means that the mean spacing between the five central eigenvalues remains almost constant for the given evolution times $t$, in agreement with the argument given above. ![Dyson Brownian motion of eigenvalues of a Hermitian matrix which for $t=0$ is diagonal and has equidistant eigenvalues $\lambda_j = j - K$ for $j=1,\ldots,2K-1$, where $N=2K-1$. The eigenvalue spacing is $s=1$ initially. We set $\sigma_c=1$, so the width-to-spacing ratio is $w=\sqrt{t}$ (\[wsr\_rw\]). In the left panel we plot a single realisation of the stochastic evolution (\[discrete\]) of the five central eigenvalues of the matrix, which initially, for $t=0$, are located at $\{-2,-1,0,1,2\}$. The matrix size is $N=255$. The right panel shows the central part of the spectral density for $x\in [-2.5,2.5]$ for $w=\sqrt{t}=\{0.125,0.25,0.5,1.0\}$. Solid lines represent the limiting density for $N\rightarrow \infty$ calculated from the analytic formula $R_1(x)=K_t(x,x)$ (\[kernel\_t\]). Different colors correspond to different values of the width-to-spacing ratio parameter (\[wsr\_rw\]): $w=0.125$ (black), $w=0.25$ (blue), $w=0.5$ (red) and $w=1.0$ (green). Points represent results of Monte-Carlo simulations for $N=255$. For each $w$ we generated $10^5$ matrices.[]{data-label="Fig:random_walk"}](five_ev.png "fig:"){width="6.0cm"} ![Right panel of the figure described in the caption above.](R1x.png "fig:"){width="6.0cm"} The kernel $K_t$ (\[kernel\_t\]) depends on time $t$ through the parameter $$w = \frac{\sigma_{c} \sqrt{t}}{s} \ . \label{wsr_rw}$$ This parameter has a clear physical meaning: the numerator $\sigma_c\sqrt{t}$ is approximately equal to the width of the peak representing the probability of finding an eigenvalue that undergoes Brownian motion between neighbouring eigenvalues, while the denominator $s$ is equal to the average spacing between eigenvalues. For this reason we call $w$ the width-to-spacing ratio. For short times the evolution of individual eigenvalues is described by an almost free Brownian motion and the peaks are Gaussian. When the peaks get broader, the repulsion starts to deform them. The kernel (\[kernel\_t\]) depends on the positions $x$ and $y$ only through the combinations $x/s$ and $y/s$, so one can express $x$ and $y$ in units of $s$. This amounts to introducing the rescaled variables $\xi=x/s$ and $\zeta=y/s$. Denote the resulting kernel by $K_w(\xi,\zeta)$. It is related to the kernel (\[kernel\_t\]) as $K_w(\xi,\zeta) = s K_t(s\xi,s\zeta)$, where the prefactor $s$ is the Jacobian $dx/d\xi$. The kernel $K_w$ is $$K_w(\xi,\zeta) = \frac{1}{\pi} \mbox{Re} \sum_{k=-\infty}^{\infty} \exp\left[-2 \pi^2 w^2 k(k-1) \right] \frac{\exp[i\pi \left((2k-1) \xi + \zeta\right)]}{2 \pi w^2 k + i (\zeta-\xi)} \ . 
\label{Kw1}$$ The width-to-spacing ratio $w$ increases as time goes on. The peaks of the distribution are initially localised at integers, but they broaden as $w$ increases. They begin to overlap when $w$ is of order one. As $w$ increases further, the gaps between the peaks close and the spectrum flattens (see Fig.\[Fig:random\_walk\]). Eventually, in the limit $w \rightarrow \infty$, the spectrum becomes flat. The density $R_{1,w}(\xi) = K_w(\xi,\xi)$ (\[Kw1\]) interpolates between the Dirac-delta picket fence $$R_{1,w=0}(\xi) = \sum_{j=-\infty}^{\infty}\delta(\xi-j) \label{R1dirac}$$ for $w=0$ and a fully translationally invariant flat distribution $$R_{1,w=\infty}(\xi) = 1 \label{R1flat}$$ for $w\rightarrow \infty$. In this limit the kernel reduces to the standard sine kernel [@m] $$K_{w=\infty}(\xi,\zeta)= \frac{\sin\left(\pi(\xi-\zeta)\right)}{\pi(\xi-\zeta)} \ . \label{Ksine}$$ Multiplicative matrix evolution \[sec:M\] ========================================= Let us now consider a multiplicative stochastic matrix evolution defined by the recursive formula $$X_m = G_m X_{m-1} \label{multiplicative}$$ where, as before, $m=1,\ldots,M$ is a discrete time index and the random increments $G_m$ are independent identically distributed $N \times N$ Ginibre matrices. This equation is analogous to Eq. (\[additive\]) except that the incremental changes are now multiplicative. One can analytically determine the eigenvalue distribution of $X_M$ [@bjw; @ab]. Here we are interested in the Hermitian matrix $Y_M=X_M^\dagger X_M$ associated with $X_M$. For the multiplicative process (\[multiplicative\]) $Y_M$ is a more natural Hermitian partner of $X_M$ than the combination $(X_M^\dagger + X_M)/\sqrt{2}$ that was used for the additive process (\[additive\]). Clearly the eigenvalues of $Y_M$ are the squares of the singular values of $X_M$. Let us for simplicity assume that $X_0$ is the identity matrix. 
In this case, dropping the index $M$ and writing $Y$ for brevity, $$Y = (G_M G_{M-1}\ldots G_1)^\dagger (G_M G_{M-1}\ldots G_1) \ . \label{YM}$$ The eigenvalue distribution of this matrix was determined in [@akw]. We are interested in the evolution of the eigenvalues $y_{Mj}$, $j=1,\ldots,N$, of the matrix $Y$, or alternatively in the evolution of the Lyapunov exponents $\lambda_{Mj}$, $j=1,\ldots,N$, that is, the eigenvalues of the Lyapunov matrix [@abk1; @abk2; @abk3] $$L = \frac{1}{2M} \log (G_M G_{M-1} \ldots G_1)^\dagger (G_M G_{M-1} \ldots G_1) = \frac{1}{2M} \log Y \ . \label{lyapunov}$$ For any finite $M$ and $N$ the spectra of $L$ and $Y$ contain exactly the same information, since $y_{Mj} = e^{2M \lambda_{Mj}}$. The product $G_M G_{M-1}\ldots G_1$ can be viewed as a discrete time evolution operator or a transfer matrix in a system with $N$ degrees of freedom. An initial state of the system $| x \rangle_0$ is mapped onto the state $$\left| x \right\rangle_M = G_M G_{M-1}\ldots G_1 \left|x\right\rangle_0$$ at time $M$. This equation can be depicted symbolically as a multilayered network, as sketched in Fig.\[Fig:mls\]. ![Schematic representation of the architecture of a multilayered system. Nodes (blue dots) in a layer $m$ represent components of the state vector $| x \rangle_m$ of the system at time $m$. The state $|x \rangle_m$ is obtained from $|x \rangle_{m-1}$ by a linear map $|x \rangle_m = G_m| x \rangle_{m-1}$. Elements $(G_m)_{ij}$ of the transfer matrix $G_m$ are represented by edges of the network. The network shown in the figure represents signal processing of $N=5$ degrees of freedom in $M=6$ time steps.[]{data-label="Fig:mls"}](multilayer.png){width="6.0cm"} The layout of this network is typical for signal processing in artificial neural networks known from machine learning. 
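Numerically, the Lyapunov spectrum of such a transfer-matrix product is best computed by accumulating QR decompositions rather than forming the (overflowing) product itself. A sketch for complex Ginibre increments with unit-variance entries, for which the exponents approach the deterministic values $\psi(j)/2$ quoted in the next section:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 5, 4000

# Lyapunov exponents of P = G_M ... G_1 via repeated QR decompositions;
# the diagonal of R accumulates the logarithms of the singular-value growth.
Q = np.eye(N, dtype=complex)
log_r = np.zeros(N)
for _ in range(M):
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(G @ Q)
    log_r += np.log(np.abs(np.diag(R)))

lam = np.sort(log_r / M)                  # lambda_1 < ... < lambda_N

# For unit-variance complex Gaussian entries the exponents approach psi(j)/2,
# with psi the digamma function: psi(j) = -gamma + H_{j-1} at integers.
EULER_GAMMA = 0.5772156649015329
psi = np.array([-EULER_GAMMA + sum(1.0 / k for k in range(1, j)) for j in range(1, N + 1)])
assert np.max(np.abs(lam - psi / 2)) < 0.06
```

Note that rescaling the entries of $G_m$ shifts all exponents by the same constant, so the spacings $\lambda_{j+1}-\lambda_j$ checked implicitly here are independent of the normalisation.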
Here the signal processing from layer to layer, $\left| x \right\rangle_m = G_m \left| x \right\rangle_{m-1}$, is linear, while in neural networks it is non-linear. As we shall see, even in the linear case the system undergoes an interesting phase transition between “deep” systems and “shallow” ones, which manifests itself as a change of the local spectral statistics of the Lyapunov exponents in the limit $M,N\rightarrow \infty$. Let $M=M(N)$ be a monotonically increasing function of $N$ and let $a$ be the limiting aspect ratio of the system $$a = \lim_{N\rightarrow \infty} a_N = \lim_{N\rightarrow \infty} \frac{N}{M(N)} \ . \label{aspect_ratio}$$ Depending on the value of $a$ one can distinguish three types of architecture: deep systems for $a=0$, shallow systems for $a=\infty$ and critical ones for $0<a<\infty$. When the number of time slices $M$ grows super-linearly with the number of degrees of freedom $N$, [*e.g.*]{} $M\sim N^2$, the limiting system is deep; when it scales sub-linearly, [*e.g.*]{} $M\sim \sqrt{N}$, the limiting system is shallow. The architecture is critical when $M$ is proportional to $N$. For large but finite $M,N$ the system can be called deep when $M \gg N$ and shallow when $M \ll N$. Local statistics of Lyapunov spectrum \[sec:ML\] ================================================ Eigenvalues of the Lyapunov matrix (\[lyapunov\]) for the product of Ginibre matrices assume deterministic values [@n1; @ni] $$\lambda_j = \frac{\psi(j)}2, \quad j=1,\ldots, N \label{positions}$$ in the limit $M\rightarrow \infty$, where $\psi(z)= \left(\log \Gamma(z)\right)'$ is the digamma function. For finite but very large $M\gg N$ the eigenvalues of the Lyapunov matrix (\[lyapunov\]) have a probability distribution that can be approximated by a sum of Gaussian peaks centered around the limiting values [@abk1; @abk2] $$R_{1}(\lambda) \approx \sum_{j=1}^N \frac{1}{\sqrt{2\pi \sigma_j^2}} \exp\left[-\frac{(\lambda-\lambda_j)^2}{2\sigma_j^2}\right] \ . 
\label{rhoMN}$$ Each peak is normalised to one, so the total distribution is normalised to the number of eigenvalues $N$. The widths of the peaks depend on the derivative of the digamma function, $$\sigma_j = \sqrt{\frac{\psi'(j)}{4M}}, \quad j=1,\ldots, N \ . \label{widths}$$ For $M\rightarrow \infty$ the peaks become Dirac deltas. The distribution (\[rhoMN\]) has an interesting property: the positions and widths of the peaks do not depend on $N$. This means that when $N$ is increased, new peaks are added to the distribution but the old ones stay intact. The digamma function satisfies the identity $\psi(z+1) = \psi(z) + 1/z$. Thus the mean spacing between neighbouring Lyapunov exponents is $$\lambda_{j+1}-\lambda_j = \frac{1}{2j} \ .$$ The digamma function has the asymptotic expansion $\psi(z) = \ln z - 1/(2z) - 1/(12z^2) + \ldots$ for $\mbox{Re}(z)>0$, so $\psi'(j)\approx 1/j$ for large $j$. In consequence, the width of the $j$-th peak is $\sigma_j = \sqrt{\psi'(j)/(4M)} \approx \sqrt{1/(4jM)}$. This means that for large $j$ the width-to-spacing ratio can be approximated by $$w_{j} = \frac{\sigma_{j+1}+\sigma_j}{2(\lambda_{j+1}-\lambda_j)} \approx \sqrt{\frac{j}{M}} \ . \label{wsrj}$$ The width-to-spacing ratio increases when $j$ increases. It is maximal at the upper end of the spectrum, where it takes the value $\sqrt{N/M}=\sqrt{a_N}$ (\[aspect\_ratio\]). We are now going to discuss the local spectral statistics of the Lyapunov exponents in the limit $M,N \rightarrow \infty$. We start from an explicit expression for the kernel of the matrix $Y$ (\[YM\]) for finite $M$ and $N$ [@abk2; @abk3] $$K_Y(x,y) = \frac{1}x\, \sum_{j=1}^{N} \left(\frac{x}y\right)^j G_j(y), \label{eq:main1}$$ where $$G_j(y) =\int_{-i\infty}^{+i\infty} \frac{dt}{2\pi i}\frac{\sin (\pi t)}{\pi t}\ {y}^t \left(\frac{\Gamma(j-t)}{\Gamma(j)}\right)^{M+1} \frac{\Gamma(N-j+1+t)}{\Gamma(N-j+1)}. 
\label{eq:main2}$$ There are many equivalent expressions for the kernel that can be found in the literature on the subject [@akw; @aik; @kz; @lwz]. The one given above has been derived from a formula in [@aik]. An advantage of the integral representation (\[eq:main2\]) is that it is very well suited for taking the various limits $M\rightarrow \infty$ and $N\rightarrow \infty$. One can, for example, easily transform the kernel $K_Y$ (\[eq:main1\],\[eq:main2\]) to the kernel $K_L$ for the matrix $L$ (\[lyapunov\]) by changing variables in (\[eq:main1\]). By doing this one can immediately recover Eq. (\[rhoMN\]) from the asymptotic behaviour of the integrand (\[eq:main2\]) for $M\rightarrow \infty$ [@abk2; @abk3]. Here we are mainly interested in the double scaling limit $N,M\rightarrow \infty$ and $N/M \rightarrow a$, for $a$ (\[aspect\_ratio\]) being a finite and positive number, $0<a<\infty$, which corresponds to the critical scaling. The number of Lyapunov exponents between $x$ and $x+dx$ is proportional to the eigenvalue density, $\rho_\lambda(x)dx$. The mean spacing between Lyapunov exponents in the neighbourhood of $x$ is inversely proportional to $\rho_\lambda(x)$, so it depends on the position $x$ in the spectrum. It is convenient to make the spacing independent of the position in the spectrum. This is done by unfolding the spectrum, [*i.e.*]{} by expressing the distribution in the variable $$p=\int^\lambda_{-\infty} \rho_\lambda(x) dx \label{cdf}$$ which has the uniform distribution on the unit interval [@gmw]. For finite $N$ this variable can be imitated by $p = j/N$, where $j$ is the index of the Lyapunov exponent $\lambda_j$. Since the Lyapunov exponents are ordered, $\lambda_1< \lambda_2 < \ldots <\lambda_N$, the quantity $p=j/N$ can be interpreted as the probability of finding an exponent smaller than or equal to $\lambda_j$: $\mbox{Prob}(\lambda\le \lambda_{j}) = j/N = p$. 
For $N\rightarrow \infty$ the last equation takes the form (\[cdf\]), which means that the variable $p=j/N$ indeed unfolds the spectrum in the limit $N\rightarrow \infty$. The eigenvalue density $\rho_\lambda(x)$ is known analytically for any finite $M$ [@n2], but unfortunately it is expressed in an intricate parametric form from which it is hard to reconstruct the unfolding map. However, for $M \rightarrow \infty$ one can find another way to unfold the spectrum [@n1]. It is based on the asymptotic behaviour of the Lyapunov exponents, $\lambda_j = \log(j)/2 + O(1/j)$ for large $j$, that we discussed above. A consequence of this asymptotic behaviour is that the quantity $u_j = e^{2\lambda_j}/N$ behaves asymptotically as $u_j = (j/N)\left(1+ O(1/j)\right) \approx p$. Thus, for $j$ of order $N$ it unfolds the spectrum when $N\rightarrow \infty$. The variables $u_j$ can be viewed as eigenvalues of the matrix $$u = \frac{e^{2L}}N = \frac{Y^{1/M}}N \ . \label{uLY}$$ For $M,N\rightarrow \infty$ the eigenvalue spectrum of $u$ becomes uniform on $(0,1)$ and thus it unfolds the Lyapunov spectrum. The kernel $K_u(p_x,p_y)$ for the unfolded spectrum can be obtained from $K_Y(x,y)$ (\[eq:main1\]) by changing variables to $p_x=x^{1/M}/N$, $p_y=y^{1/M}/N$, as follows from (\[uLY\]). This amounts to replacing $x$ and $y$ by $x = (p_x N)^M$ and $y=(p_y N)^M$ in $K_Y(x,y)$. One also has to include the Jacobian $dx/dp_x$ in the transformation law $K_u(p_x,p_y) = (dx/dp_x)\, K_Y(x,y)$. The mean spacing between eigenvalues of the uniform spectrum on the unit interval is $1/N$, so if one wants to investigate local level statistics at a point $p$ of the unfolded spectrum one has to zoom in at this point to the local scale $$p_x = p + \frac{\xi}{N} \ , \quad p_y= p+\frac{\zeta}{N}$$ where $\xi$ and $\zeta$ are of order one. One can now take the double scaling limit $N\rightarrow \infty$, $N/M(N)\rightarrow a$, keeping the aspect ratio (\[aspect\_ratio\]) finite and positive, $0<a<\infty$. 
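The quality of this unfolding can be checked with a few lines of arithmetic: since $\lambda_j=\psi(j)/2$, one has $e^{2\lambda_j} = e^{\psi(j)} = j - 1/2 + O(1/j)$, so $u_j N/j \rightarrow 1$ for large $j$:

```python
import math

# u_j = exp(2*lambda_j)/N = exp(psi(j))/N, and exp(psi(j)) = j - 1/2 + O(1/j),
# so u_j approaches j/N for j of order N.
EULER_GAMMA = 0.5772156649015329

def psi_int(j):
    """Digamma function at a positive integer: psi(j) = -gamma + H_{j-1}."""
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, j))

ratios = {j: math.exp(psi_int(j)) / j for j in (10, 100, 1000)}
assert all(abs(r - 1) < 0.06 for r in ratios.values())
assert abs(ratios[1000] - 1) < 1e-3          # deviation shrinks like 1/(2j)
```

The relative deviation decreases as $1/(2j)$, confirming that the constant shift of $1/2$ is irrelevant for the unfolding in the large-$N$ limit.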
We denote the limiting kernel for the unfolded spectrum at the point $p \in (0,1)$ by $$K_p(\xi,\zeta) = \lim_{N\rightarrow \infty, N/M\rightarrow a} K_u\left(p + \frac{\xi}{N}, p+\frac{\zeta}{N}\right) \ .$$ The result reads [@abk2] $$K_{p}(\xi,\zeta) = \frac{1}{2\pi ap} \mbox{Re} \sum_{\nu=-\infty}^{+\infty} \exp\left(\frac{\nu(\xi-\zeta)}{ap}\right) \mbox{erfi} \left(\frac{\pi\sqrt{2ap}}2 + i\frac{\zeta-\nu}{\sqrt{2ap}} \right) \ . \label{Kp}$$ where $\mbox{erfi}$ is the imaginary error function. The details of the calculations are presented in [@abk3]. Here we only give a short recap. One begins with an explicit expression for the kernel $K_Y$ of the matrix $Y$ for finite $M$ and $N$, as for instance the one given here by Eqs. (\[eq:main1\]) and (\[eq:main2\]). By changing variables $Y \rightarrow u$ (\[uLY\]) one can then determine the kernel $K_u$ of the $u$-spectrum which becomes unfolded in the limit $M,N\rightarrow \infty$. Before one takes the limit one has to zoom in at a point $p$ of the $u$-spectrum. Eventually one takes the double scaling limit $N\rightarrow \infty$ and $a_N=N/M \rightarrow a$, which can be done by replacing $M$ by $N/a$ and then taking the limit $N\rightarrow \infty$. The resulting expression (\[Kp\]) depends on the product $ap$ of the aspect ratio $a$ and the position in the spectrum $p\in (0,1)$. The combination $\sqrt{ap}$ can be easily identified from Eq. (\[wsrj\]) $$w_{j=pN} = \sqrt{\frac{j}{M}} = \sqrt{a_Np} \rightarrow \sqrt{ap}$$ as the width-to-spacing ratio at the position $p$ of the spectrum. For brevity we denote it by $w=\sqrt{ap}$. The kernel (\[Kp\]) for the given width-to-spacing ratio is $$\hat{K}_{w}(\xi,\zeta) = \frac{1}{2\pi w^2} \mbox{Re} \sum_{\nu=-\infty}^{+\infty} \exp\left(\frac{\nu(\xi-\zeta)}{w^2}\right) \mbox{erfi} \left(\frac{\pi w}{\sqrt{2}} + i\frac{\zeta-\nu}{w \sqrt{2}} \right) \ . 
\label{Kw2}$$ We denote it here by $\hat{K}_w$ to distinguish it from the kernel $K_w$ (\[Kw1\]) that was discussed in the previous section. The corresponding eigenvalue density $\hat{R}_{1,w}(\xi) = \hat{K}_{w}(\xi,\xi)$ is $$\hat{R}_{1,w}(\xi) = \frac{1}{2\pi w^2} \mbox{Re} \sum_{\nu=-\infty}^{+\infty} \mbox{erfi} \left(\frac{\pi w}{\sqrt{2}} + i\frac{\xi-\nu}{w \sqrt{2}} \right) \ . \label{bR}$$ It interpolates between a picket fence made of Dirac delta functions for $w\rightarrow 0$ and a flat density for $w\rightarrow \infty$, in the same manner as the kernel $K_w$, Eqs. (\[R1dirac\]) and (\[R1flat\]). The limiting form of the kernel $\hat{K}_w$ for $w\rightarrow \infty$ is given by the sine kernel, the same as for $K_w$ (\[Ksine\]). Is this a coincidence, or are the kernels perhaps equivalent? Duality and universality \[sec:Duality\] ======================================== It was Jac Verbaarschot and Maurice Duits who first suggested that the two kernels might be identical for any $w$ [@vd]. We have checked that this is indeed the case [@abk3]. The map between the expressions for $K_w$ (\[Kw1\]) and for $\hat{K}_w$ (\[Kw2\]) is provided by the Poisson summation formula, which transforms the sum over $\nu$ in Eq. (\[Kw2\]) into the sum over Fourier modes, $k$, in Eq. (\[Kw1\]). In a sense, the two expressions are dual to each other. The Dirac picket-fence limit (\[R1dirac\]) corresponds to the large time behaviour of $\hat{K}_w$ and the short time behaviour of $K_w$, while for the flat limit (\[R1flat\]) it is the other way round. This again reflects the duality of the two kernels. We have checked by Monte-Carlo simulations [@abk2; @abk3] that the local spectral density of the unfolded Lyapunov spectrum coincides with the limiting density (\[bR\]) within the numerical accuracy also when one replaces the Ginibre matrices $G_m$ in the evolution equation (\[multiplicative\]) by random matrices made of i.i.d. 
non-Gaussian random centered complex variables, or by weakly correlated Ginibre matrices. We refer the interested reader to [@abk2; @abk3]. This is an indication that the universality of local spectral statistics extends beyond the realm of Gaussian Markov stochastic processes. Conclusions \[sec:Conclusions\] =============================== We have shown here that the kernels describing local eigenvalue statistics of evolution operators for multiplicative and additive Gaussian stochastic processes in the space of Hermitian matrices (for Dyson index $\beta=2$) interpolate between the Dirac-delta kernel and the sine kernel in a universal way. The interpolation is controlled by the width-to-spacing ratio. It would be interesting to check if a similar universality also holds for real-symmetric ($\beta=1$) and quaternionic ($\beta=4$) matrices. Here we concentrated on local spectral statistics in the bulk, but one can extend the analysis to the hard and soft edges of the spectrum [@abk2; @abk3; @lww]. The main message of the paper is that the spectrum of eigenvalues (or Lyapunov exponents) of the evolution operator constructed as the product of i.i.d. Ginibre matrices changes from a continuous to a discrete one depending on the ratio of the number of degrees of freedom to the propagation time. This spectral change is a prototype of a phase transition that seems to be generic for systems having one distinguished direction, along which evolution is driven by the transfer matrix composition rule. Such a situation takes place in many physical systems. Examples include evolution operators in dynamical systems [@i], quantum transport [@b], sequential MIMO systems [@m2], quantum maps [@bbtv], multiplex networks [@bnl], artificial neural networks [@s], thermal field theory [@jw], CDT gravity [@ajl] and others. The occurrence of such a spectral phase transition seems to be an inherent feature of multilayered systems when they change from shallow to deep ones. 
Acknowledgements {#acknowledgements .unnumbered} ================ This contribution is based on joint work with Gernot Akemann and Mario Kieburg. I would like to thank Gernot and Mario for many exciting, illuminating and inspiring discussions which I have enjoyed very much. I also want to thank Jac Verbaarschot and Maurice Duits for drawing our attention to the paper [@d1] and for suggesting the equivalence of the kernels. [99]{} M. L. Mehta, [*Random Matrices*]{}, 3rd ed., Elsevier, Amsterdam (2004). P. Deift and D. Gioev, [*Random Matrix Theory: Invariant Ensembles and Universality*]{}, Courant Lecture Notes in Mathematics 18, American Mathematical Society, Providence, RI, 2009. A. B. J. Kuijlaars, [*Universality*]{}, The Oxford Handbook on Random Matrix Theory, (G. Akemann, J. Baik, and P. Di Francesco, eds.), Oxford University Press, (2011). L. Erdös and H.-T. Yau, [*Bull. Amer. Math. Soc.*]{} [**49**]{}, 377 (2012). G. Akemann, Z. Burda, M. Kieburg, [*J. Phys. A: Math. Theor.*]{} [**47**]{}, 395202 (2014). G. Akemann, Z. Burda, M. Kieburg, [*EPL*]{} [**126**]{}, 40001 (2019). G. Akemann, Z. Burda, M. Kieburg, in preparation. J. Ginibre, [*J. Math. Phys.*]{} [**6**]{}, 440 (1965). F. J. Dyson, [*J. Math. Phys.*]{} [**3**]{}, 1191 (1962). K. Johansson, [*Commun. Math. Phys.*]{} [**252**]{}, 111 (2004). D. V. Voiculescu, [*Invent. Math.*]{} [**104**]{}, 201 (1991). H. Bercovici and V. Pata, [*Math. Res. Lett.*]{} [**2**]{}, 791 (1995). Z. Burda, R. A. Janik, and B. Waclaw, [*Phys. Rev. E*]{} [**81**]{}, 041132 (2010). G. Akemann and Z. Burda, [*J. Phys. A*]{} [**45**]{}, 465201 (2012). T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, [*Phys. Rep.*]{} [**299**]{}, 189 (1998). C. M. Newman, [*Commun. Math. Phys.*]{} [**103**]{}, 121 (1986). M. Isopi and C. M. Newman, [*Commun. Math. Phys.*]{} [**143**]{}, 591 (1992). G. Akemann, M. Kieburg, and L. Wei, [*J. Phys. A*]{} [**46**]{}, 275205 (2013). G. Akemann, J. R. Ipsen, and M. Kieburg, [*Phys. Rev. E*]{} [**88**]{}, 052118 (2013). A. 
B. J. Kuijlaars and L. Zhang, [*Commun. Math. Phys.*]{} [**332**]{}, 759 (2014). D.-Z. Liu, D. Wang, and L. Zhang, [*Ann. Inst. Henri Poincaré - Probabilités et Statistiques*]{} [**52**]{}, 1734 (2016). T. Neuschel, [*Random Matrices Theory Appl.*]{} [**3**]{}, 1450003 (2014). J. Verbaarschot and M. Duits, Comments after Gernot Akemann’s talk at the Workshop [*Random Matrices, Integrability and Complex Systems, Yad Hashmona, October 2018*]{}; D.-Z. Liu, D. Wang, and Y. Wang, [*Lyapunov exponent, universality and phase transition for products of random matrices*]{}, arXiv:1810.00433 (2018). J. R. Ipsen, [*J. Stat. Mech.*]{}, 093209 (2017). C. W. J. Beenakker, [*Rev. Mod. Phys.*]{} [**69**]{}, 731 (1997). R. R. Müller, [*IEEE Trans. Inf. Theor.*]{} [**48**]{}, 2086 (2002). M. V. Berry, N. L. Balazs, M. Tabor, and A. Voros, [*Ann. Phys.*]{} [**122**]{}, 26 (1979). F. Battiston, V. Nicosia, and V. Latora, [*Phys. Rev. E*]{} [**89**]{}, 032804 (2014). J. Schmidhuber, [*Neural Networks*]{} [**61**]{}, 85 (2015). R. A. Janik and W. Wieczorek, [*J. Phys. A*]{} [**37**]{}, 6521 (2004). J. Ambjørn, J. Jurkiewicz, and R. Loll, [*Phys. Rev. Lett.*]{} [**95**]{}, 171301 (2005). [^1]: Presented at the conference Random Matrix Theory: Applications in the Information Era, Krakow, April 29th - May 3rd 2019 [^2]: One has to divide out a trivial scaling factor $\sqrt{N}$ which is proportional to the width of the eigenvalue distribution. The matrix $H/\sqrt{N}$ has a limiting density.
--- author: - Masashige Matsumoto and Mikito Koga$^1$ title: 'Exciton Mediated Superconductivity in PrOs$_4$Sb$_{12}$' --- [PrOs$_4$Sb$_{12}$]{} is a recently discovered superconductor among the praseodymium-based heavy-fermion systems. [@Bauer] Specific heat measurements revealed multiple superconducting transition temperatures at $T_{{\rm c1}}=1.85$ K and $T_{{\rm c2}}=1.75$ K. [@Vollmer] Thermal conductivity experiments also reported multiple phases depending on temperature and external magnetic field. [@Izawa] The nuclear quadrupole resonance (NQR) experiment showed that there is no Hebel-Slichter peak in $T_1^{-1}$ at the superconducting transition temperature. [@Kotegawa] Very recently, zero-field $\mu$SR measurements revealed that the superconducting state is associated with a spontaneous magnetic field, indicating that it breaks time reversal symmetry. [@Aoki-muSR] Thus far, only phenomenological theories have tried to account for these experimental signatures of unconventional superconductivity. [@Miyake; @Goryo; @Maki; @Ichioka; @Sergienko] Since the multiple superconducting phases are still under investigation, we do not discuss them in this letter. The main purpose of this letter is to present a microscopic theory based on the electronic states specific to [PrOs$_4$Sb$_{12}$]{}, and to give a scenario for the superconductivity with broken time reversal symmetry. The Pr$^{3+}$ ion has a 4$f^2$ configuration in a $T_h$ point group crystal field. [@Takegahara] It is reported that the Fermi surface of [PrOs$_4$Sb$_{12}$]{} is similar to that of the reference compound [LaOs$_4$Sb$_{12}$]{}, [@Sugawara] indicating the well-localized nature of the 4$f^2$ states. This localized nature of the $f$-electrons is characteristic of [PrOs$_4$Sb$_{12}$]{}, in contrast to other heavy-fermion superconductors with itinerant $f$-electrons, such as the U-based compounds. Therefore, we consider the conduction electron system to be well-separated from the $f$-electrons. 
Another characteristic feature of [PrOs$_4$Sb$_{12}$]{} is the magnetic field-induced ordered phase above 4.5 T, observed by specific heat, [@Aoki-field] electrical conductivity, magnetization and thermal expansion measurements. [@Ho] In this ordered phase, high-field neutron scattering measurements revealed a small staggered moment perpendicular to the field. [@Kohgi] Shiina and Aoki proposed that the field-induced order is mainly driven by a quadrupole-quadrupole interaction. [@Shiina-preprint] They assumed the $\Gamma_1$ singlet ground state and the $\Gamma_5$ triplet first excited state in the Pr 4$f^2$ configuration (strictly, $\Gamma_4$ should be used in the $T_h$ representation). Since an external magnetic field lifts the degeneracy of the $\Gamma_5$ triplet and stabilizes one of its components, this $\Gamma_1$-$\Gamma_5$ level scheme explains the field-induced order. In this letter, we assume this $\Gamma_1$-$\Gamma_5$ scheme. [PrRu$_4$Sb$_{12}$]{} is a reference superconductor ($T_{\rm c}=1.04$ K [@Takeda]) with a Hebel-Slichter peak in $T_1^{-1}$ of NQR, [@Yogi] indicating an $s$-wave pairing state. It is reported that [PrRu$_4$Sb$_{12}$]{} also has triplet excitations, as in [PrOs$_4$Sb$_{12}$]{}. [@Frederic] However, a field-induced order has not been reported thus far in [PrRu$_4$Sb$_{12}$]{}. This means that the crystal-field excitation gap to the triplet state is much smaller in [PrOs$_4$Sb$_{12}$]{} than in [PrRu$_4$Sb$_{12}$]{}. The low-energy excitations play important roles in the field-induced order and in the heavy electron mass of [PrOs$_4$Sb$_{12}$]{}. The low-lying excitation (exciton) is thus expected to be the most important origin of the exotic superconductivity in [PrOs$_4$Sb$_{12}$]{}, while this is not the case for [PrRu$_4$Sb$_{12}$]{}. In this letter, we present a microscopic theory for time reversal breaking superconductivity mediated by excitons, specific to the bcc system [PrOs$_4$Sb$_{12}$]{}. 
The wave functions for the Pr 4$f^2$ state are $$\begin{aligned} |\Gamma_1 \rangle &= \frac{\sqrt{30}}{12} ( |4 \rangle + |-4 \rangle ) + \frac{\sqrt{21}}{6} |0 \rangle, \cr |\Gamma_5^1 \rangle &= \sqrt{\frac{7}{8}} |3 \rangle - \frac{1}{\sqrt{8}} |-1 \rangle, \cr |\Gamma_5^2 \rangle &= \frac{1}{\sqrt{2}} ( |2 \rangle - |-2 \rangle ), \cr |\Gamma_5^3 \rangle &= -\sqrt{\frac{7}{8}} |-3 \rangle + \frac{1}{\sqrt{8}} |1 \rangle, \\ |\Gamma_4^1 \rangle &= -\frac{1}{\sqrt{8}} |-3 \rangle - \sqrt{\frac{7}{8}} |1 \rangle, \cr |\Gamma_4^2 \rangle &= \frac{1}{\sqrt{2}} ( |4 \rangle - |-4 \rangle ), \cr |\Gamma_4^3 \rangle &= \frac{1}{\sqrt{8}} |3 \rangle + \sqrt{\frac{7}{8}} |-1 \rangle. \nonumber \label{eqn:base}\end{aligned}$$ Here, $\Gamma_1$, $\Gamma_4$ and $\Gamma_5$ are $O_h$ representations. The total angular momentum is fixed to 4, and the wave function $|J_z\rangle$ ($J_z=-4 \sim 4$) represents the $z$ component of the angular momentum. We take the basis functions $|\phi_n \rangle$ ($n=0,1,2,3$) for the $T_h$ system as [@Shiina-preprint] $$\begin{aligned} &|\phi_0 \rangle = |\Gamma_1 \rangle, \cr &|\phi_{n>0} \rangle = \sqrt{1-d^2} |\Gamma_5^n \rangle + d |\Gamma_4^n \rangle,\end{aligned}$$ where $-1/\sqrt{2}\le d \le 1/\sqrt{2}$. As an effective Hamiltonian for the 4$f^2$ states, we take an intersite interaction into account as well. $$\begin{aligned} &H_f = H_{\rm CF} + H_{\rm I} \cr &H_{\rm CF} = \Delta_{\rm CF} \sum_{n=1}^3 |\phi_n\rangle \langle\phi_n| \label{eqn:HPr} \\ &H_{\rm I} = \sum_{\alpha=1,2,3} \sum_{\langle ij \rangle} \sum_{\beta} D_{\beta \beta} X^\alpha_{i\beta} X^\alpha_{j\beta} \nonumber\end{aligned}$$ Here, $H_{\rm CF}$ is a local term with a crystal field excitation gap $\Delta_{\rm CF}$. We choose isotropic multipole-multipole interactions for $H_{\rm I}$ in order to simplify the following discussion. 
In $H_{\rm I}$, $D_{\beta \beta}$ is a coupling constant and $X_{\beta}^\alpha$ is the $\alpha$-th component ($\alpha=1,2,3$) of a three-dimensional tensor operator for a dipole, a quadrupole, an octupole, etc. (labeled by $\beta$). $\sum_{\langle ij \rangle}$ denotes the summation over the nearest-neighbor Pr sites in the bcc lattice. First, we introduce bosonic operators as [@Kusunose; @Shiina-2003] $$X_\beta^\alpha = \sum_{n,n'=0}^3 x_{nn'\beta}^\alpha a_n^\dagger a_{n'},$$ with $x_{nn'\beta}^\alpha = \langle\phi_n| X_\beta^\alpha |\phi_{n'}\rangle$ and $a_n^\dagger a_{n'} = |\phi_n\rangle \langle\phi_{n'}|$. The bosons are subject to the local constraint $\sum_{n=0}^3 a_n^\dagger a_n =1$ at each site. Due to the finite excitation gap, the bosons $a_{n>0}$ for excitations are dilute at low temperatures, and we eliminate $a_0$ for the ground state using $a_0 = a_0^\dagger = \sqrt{ 1 - \sum_{n=1}^3 a_n^\dagger a_n}$. The multipole operator is rewritten up to quadratic order in the dilute boson operators as $$\begin{aligned} X_\beta^\alpha &= x_{00\beta}^\alpha + \sum_{n=1}^3 ( x_{n0\beta}^\alpha a_n^\dagger + x_{0n\beta}^\alpha a_n ) \cr &~+ \sum_{n,n'=1}^3 ( x_{nn'\beta}^\alpha - x_{00\beta}^\alpha \delta_{nn'} ) a_n^\dagger a_{n'} + O(3).\end{aligned}$$ One of the relevant intersite interactions in PrOs$_4$Sb$_{12}$ is of the $\Gamma_5$ quadrupolar type ($X_{\rm Q}^1=O_{xy}$, $X_{\rm Q}^2=O_{yz}$, $X_{\rm Q}^3=O_{zx}$). [@Shiina-preprint] $$O_{\xi \eta} = \frac{\sqrt{3}}{2} ( J_\xi J_\eta + J_\eta J_\xi )$$ Using this quadrupole operator, we demonstrate the derivation of the exciton dispersion. The nonzero matrix elements of the quadrupole operators $x_{nn'\beta}^\alpha$ ($\beta={\rm Q}$) are $x_{20{\rm Q}}^1 = -{\rm i} x_{\rm Q}$, $x_{10{\rm Q}}^2 = {\rm i} x_{\rm Q}/\sqrt{2}$, $x_{30{\rm Q}}^2 = -{\rm i} x_{\rm Q}/\sqrt{2}$, $x_{10{\rm Q}}^3 = x_{\rm Q}/\sqrt{2}$, $x_{30{\rm Q}}^3 = x_{\rm Q}/\sqrt{2}$, and $x_{\rm Q}=\sqrt{35(1-d^2)}$. 
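The quoted matrix element $x_{20{\rm Q}}^1 = -{\rm i} x_{\rm Q}$ can be reproduced by constructing $J_\pm$ in the $J=4$ multiplet and using $O_{xy} = \frac{\sqrt{3}}{2}(J_x J_y + J_y J_x) = \frac{\sqrt{3}}{4{\rm i}}(J_+^2 - J_-^2)$. A minimal pure-Python sketch follows; the mixing value $d=0.3$ is a sample choice for illustration, not taken from the text.

```python
import math

J, dim = 4, 9  # J = 4 multiplet; |Jz> with Jz = -4..4 stored at index Jz + 4

# ladder operators J+ and J- as dim x dim complex matrices
Jp = [[0j] * dim for _ in range(dim)]
for m in range(-J, J):
    Jp[m + 1 + J][m + J] = math.sqrt(J * (J + 1) - m * (m + 1))
Jm = [[Jp[j][i] for j in range(dim)] for i in range(dim)]  # Hermitian conjugate (real entries)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

# O_xy = (sqrt(3)/2)(JxJy + JyJx) = (sqrt(3)/(4i))(J+^2 - J-^2)
c = math.sqrt(3) / (4 * 1j)
Jp2, Jm2 = matmul(Jp, Jp), matmul(Jm, Jm)
Oxy = [[c * (Jp2[i][j] - Jm2[i][j]) for j in range(dim)] for i in range(dim)]

def ket(coeffs):
    v = [0j] * dim
    for m, amp in coeffs.items():
        v[m + J] = amp
    return v

G1   = ket({4: math.sqrt(30)/12, -4: math.sqrt(30)/12, 0: math.sqrt(21)/6})
G5_2 = ket({2: 1/math.sqrt(2), -2: -1/math.sqrt(2)})
G4_2 = ket({4: 1/math.sqrt(2), -4: -1/math.sqrt(2)})

d = 0.3  # sample mixing parameter; any |d| <= 1/sqrt(2) works
phi0 = G1
phi2 = [math.sqrt(1 - d*d)*a + d*b for a, b in zip(G5_2, G4_2)]

def sandwich(u, A, v):  # <u|A|v>
    Av = [sum(A[i][k] * v[k] for k in range(dim)) for i in range(dim)]
    return sum(u[i].conjugate() * Av[i] for i in range(dim))

x20 = sandwich(phi2, Oxy, phi0)
xQ = math.sqrt(35 * (1 - d*d))
assert abs(x20 - (-1j * xQ)) < 1e-12        # x_{20Q}^1 = -i x_Q
assert abs(sandwich(phi0, Oxy, phi0)) < 1e-12  # x_{00Q}^1 vanishes
```

The $\Gamma_4^2$ admixture contributes nothing here (its $|{\pm}4\rangle$ components have no overlap with $O_{xy}|\Gamma_1\rangle$), so the matrix element is simply $\sqrt{1-d^2}$ times the $O_h$ value, reproducing the $x_{\rm Q}=\sqrt{35(1-d^2)}$ quoted above.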
$H_f$ is now derived up to quadratic order in the boson operators as $$\begin{aligned} &H_f = \sum_i \Delta_{\rm CF} \sum_{n=1}^3 a_{in}^\dagger a_{in} \\ &~+ \sum_{\langle ij \rangle} \left[ \lambda \sum_{n=1}^3 a_{in}^\dagger a_{jn} + \lambda'( a_{i1}^\dagger a_{j3}^\dagger + a_{i3}^\dagger a_{j1}^\dagger - a_{i2}^\dagger a_{j2}^\dagger ) + {\rm h.c.} \right]. \nonumber\end{aligned}$$ Here, $\lambda'=\lambda=x_{\rm Q}^2 D_{{\rm Q Q}}$ holds when the quadrupole-quadrupole interaction is the only term in $H_{\rm I}$. For a dipole-dipole interaction, $\lambda'=-\lambda$ is obtained. In general, multipoles other than the quadrupole also contribute to the intersite interaction, so that $\lambda'$ differs from $\lambda$. The specific type of intersite interaction does not alter our results; it merely modifies the constants $\lambda$ and $\lambda'$. Since $a_1$ couples with $a_3$ through the pair creation and annihilation terms, we transform the operators as $a_1=-(a_x -{\rm i} a_y)/\sqrt{2}$, $a_2=a_z$, $a_3=(a_x +{\rm i} a_y)/\sqrt{2}$. $H_f$ is then given in terms of the decoupled bosons $a_\alpha$ ($\alpha=x,y,z$). 
$$\begin{aligned} &H_f = \sum_{\alpha=x,y,z} H_\alpha \label{eqn:Hamiltonian} \\ &H_\alpha = \sum_{{{{\mbox{\footnotesize \boldmath$k$}}}}} [ e_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}} a_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha}^\dagger a_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha} - \frac{1}{2} \Lambda_{{{{\mbox{\footnotesize \boldmath$k$}}}}} ( a_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha}^\dagger a_{-{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha}^\dagger + {\rm h.c.} ) ] \cr &e_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}}=\Delta_{\rm CF}+ \lambda \varepsilon_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}},~~~~~~ \Lambda_{{{{\mbox{\footnotesize \boldmath$k$}}}}} = \lambda' \varepsilon_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}} \cr &\varepsilon_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}}=8 \cos{\frac{k_x}{2}} \cos{\frac{k_y}{2}} \cos{\frac{k_z}{2}} \nonumber\end{aligned}$$ $a_{i\alpha}^\dagger = (1/\sqrt{N}) \sum_{{{{\mbox{\footnotesize \boldmath$k$}}}}} {\rm e}^{{\rm i} {{{{\mbox{\footnotesize \boldmath$r$}}}}}_i \cdot {{{{\mbox{\footnotesize \boldmath$k$}}}}}} a_{{{{\mbox{\footnotesize \boldmath$k$}}}}\alpha}^\dagger$ was introduced with $N$ as the number of Pr sites. We can now diagonalize the Hamiltonian using a Bogoliubov transformation. 
$$\begin{aligned} &H_\alpha = \sum_{{{{\mbox{\footnotesize \boldmath$k$}}}}} E_{{{{\mbox{\footnotesize \boldmath$k$}}}}} b_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha}^\dagger b_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha} - \frac{1}{2} \sum_{{{{\mbox{\footnotesize \boldmath$k$}}}}} ( e_{{{{\mbox{\footnotesize \boldmath$k$}}}}} - E_{{{{\mbox{\footnotesize \boldmath$k$}}}}} ) \cr &a_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha} = u_{{{{\mbox{\footnotesize \boldmath$k$}}}}} b_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha} + v_{{{{\mbox{\footnotesize \boldmath$k$}}}}} b_{-{{{{\mbox{\footnotesize \boldmath$k$}}}}}\alpha}^\dagger \label{eqn:Bogoliubov} \\ &u_{{{{\mbox{\footnotesize \boldmath$k$}}}}} = \sqrt{\frac{1}{2}(\frac{e_{{{{\mbox{\footnotesize \boldmath$k$}}}}}}{E_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}}}+1)},~~~ v_{{{{\mbox{\footnotesize \boldmath$k$}}}}} = {\rm sgn}(\Lambda_{{{{\mbox{\footnotesize \boldmath$k$}}}}}) \sqrt{\frac{1}{2}(\frac{e_{{{{\mbox{\footnotesize \boldmath$k$}}}}}}{E_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}}}-1)} \nonumber\end{aligned}$$ Here, $b_{{{{\mbox{\footnotesize \boldmath$k$}}}}}$ describes low-energy bosonic excitations. The dispersion relation of the exciton is given by $E_{{{{\mbox{\footnotesize \boldmath$k$}}}}} = \sqrt{ e_{{{{\mbox{\footnotesize \boldmath$k$}}}}}^2 - \Lambda_{{{{\mbox{\footnotesize \boldmath$k$}}}}}^2}$. There are threefold degenerate excitations ($\alpha=x,y,z$). We note that the Hamiltonian (\[eqn:Hamiltonian\]) is the same as that for interacting spin dimer systems in which the field-induced order takes place. [@Matsumoto] Next, we derive an effective interaction between conduction electrons via exciton creation and annihilation processes. The conduction electron system has characteristics of both the a$_u$ and t$_u$ molecular orbitals of the Sb$_{12}$ cage structure. 
Since the a$_u$ component strongly couples with the 4$f^1$ wave function of the 4$f^2$ $\Gamma_5$ triplet at the $\Gamma$ point, [@Harima] we restrict ourselves to the a$_u$ conduction band to discuss superconductivity. We study the following Hamiltonian $H$. $$\begin{aligned} &H = H_c + H_f + H_{cf} \\ &H_c = \sum_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\sigma} \epsilon_{{{{\mbox{\footnotesize \boldmath$k$}}}}} c_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\sigma}^\dagger c_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\sigma}, ~~~H_{cf} = -J_{cf} \sum_i {{\mbox{\boldmath$s$}}}_{i}\cdot{{\mbox{\boldmath$S$}}}_{i} \nonumber\end{aligned}$$ Here, $H_c$ is for the conduction electron system. $H_f$ is given in eq. (\[eqn:Hamiltonian\]). $H_{cf}$ represents an effective exchange interaction ($J_{cf}>0$) between the conduction electrons and the 4$f^2$ $\Gamma_5$ triplet. [@Shiba] The $\Gamma_1$ and $\Gamma_5$ states do not couple with each other via only spin exchange. $H_{cf}$ is isotropic in $T_h$ symmetry. ${{\mbox{\boldmath$s$}}}_{i}$ denotes a spin ($S=1/2$) operator for a conduction electron system at the $i$-th site. $${{\mbox{\boldmath$s$}}}_i = \frac{1}{2} \sum_{\sigma\sigma'} c_{i \sigma}^\dagger {{\mbox{\boldmath$\sigma$}}}_{\sigma\sigma'} c_{i \sigma'}$$ Here, $c_{i \sigma}$ is an annihilation operator for the a$_u$ electron at the $i$-th site. ${{\mbox{\boldmath$S$}}}_{i}$ is a pseudospin ($S=1$) operator for the $|\phi_{n>0}\rangle$ triplet, which is given at each Pr site by $${{\mbox{\boldmath$S$}}}= -{\rm i} (a_x^\dagger,a_y^\dagger,a_z^\dagger) \times (a_x,a_y,a_z).$$ When conduction electrons excite the triplet excitons via $H_{\rm cf}$, they induce polarization of the pseudospin ${{\mbox{\boldmath$S$}}}$. The other conduction electrons approach the polarized site in the next process, giving rise to an effective interaction between the conduction electrons. 
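Before carrying out that perturbation expansion, the exciton diagonalization above admits a quick numerical check: $\varepsilon_{{{{\mbox{\footnotesize \boldmath$k$}}}}}$ is precisely the bcc nearest-neighbor structure factor, and the quoted $u_{{{{\mbox{\footnotesize \boldmath$k$}}}}}$, $v_{{{{\mbox{\footnotesize \boldmath$k$}}}}}$ must satisfy the bosonic normalization, cancel the pair-creation terms, and reproduce $E_{{{{\mbox{\footnotesize \boldmath$k$}}}}}$. A sketch with sample values $\Delta_{\rm CF}=8$ K and $\lambda=\lambda'=0.2$ K (illustrative choices only):

```python
import cmath, itertools, math

def eps(k):
    """epsilon_k = 8 cos(kx/2) cos(ky/2) cos(kz/2)."""
    return 8 * math.cos(k[0]/2) * math.cos(k[1]/2) * math.cos(k[2]/2)

# epsilon_k equals the sum of exp(i k.delta) over the 8 bcc nearest neighbors
# delta = (+-1/2, +-1/2, +-1/2) (conventional cubic lattice constant set to 1)
deltas = list(itertools.product((-0.5, 0.5), repeat=3))
for k in [(0.3, -1.2, 2.0), (2.9, 0.1, -0.7)]:
    sf = sum(cmath.exp(1j*(k[0]*dx + k[1]*dy + k[2]*dz)) for dx, dy, dz in deltas)
    assert abs(sf - eps(k)) < 1e-12

DCF, lam, lamp = 8.0, 0.2, 0.2  # sample parameters (Kelvin), lambda' = lambda

def exciton(k):
    e, L = DCF + lam*eps(k), lamp*eps(k)
    E = math.sqrt(e*e - L*L)
    u = math.sqrt(0.5*(e/E + 1))
    v = math.copysign(math.sqrt(max(0.0, 0.5*(e/E - 1))), L)
    assert abs(u*u - v*v - 1) < 1e-9                 # bosonic normalization
    assert abs(e*u*v - 0.5*L*(u*u + v*v)) < 1e-9     # pair terms cancel
    assert abs(e*(u*u + v*v) - 2*L*u*v - E) < 1e-9   # diagonal term gives E_k
    return E

gap = min(exciton((kx, ky, kz))
          for kx in (0.0, math.pi, 2*math.pi)
          for ky in (0.0, math.pi, 2*math.pi)
          for kz in (0.0, math.pi, 2*math.pi))
assert 0 < gap < DCF   # the dispersion softens below the bare gap
```

With $\lambda'=\lambda>0$ one has $E^2 = \Delta_{\rm CF}^2 + 2\Delta_{\rm CF}\lambda\varepsilon$, so the minimum sits where $\varepsilon_{{{{\mbox{\footnotesize \boldmath$k$}}}}}=-8$, e.g. at the zone-boundary point $(2\pi,0,0)$.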
We treat $H_{cf}$ as a perturbation and derive an effective interaction Hamiltonian up to second order. $$H' = H_{cf} \frac{1}{E_0 - H_c - H_f} H_{cf}$$ Since the characteristic energy is $\Delta_{\rm CF} \sim 8$ K, [@Maple; @Kohgi] the excitons are dilute at $T<T_{\rm c}=1.85$ K. We consider only the exciton pair creation and annihilation processes, which are dominant at low temperatures. In this case, we obtain the following $H'$ using the Bogoliubov transformation (\[eqn:Bogoliubov\]). $$\begin{aligned} &H' = \sum_{{{{\mbox{\footnotesize \boldmath$k$}}}}} V_{{{{\mbox{\footnotesize \boldmath$k$}}}}}~{{\mbox{\boldmath$s$}}}_{{{{\mbox{\footnotesize \boldmath$k$}}}}}\cdot{{\mbox{\boldmath$s$}}}_{-{{{{\mbox{\footnotesize \boldmath$k$}}}}}} \\ &V_{{{{\mbox{\footnotesize \boldmath$k$}}}}} = 2 J_{cf}^2 \frac{1}{N} \sum_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} \frac{u_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}v_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}u_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}v_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}- v_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}^2 u_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}^2} { E_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} + E_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} } \nonumber\end{aligned}$$ Here, ${{\mbox{\boldmath$s$}}}_{{{{\mbox{\footnotesize \boldmath$k$}}}}}$ is the Fourier-transformed spin operator of the conduction electrons. We have neglected the energies of the conduction electrons in the denominator. 
The excitation has only a small dispersion, [@Kuwahara] namely $|\Lambda_{{{{\mbox{\footnotesize \boldmath$k$}}}}}| \ll \Delta_{\rm CF}$, and $V_{{{{\mbox{\footnotesize \boldmath$k$}}}}}$ simplifies to $$\begin{aligned} &V_{{{{\mbox{\footnotesize \boldmath$k$}}}}} = V_0 ( \cos{\frac{k_x}{2}} \cos{\frac{k_y}{2}} \cos{\frac{k_z}{2}} -1 ), \label{eqn:V0} \\ &V_0 = \frac{2 (\lambda'J_{cf})^2}{\Delta_{\rm CF}^3}. \nonumber\end{aligned}$$ In real space, $H'$ can be written as $$H' = \frac{1}{4}V_0 \sum_{\langle ij \rangle} {{\mbox{\boldmath$s$}}}_i \cdot {{\mbox{\boldmath$s$}}}_j - V_0\sum_i {{\mbox{\boldmath$s$}}}_i \cdot {{\mbox{\boldmath$s$}}}_i.$$ The first term is an antiferromagnetic interaction between nearest-neighbor sites \[along the (111) directions\], while the second term is an on-site ferromagnetic interaction. The former favors singlet pairing. The latter is a short-range (repulsive $s$-wave) ferromagnetic interaction and does not contribute to triplet pairings. We then study the effective Hamiltonian for the conduction electron system. $$H_{\rm eff} = \sum_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\sigma} \epsilon_{{{{\mbox{\footnotesize \boldmath$k$}}}}} c_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\sigma}^\dagger c_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}\sigma} + \sum_{{{{\mbox{\footnotesize \boldmath$k$}}}}} V_{{{{\mbox{\footnotesize \boldmath$k$}}}}}~{{\mbox{\boldmath$s$}}}_{{{{\mbox{\footnotesize \boldmath$k$}}}}}\cdot{{\mbox{\boldmath$s$}}}_{-{{{{\mbox{\footnotesize \boldmath$k$}}}}}}$$ A simple mean-field analysis leads to the following gap equation for singlet pairings at low temperatures. 
$$\Delta({{\mbox{\boldmath$k$}}}) = \frac{1}{N} \sum_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} \frac{3}{2} V_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}^{({\rm e})} \frac{\Delta({{\mbox{\boldmath$k$}}}')}{2\sqrt{\epsilon_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}^2+|\Delta({{\mbox{\boldmath$k$}}}')|^2}}$$ Here, $V_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}^{({\rm e})}$ represents the even component of $V_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}$ under ${{\mbox{\boldmath$k$}}}\rightarrow -{{\mbox{\boldmath$k$}}}$ or ${{\mbox{\boldmath$k$}}}'\rightarrow -{{\mbox{\boldmath$k$}}}'$ transformations. It is expressed as $$\begin{aligned} &V_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}-{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}^{({\rm e})} = V_0 \{ [ f_s({{\mbox{\boldmath$k$}}}) f_s({{\mbox{\boldmath$k$}}}') -1 ] + f_{d1}({{\mbox{\boldmath$k$}}}) f_{d1}({{\mbox{\boldmath$k$}}}') \cr &~~~~~~~~~~~~~~ + f_{d2}({{\mbox{\boldmath$k$}}}) f_{d2}({{\mbox{\boldmath$k$}}}') + f_{d3}({{\mbox{\boldmath$k$}}}) f_{d3}({{\mbox{\boldmath$k$}}}') \}, \\ &f_s({{\mbox{\boldmath$k$}}}) = \cos(k_x/2) \cos(k_y/2) \cos(k_z/2), \cr &f_{d1}({{\mbox{\boldmath$k$}}}) = \sin(k_x/2) \sin(k_y/2) \cos(k_z/2), \cr &f_{d2}({{\mbox{\boldmath$k$}}}) = \cos(k_x/2) \sin(k_y/2) \sin(k_z/2), \cr &f_{d3}({{\mbox{\boldmath$k$}}}) = \sin(k_x/2) \cos(k_y/2) \sin(k_z/2). \nonumber\end{aligned}$$ For triplet pairings, the effective interactions are all repulsive. For singlet pairings, there are $s$-wave \[$f_s({{\mbox{\boldmath$k$}}})f_s({{\mbox{\boldmath$k$}}}')-1$\] and $d$-wave \[$f_{dn}({{\mbox{\boldmath$k$}}})f_{dn}({{\mbox{\boldmath$k$}}}')$\] ($n=1,2,3$) channels. The $d$-wave channels are attractive, while the $s$-wave channel is repulsive due to the $-1$ term. Now we determine what type of superconducting state is most favorable within the $d$-wave channels. 
For this purpose, we derive a Ginzburg-Landau free energy from the gap equation. The gap equation for $d_1$, corresponding to $d_{xy}$-wave, is written using the Matsubara frequency. $$\begin{aligned} &\eta_1 \propto \sum_{\omega_m} \int d \Omega_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} f_{d1}({{\mbox{\boldmath$k$}}}') \frac{\Delta({{\mbox{\boldmath$k$}}}')}{\sqrt{\omega_m^2 + |\Delta({{\mbox{\boldmath$k$}}}')|^2}} \\ &\Delta({{\mbox{\boldmath$k$}}}) = \eta_1 f_{d1}({{\mbox{\boldmath$k$}}}) + \eta_2 f_{d2}({{\mbox{\boldmath$k$}}}) + \eta_3f_{d3}({{\mbox{\boldmath$k$}}}) \nonumber\end{aligned}$$ Here, $\eta_1$, $\eta_2$ and $\eta_3$ are complex numbers representing order parameters for the $d_{xy}$, $d_{yz}$ and $d_{zx}$-wave, respectively. $\int d\Omega_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'}$ means an integral over the Fermi surface. We assume $|\Delta({{\mbox{\boldmath$k$}}}')|$ is small near $T_{\rm c}$, and expand the denominator. The third-order terms are given by $$\begin{aligned} \frac{\delta F_4}{\delta \eta_1^*} &\propto \int d \Omega_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} f_{d1}({{\mbox{\boldmath$k$}}}') |\Delta({{\mbox{\boldmath$k$}}}')|^2 \Delta({{\mbox{\boldmath$k$}}}') \\ &= A |\eta_1|^2 \eta_1 + 2B ( |\eta_2|^2 + |\eta_3|^2 ) \eta_1 + B ( \eta_2^2 + \eta_3^2 ) \eta_1^* \cr A &= \int d \Omega_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} f_{d1}^4({{\mbox{\boldmath$k$}}}'),~~~ B = \int d \Omega_{{{{{\mbox{\footnotesize \boldmath$k$}}}}}'} f_{d1}^2({{\mbox{\boldmath$k$}}}') f_{d2}^2({{\mbox{\boldmath$k$}}}') \nonumber\end{aligned}$$ Here, $F_4$ is the fourth-order term in the free energy written as $$\begin{aligned} F_4 &\propto \frac{1}{2} A ( |\eta_1|^4 +|\eta_2|^4 +|\eta_3|^4 ) \cr &~+ 2B (|\eta_1|^2 |\eta_2|^2 + |\eta_2|^2 |\eta_3|^2 + |\eta_3|^2 |\eta_1|^2 ) \\ &~+ \frac{1}{2} B ( {\eta_1^*}^2 \eta_2^2 + {\eta_2^*}^2 \eta_3^2 + {\eta_3^*}^2 \eta_1^2 + {\rm c.c.} ). 
\nonumber\end{aligned}$$ The generic form of $F_4$ for the three-dimensional representation is given by [@Volovik] $$\begin{aligned} F_4 &= \beta_1 ( |\eta_1|^2 + |\eta_2|^2 + |\eta_3|^2 )^2 + \beta_2 | \eta_1^2 + \eta_2^2 + \eta_3^2 |^2 \cr &~+ \beta_3 ( |\eta_1|^4 + |\eta_2|^4 + |\eta_3|^4 ).\end{aligned}$$ There are the following relations between the coefficients. $$\beta_1 \propto B > 0,~~~~ \beta_2 \propto \frac{1}{2} B > 0,~~~~ \beta_3 \propto \frac{1}{2} (A-3B)$$ For a simple spherical Fermi surface, $\beta_3 > 0$ holds. In the most stable state, the time reversal symmetry is broken with the following order parameter: [@Volovik; @Sigrist] $$\Delta({{\mbox{\boldmath$k$}}}) = \eta [ f_{d1}({{\mbox{\boldmath$k$}}}) + \omega f_{d2}({{\mbox{\boldmath$k$}}}) + \omega^2 f_{d3}({{\mbox{\boldmath$k$}}}) ]. \label{eqn:Delta}$$ Here, $\omega={\rm e}^{\pm {\rm i} 2\pi/3}$. This state has eight point nodes. There is eightfold degeneracy due to the four directions of the threefold axes and to the time reversal degeneracy. In this letter, we studied the exciton mediated superconductivity to discuss the time reversal symmetry breaking state realized in [PrOs$_4$Sb$_{12}$]{}. Our idea is based on the fact that there exist low-lying excitations above the nonmagnetic $\Gamma_1$ singlet ground state. This is a unique feature of [PrOs$_4$Sb$_{12}$]{} among known skutterudite superconductors. Our theory reveals that the superconducting pairing symmetry is determined by the ${{\mbox{\boldmath$k$}}}$ dependence of the exciton dispersion. For a bcc lattice, a three-component $d$-wave state appears with broken time reversal symmetry, which agrees with the result of the zero-field $\mu$SR experiment. We point out that the most stable state (\[eqn:Delta\]) breaks the time reversal symmetry around one of the threefold axes due to the relative phase ${\rm e}^{\pm {\rm i} 2\pi/3}$ between the three components. 
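The channel decomposition of $V^{({\rm e})}$ and the sign of $\beta_3$ can both be checked numerically. The sketch below verifies that the even part of $f_s({{\mbox{\boldmath$k$}}}-{{\mbox{\boldmath$k$}}}')$ separates exactly into the $s$- and three $d$-channels (the constant $-1$ term carries through unchanged), and then evaluates $A$ and $B$ on a spherical Fermi surface of sample radius $k_F=1$ in units of the inverse lattice constant; this radius is an illustrative assumption, since $\beta_3$ depends on the Fermi-surface shape.

```python
import math, random

def fs(k):  return math.cos(k[0]/2)*math.cos(k[1]/2)*math.cos(k[2]/2)
def fd1(k): return math.sin(k[0]/2)*math.sin(k[1]/2)*math.cos(k[2]/2)
def fd2(k): return math.cos(k[0]/2)*math.sin(k[1]/2)*math.sin(k[2]/2)
def fd3(k): return math.sin(k[0]/2)*math.cos(k[1]/2)*math.sin(k[2]/2)

# even part of f_s(k - k') under k -> -k equals the channel sum
random.seed(0)
for _ in range(200):
    k  = [random.uniform(-3, 3) for _ in range(3)]
    kp = [random.uniform(-3, 3) for _ in range(3)]
    g  = lambda q: fs([a - b for a, b in zip(q, kp)])
    even = 0.5*(g(k) + g([-a for a in k]))
    chan = fs(k)*fs(kp) + fd1(k)*fd1(kp) + fd2(k)*fd2(kp) + fd3(k)*fd3(kp)
    assert abs(even - chan) < 1e-12

def fermi_surface_averages(kF, n=300):
    """Midpoint quadrature of <f_d1^4> and <f_d1^2 f_d2^2> over a sphere."""
    A = B = W = 0.0
    for i in range(n):
        th = math.pi*(i + 0.5)/n
        w, st, ct = math.sin(th), math.sin(th), math.cos(th)
        for j in range(n):
            ph = 2*math.pi*(j + 0.5)/n
            k = (kF*st*math.cos(ph), kF*st*math.sin(ph), kF*ct)
            f1, f2 = fd1(k), fd2(k)
            A += w*f1**4; B += w*(f1*f2)**2; W += w
    return A/W, B/W

A, B = fermi_surface_averages(1.0)
beta1, beta2, beta3 = B, B/2, (A - 3*B)/2
assert beta1 > 0 and beta2 > 0 and beta3 > 0
```

In the small-$k_F$ limit the half-angle sines and cosines linearize and $A/B \to 3$ exactly, so $\beta_3 \to 0$; the finite-$k_F$ form factors tip the balance to $A > 3B$, consistent with the $\beta_3 > 0$ statement above.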
A magnetic exciton mechanism was suggested as an origin of $d$-wave superconductivity in [UPd$_2$Al$_3$]{}. [@Sato; @Thalmeier; @McHale] In that case, the superconductivity coexists with antiferromagnetic order. In our model for [PrOs$_4$Sb$_{12}$]{}, the ground state is the nonmagnetic $\Gamma_1$ singlet, which does not compete with superconductivity, and we consider excitons with a finite energy gap. These points distinguish our model from that for [UPd$_2$Al$_3$]{}. Our theory predicts that the gapped-exciton-mediated superconductivity is suppressed as the crystal-field excitation gap $\Delta_{\rm CF}$ \[see eq. (\[eqn:V0\])\], the characteristic energy of the excitons, increases. This can be realized by substituting Ru for Os, since [PrRu$_4$Sb$_{12}$]{} has a much larger crystal-field excitation gap. [@Frederic] Very recently, it has been reported that $T_{\rm c}$ first decreases from 1.85 K (the $T_{\rm c}$ of [PrOs$_4$Sb$_{12}$]{}) upon substitution of Ru for Os, and then increases towards 1.04 K (the $T_{\rm c}$ of [PrRu$_4$Sb$_{12}$]{}), which is understood as a competition between the $d$-wave and $s$-wave states. [@Frederic] The initial decrease in $T_{\rm c}$ agrees with our theory. Another way to increase the crystal-field excitation gap $\Delta_{\rm CF}$ is to apply pressure; in our model, the $T_{\rm c}$ of [PrOs$_4$Sb$_{12}$]{} should decrease with pressure. Throughout this letter, the conduction band has been restricted to the $a_u$ band, which hybridizes strongly with an $f$-electron wave function of the Pr 4$f^2$ state. In fact, the $t_u$ band electrons also couple with the 4$f^2$ state and admix with the $a_u$ on the Pr sites, which leads to an orbital exchange interaction. This interaction couples the Pr $\Gamma_1$ singlet ground state to the $\Gamma_5$ triplet, and the orbital exchange (or interband exchange) process enters the effective interaction between conduction electrons. 
The possibility of multiband superconductivity will be studied in the future. The idea of gapped-exciton-mediated superconductivity can also be applied to spin dimer systems, in which a nonmagnetic singlet ground state is realized accompanied by dispersive triplet excitations with a finite energy gap. Such a system is essentially the same as the Pr 4$f^2$ system discussed in this letter. Introducing a conduction electron system coupled to the spin dimers, we can expect superconductivity mediated by the triplet excitons. This is one promising direction in the search for new superconductors. We are indebted to Y. Aoki, K. Kuwahara, H. Shiba and R. Shiina for valuable discussions. [99]{} E. D. Bauer [*et al*]{}.: Phys. Rev. B [**65**]{} (2002) 100506(R). Y. Vollmer [*et al*]{}.: Phys. Rev. Lett. [**90**]{} (2003) 057001. K. Izawa [*et al*]{}.: Phys. Rev. Lett. [**90**]{} (2003) 117001. H. Kotegawa [*et al*]{}.: Phys. Rev. Lett. [**90**]{} (2003) 027001. Y. Aoki [*et al*]{}.: Phys. Rev. Lett. [**91**]{} (2003) 067003. K. Miyake, H. Kohno and H. Harima: J. Phys.: Condens. Matter [**15**]{} (2003) L275. J. Goryo: Phys. Rev. B [**67**]{} (2003) 184511. K. Maki [*et al*]{}.: Europhys. Lett. [**64**]{} (2003) 496. M. Ichioka, N. Nakai and K. Machida: J. Phys. Soc. Jpn. [**72**]{} (2003) 1322. I. A. Sergienko and S. H. Curnoe: Phys. Rev. B [**70**]{} (2004) 144522. K. Takegahara, H. Harima and A. Yanase: J. Phys. Soc. Jpn. [**70**]{} (2001) 1190. H. Sugawara [*et al*]{}.: Phys. Rev. B [**66**]{} (2002) 220504(R). Y. Aoki [*et al*]{}.: J. Phys. Soc. Jpn. [**71**]{} (2002) 2098. P.-C. Ho [*et al*]{}.: Phys. Rev. B [**67**]{} (2003) 180508(R). M. Kohgi [*et al*]{}.: J. Phys. Soc. Jpn. [**72**]{} (2003) 1002. R. Shiina and Y. Aoki: J. Phys. Soc. Jpn. [**73**]{} (2004) 541. N. Takeda and M. Ishikawa: J. Phys. Soc. Jpn. [**57**]{} (2000) 868. M. Yogi [*et al*]{}.: Phys. Rev. B [**67**]{} (2003) 180501(R). N. A. Frederic [*et al*]{}.: Phys. Rev. B [**69**]{} (2004) 024523. H. Kusunose and Y. Kuramoto: J. 
Phys. Soc. Jpn. [**70**]{} (2001) 3076. R. Shiina [*et al*]{}.: J. Phys. Soc. Jpn. [**72**]{} (2003) 1216. M. Matsumoto [*et al*]{}.: Phys. Rev. Lett. [**89**]{} (2002) 077203; Phys. Rev. B [**69**]{} (2004) 054423. H. Harima and K. Takegahara: J. Phys.: Condens. Matter [**15**]{} (2003) S2081. H. Shiba, O. Sakai and M. Koga: in preparation. M. B. Maple [*et al*]{}.: J. Phys. Soc. Jpn. [**71**]{} (2002) Suppl. 23. K. Kuwahara [*et al*]{}.: J. Phys. Soc. Jpn. [**73**]{} (2004) 1438. G. E. Volovik and L. P. Gor’kov: Zh. Eksp. Teor. Fiz. [**88**]{} (1985) 1412 \[Sov. Phys. JETP [**61**]{} (1985) 843\]. M. Sigrist and K. Ueda: Rev. Mod. Phys. [**63**]{} (1991) 239. N. K. Sato [*et al*]{}.: Nature [**410**]{} (2001) 340. P. Thalmeier: Eur. Phys. J. B [**27**]{} (2002) 29. P. McHale, P. Thalmeier and P. Fulde: Phys. Rev. B [**70**]{} (2004) 014513.
--- abstract: 'A considerable fraction of the energy in a solar flare is released as suprathermal electrons; such electrons play a major role in energy deposition in the ambient atmosphere and hence the atmospheric response to flare heating. Historically the transport of these particles has been approximated through a deterministic approach in which first-order secular energy loss to electrons in the ambient target is treated as the dominant effect, with second-order diffusive terms (in both energy and angle) being generally either treated as a small correction or neglected. However, it has recently been pointed out that while neglect of diffusion in energy may indeed be negligible, diffusion in angle is of the same order as deterministic scattering and hence must be included. Here we therefore investigate the effect of angular scattering on the energy deposition profile in the flaring atmosphere. A relatively simple compact expression for the spatial distribution of energy deposition into the ambient plasma is presented and compared with the corresponding deterministic result. For unidirectional injection there is a significant shift in heating from the lower corona to the upper corona; this shift is much smaller for isotropic injection. We also compare the heating profiles due to return current Ohmic heating in the diffusional and deterministic models.' author: - 'A. Gordon Emslie, Nicolas H. Bian, and Eduard P. Kontar' bibliography: - 'diffusive\_energy\_deposition\_refs.bib' title: ENERGY DEPOSITION BY ENERGETIC ELECTRONS IN A DIFFUSIVE COLLISIONAL TRANSPORT MODEL --- Introduction ============ Energy transport in solar flares involves a variety of mechanisms, such as nonthermal particle acceleration and propagation, thermal conduction, radiation, and bulk mass motions [see, e.g., @1988psf..book.....T; @2011SSRv..159..107H; @2011SSRv..159..301K for reviews]. 
A significant fraction [e.g., @2012ApJ...759...71E] of the energy released is manifested as bremsstrahlung-emitting deka-keV electrons [see, e.g., @1976SoPh...50..153L; @2011SSRv..159..357Z]. These electrons propagate from the primary energy release site and deposit their energy in the ambient target principally through Coulomb collisions on ambient electrons [e.g., @1972SoPh...26..441B; @1978ApJ...224..241E], with additional energy losses associated with Ohmic dissipation of the neutralizing return current [e.g., @1980ApJ...235.1055E; @2006ApJ...651..553Z] and with the turbulent environment through which they propagate. Modeling of the Coulomb collision process has typically employed a test-particle approach involving systematic (secular) energy loss [e.g., @1971SoPh...18..489B; @1972SoPh...26..441B; @1978ApJ...224..241E], although numerical solutions of the Fokker-Planck equation, involving collisional diffusion in pitch angle and energy [e.g. @2014ApJ...787...86J] in addition to the secular energy loss term, have also been carried out. [@2017ApJ...835..262B] have shown that, while diffusion in [*energy*]{} can be justifiably neglected in a sufficiently cold target, diffusion of the accelerated electrons in [*angle*]{} is of the same order as the secular change in angle, and thus it is essential to include diffusive angular scattering processes in determining the spatial and angular distributions of the accelerated electrons in the target. Knowledge of the energy deposition profile is a key element in determining the response of the solar atmosphere to flare heating [@1989ApJ...341.1067M; @2015ApJ...809..104A] and hence in interpreting the plethora of observations of Doppler-shifted and -broadened spectral lines in terms of the velocity differential emission measure [@1995ApJ...447..915N] corresponding to candidate energy transport models. 
In this paper we therefore build on the results of @2017ApJ...835..262B to derive a formula for the energy deposition profile associated with the passage of electrons through a cold target, where diffusion associated with angular scattering is explicitly taken into account. The results show that for unidirectional injection the spatial distribution of plasma heating differs noticeably from the simple deterministic treatment that has formed the basis for much of the modeling of both solar [@1973SoPh...31..143B; @1989ApJ...341.1067M] and stellar [@2015ApJ...809..104A] flares to date. In Section \[diffusive-solutions\] we present an analysis of collision-dominated electron propagation in a cold target, with angular diffusion taken into account; the results are presented as a solution for the electron flux $F(E,z)$ (electrons cm$^{-2}$ s$^{-1}$ keV$^{-1}$) at energy $E$ and target depth $z$ in terms of an integral over a Green’s function for electrons injected at a specified energy and pitch angle. In Section \[energy-deposition-rates\] we use this result to calculate the energy deposition rate as a function of $z$, both for unidirectional and isotropic injection cases. In Section \[return-current-ohmic\] we briefly discuss the impact of diffusive angular scattering on the return current Ohmic losses associated with driving the beam-neutralizing electron current though the finite resistivity of the ambient plasma. In Section \[summary-conclusions\] we discuss the results and present our conclusions. Solution to the collisional transport equation in the diffusive regime {#diffusive-solutions} ====================================================================== [@2017ApJ...835..262B] have shown that the collisional transport of electrons in a cold target can effectively be modeled, in a first (local) approximation[^1], by the one-dimensional transport equation [e.g. 
@2014ApJ...780..176K] $$\label{cold-diffusion} - \, \frac{\partial }{\partial z} \left ( \frac{\lambda_{C}(v)v}{6} \, \frac{\partial f_{0}(z,v)}{\partial z} \right ) = \frac{1}{v^{2}} \, \frac{\partial}{\partial v} \left ( v^{3} \, \nu_{C}(v) \, f_{0} \right ) + S_{0}(z,v) \,\,\, .$$ Here $f_0(v,z)$ (electrons cm$^{-3}$ \[cm s$^{-1}$\]$^{-3}$) is the principal (isotropic) part of the electron phase space distribution at speed $v$ and distance $z$ from the injection site, $S_0(z,v)$ (electrons cm$^{-3}$ s$^{-1}$ \[cm s$^{-1}$\]$^{-3}$) is the injection (source) term, and the collisional mean-free path $$\label{lambda_C} \lambda_{C}(v) = \frac{v}{\nu_{C}(v)} \,\,\, ,$$ with $\nu_C(v)$ the cold-target collision frequency, given by $$\label{nu-c} \nu_{C}(v) = \frac{4\pi n_e \, e^4 \, \ln \Lambda}{m_e^2} \, \frac{1}{v^3} \,\,\, .$$ In this equation $n_e = 4 \pi \int f_0(v) \, v^2 \, dv$ is the local density (cm$^{-3}$), $e$ (esu) and $m_e$ (g) are the electronic charge and mass, respectively, and $\ln \Lambda$ is the Coulomb logarithm [e.g., @1962pfig.book.....S]. It is convenient to make a transformation of the dependent variable from $f_0(v,z)$ to the energy flux $F(E,z)$ (electrons cm$^{-2}$ s$^{-1}$ erg$^{-1}$). 
This is related to the phase-space distribution function $f_0$ (electrons cm$^{-3}$ \[cm s$^{-1}$\]$^{-3}$) by considering the hemispherical particle flux, i.e., $$F(E,z) \, dE = f_{0}(v,z) \, v^2 \, dv \int_{\phi = 0}^{2 \pi} \int_{\theta =0}^{\pi/2} v \cos \theta \, \sin \theta \, d\theta \, d\phi = \pi f_0 \, v^3 \, dv \,\,\, .$$ Using $dE= m_e v \, dv$ we obtain the relation $$\label{F-f} F(E,z) = \frac{\pi}{m_e} \, f_0 (v,z) \, v^2 \,\,\, .$$ Substituting Equations (\[lambda\_C\]), (\[nu-c\]), and (\[F-f\]) in Equation (\[cold-diffusion\]), we obtain the diffusion equation $$\label{diffusion-continuity} - \, \frac{\lambda_C(E)}{6} \, \frac{\partial^2 F(E,z)}{\partial z^2} + \frac{\partial }{\partial E} \, \left [ \, B(E) \, F(E,z) \, \right ] = {\hat S}(E,z) \,\,\, ,$$ where ${\hat S}(E,z)$ (cm$^{-3}$ s$^{-1}$ erg$^{-1}$) = $(\pi v/m_e) \, S_0(v,z)$, $\lambda_C(E)$ is the collisional mean free path as a function of energy $E$ (cf. Equations (\[lambda\_C\]) and (\[nu-c\])): $$\label{lambdae-def} \lambda_C(E)= \frac{E^2}{\pi n_e e^4 \ln \Lambda} \equiv \frac{2 E^2}{Kn} \,\,\, ,$$ and $B(E)$ (erg cm$^{-1}$) is the usual [@1972SoPh...26..441B; @1978ApJ...224..241E] cold-target energy loss rate per unit distance: $$\label{cold-target-be} B(E) \equiv \frac{dE}{dz} = - \frac{2\pi e^4 \ln \Lambda \, n}{E} \equiv -\frac{Kn}{E} \,\,\, .$$ Equation (\[cold-target-be\]) has solution $E^2(z) = E^2(0) - 2K \int n(z) \, dz$. For simplicity, we shall henceforth assume a uniform density $n$, so that $E^2= E^2(0) - 2Knz$. The characteristic collisional stopping distance for an electron of injected energy $E$ in a scenario without diffusion is $E^2/2Kn$, and is thus one-fourth of the collisional mean free path (\[lambdae-def\]). 
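In dimensionless units with $Kn=1$ (an illustrative choice), the cold-target relations above are easy to verify numerically: integrating $dE/dz=-Kn/E$ reproduces $E^2(z)=E^2(0)-2Knz$, and the stopping depth $E^2/2Kn$ is one quarter of $\lambda_C=2E^2/Kn$.

```python
import math

# cold-target energy loss in units where K n = 1: dE/dz = -1/E
def propagate(E0, z_end, dz=1e-3):
    """Forward-Euler integration of dE/dz = -1/E from z = 0 to z_end."""
    E, z = E0, 0.0
    while z < z_end and E*E > 2*dz:   # stop just before the electron halts
        E += -dz/E
        z += dz
    return E, z

E0 = 20.0
# full stop: the depth approaches E0^2/(2 K n) = 200
_, z_stop = propagate(E0, 1e9)
assert abs(z_stop - E0*E0/2) < 0.1
# mid-path check of E(z)^2 = E0^2 - 2 K n z
E_mid, _ = propagate(E0, 100.0)
assert abs(E_mid - math.sqrt(E0*E0 - 2*100.0)) < 1e-3
# the stopping depth is one quarter of lambda_C = 2 E0^2 / (K n)
lambda_C = 2*E0*E0
assert abs(lambda_C / (E0*E0/2) - 4) < 1e-12
```

The same integration with a depth-dependent $n(z)$ would reproduce the general column-depth form $E^2(z)=E^2(0)-2K\int n(z)\,dz$.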
We now change to the new dependent variable $$\label{Phi-def} \Phi(E,z) = F(E,z) \, B(E) \equiv -\frac{Kn}{E} \, F(E,z)$$ (units cm$^{-3}$ s$^{-1}$) and to a new independent energy variable (with units cm$^2$) $$\begin{aligned} \label{zeta-def} \zeta = \frac{1}{6} \, \int dE \, \frac{\lambda_C(E)}{B(E)} & = & - \frac{1}{6Kn} \, \int_{E_{0}}^{E} \lambda_C(E') \, E' \, dE' = \frac{1}{3 (Kn)^2} \int_E^{E_{0}} E'^3 \, dE' \cr &=& \ell^{2} \left [ \left ( \frac{E_{0}}{k_{B}T_{e}} \right )^{4} - \left (\frac{E}{k_{B}T_{e}} \right )^{4} \right ] \,\,\, ,\end{aligned}$$ where $$\label{ell-def} \ell = \frac{1}{2 \sqrt{3}} \frac{(k_B T_e)^2}{Kn} \equiv \frac{\lambda_{ec}}{4 \sqrt{3}} ; \qquad \lambda_{ec} = \frac{2 (k_B T_e)^2}{K \, n} \,\,\, .$$ With this substitution, Equation (\[diffusion-continuity\]) takes the form of a standard diffusion equation $$\label{basic} \frac{\partial \Phi}{\partial \zeta} = \frac{\partial^2 \Phi(E,z)}{\partial z^2} + {\overline S}(\zeta,z) \,\,\, ,$$ where ${\overline S}(\zeta,z) = (6 B(E)/\lambda_C(E)) \, {\hat S}$ (cm$^{-5}$ s$^{-1}$) is the pertinent source function. The well-known Green’s function for such a parabolic diffusion equation is $$G_{\Phi}(\zeta,z) = \frac{1}{(4\pi \zeta)^{1/2}} \, \exp \left ( - \frac{({z-z'})^{2}}{4\zeta} \right ) \,\,\, ,$$ or, in terms of the original independent variables $(E,z)$ and dependent variable $F(E,z)$, $$G_{F}(E,z) = \frac{E}{Kn \left \{ 4\pi l^{2} \left [ \left (\frac{E_{0}}{k_{B}T_{e}} \right )^{4}- \left ( \frac{E}{k_{B}T_{e}} \right )^{4} \right ] \right \}^{1/2}} \, \exp \left \{ - \, \frac{({z-z'})^{2}}{4 l^{2} \left [ \left ( \frac{E_{0}}{k_{B}T_{e}} \right )^{4}- \left ( \frac{E}{k_{B}T_{e}} \right )^{4} \right ]} \right \} \,\,\, .$$ Hence, the solution to Equation (\[diffusion-continuity\]), with a source term of the form ${\hat S}(E,z) = S(z) \, F_{0}(E_0)$, where $S(z)$ has units cm$^{-1}$ and $F_0(E_0)$ has units cm$^{-2}$ s$^{-1}$ erg$^{-1}$, can be expressed [see Eq. 
(26) in @2014ApJ...780..176K] as $$\begin{aligned} \label{f-result} F(E,z) & = & \frac{E}{Kn}\int _{-\infty}^{+\infty}dz'\int_{E}^{\infty} dE_{0} \frac{S(z') \, F_{0}(E_{0})}{\left \{ 4\pi \ell^{2} \left [ \left (\frac{E_{0}}{k_{B}T_{e}} \right )^{4}- \left ( \frac{E}{k_{B}T_{e}} \right )^{4} \right ] \right \}^{1/2}} \, \times \cr & \times & \, \exp \left \{ - \, \frac{({z-z'})^{2}}{4 \ell^{2} \left [ \left ( \frac{E_{0}}{k_{B}T_{e}} \right )^{4}- \left ( \frac{E}{k_{B}T_{e}} \right )^{4} \right ]} \right \} \,\,\, .\end{aligned}$$ To illustrate the form of this solution, and in particular how it deviates from the diffusion-free result of past works, let us assume for definiteness a point-injection $$S(z)=\delta(z)$$ and a low-energy-truncated power-law injection form for the source (acceleration) spectrum: $$F_{0}(E_0)=\frac{\dot{N}}{A} \, \frac{(\delta -1)}{E_{c}} \, \left ( \frac{E_0}{E_{c}} \right )^{-\delta} \, H(E_0 - E_c) \,\,\, ,$$ where $H(x)$ is the Heaviside step function and the total injected rate (s$^{-1}$) $$\dot{N} = A \, \int_{E_{c}}^{\infty} F_0(E_0) \, dE_0 \,\,\, .$$ With these identifications, we obtain $$\label{fez-result} F(E,z) = \sqrt{\frac{3}{\pi}} \, \frac{\dot{N}}{A} \, \frac{(\delta-1)}{E_{c}} \times \begin{cases} E \, \int_{E}^{\infty} dE_{0} \, \frac{(E_{0}/E_{c})^{-\delta}}{(E_0^4-E^4)^{1/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} \qquad E \ge E_c \cr E \, \int_{E_c}^{\infty} dE_{0} \, \frac{(E_{0}/E_{c})^{-\delta}}{(E_0^4-E^4)^{1/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} \qquad E < E_c \,\,\, . \end{cases}$$ We can compare this expression with that for one-dimensional deterministic transport. Unlike for the diffusional case[^2], there now [*is*]{} a unique value of the energy $E$ at position $z$. For a one-dimensional transport model, this is given by $$\label{ee0} E^2 = E_0^2 - 2 K n z \,\,\, .$$ Further, since all the energy is injected in one direction, we need consider only $z \ge 0$.
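Stepping back briefly, the change of variables in Equations (\[zeta-def\]) and (\[ell-def\]) is easy to verify numerically; a minimal sketch in normalized units, with $Kn=1$ and an arbitrary illustrative value of $k_B T_e$:

```python
import math
from scipy.integrate import quad

# Normalized units with Kn = 1; k_B T_e is an arbitrary illustrative scale.
Kn = 1.0
kT = 0.1
E0, E = 2.0, 1.3    # injected and current energies

# zeta = (1/(3 (Kn)^2)) int_E^{E0} E'^3 dE'   (Equation (zeta-def))
zeta_quad, _ = quad(lambda Ep: Ep**3 / (3.0 * Kn**2), E, E0)

# Closed form: ell^2 [(E0/kT)^4 - (E/kT)^4], with ell = (kT)^2 / (2 sqrt(3) Kn)
ell = kT**2 / (2.0 * math.sqrt(3.0) * Kn)
zeta_closed = ell**2 * ((E0 / kT)**4 - (E / kT)**4)
assert abs(zeta_quad - zeta_closed) < 1e-10 * zeta_closed

# ell also equals lambda_ec / (4 sqrt(3)), with lambda_ec = 2 (kT)^2 / (Kn)
lambda_ec = 2.0 * kT**2 / Kn
assert abs(ell - lambda_ec / (4.0 * math.sqrt(3.0))) < 1e-15
```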
The corresponding expression for $F(E,z)$ is [e.g., @1984ApJ...279..882E] $$\begin{aligned} \label{fez-old-model} F_{ND}(E,z) & = & F_0(E_0) \, \frac{dE_0}{dE} = \frac{E}{E_0} \, F_0(E_0) = \cr & = & \begin{cases} \frac{\dot{N}}{A} \, (\delta-1) \, E_c^{\delta-1} \, \frac{E}{\left ( E^2 + 2Knz \right )^{(\delta+1)/2} } \quad ; \quad E^2 \ge E_c^2 - 2 K n z \\ 0 \qquad \qquad \qquad \qquad \qquad \qquad \quad \,\, ; \quad {\rm otherwise} \,\,\, . \end{cases}\end{aligned}$$ Collisional energy deposition rate {#energy-deposition-rates} ================================== With the forms of $F(E,z)$ now determined, we turn our attention to the energy deposition profile due to Coulomb collisions, thus generalizing the diffusionless treatments of [@1973SoPh...31..143B] and [@1978ApJ...224..241E]. We remind the reader that even in the diffusional model, the diffusion is in pitch angle only [diffusion in energy is a higher order effect; @2017ApJ...835..262B], so that a cold-target energy loss rate $dE/dz = -Kn/E$ is still appropriate for each electron. Non-diffusional model {#edep-nd} --------------------- We first review the results for the deterministic non-diffusional model. Although these results are well established in the literature, dating back to [@1972SoPh...26..441B; @1973SoPh...31..143B], it is worth reviewing these to provide a baseline and also to develop a method that carries over to the diffusive case. In the non-diffusive case, the heating rate $Q(z)$ can be obtained by evaluating [cf. 
@1973SoPh...31..143B; @1978ApJ...224..241E] the quantity $$\label{qnd-alternative} Q(z) = \int_0^\infty F_{ND}(E,z) \, \left \vert \frac{dE}{dz} \right \vert \, dE = Kn \int_0^\infty \frac{F_{ND}(E,z)}{E} \, dE \,\,\, .$$ Substituting for $F_{ND}(E,z)$ from Equation (\[fez-old-model\]), we obtain $$\begin{aligned} \label{q-nd-de-dz-method} Q(z) = \frac{{\dot N}}{A} \, (\delta-1) E_c^{\delta-1} Kn \times \begin{cases} \int_{\sqrt{E_c^2-2Knz}}^\infty \frac{dE}{\left ( E^2 + 2Knz \right )^{(\delta+1)/2}} \quad ; \quad z < \frac{E_c^2}{2Kn} \\ \int_0^\infty \frac{dE}{( E^2 + 2Knz)^{(\delta+1)/2}} \qquad \qquad \, ; \quad z > \frac{E_c^2}{2Kn} \,\,\, . \end{cases}\end{aligned}$$ Using the substitution $$\label{y-variable} y = \frac{2Knz}{E^2+2Knz} \,\,\, ,$$ Equation (\[q-nd-de-dz-method\]) can be written as $$\label{q-nd-de-dz} Q(z) = \frac{1}{2} \, (\delta-1) \, \frac{{\dot N}}{A} \, \frac{Kn}{E_c} \times \begin{cases} B_{(2Knz/E_c^2)} \left ( \frac{\delta}{2}, \frac{1}{2} \right ) \, \left ( \frac{2Knz}{E_c^2} \right )^{-\delta/2} \,\, ; \quad z < \frac{E_c^2}{2Kn} \\ B \left ( \frac{\delta}{2}, \frac{1}{2} \right ) \, \left ( \frac{2Knz}{E_c^2} \right )^{-\delta/2} \qquad \qquad ; \quad z > \frac{E_c^2}{2Kn} \,\,\, , \end{cases}$$ where the incomplete beta function is $$\label{incomplete-beta} B_x(a,b) = \int_0^x y^{a-1} \, (1-y)^{b-1} \, dy$$ and the complete beta function $B(a,b) \equiv B_1 (a,b)$. For injection at an angle to the guiding magnetic field, the electrons propagate through the target with varying pitch angle, and the relationship between the energy and pitch angle at a given depth and the injected energy and pitch angle is more complicated [@1972SoPh...26..441B].
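The equivalence of the beta-function form (\[q-nd-de-dz\]) and the direct integral (\[q-nd-de-dz-method\]), as well as the beta-function identities invoked later in the text, can be confirmed numerically. A sketch in illustrative normalized units ($Kn = E_c = \dot N/A = 1$, an assumption for the example); note that SciPy's `betainc` is the *regularized* incomplete beta function, hence the `beta(a, b)` factor:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, betainc

# Illustrative normalized units (not from the text): Kn = E_c = Ndot/A = 1.
delta, Kn, Ec = 4.0, 1.0, 1.0

def B_x(a, b, x):
    # scipy's betainc is regularized; multiply by beta(a, b) to get B_x(a, b).
    return betainc(a, b, x) * beta(a, b)

def Q_direct(z):
    # direct quadrature of the E-integral form (q-nd-de-dz-method)
    lo = np.sqrt(Ec**2 - 2 * Kn * z) if z < Ec**2 / (2 * Kn) else 0.0
    val, _ = quad(lambda E: (E**2 + 2 * Kn * z)**(-(delta + 1) / 2), lo, np.inf)
    return (delta - 1) * Ec**(delta - 1) * Kn * val

def Q_beta(z):
    # incomplete-beta form (q-nd-de-dz); the branches merge when x reaches 1
    x = min(2 * Kn * z / Ec**2, 1.0)
    return (0.5 * (delta - 1) * (Kn / Ec) * B_x(delta / 2, 0.5, x)
            * (2 * Kn * z / Ec**2)**(-delta / 2))

for z in (0.1, 0.3, 0.8, 2.0):
    assert abs(Q_direct(z) - Q_beta(z)) < 1e-6 * Q_beta(z)

# Beta-function identities used later in the text:
for d in (3.0, 4.0, 6.0):
    # (d + 1) B(d/2, 3/2) = B(d/2, 1/2)
    assert abs((d + 1) * beta(d / 2, 1.5) - beta(d / 2, 0.5)) < 1e-12
    # B_x(d/2,1/2) x^(-d/2) = (d+1) B_x(d/2,3/2) x^(-d/2) - 2 sqrt(1-x)
    for x in (0.1, 0.37, 0.9):
        lhs = B_x(d / 2, 0.5, x) * x**(-d / 2)
        rhs = (d + 1) * B_x(d / 2, 1.5, x) * x**(-d / 2) - 2 * (1 - x)**0.5
        assert abs(lhs - rhs) < 1e-10 * abs(lhs)
```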
The corresponding heating rate can, however, be well approximated as a straightforward generalization of Equation (\[qnd-alternative\]), namely $$\label{qnd-general-anisotropic} Q_{ND}(z) = Kn \int_{\mu=0}^1 h(\mu) \int_{0}^{\infty} \frac{F(E,z/\mu)}{\mu \, E} \,\, dE \, d\mu \,\,\, ,$$ where $h(\mu) \, d\mu$ is the fraction of the flux at pitch angle cosines in $(\mu, \mu + d\mu)$. In particular, if the electrons are injected isotropically over the downward hemisphere, then [see Equation (25) of @1972SoPh...26..441B] they remain isotropic at all depths, and $$\label{qnd-general-isotropic} Q_{ND}(z) = Kn \int_{\mu=0}^1 \int_{0}^{\infty} \frac{F(E,z/\mu)}{\mu \, E} \,\, dE \, d\mu \,\,\, .$$ This expression can be readily evaluated numerically using the obvious generalization of Equation (\[q-nd-de-dz\]). Because in the diffusive case there is no unique value of $E$ associated with an electron injected with energy $E_0$ at position $z$, the above expression (or a generalization of it) cannot be used. We therefore develop an expression for the heating rate that can also be applied to the diffusive case. We first use Equation (\[fez-old-model\]) to obtain an expression for the total energy flux ${\cal F}(z)$ (erg cm$^{-2}$ s$^{-1}$) at point $z$: $$\label{calf-z} {\cal F}(z) = \int_0^\infty E \, F(E,z) \, dE = \frac{{\dot N}}{A} \, (\delta-1) E_c^{\delta-1} \times \begin{cases} \int_{\sqrt{E_c^2-2Knz}}^\infty \frac{E^2 \, dE}{\left ( E^2 + 2Knz \right )^{(\delta+1)/2}} \, ; \quad z < \frac{E_c^2}{2Kn} \\ \int_0^\infty \frac{E^2 \, dE}{( E^2 + 2Knz)^{(\delta+1)/2}} \qquad \quad \,\, ; \quad z > \frac{E_c^2}{2Kn} \,\,\, .
\end{cases}$$ Using the change of variable (\[y-variable\]), this can be written as $$\label{calf-z-alt} {\cal F}(z) = \frac{1}{2} \, (\delta -1) \, \frac{{\dot N}}{A} \, E_c \, \left ( \frac{2 K n z}{E_c^2} \right )^{1-\delta/2} \times \begin{cases} B_{(2Knz/E_c^2)} \left ( \frac{\delta}{2}-1, \frac{3}{2} \right ) \, \qquad \qquad \qquad ; \quad z < \frac{E_c^2}{2Kn} \\ B \left ( \frac{\delta}{2} - 1, \frac{3}{2} \right ) \, \qquad \qquad \qquad \qquad \qquad ; \quad z > \frac{E_c^2}{2Kn} \,\,\, . \end{cases}$$ Energy conservation requires that the heating rate $Q(z)$ (erg cm$^{-3}$ s$^{-1}$) is $$\begin{aligned} \label{q-nd-df-dz} Q(z) & = & - \frac{d{\cal F}}{dz} = \frac{{\dot N}}{A} \, (\delta-1) \, (\delta+1) \, Kn \, E_c^{\delta -1 } \times \cr & \times & \begin{cases} \int_{\sqrt{E_c^2-2Knz}}^\infty \frac{E^2 \, dE}{ \left ( E^2+2Knz \right )^{(\delta+3)/2}} - \frac{\sqrt{E_c^2 - 2Knz}}{(\delta+1) \, E_c^{\delta+1}} \, ; \quad z < \frac{E_c^2}{2Kn} \\ \int_0^\infty \frac{E^2 \, dE}{ \left ( E^2+2Knz \right )^{(\delta+3)/2}} \, \qquad \qquad \qquad \qquad \quad ; \quad z > \frac{E_c^2}{2Kn} \,\,\, , \end{cases}\end{aligned}$$ which, using the change of variable (\[y-variable\]), can be written as $$\begin{aligned} \label{q-nd} Q(z) & = & \frac{1}{2} \, (\delta-1) \, (\delta+1) \, \frac{{\dot N}}{A} \, \frac{Kn}{E_c} \times \cr & \times & \begin{cases} B_{(2Knz/E_c^2)} \left ( \frac{\delta}{2}, \frac{3}{2} \right ) \, \left ( \frac{2Knz}{E_c^2} \right )^{-\delta/2} - \frac{2}{\delta+1} \, \sqrt{1 - \frac{2Knz}{E_c^2}}\, ; \quad z < \frac{E_c^2}{2Kn} \\ B \left ( \frac{\delta}{2}, \frac{3}{2} \right ) \, \left ( \frac{2Knz}{E_c^2} \right )^{-\delta/2} \, \qquad \qquad \qquad \qquad \qquad \quad ; \quad z > \frac{E_c^2}{2Kn} \,\,\, .
\end{cases}\end{aligned}$$ To simplify this expression, we note that $$\label{beta-identity} (\delta+1) B \left ( \frac{\delta}{2}, \frac{3}{2} \right ) \equiv (\delta + 1) \frac{\Gamma \left ( \frac{\delta}{2} \right ) \Gamma \left ( \frac{3}{2} \right )}{\Gamma \left ( \frac{\delta+3}{2} \right )} = (\delta + 1) \frac{\Gamma \left ( \frac{\delta}{2} \right ) \frac{1}{2} \Gamma \left ( \frac{1}{2} \right )}{\left ( \frac{\delta+1}{2} \right ) \Gamma \left ( \frac{\delta+1}{2} \right )} = \frac{\Gamma \left ( \frac{\delta}{2} \right ) \Gamma \left ( \frac{1}{2} \right )}{\Gamma \left ( \frac{\delta+1}{2} \right )} \equiv B \left ( \frac{\delta}{2}, \frac{1}{2} \right ) \,\,\, ,$$ and using this in the incomplete beta function identity (\#8.17.21 in http://dlmf.nist.gov/8.17) $$\label{incomplete-beta-identity} B_x(a,b) = \frac{B(a,b)}{B(a,b+1)} \, B_x(a,b+1) - \frac{1}{b} \, x^a \, (1-x)^b$$ with $a=\delta/2$ and $b=1/2$ gives $$\label{incomplete-beta-identity-particular} B_x \left ( \frac{\delta}{2}, \frac{1}{2} \right ) x^{-\delta/2} = (\delta + 1) \, B_x \left ( \frac{\delta}{2}, \frac{3}{2} \right ) x^{-\delta/2} - 2 \, \sqrt{1-x} \,\,\, .$$ From this we see that the expressions (\[q-nd-de-dz\]) and (\[q-nd\]) are equivalent. Furthermore, the latter method can be used even where there is no one-to-one correspondence between $E$ and $z$, as in the diffusional transport case, next to be considered. Diffusional Model ----------------- Using Equation (\[fez-result\]) for the differential particle flux spectrum $F(E,z)$, the energy flux ${\cal F}(z)$ in the diffusional transport model becomes $$\begin{aligned} \label{f-diff} {\cal F}(z) & = & \int_0^\infty E \, F(E,z) \, dE = \sqrt{\frac{3}{\pi}} \, \frac{\dot{N}}{A} \, \frac{(\delta-1)}{E_{c}} \times \nonumber \\ & \times & \left [ \! 
\left [ \int_{E = 0}^{E_c} E^2 dE \int_{E_0 = E_c}^{\infty} dE_{0} \, \frac{(E_{0}/E_{c})^{-\delta}}{(E_0^4-E^4)^{1/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} + \right . \right . \nonumber \\ & \qquad & + \left . \left . \int_{E = E_c}^\infty E^2 dE \int_{E_0 = E}^{\infty} dE_{0} \, \frac{(E_{0}/E_{c})^{-\delta}}{(E_0^4-E^4)^{1/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} \right ] \! \right ] \,\,\, .\end{aligned}$$ Reversing the order of $(E,E_0)$ integration gives $$\label{f-diff-reverse} {\cal F}(z) = \sqrt{\frac{3}{\pi}} \, \frac{\dot{N}}{A} \, \frac{(\delta-1)}{E_{c}} \times \int_{E_0 = E_c}^\infty \left ( \frac{E_0}{E_c} \right )^{-\delta} \, dE_0 \, \int_{E = 0}^{E_0} \frac{E^2 \, dE}{(E_0^4-E^4)^{1/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} \,\,\, .$$ From this, it is now straightforward to calculate the heating rate $$\begin{aligned} \label{q-diff} Q(z) & = & - \frac{d{\cal F}}{dz} = \sqrt{\frac{3}{\pi}} \, \frac{\dot{N}}{A} \, \frac{(\delta-1)}{E_{c}} \, \left ( 6 K^2n^2 z \right ) \times \nonumber \\ & \times & \int_{E_0 = E_c}^\infty \left ( \frac{E_0}{E_c} \right )^{-\delta} dE_0 \int_{E = 0}^{E_0} \frac{E^2 \, dE}{(E_0^4-E^4)^{3/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} \,\,\, .\end{aligned}$$ The energy flux ${\cal F}(z)$ has a maximum at $z=0$; its gradient therefore vanishes there, and hence so does the heating rate: $Q(0) = - \, d{\cal F}/dz \, (0) = 0$. As the energy flux decreases with distance, a positive heating rate develops, which subsequently decreases as the energy flux (and hence its divergence) gets smaller.
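Internal consistency of the diffusional results can also be confirmed numerically: Equation (\[q-diff\]) should match a finite-difference derivative $-d{\cal F}/dz$ of Equation (\[f-diff-reverse\]). A sketch, again in illustrative normalized units ($Kn = E_c = \dot N/A = 1$, $\delta = 4$):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative normalized units: Kn = E_c = Ndot/A = 1, delta = 4.
delta, Kn, Ec = 4.0, 1.0, 1.0
pref = np.sqrt(3 / np.pi) * (delta - 1) / Ec

def inner(E0, z, p):
    # int_0^{E0} E^2 (E0^4 - E^4)^(-p) exp(-3 (Knz)^2 / (E0^4 - E^4)) dE
    val, _ = quad(lambda E: E**2 * (E0**4 - E**4)**(-p)
                  * np.exp(-3 * (Kn * z)**2 / (E0**4 - E**4)), 0.0, E0)
    return val

def outer(z, p):
    val, _ = quad(lambda E0: (E0 / Ec)**(-delta) * inner(E0, z, p), Ec, np.inf)
    return val

def Fcal(z):       # energy flux, Eq. (f-diff-reverse)
    return pref * outer(z, 0.5)

def Q(z):          # heating rate, Eq. (q-diff)
    return pref * 6 * Kn**2 * z * outer(z, 1.5)

# Central finite difference of the energy flux reproduces the heating rate:
z, h = 0.3, 1e-3
Q_fd = -(Fcal(z + h) - Fcal(z - h)) / (2 * h)
assert abs(Q_fd - Q(z)) < 1e-3 * Q(z)
```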
The maximum value of $Q(z)$ occurs where $dQ(z)/dz=0$, i.e., where $z$ satisfies the transcendental equation $$\label{z-max} 6 K^2 n^2 z^2 = \frac{\int_{E_0 = E_c}^\infty \left ( \frac{E_0}{E_c} \right )^{-\delta} dE_0 \int_{E = 0}^{E_0} \frac{E^2 \, dE}{(E_0^4-E^4)^{3/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \}}{\int_{E_0 = E_c}^\infty \left ( \frac{E_0}{E_c} \right )^{-\delta} dE_0 \int_{E = 0}^{E_0} \frac{E^2 \, dE}{(E_0^4-E^4)^{5/2}} \, \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \}} \,\,\, ,$$ ![Comparison of heating profiles for $E_c = 20$ keV and $n=10^{11}$ cm$^{-3}$. The solid line in each panel represents the heating in the diffusional propagation model. The dashed lines represent the heating in the deterministic model for a one-dimensional (field-aligned injection) model (left panels) and for a model with isotropic injection in a hemisphere (right panels).[]{data-label="fig:comparison-ec20"}](f1a.eps "fig:"){width="45.00000%"} ![](f1b.eps "fig:"){width="45.00000%"}\ ![](f1c.eps "fig:"){width="45.00000%"} ![](f1d.eps){width="45.00000%"} The left-hand panels of Figure \[fig:comparison-ec20\] compare the heating rate (\[q-nd\]) in the one-dimensional deterministic model with that in the diffusional propagation model (Equation (\[q-diff\])). Results are shown for $n = 10^{11}$ cm$^{-3}$ and $E_c = 20$ keV (results for different values of $n$ and $E_c$ scale and shift straightforwardly), and for $\delta=4$ and $\delta=6$ (top and bottom panels, respectively). The right-hand panels of Figure \[fig:comparison-ec20\] compare the heating rate (\[qnd-general-isotropic\]) in a deterministic model with isotropic injection (over the downward hemisphere) with that for the diffusional propagation model (Equation (\[q-diff\])). While the heating rates in all three models are of comparable magnitude, the following should be noted: - the deterministic model with field-aligned injection significantly underestimates the heating near the injection point because it neglects electrons that scatter to high pitch angles and hence remain close to the injection site. It also overestimates the heating at moderate distances, with a spike[^3] at distances close to where electrons of energy $E_c$ thermalize. - the maximum heating rate occurs at different positions in the deterministic and diffusional models, but is of comparable magnitude. - the results for the deterministic model with isotropic injection in the downward hemisphere are only slightly different from the diffusional model (that involves isotropic injection over the entire sphere).
[*This close agreement implies that the chromospheric heating rate can in most cases be adequately modeled by a deterministic transport model with isotropic injection in the downward hemisphere*]{}. Return current Ohmic energy deposition {#return-current-ohmic} ====================================== For an anisotropic injection of electrons (or even an isotropic injection so that electrons proceed away from the injection point in separate hemispheres), a return current is rapidly established by the thermal electrons in the target plasma in order to effect charge and current neutralization. Driving this return current through the finite resistivity of the ambient medium results in an Ohmic energy deposition rate $$Q_{rc}(z) = j_{\parallel}(z) \cdot {\cal E}_\parallel(z) \,\,\, ,$$ where the return current density $j_\parallel$ is $$j_{\parallel}(z) = n e \langle v_{\parallel}\rangle = e \int_0^\infty F(E,z) \, dE \,\,\, .$$ For a local Ohm’s law ${\cal E}_\parallel = \eta j_\parallel$, with scalar resistivity $\eta$, we thus have $$\label{q-rc-exp} Q_{rc}(z) = \eta \, e^{2} \left ( \int_0^\infty F(E,z) \, dE \right )^{2} \,\,\, .$$ The form of $F(E,z)$ in this expression should, of course, be evaluated (or computed) self-consistently using both collisional and return-current losses. However, as a first approximation, we can use the collisional diffusion result (\[fez-result\]) for $F(E,z)$ (this will be justified [*a posteriori*]{} below).
Reversing the order of $(E,E_0)$ integration, we obtain $$\begin{aligned} \int_0^\infty F(E,z) \, dE & = & \sqrt{\frac{3}{\pi}} \, \frac{\dot{N}}{A} \, \frac{(\delta-1)}{E_c} \, \times \cr & \times & \int_{E_0=E_c}^\infty \left ( \frac{E_0}{E_c} \right )^{-\delta} \, dE_0 \, \int_{E=0}^{E_0} \frac{E \, dE}{(E_0^4-E^4)^{1/2}} \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} \,\,\, ,\end{aligned}$$ so that $$\begin{aligned} \label{return-current-diffusion-result} Q_{rc}(z) & = & \eta \, e^2 \, \left ( \frac{3}{\pi} \right ) \, (\delta-1)^2 \, \left ( \frac{{\dot N}}{A} \right )^2 \, \times \cr & \times & \left [ \frac{1}{E_c} \, \int_{E_0=E_c}^\infty \left ( \frac{E_0}{E_c} \right )^{-\delta} \, dE_0 \, \int_0^{E_0} \frac{E \, dE}{(E_0^4-E^4)^{1/2}} \exp \left \{ - \frac{3 (Knz)^2}{E_0^4-E^4} \right \} \right ]^2 \,\,\, .\end{aligned}$$ The corresponding deterministic (non-diffusive) field-aligned injection result [e.g. @1977ApJ...218..306K; @1980ApJ...235.1055E] is obtained by using the form (\[fez-old-model\]) for $F(E,z)$ in Equation (\[q-rc-exp\]): ![Comparison of return current Ohmic heating profiles for $E_c = 20$ keV and $\delta =4$. The solid line represents the heating in the diffusional propagation, while the dashed line represents the heating in the deterministic field-aligned injection model. 
The units of the heating are per unit squared injected particle flux $({\dot N}/A)^2$, and are also scaled by the quantity $\eta \, e^2$.[]{data-label="fig:return-current"}](f2.eps){width="70.00000%"} $$\begin{aligned} \int_0^\infty F(E,z) \, dE & = & \frac{\dot{N}}{A} \, (\delta-1) \, E_c^{\delta-1} \, \times \begin{cases} \int_{\sqrt{E_c^2-2Knz}}^\infty \frac{E \, dE}{\left ( E^2 + 2Knz \right )^{(\delta+1)/2}} \, ; \quad z < \frac{E_c^2}{2Kn} \\ \int_0^\infty \frac{E \, dE}{( E^2 + 2Knz)^{(\delta+1)/2}} \qquad \quad \,\, ; \quad z > \frac{E_c^2}{2Kn} \end{cases} \cr & = & \frac{\dot{N}}{A} \times \begin{cases} 1 \qquad \qquad \qquad ; \quad z < \frac{E_c^2}{2Kn} \\ \left ( \frac{2Knz}{E_c^2} \right )^{\frac{1-\delta}{2}} \quad \, ; \quad z > \frac{E_c^2}{2Kn} \,\,\, , \end{cases}\end{aligned}$$ which simply reflects the conservation of particle flux down to depth $z=E_c^2/2Kn$, after which electrons are progressively “lost” from the beam. Thus, for such a non-diffusive field-aligned injection model, $$\label{qrc-nd} Q_{rc,ND}(z) = \eta \, e^{2} \, \left ( \frac{{\dot N}}{A} \right )^2 \, \times \begin{cases} 1 \qquad \qquad \quad ; \quad z < \frac{E_c^2}{2Kn} \\ \left ( \frac{2Knz}{E_c^2} \right )^{1-\delta} \quad ; \quad z > \frac{E_c^2}{2Kn} \,\,\, . \end{cases}$$ Overall, the effect of diffusion is to reduce the anisotropy in the electron phase-space distribution function and thus reduce the magnitude of the return current and in turn the amount of Ohmic heating. Figure \[fig:return-current\] compares the Ohmic heating profiles (in units of $\eta e^2 (\dot{N}/A)^2$) in the diffusive and field-aligned deterministic models. Including diffusion reduces the return current heating rate by a factor of about two to three near the injection point, and by over an order of magnitude near the point where electrons at the cutoff energy $E_c$ start to be lost from the beam. 
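The closed-form result above (conservation of the particle flux down to $z=E_c^2/2Kn$, followed by a power-law decline) is easily verified against direct quadrature; a sketch in illustrative normalized units ($Kn = E_c = \dot N/A = 1$, an assumption for the example):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative normalized units: Kn = E_c = Ndot/A = 1.
delta, Kn, Ec = 4.0, 1.0, 1.0

def flux_integral(z):
    # int_0^inf F_ND(E,z) dE by direct quadrature of Eq. (fez-old-model)
    lo = np.sqrt(max(Ec**2 - 2 * Kn * z, 0.0))
    val, _ = quad(lambda E: E * (E**2 + 2 * Kn * z)**(-(delta + 1) / 2),
                  lo, np.inf)
    return (delta - 1) * Ec**(delta - 1) * val

def flux_closed(z):
    # closed form: unit flux until z = E_c^2/2Kn, then a power-law decline
    if z < Ec**2 / (2 * Kn):
        return 1.0
    return (2 * Kn * z / Ec**2)**((1 - delta) / 2)

for z in (0.1, 0.4, 1.0, 5.0):
    assert abs(flux_integral(z) - flux_closed(z)) < 1e-6
```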
We can now justify [*a posteriori*]{} the use of the collision-dominated expression for $F(E,z)$ in the calculation of the return current heating rate. An upper limit to the maximum return current heating rate is obtained by setting $z=0$ in Equation (\[qrc-nd\]): $$\label{q-max-rc} Q_{rc,max} = \eta \, e^2 \, \left ( \frac{{\dot N}}{A} \right )^2 \,\,\, .$$ To compare this with the maximum heating rate in the collisional model, we use the result (\[q-nd-de-dz\]) for the deterministic model at $z=E_c^2/2Kn$, since Figure \[fig:comparison-ec20\] shows that the maximum heating rate in the diffusional model is similar. This allows us to calculate the ratio of the maximum return current Ohmic heating to collisional heating: $$\label{max-heating-ratio} \frac{Q_{rc,max}}{Q_{c,max}} = \frac{2}{(\delta - 1) \, B(\frac{\delta}{2}, \frac{1}{2})} \, \eta \, e^2 \, \frac{E_c}{Kn} \left ( \frac{{\dot N}}{A} \right ) \,\,\, .$$ Although electron transport properties such as thermal conductivity and resistivity can be altered in the presence of additional non-collisional processes, e.g., angular scattering off magnetic inhomogeneities [e.g., @2016ApJ...824...78B], for consistency with the assumed collision-dominated transport we use the [@1962pfig.book.....S] expression $$\label{eta-def} \eta = \frac{\pi e^2 m^{1/2} \ln \Lambda}{(k_B T)^{3/2}}$$ for the resistivity $\eta$. With this, Equation (\[max-heating-ratio\]) becomes $$\label{max-heating-ratio-2} \frac{Q_{rc,max}}{Q_{c,max}} = \frac{1}{(\delta-1) \, B(\frac{\delta}{2}, \frac{1}{2})} \frac{m^{1/2}}{(k_B T)^{3/2}} \, \frac{E_c}{n} \left ( \frac{{\dot N}}{A} \right ) = \frac{m^{1/2}}{8 \, (k_B T)^{3/2}} \, \frac{E_c}{n} \left ( \frac{{\dot N}}{A} \right )\,\,\, ,$$ where we have set $\delta =4$.
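A quick arithmetic check of the final coefficient in Equation (\[max-heating-ratio-2\]), using the representative CGS values adopted below in the text ($E_c = 20$ keV, $T=10^7$ K, $n=10^{11}$ cm$^{-3}$):

```python
import math

# Representative CGS values used below in the text.
m_e = 9.109e-28     # electron mass (g)
k_B = 1.381e-16     # Boltzmann constant (erg/K)
Ec = 3.2e-8         # 20 keV in erg
T, n = 1.0e7, 1.0e11

# Coefficient m^(1/2) E_c / (8 (k_B T)^(3/2) n) from Eq. (max-heating-ratio-2)
coeff = math.sqrt(m_e) * Ec / (8 * (k_B * T)**1.5 * n)
print(f"coefficient = {coeff:.2e}")          # ~2.4e-20, as quoted below

# Large flare: Ndot = 1e37 electrons/s injected over A = 1e18 cm^2
ratio = coeff * 1e37 / 1e18
print(f"Q_rc,max / Q_c,max = {ratio:.2f}")   # ~0.24, i.e. about 1/4
```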
Substituting $E_c = 20$ keV $=3.2 \times 10^{-8}$ erg, $T=10^7$ K, and $n=10^{11}$ cm$^{-3}$ gives $$\label{max-heating-ratio-3} \frac{Q_{rc,max}}{Q_{c,max}} \simeq 2.4 \times 10^{-20} \, \left ( \frac{{\dot N}}{A} \right ) \,\,\, .$$ Even for a large flare with ${\dot N} = 10^{37}$ s$^{-1}$ and $A=10^{18}$ cm$^2$, this gives $Q_{rc,max}/Q_{c,max} \simeq 1/4$. This ratio is even smaller in the diffusional model: although the maximum collisional heating rates in the diffusional and deterministic models are comparable (Figure \[fig:comparison-ec20\]), return current losses are significantly reduced relative to those in the deterministic model (Figure \[fig:return-current\]). We therefore see that return current Ohmic losses are significantly less than collisional losses, so that the evolution of $F(E,z)$ is controlled primarily by collisions. Thus the use of a collisional form for $F(E,z)$ in determining the approximate return current losses is justified [*a posteriori*]{}. Summary and Conclusions {#summary-conclusions} ======================= Modelling of energy deposition by injected electron beams in solar flares previously assumed a directional beam accelerated in a point source and directed downward to the chromosphere where it was stopped collisionally. However, the need to include angular diffusion due to collisions in the physics of electron transport [@2017ApJ...835..262B] results in significantly changed profiles for the electron flux versus depth and hence for the profile $Q(z)$ of heat deposition versus depth. The resulting cold target heating function can, however, be adequately modeled simply by using a deterministic transport model with isotropic injection in the downward hemisphere (see right panels of Figure \[fig:comparison-ec20\]).
The effects on Ohmic return current heating are more severe; the significantly greater level of isotropization of the injected electrons caused by enhanced pitch angle scattering reduces the magnitude of the associated current, resulting in a reduction of up to an order-of-magnitude in the Ohmic heating rate associated with the neutralizing return current. This treatment can be extended to include non-collisional pitch-angle scattering of electrons in flaring loops. In a future work we will use these modified heating functions to determine the hydrodynamic response [@2015ApJ...809..104A] of the solar atmosphere to the electron energy input. This will in turn allow us to construct velocity differential emission measure [@1995ApJ...447..915N] profiles with which to compare observations of shifted and broadened soft X-ray and EUV spectral lines, with the ultimate goal of more meaningfully constraining the processes of nonthermal electron acceleration and transport during solar flares. NHB and AGE were supported by grant NNX17AI16G from NASA’s Heliophysics Supporting Research program. EPK was supported by a STFC consolidated grant ST/P000533/1. [^1]: In general, as shown by @2017ApJ...835..262B, the diffusive term is non-local, so that the corresponding particle flux depends on the spatial gradient of the electron distribution function over a range of distances $\sim \lambda/\sqrt{45}$, where $\lambda$ is the collisional mean free path. We neglect this higher-order effect in this work. [^2]: in the original Fokker-Planck equation we can write only $d\langle E \rangle/dz=-Kn/\langle E \rangle$, which does [*not*]{} allow a deterministic relation between $E$ and $z$. [^3]: The sharpness of this spike is somewhat artificial as it is produced by the assumed abrupt cutoff in the injected electron distribution below $E_c$.
A more gradual tapering of the injected spectrum at low energies will smooth this out; however, there will still be a (broader) peak in the heating around the locations where electrons at the spectral break point thermalize.
--- abstract: 'In this paper we introduce the concept of [*inflexible*]{} $CR$ submanifolds. These are $CR$ submanifolds of some complex Euclidean space such that any compactly supported $CR$ deformation is again globally $CR$ embeddable into some complex Euclidean space. Our main result is that any $2$-pseudoconcave quadratic $CR$ submanifold of type $(n,d)$ in ${\mathbb{C}}^{n+d}$ is inflexible.' --- Inflexible $CR$ submanifolds <span style="font-variant:small-caps;">Judith Brinkschulte[^1]</span> and C. Denson Hill [^2] Introduction ============ In this paper, we shall be interested in proving embedding results for compactly supported perturbations of embedded $CR$ manifolds.\ Here an abstract $CR$ manifold of type $(n,d)$ is a triple $(M, HM, J)$, where $M$ is a smooth real manifold of dimension $2n+d$, $HM$ is a subbundle of rank $2n$ of the tangent bundle $TM$, and $J: HM \rightarrow HM$ is a smooth fiber preserving bundle isomorphism with $J^2= -\mathrm{Id}$. We also require that $J$ be formally integrable; i.e. that we have $$\lbrack T^{0,1}M,T^{0,1}M\rbrack \subset T^{0,1}M$$ where $$T^{0,1}M = \lbrace X+ iJX\mid X\in \Gamma(M,HM)\rbrace \subset \Gamma(M,\mathbb{C}TM),$$ with $\Gamma$ denoting smooth sections. The $CR$ dimension of $M$ is $n\geq 1$ and the $CR$ codimension is $d\geq 1$.\ A problem of great interest is to decide which $CR$ manifolds $M$ admit $CR$ embeddings into some complex Euclidean space. Namely, can one find a smooth embedding $\varphi$ of $M$ into $\mathbb{C}^N$ such that the induced $CR$ structure $\varphi_\ast(T^{0,1}M)$ on $\varphi(M)$ coincides with the $CR$ structure $T^{0,1}(\mathbb{C}^{\mathbb{N}})\cap\mathbb{C}T(\varphi(M))$ from the ambient space $\mathbb{C}^N$.\ Typically, examples of non-embeddable $CR$ structures arise as deformations of $CR$ submanifolds of some complex Euclidean space. 
For example, Rossi [@R] constructed small real analytic deformations of the standard $CR$ structure on the 3-sphere $S^3$ in $\mathbb{C}^2$, and the resulting abstract $CR$ structures fail to $CR$ embed globally into $\mathbb{C}^2$. Also Nirenberg’s famous local nonembeddability examples [@Ni] can be interpreted as small (local) deformations of the Heisenberg structure on $\mathbb{H}^2\subset\mathbb{C}^2$. The examples by Nirenberg were later extended to higher dimensions by Jacobowitz and Trèves [@JT].\ However, there is something special about Nirenberg’s three-dimensional examples: Since the formal integrability condition is always satisfied in this situation, one can easily modify the examples to obtain small (global) deformations of the Heisenberg structure $\mathbb{H}^2$. Moreover, these deformations are compactly supported (in the sense that the deformations coincide with the given Heisenberg structure outside a compact set). For the examples of Jacobowitz and Trèves, it is not clear if this is possible.\ In fact, as soon as the $CR$ dimension is greater than one, the integrability conditions come into play, and they make it much more difficult to construct deformations. However, when $M$ is given as a $CR$ submanifold of some complex Euclidean space, one can always obtain compact deformations of the $CR$ structure on $M$ by making a small compact geometric deformation of $M$ within the complex Euclidean space. We refer to this as “punching $M$”. But it is not clear if there exist other compact deformations of the abstract $CR$ structure on $M$, which render $M$ no longer embeddable as a $CR$ submanifold of the complex Euclidean space, such as in Nirenberg’s example.\ Therefore in the present paper, we want to discuss the following problem: Suppose $f: (M,HM,J)\longrightarrow {\mathbb{C}}^{n+k}$ is a $CR$ embedding, and $(M^\prime, HM^\prime, J^\prime)$ is a small, compactly supported $CR$ deformation of $(M,HM,J)$.
Does it follow that $(M^\prime, HM^\prime, J^\prime)$ also admits a $CR$ embedding $f^\prime$ with $f^\prime$ close to $f$?\ An answer to this question clearly depends on the Levi-form of $M$, so let us now recall its intrinsic definition.\ We denote by $H^o M=\lbrace \xi\in T^\ast M\mid < X,\xi>=0, \forall X\in H_{\pi(\xi)}M\rbrace$ the [*characteristic conormal bundle*]{} of $M$. Here $\pi: T M \longrightarrow M$ is the natural projection. To each $\xi\in H^o_p M\setminus \lbrace 0\rbrace$, we associate the Levi form at $\xi:$ $$\mathcal{L}_p(\xi, X) = \xi(\lbrack J\tilde X, \tilde X\rbrack )= d\tilde\xi(X,JX) \ \mathrm{for} \ X\in H_p M$$ which is Hermitian for the complex structure of $H_p M$ defined by $J$. Here $\tilde \xi$ is a section of $H^o M$ extending $\xi$ and $\tilde X$ a section of $HM$ extending $X$.\ Following [@HN1], $M$ is called $q$-pseudoconcave, $0\leq q\leq\frac{n}{2}$, if for every $p\in M$ and every characteristic conormal direction $\xi\in H^o_p M\setminus \lbrace 0\rbrace$, the Levi form $\mathcal{L}_p(\xi, \cdot)$ has at least $q$ negative and $q$ positive eigenvalues.\ [**Acknowledgements.**]{} The first author was supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, grant BR 3363/2-1).\ Definitions and statement of the main results ============================================= Let $(M,HM,J)$ be a $CR$ manifold of type $(n,d)$ globally $CR$ embedded into some complex Euclidean space.
We say that $(M,HM,J)$ admits a [*compactly supported $CR$ deformation*]{} if there exists a family $(M_a, HM_a, J_a)_{a>0}$ of abstract $CR$ manifolds depending smoothly on a real parameter $a > 0$ and converging to $(M,HM,J)$ as $a$ tends to $0$ in the usual $\mathcal{C}^\infty$ topology; we also require that $(M_a, HM_a, J_a)= (M,HM,J)$ for every $a>0$ outside some compact subset $K$ of $M$ not depending on $a$.\ We say that $(M,HM,J)$ is a [*flexible*]{} $CR$ submanifold if it admits a compactly supported $CR$ deformation $(M_a, HM_a, J_a)_{a>0}$ such that for every sufficiently small $a > 0$, the $CR$ structure $(M_a, HM_a, J_a)$ is not globally $CR$ embeddable into some complex Euclidean space. So, for example, the Heisenberg $CR$ structure $\mathbb{H}^2$ in ${\mathbb{C}}^2$ is flexible.\ We say that $(M,HM,J)$ is an [*inflexible*]{} $CR$ submanifold if it is not flexible. That means that $(M,HM,J)$ is inflexible if and only if for every compactly supported $CR$ deformation $(M_a, HM_a, J_a)_{a>0}$ of $(M, HM, J)$, the $CR$ manifold $(M_a, HM_a, J_a)$ is globally $CR$ embeddable into some complex Euclidean space.\ In other words, a flexible $CR$ submanifold admits a compactly supported $CR$ deformation that “pops out” of the space of globally $CR$ embeddable manifolds. On the other hand, for an inflexible $CR$ submanifold, any compactly supported $CR$ deformation stays in the space of globally $CR$ embeddable manifolds.\ [*Remark:*]{} In the definitions above, we also allow compact deformations which are only defined for a sequence of $a$’s tending to zero.\ Our main result is as follows:\ \[section\] \[main\]  \ Let $M$ be a quadratic $CR$ submanifold of type $(n,d)$ in ${\mathbb{C}}^{n+d}$ that is $2$-pseudoconcave. Let $(M_a, HM_a, J_a)_{a>0}$ be a compactly supported $CR$ deformation of $(M,HM,J)$.
Then, given any smooth $CR$ function $f: (M,HM,J)\longrightarrow {\mathbb{C}}$, there is a $CR$ function $f_a: (M_a,HM_a,J_a)\longrightarrow {\mathbb{C}}$ as close to $f$ as we please, provided $a$ is sufficiently close to $0$.\ Moreover, $f_a$ can be chosen to coincide with the given $f$ outside a compact subset of $M$. In particular, $(M_a, HM_a, J_a)$ is $CR$ embeddable into ${\mathbb{C}}^{n+d}$ for $a$ sufficiently close to $0$. Here a [*quadratic*]{} $CR$ submanifold is a submanifold of ${\mathbb{C}}^{n+d}$ of the form $$M=\lbrace z\in {\mathbb{C}}^{n+d}\mid \mathrm{Im} z_\ell = H_\ell(z_1,\ldots, z_n),\ n+1\leq \ell\leq n+d\rbrace,$$ where the $H_\ell$’s are quadratic hermitian forms on ${\mathbb{C}}^n$.\ “$f_a$ as close to $f$ as we please” means that for any given $\ell\in\mathbb{N}$, any given compact subset $K$ of $M$ and arbitrarily small $\varepsilon >0$, one can find a $CR$ function $f_a: (M_a,HM_a,J_a)\longrightarrow {\mathbb{C}}$ such that the $\mathcal{C}^\ell$ norm of $f-f_a$ on $K$ is less than $\varepsilon$.\ In particular, Theorem \[main\] implies that for a 2-pseudoconcave quadratic $CR$ submanifold, any compactly supported $CR$ deformation amounts to “punching $M$”: any of the ambient complex coordinate functions is a $CR$ function on $M$. Our theorem yields that we can make arbitrarily small modifications of these coordinate functions inside a compact subset of $M$ to obtain global $CR$ coordinate functions on the deformed $CR$ manifolds.\ The last statement of Theorem \[main\] combined with the definition of “inflexible” immediately gives the following \[main\][Corollary]{} \[corr\]  \ Let $M$ be a 2-pseudoconcave quadratic $CR$ submanifold of type $(n,d)$ in ${\mathbb{C}}^{n+d}$. Then $M$ is inflexible. A first example =============== The idea of the proof of Theorem \[main\] is as follows: For a given $CR$ function $f$ on $M$ we want to find a $CR$ function on $M_a$ which is very close to the given $f$. 
To this end we want to solve the Cauchy-Riemann equations ${\overline{\partial}}_{M_a}u = {\overline{\partial}}_{M_a}f $ with $u$ having compact support and the $\mathcal{C}^k$-norms of $u$ being controlled by some $\mathcal{C}^l$-norms of ${\overline{\partial}}_{M_a}f$ (uniformly with respect to $a$).\ In this section, we will explicitly carry out the proof of our main result \[main\] in full detail for the easiest example of a 2-pseudoconcave $CR$ manifold. Namely let $M\subset {\mathbb{C}}^5$ be the real hypersurface defined by $$\label{defM} M = \lbrace (z_1,z_2,z_3, z_4, x + iy) \mid y = \vert z_1\vert^2 + \vert z_2\vert^2 - \vert z_3\vert^2 - \vert z_4\vert^2 \rbrace.$$ Then $M$ is a 2-pseudoconcave $CR$ manifold of type $(4,1)$. To abbreviate notation, we also define $z= (z_1, z_2, z_3, z_4)$ and $\vert z\vert^2 = \vert z_1\vert^2 + \vert z_2\vert^2 + \vert z_3\vert^2 + \vert z_4\vert^2$. A straightforward computation shows that $T^{0,1}M$ is spanned by $${\overline }L_j = \frac{{\partial}}{{\partial}{\overline }z_j} - i \epsilon_j z_j\frac{{\partial}}{{\partial}x}, \ j =1,2,3,4,$$ where $\epsilon_1 =\epsilon_2 =-1$ and $\epsilon_3 = \epsilon_4 = 1$.\ For $u = \sum_{1}^{4} u_j d{\overline }z_j\in \mathcal{C}^\infty_{0,1}(M)$ we have $${\overline{\partial}}_M u = \sum_{j,k=1}^4 {\overline }L_k(u_j) d{\overline }z_k\wedge d{\overline }z_j.$$ Next, we consider the volume element $$dV = (\frac{i}{2})^4 e^{\vert z\vert^2} \bigwedge_{j=1}^4 dz_j\wedge d{\overline }z_j \wedge dx = \frac{1}{16} e^{\vert z\vert^2} \bigwedge_{j=1}^4 dz_j\wedge d{\overline }z_j \wedge dx$$ on $M$, and we denote by $\Vert\ \Vert$ the $L^2$-norm of $(0,q)$-forms on $M$ with respect to this volume element, where the pointwise norm of $(0,q)$-forms on $M$ is the one induced by the standard euclidean metric on $\mathbb{C}^5$. The corresponding $L^2$-spaces will be denoted by $L^2_{0,q}(M, \vert z\vert^2)$. 
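As a sanity check (this verification is ours, not part of the original text), the claimed 2-pseudoconcavity of (\[defM\]) can be read off directly from a defining function:

```latex
% Take the defining function
%   \rho(z, x+iy) = y - |z_1|^2 - |z_2|^2 + |z_3|^2 + |z_4|^2,
% so that M = \{\rho = 0\} and d\rho spans the characteristic conormals.
% The Levi form in this direction is the complex Hessian of \rho
% restricted to H_pM \cong \mathbb{C}^4_z:
\left( \frac{\partial^2 \rho}{\partial z_j\, \partial \overline{z}_k} \right)_{j,k=1}^{4}
  = \mathrm{diag}(-1,\,-1,\,+1,\,+1),
% which has exactly two negative and two positive eigenvalues; replacing
% d\rho by -d\rho merely swaps the signs. Since these are the only
% characteristic conormal directions on a hypersurface (up to positive
% multiples), M is 2-pseudoconcave in the sense recalled above.
```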
Then ${\overline{\partial}}^\ast_{M}$, the formal adjoint of ${\overline{\partial}}_M$ with respect to $\Vert\ \Vert$, can be computed as follows: $${\overline{\partial}}^\ast_{M}u = - e^{-\vert z\vert^2} \sum_{j=1}^4 L_j(u_j e^{\vert z\vert^2})$$ for $u\in \mathcal{D}^{0,1}(M)$.\ First we will prove the following $L^2$ estimates on $M$:\ \[section\] \[L2\]  \ Let $M$ be defined as in (\[defM\]). 1. For all $u\in L^2_{0,1}(M, \vert z\vert^2) \cap \mathrm{Dom}({\overline{\partial}}_M)\cap\mathrm{Dom}({\overline{\partial}}_M^\ast)$ we have $$\label{vanishing} 2 \Vert u\Vert^2 \leq \Vert {\overline{\partial}}_M u\Vert^2 + \Vert{\overline{\partial}}^\ast_{M} u\Vert^2.$$ 2. For all $u\in L^2_{0,0}(M, \vert z\vert^2) \cap \mathrm{Dom}({\overline{\partial}}_M)$ we have $$\label{noCR} 4 \Vert u\Vert^2 \leq \Vert {\overline{\partial}}_M u\Vert^2.$$ [*Proof.*]{} Throughout the proof of this Lemma, we identify $M$ with $\mathbb{C}^4\times\mathbb{R}$. We will begin by showing how to reduce the proof of (\[vanishing\]) to an estimate for an easier differential operator. 
To this end we introduce the partial Fourier transform with respect to the variable $x$: $$\tilde u(z,\xi)= \int e^{-i \langle x,\xi \rangle} u(z,x) dx$$ (for differential forms, this partial Fourier transform is defined componentwise).\ Now an easy computation shows that for $u\in\mathcal{D}^{0,1}(M)$ we have $$\begin{aligned} \widetilde{{\overline{\partial}}_M u}(z,\xi) & = & \sum_{j,k=1}^4 \widetilde{{\overline }L_k(u_j)}(z,\xi)d{\overline }z_k\wedge d{\overline }z_j\\ & = & \sum_{j,k=1}^4 \big(\frac{{\partial}}{{\partial}{\overline }z_k}u_j - i\epsilon_k z_k\frac{{\partial}}{{\partial}x}u_j\big)\widetilde{\ \ \ }(z,\xi)d{\overline }z_k\wedge d{\overline }z_j\\ & = & \sum_{j,k=1}^4 \big( \frac{{\partial}}{{\partial}{\overline }z_k}\tilde{u}_j(z,\xi) + \epsilon_k z_k \xi \tilde{u}_j(z,\xi) \big) d{\overline }z_k\wedge d{\overline }z_j\\ & = & {\overline{\partial}}_{(z)} \tilde u(z,\xi),\end{aligned}$$ where ${\overline{\partial}}_{(z)}$ is defined by $${\overline{\partial}}_{(z)}v(z,\xi)= \sum_{j,k=1}^4 {\overline{\partial}}_k v_j d{\overline }z_k\wedge d{\overline }z_j.$$ Here ${\overline{\partial}}_k v_j= \frac{{\partial}}{{\partial}{\overline }z_k}v_j +\epsilon_k z_k \xi v_j$ is of order 0 in $\xi$ (i.e., it involves $\xi$ only as a multiplicative parameter). Similarly, we get $$\begin{aligned} \widetilde{{\overline{\partial}}^\ast_{M}u}(z,\xi) & = & - (\sum_{j=1}^4 \widetilde{ L_j u_j + {\overline }z_j u_j})(z,\xi)\\ & = & \delta_{(z)} \tilde{u} (z,\xi),\end{aligned}$$ where $$\delta_{(z)}v(z,\xi)= \sum_{j=1}^4(\delta_j v_j) (z,\xi)$$ with $\delta_j v_j = -\frac{{\partial}}{{\partial}z_j} v_j + \epsilon_j {\overline }z_j \xi v_j - {\overline }z_j v_j$. 
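For readers unfamiliar with this reduction, the “easy computation” rests on the standard fact that the partial Fourier transform converts $\partial/\partial x$ into multiplication by $i\xi$; spelled out for a single vector field (our own expansion):

```latex
% For u \in \mathcal{D}(M), integration by parts in x gives
%   \widetilde{\partial_x u}(z,\xi) = \int e^{-i x \xi}\, \partial_x u(z,x)\, dx
%                                   = i\xi\, \tilde u(z,\xi).
% Applying this to \overline{L}_k = \partial_{\overline{z}_k} - i\epsilon_k z_k \partial_x:
\widetilde{\overline{L}_k u}(z,\xi)
  = \frac{\partial}{\partial \overline{z}_k}\tilde u(z,\xi)
    - i\epsilon_k z_k\,(i\xi)\,\tilde u(z,\xi)
  = \Big(\frac{\partial}{\partial \overline{z}_k} + \epsilon_k z_k \xi\Big)\tilde u(z,\xi)
  = \overline{\partial}_k \tilde u(z,\xi),
% which is exactly the operator of order 0 in \xi used above.
```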
Note that also $\delta_j$ is of order 0 in $\xi$.\ Now, as in [@H1] we compute $$\begin{aligned} \label{first} \vert {\overline{\partial}}_{(z)} v\vert^2 & = & \vert \sum_{j,k=1}^4{\overline{\partial}}_k v_j d{\overline }z_k\wedge d{\overline }z_j\vert^2\nonumber\\ & = & \frac{1}{2} \sum_{j,k=1}^4 \vert {\overline{\partial}}_k v_j - {\overline{\partial}}_j v_k\vert^2 \nonumber\\ & = & \sum_{j,k=1}^4 \vert {\overline{\partial}}_j v_k\vert^2 - \sum_{j,k=1}^4 {\overline{\partial}}_k v_j{\overline }{{\overline{\partial}}_j v_k}\end{aligned}$$ Also we have $$\begin{aligned} \label{second} \vert \delta_{(z)}v\vert^2 & = & \vert \sum_{j=1}^4\delta_j v_j\vert^2 = \sum_{j,k=1}^4 \delta_j v_j {\overline }{\delta_k v_k} \nonumber\\ & = & \sum_{j=1}^4 \vert \delta_j v_j\vert^2 + \sum_{j\not= k}\delta_j v_j {\overline }{\delta_k v_k}\end{aligned}$$ Summing up (\[first\]) and (\[second\]) we obtain $$\int_{\mathbb{C}^4} \big( \vert{\overline{\partial}}_{(z)}v\vert^2 + \vert \delta_{(z)}v\vert^2\big)\exp{(\vert z\vert^2)} (\frac{i}{2})^4\bigwedge_{j=1}^{4}dz_j\wedge d{\overline }z_j =$$ $$\sum_{j=1}^4 \Vert\delta_j v_j\Vert^2_z + \sum_{j\not=k} \Vert{\overline{\partial}}_j v_k\Vert^2_z + \sum_{j\not=k} \ll \lbrack {\overline{\partial}}_k,\delta_j\rbrack v_j, v_k\gg_z .$$ Here we have used that ${\overline{\partial}}_k$ and $\delta_k$ are adjoint operators. 
To abbreviate notation, we have introduced $\Vert\ \Vert_z$ to denote partial integration with respect to the $z=(z_1,z_2,z_3,z_4)$ variables: $$\Vert v\Vert_z^2 = \int_{z\in \mathbb{C}^4} \vert v(z,\xi)\vert^2 \exp{(\vert z\vert^2)} (\frac{i}{2})^4\bigwedge_{j=1}^{4}dz_j\wedge d{\overline }z_j.$$ Since $\lbrack {\overline{\partial}}_k,\delta_j\rbrack = 0$ for $j\not= k$ we obtain $$\label{apriori1} \Vert{\overline{\partial}}_{(z)}v\Vert^2_z + \Vert \delta_{(z)}v\Vert^2_z = \sum_{j\not= k} \Vert{\overline{\partial}}_j v_k\Vert^2_z + \sum_{j=1}^4\Vert \delta_j v_j\Vert^2_z.$$ Also, a straightforward computation shows that $$\label{commutator} \lbrack{\overline{\partial}}_j,\delta_j\rbrack = -1 +2\epsilon_j \xi.$$ This will be used to show that for each fixed $k\in \lbrace 1,2,3,4\rbrace$ we have $$\label{apriori2} \sum_{\underset{j\not= k}{j=1} }^4 \Vert {\overline{\partial}}_j v_k\Vert^2_z \geq 2 \Vert v_k\Vert^2_z.$$ Assume, e.g., $k=4$. From (\[commutator\]) we then obtain $$\label{identity} \Vert \delta_j v_4\Vert^2_z - \Vert {\overline{\partial}}_j v_4\Vert^2_z = (-1+2\epsilon_j \xi) \Vert v_4\Vert^2_z.$$ It follows that $$\begin{aligned} \sum_{j=1}^3 \Vert {\overline{\partial}}_j v_4\Vert^2_z & \geq & \sum_{j=1,3} \Vert {\overline{\partial}}_j v_4\Vert^2_z\\ & = & \sum_{j=1,3} \Vert \delta_j v_4\Vert^2_z + \sum_{j=1,3} (1- 2\epsilon_j \xi) \Vert v_4\Vert^2_z\\ & \geq & 2\Vert v_4\Vert^2_z - 2\xi(-1+1) \Vert v_4\Vert^2_z\\ & = & 2 \Vert v_4\Vert^2_z,\end{aligned}$$ which proves (\[apriori2\]) for $k=4$. The remaining cases are similar.\ Combining (\[apriori1\]) and (\[apriori2\]) we have proved that $$2\Vert v\Vert_z^2(\xi) \leq \Vert{\overline{\partial}}_{(z)} v\Vert_z^2(\xi) + \Vert \delta_{(z)}v\Vert_z^2(\xi)$$ for every fixed $\xi\in\mathbb{R}$. 
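The commutator identity (\[commutator\]) that drives these estimates can be checked by hand; the following verification is our own expansion of the “straightforward computation”:

```latex
% Recall the operators of order 0 in \xi:
%   \overline{\partial}_j v = \partial_{\overline{z}_j} v + \epsilon_j z_j \xi\, v,
%   \delta_j v = -\partial_{z_j} v + \epsilon_j \overline{z}_j \xi\, v - \overline{z}_j v.
% In [\overline{\partial}_j, \delta_j] v the mixed second derivatives and
% all terms containing derivatives of v cancel; only the terms in which a
% derivative falls on a coefficient survive:
\lbrack \overline{\partial}_j, \delta_j \rbrack v
  = \partial_{\overline{z}_j}\!\big(\epsilon_j \overline{z}_j \xi - \overline{z}_j\big)\, v
    + \partial_{z_j}\!\big(\epsilon_j z_j \xi\big)\, v
  = \big(\epsilon_j \xi - 1 + \epsilon_j \xi\big)\, v
  = \big(-1 + 2\epsilon_j \xi\big)\, v.
```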
Setting $v= \tilde{u}$ and integrating this inequality with respect to $\xi$ we obtain from the definition of the operators ${\overline{\partial}}_{(z)}$ and $\delta_{(z)}$ $$2\Vert \tilde{u}\Vert^2 \leq \Vert \widetilde{{\overline{\partial}}_M u}\Vert^2 + \Vert \widetilde{{\overline{\partial}}_M^\ast u}\Vert^2$$ for all $u\in\mathcal{D}^{0,1}(M)$.\ The Plancherel theorem allows us to conclude that $$2\int_M \vert u\vert^2 dV \leq \int_M( \vert{\overline{\partial}}_M u\vert^2 + \vert{\overline{\partial}}^\ast_{M}u\vert^2) dV$$ for $u\in\mathcal{D}^{0,1}(M)$. Obviously, the restriction of the standard euclidean metric to $M$ is complete, therefore the above estimate extends to all $u\in L^2_{0,1}(M, \vert z\vert^2) \cap \mathrm{Dom}({\overline{\partial}}_M)\cap\mathrm{Dom}({\overline{\partial}}_M^\ast)$, which proves the first statement of the Lemma.\ The proof of (\[noCR\]) is similar. Indeed, using the partial Fourier transform, the proof of (\[noCR\]) is again reduced to the estimate of $\sum_{j=1}^4 \Vert {\overline{\partial}}_j v\Vert^2_z$, where ${\overline{\partial}}_j$ is defined as before. But using (\[identity\]) we get $$\begin{aligned} \sum_{j=1}^4 \Vert {\overline{\partial}}_j v\Vert^2_z & = & \sum_{j=1}^4 \Vert \delta_j v\Vert^2_z + \sum_{j=1}^4 (1- 2\epsilon_j \xi)\Vert v\Vert^2_z\\ & \geq & 4 \Vert v\Vert^2_z - 2\xi(-1-1+1+1) \Vert v\Vert^2_z\\ & = & 4 \Vert v\Vert^2_z.\end{aligned}$$ This completes the proof of the Lemma by the same arguments as before.$\square$\ Next, we use again that $M$ is 2-pseudoconcave (this condition is clearly stable under small perturbations). 
This implies that we have a uniform subelliptic estimate in degree $(0,1)$ (see [@FK]): For every compact subset $K$ of $M$, there exists a constant $C_K > 0$ independent of $a$ such that $$\label{subelliptic} \Vert u\Vert^2_{\frac{1}{2}} \leq C_K (\Vert {\overline{\partial}}_{M_a} u\Vert^2 + \Vert{\overline{\partial}}^\ast_{M_a} u\Vert^2 + \Vert u\Vert^2)$$ for all $u\in \mathcal{D}_K^{0,1}(M_a)$.\ Combining Lemma \[L2\] and (\[subelliptic\]), we can establish an $L^2$ a priori estimate in degree $(0,1)$, which is uniform with respect to $a$ (in the sense that the constant involved does not depend on $a$). \[L2\][Lemma]{} \[firstlemma\]  \ There is $a_0> 0$ and a constant $C > 0$ such that $$\Vert u \Vert^2\leq C (\Vert {\overline{\partial}}_{M_a} u\Vert^2 + \Vert {\overline{\partial}}^\ast_{M_a}u\Vert^2)$$ for all $u\in L^2_{0,1}(M_a,\vert z\vert^2)$, $a < a_0$. [*Proof.*]{} Following [@N], assume by contradiction that there is a sequence $\lbrace u_{a_\nu}\rbrace\in L^2_{0,1}(M_{a_{\nu}},\vert z\vert^2)\cap\mathrm{Dom}({\overline{\partial}}_{M_{a_{\nu}}})\cap \mathrm{Dom}({\overline{\partial}}^\ast_{M_{a_{\nu}}}) $, $a_\nu \rightarrow 0$, such that $$\label{1} \Vert u_{a_\nu}\Vert = 1,$$ whereas $$\label{2} \Vert {\overline{\partial}}_{M_{a_\nu}} u_{a_\nu}\Vert + \Vert {\overline{\partial}}^\ast_{M_{a_\nu}} u_{a_\nu}\Vert < a_\nu.$$ We now want to show that $\lbrace u_{a_\nu}\rbrace$ is a Cauchy sequence.\ Remember that $M_{a_\nu}= M$ outside $K$. We now choose a slightly larger compact set $K_1$ containing $K$ in its interior, and a smooth cut-off function $\chi$ such that $\chi\equiv 1$ outside $K_1$ and $\chi\equiv 0$ in a neighborhood of $K$. 
Since ${\overline{\partial}}_{M_{a_\nu}}$, ${\overline{\partial}}^\ast_{M_{a_\nu}}$ coincide with ${\overline{\partial}}_M$, ${\overline{\partial}}^\ast_{M}$ outside $K$, we obtain from (\[vanishing\]) $$2\Vert \chi u\Vert^2 \leq \Vert {\overline{\partial}}_M (\chi u)\Vert^2 + \Vert{\overline{\partial}}^\ast_{M}(\chi u)\Vert^2$$ for all $u\in L^2_{0,1}(M_a,\vert z\vert^2)$, which implies $$\label{outsideK} \Vert \chi u\Vert^2 \leq C^\prime (\Vert {\overline{\partial}}_M u\Vert^2 + \Vert{\overline{\partial}}^\ast_{M}u\Vert^2 + \int_{K_1\setminus K} \vert u\vert^2 dV )$$ for some constant $C^\prime > 0$.\ On the other hand, let $\eta$ be a smooth cut-off function so that $\eta\equiv 1$ in a neighborhood of $K_1$. Then $\Vert \eta u_{a_\nu} \Vert_{\frac{1}{2}}$ is bounded by (\[subelliptic\]), so the generalized Rellich lemma implies that the sequence $\lbrace u_{a_\nu}\rbrace$ restricted to $K_1$ is precompact in $L^2_{0,1}(K_1)$. Thus it is no loss of generality to assume that the restriction of $\lbrace u_{a_\nu}\rbrace$ to $K_1$ is a Cauchy sequence. But this combined with (\[outsideK\]) implies that $\lbrace u_{a_\nu}\rbrace$ is a Cauchy sequence in $L^2_{0,1}(M,\vert z\vert^2)$.\ Denote by $u_0$ the limit of this sequence. From (\[2\]) it follows that ${\overline{\partial}}_M u_0$ and ${\overline{\partial}}^\ast_{M}u_0$, defined in the distribution sense, both vanish. But from (\[1\]) it also follows that $\Vert u_0\Vert = 1$. This contradicts (\[vanishing\]) and therefore completes the proof of the lemma. $\square$\ [*Proof of Theorem \[main\] for $M$ as above.*]{} Let $f$ be given. Then ${\overline{\partial}}_{M_a}f$ has compact support and tends to zero when $a$ tends to zero. It is well known (see e.g. [@H2]) that the a priori estimate of Lemma \[firstlemma\] implies that we can solve the equation ${\overline{\partial}}_{M_a} u_a = {\overline{\partial}}_{M_a} f$ with $\Vert u_a\Vert \leq C\Vert{\overline{\partial}}_{M_a}f\Vert$. 
Hence $u_a$ is as small as we wish in $L^2(M_a, \vert z\vert^2)$, provided $a$ is small enough. It is well-known that the subelliptic estimate (\[subelliptic\]) implies also the following: Suppose we are given a compact set $K\subset M_a$ and two smooth real functions $\zeta,\ \zeta_1$ with $\mathrm{supp}\zeta \subset\mathrm{supp}\zeta_1\subset K$ and $\zeta_1 =1$ on $\mathrm{supp}\zeta$; then for any integer $m\in\mathbb{N}$ there exists a constant $C_{K,m}$ such that $$\Vert \zeta u\Vert^2_{m+\varepsilon} \leq C_{K,m} (\Vert \zeta_1{\overline{\partial}}_{M_a}u\Vert^2_m + \Vert \zeta_1{\overline{\partial}}^\ast_{M_a} u\Vert^2_m + \Vert \zeta_1 u\Vert^2).$$ Here $\Vert\ \Vert_m$ denotes the Sobolev norm of order $m$. But then, choosing the minimal solution satisfying ${\overline{\partial}}^\ast_{M_a}u_a=0$, also the $\mathcal{C}^\ell$-norm of $u_a$ over a given compact set $K\subset M_a$ can be controlled by some $\mathcal{C}^m$-norm of ${\overline{\partial}}_{M_a}u_a = {\overline{\partial}}_{M_a}f$, and hence made small when letting $a$ tend to zero. Setting $f_a = f- u_a$ proves the first statement.\ Moreover, $u_a$ has compact support: Since the $CR$ structures of $M$ and $M_a$ coincide outside a compact set, and $u_a$ solves the equation ${\overline{\partial}}_{M_a}u_a= {\overline{\partial}}_{M_a}f$, $u_a$ is a $CR$ function on $M$ outside some compact set $K$. It is no loss of generality to assume that $M\setminus K$ is connected. But then, since the Hartogs phenomenon for $CR$ functions holds in 2-pseudoconcave $CR$ manifolds [@LT], the restriction of $u_a$ to $M\setminus K$ extends to a $CR$ function $\tilde u_a$ on $M$. Since $u_a$ belongs to $L^2_{0,0}(M,\vert z\vert^2)$, the same is true for $\tilde u_a$. But then (\[noCR\]) implies $\tilde u_a\equiv 0$. Hence $u_a$ vanishes on $M\setminus K$. 
$\square$\ The general case ================ In this section we will explain the proof of Theorem \[main\] for a general 2-pseudoconcave quadratic $CR$ submanifold $M$ of type $(n,d)$ given by $$M=\lbrace z\in {\mathbb{C}}^{n+d}\mid \mathrm{Im} z_\ell = \sum_{i,j =1}^n h^\ell_{ij} z_i{\overline }z_j,\ n+1\leq \ell\leq n+d\rbrace.$$ In this case, $T^{1,0}M$ is spanned by $$L_j = \frac{{\partial}}{{\partial}z_j} + i\sum_{\ell = n+1}^{n+d}\sum_{k=1}^n h^\ell_{jk}{\overline }z_k \frac{{\partial}}{{\partial}x_\ell} \quad j=1,\ldots,n,$$ and $T^{0,1}M$ is spanned by $${\overline }L_j = \frac{{\partial}}{{\partial}{\overline }z_j} - i\sum_{\ell = n+1}^{n+d}\sum_{k=1}^n h^\ell_{kj} z_k \frac{{\partial}}{{\partial}x_\ell} \quad j=1,\ldots,n.$$ First we show that the analogue of Lemma \[L2\] still holds true, i.e., we have the following \[section\] \[L2general\]  \ Let $M$ be a $2$-pseudoconcave quadratic $CR$ submanifold. 1. For all $u\in L^2_{0,1}(M, \vert z\vert^2) \cap \mathrm{Dom}({\overline{\partial}}_M)\cap\mathrm{Dom}({\overline{\partial}}_M^\ast)$ we have $$\label{vanishinggen} \Vert u\Vert^2 \leq \Vert {\overline{\partial}}_M u\Vert^2 + \Vert{\overline{\partial}}^\ast_{M} u\Vert^2.$$ 2. For all $u\in L^2_{0,0}(M, \vert z\vert^2) \cap \mathrm{Dom}({\overline{\partial}}_M)$ we have $$\label{noCRgen} \Vert u\Vert^2 \leq \Vert {\overline{\partial}}_M u\Vert^2.$$ [*Proof of Lemma \[L2general\].*]{} We show how the proof of Lemma \[L2\] carries over to this more general setting. In fact, we again use the partial Fourier transform with respect to the variables $(x_{n+1},\ldots,x_{n+d})$. 
For a fixed $\xi\in\mathbb{R}^d$, we define the hermitian matrix $$h^\xi = \sum_{\ell= n+1}^{n+d} H_\ell\xi_\ell,\quad\quad\mathrm{i.e.}\ h^\xi_{jk}= \sum_{\ell= n+1}^{n+d} h^\ell_{jk}\xi_\ell.$$ After possibly making a unitary change of coordinates in the variables $(z_1,\ldots,z_n)$, we may assume that $h^\xi$ is diagonal with diagonal entries $h^\xi_{jj} = \lambda_j$ with $\lambda_1\leq \ldots\leq \lambda_n$.\ Then, as in the proof of Lemma \[L2\] we compute $\widetilde{{\overline{\partial}}_M u}(z,\xi) = {\overline{\partial}}_{(z)}\tilde u (z,\xi)$ with $${\overline{\partial}}_{(z)}v(z,\xi) = \sum_{k,s=1}^n{\overline{\partial}}_k v_s d{\overline }z_k\wedge d{\overline }z_s,$$ where $$\begin{aligned} {\overline{\partial}}_k v_s & = & \frac{{\partial}}{{\partial}{\overline }z_k}v_s + \sum_{\ell=n+1}^{n+d}\sum_{m=1}^n h^\ell_{mk}z_m\xi_\ell v_s \\ & = & \frac{{\partial}}{{\partial}{\overline }z_k}v_s + \sum_{m=1}^n h^\xi_{mk}z_m v_s\\ & = & \frac{{\partial}}{{\partial}{\overline }z_k}v_s + \lambda_k z_k v_s.\end{aligned}$$ Similarly we get $$\begin{aligned} \widetilde{{\overline{\partial}}^\ast_{M}u}(z,\xi) & = & \delta_{(z)} \tilde{u} (z,\xi),\end{aligned}$$ where $$\delta_{(z)}v(z,\xi)= \sum_{j=1}^n(\delta_j v_j) (z,\xi)$$ with $$\begin{aligned} \delta_j v_j & = & -\frac{{\partial}}{{\partial}z_j} v_j + \sum_{\ell = n+1}^{n+d}\sum_{k=1}^n h^\ell_{jk}{\overline }z_k\xi_\ell v_j - {\overline }z_j v_j\\ & = & -\frac{{\partial}}{{\partial}z_j} v_j + \sum_{k=1}^n h^\xi_{jk}{\overline }z_k v_j - {\overline }z_j v_j\\ & = & -\frac{{\partial}}{{\partial}z_j} v_j + \lambda_j {\overline }z_j v_j - {\overline }z_j v_j .\end{aligned}$$ The commutator of ${\overline{\partial}}_k$ and $\delta_j$ can be computed as $$\label{comm} \lbrack{\overline{\partial}}_k,\delta_j\rbrack = (-1+2\lambda_j) \delta_{j,k} ,$$ where $\delta_{j,k}$ denotes the Kronecker symbol.\ Therefore, as in the proof of Lemma \[L2\], one obtains for 
$v\in\mathcal{D}^{0,1}(\mathbb{C}^n\times\mathbb{R}^d)$: $$\begin{aligned} \label{firstest} \Vert{\overline{\partial}}_{(z)}v\Vert^2_z + \Vert \delta_{(z)}v\Vert^2_z & = & \sum_{j\not= k} \Vert{\overline{\partial}}_j v_k\Vert^2_z + \sum_{j=1}^n\Vert \delta_j v_j\Vert^2_z \nonumber\\ & \geq & \sum_{j\not= k} \Vert{\overline{\partial}}_j v_k\Vert^2_z.\end{aligned}$$ Now we fix $k\in\lbrace 1,\ldots,n\rbrace$. Since $M$ is $2$-pseudoconcave, the hermitian matrix $h^\xi$ has at least $2$ negative and $2$ positive eigenvalues. But this implies that there exist indices $r,s \not=k$ such that $\lambda_r < 0$ and $\lambda_s > 0$. We now define real numbers $a_j\in\lbrack 0,1\rbrack $ by $$\begin{aligned} a_j & = & 0,\ j\not= r,s \\ a_r & = & \frac{\lambda_s}{\lambda_s -\lambda_r} \\ a_s & = & \frac{-\lambda_r}{\lambda_s -\lambda_r}\end{aligned}$$ Note that by definition of $a_j$ we have $\sum_{j=1}^n a_j = 1$ and $\sum_{j=1}^n a_j \lambda_j =0$. But then, using (\[comm\]) we obtain $$\begin{aligned} \sum_{\underset{j\not=k}{j=1}}^n \Vert{\overline{\partial}}_j v_k\Vert^2_z & \geq & \sum_{\underset{j\not=k}{j=1}}^n a_j \Vert{\overline{\partial}}_j v_k\Vert^2_z\\ & = & \sum_{j=1}^n a_j \Vert\delta_j v_k\Vert^2_z + \sum_{j=1}^n (1-2\lambda_j) a_j \Vert v_k\Vert^2_z \\ & \geq & \sum_{j=1}^n a_j \Vert v_k\Vert^2_z -2\sum_{j=1}^n \lambda_j a_j \Vert v_k\Vert^2_z\\ & \geq & \Vert v_k\Vert^2_z.\end{aligned}$$ From (\[firstest\]) we therefore obtain $$\Vert{\overline{\partial}}_{(z)}v\Vert^2_z + \Vert \delta_{(z)}v\Vert^2_z \geq \Vert v\Vert^2_z.$$ By reasoning as in the proof of Lemma \[L2\] we may therefore conclude that (\[vanishinggen\]) holds.\ Likewise, for the proof of (\[noCRgen\]), we define real numbers $c_j\in\lbrack 0,1\rbrack $ by $$\begin{aligned} c_j & = & 0,\ j\not= 1,n \\ c_1 & = & \frac{\lambda_n}{\lambda_n -\lambda_1} \\ c_n & = & \frac{-\lambda_1}{\lambda_n -\lambda_1}\end{aligned}$$ Then we have $\sum_{j=1}^nc_j =1$ and $\sum_{j=1}^nc_j\lambda_j =0$. 
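The only properties of the weight systems $(a_j)$ and $(c_j)$ used in these estimates are $\sum_j a_j = 1$, $\sum_j a_j\lambda_j = 0$, and $0\le a_j\le 1$. A quick numerical sanity check with sample eigenvalues (our own illustration, not part of the paper):

```python
# For any lambda_r < 0 < lambda_s, the weights used above satisfy
# a_r + a_s = 1 and a_r*lambda_r + a_s*lambda_s = 0, so the
# lambda-dependent terms cancel in the estimate.
# Sample eigenvalues (arbitrary, for illustration only):
lam_r, lam_s = -3.0, 2.0

a_r = lam_s / (lam_s - lam_r)
a_s = -lam_r / (lam_s - lam_r)

assert 0.0 <= a_r <= 1.0 and 0.0 <= a_s <= 1.0
assert abs((a_r + a_s) - 1.0) < 1e-12
assert abs(a_r * lam_r + a_s * lam_s) < 1e-12
print(a_r, a_s)  # 0.4 0.6
```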
Therefore (\[comm\]) implies $$\begin{aligned} \sum_{j=1}^n \Vert {\overline{\partial}}_j v\Vert^2_z & \geq & \sum_{j=1}^n c_j \Vert {\overline{\partial}}_j v\Vert^2_z\\ & = & \sum_{j=1}^n c_j\Vert \delta_j v\Vert^2_z + \sum_{j=1}^n c_j (1- 2 \lambda_j )\Vert v\Vert^2_z\\ & \geq & \sum_{j=1}^n c_j\Vert v\Vert^2_z -2 \sum_{j=1}^n c_j \lambda_j \Vert v\Vert^2_z\\ & = & \Vert v\Vert^2_z.\end{aligned}$$ This completes the proof of (\[noCRgen\]) by the same arguments as in the proof of Lemma \[L2\].$\square$\ [*Remark:*]{} The proof of this Lemma is essentially contained in [@N] with constants depending on the Levi form of $M$. Here we have shown that one can take the same constant $1$ for every $2$-pseudoconcave quadratic $CR$ submanifold $M$.\ The second essential ingredient for the proof of Theorem \[main\] in the general case is the subelliptic estimate proved for 2-pseudoconcave $CR$ manifolds of arbitrary codimension $d$ in [@HN1]: There exists $\varepsilon > 0$ such that for every compact subset $K$ of $M$, there exists a constant $C_K > 0$ independent of $a$ such that $$\label{subellipticgen} \Vert u\Vert^2_{\varepsilon} \leq C_K (\Vert {\overline{\partial}}_{M_a} u\Vert^2 + \Vert{\overline{\partial}}^\ast_{M_a} u\Vert^2 + \Vert u\Vert^2)$$ for all $u\in \mathcal{D}_K^{0,1}(M_a)$. This subelliptic estimate replaces (\[subelliptic\]) in the general situation.\ Using (\[vanishinggen\]) and (\[subellipticgen\]), one can prove the uniform $L^2$ a priori estimate for ${\overline{\partial}}_{M_a}$ as stated in Lemma \[firstlemma\]. The proof is the same. But this, together with (\[noCRgen\]), completes the proof of Theorem \[main\] as in section 3. <span style="font-variant:small-caps;">G.B. Folland, J.J. Kohn:</span> *The Neumann problem for the Cauchy-Riemann complex.* Ann. Math. Studies [**75**]{}, Princeton University Press, Princeton, N. J. (1972). <span style="font-variant:small-caps;">L. 
Hörmander:</span> *$L^2$ estimates and existence theorems for the ${\overline{\partial}}$ operator.* Acta Math. [**113**]{}, 89–152 (1965). <span style="font-variant:small-caps;">L. Hörmander:</span> *An introduction to complex analysis in several complex variables.* North Holland Mathematical Library (1990). <span style="font-variant:small-caps;">C.D. Hill, M. Nacinovich:</span> *Pseudoconcave $CR$ manifolds.* Preprint, Dipartimento di Matematica, Pisa 1-76, 723 (1993). In: Complex analysis and geometry (V. Ancona, E. Ballico, A. Silva, eds), Lecture notes in pure and applied mathematics vol. [**173**]{}, Marcel Dekker, New York, 275–297 (1996). <span style="font-variant:small-caps;">H. Jacobowitz, F. Trèves:</span> *Non-realizable $CR$ structures.* Invent. Math. [**66**]{}, 231–249 (1982). <span style="font-variant:small-caps;">Ch. Laurent-Thiébaut:</span> *Résolution du ${\overline{\partial}}_b$ à support compact et phénomène de Hartogs-Bochner dans les variétés $CR$.* Proc. Sympos. Pure Math. [**52**]{}, 239–249 (1991). <span style="font-variant:small-caps;">I. Naruki:</span> *Localization principle for differential complexes and its applications.* Publ. RIMS [**8**]{}, 43–110 (1972). <span style="font-variant:small-caps;">L. Nirenberg:</span> *On a problem of Hans Lewy.* Uspekhi Mat. Nauk [**292**]{}, 241–251 (1974). <span style="font-variant:small-caps;">H. Rossi:</span> *Attaching analytic spaces to an analytic space along a pseudoconcave boundary.* Proc. Conf. Complex Manifolds (Minneapolis), 1964, Springer-Verlag, New York, 242–256 (1965). [^1]: Universität Leipzig, Mathematisches Institut, Augustusplatz 10, D-04109 Leipzig, Germany. E-mail: brinkschulte@math.uni-leipzig.de [^2]: Department of Mathematics, Stony Brook University, Stony Brook NY 11794, USA. 
E-mail: dhill@math.stonybrook.edu\ [**[Key words:]{}**]{} inflexible $CR$ submanifolds, deformations of $CR$ manifolds, embeddings of $CR$ manifolds\ [**[2010 Mathematics Subject Classification:]{}**]{} 32V30, 32V40
--- abstract: 'We report on [*XMM-Newton*]{} observations performed on 2001 September 13–14 of the neutron star X-ray transient KS 1731–260 in quiescence. The source was detected at an unabsorbed 0.5–10 keV flux of only $4 - 8 \times10^{-14}$ [erg cm$^{-2}$ s$^{-1}$]{}, depending on the model used to fit the data, which for a distance of 7 kpc implies a 0.5–10 keV X-ray luminosity of approximately $2 - 5\times10^{32}$ [erg s$^{-1}$]{}. The September 2001 quiescent flux of KS 1731–260 is lower than that observed during the [*Chandra*]{} observation in March 2001. In the cooling neutron star model for the quiescent X-ray emission of neutron star X-ray transients, this decrease in the quiescent flux implies that the crust of the neutron star in KS 1731–260 cooled down rapidly between the two epochs, indicating that the crust has a high conductivity. Furthermore, enhanced cooling in the neutron star core is also favored by our results.' author: - 'Rudy Wijnands, Matteo Guainazzi, Michiel van der Klis, Mariano Méndez' title: '[*XMM-Newton*]{} observations of the neutron star X-ray transient KS 1731–260 in quiescence' --- Introduction \[section:intro\] ============================== X-ray transients are characterized by long episodes (years to decades) of very low X-ray luminosities ($10^{30-34}$ [erg s$^{-1}$]{}) with occasional short (weeks to months) outbursts during which they can be detected at luminosities of $10^{36-39}$ [erg s$^{-1}$]{} (e.g., Chen, Shrader, & Livio 1997). The huge increase in luminosity is thought to be due to a correspondingly large increase in the mass accretion rate onto the compact object in those systems, although the exact mechanisms behind the outbursts are not fully understood (Lasota 2001). Similarly, the exact origin of the quiescent X-ray emission remains elusive. For those systems harboring a neutron star, it has been argued (e.g., Campana et al. 
1998b; Brown, Bildsten, & Rutledge 1998) that the observed emission below a few keV originates from the neutron star surface: the neutron star core is heated by the nuclear reactions occurring deep in the crust when the star is accreting, and this heat is released as thermal emission during quiescence. The emission above a few keV (as observed in several systems; e.g., Asai et al. 1996, 1998; Campana et al. 1998a, 2000) cannot be explained by this model. Models proposed for this component include residual accretion either onto the neutron star surface or down to its magnetospheric radius, or the radio pulsar mechanism (e.g., Campana et al. 1998b; Campana & Stella 2000). A sub-class of transients is characterized by very long accretion episodes of years to decades instead of weeks to months. Recently, one of those systems (KS 1731–260) suddenly turned off after having actively accreted for over 12.5 years. A [*Chandra*]{} observation taken a few months after this transition showed the source at a 0.5–10 keV luminosity of $\sim10^{33}$ [erg s$^{-1}$]{} (Wijnands et al. 2001), assuming a distance of 7 kpc (Muno et al. 2000). If the cooling neutron star model is responsible for the quiescent emission in this system, then it should be in quiescence between outbursts for $>$1000 years, assuming all outbursts are similar to the one observed and standard cooling processes (e.g., modified Urca; Colpi et al. 2001; Ushomirsky & Rutledge 2001) occur in the neutron star core. However, Rutledge et al. (2002) argued that for systems like KS 1731–260, the long accretion episodes will heat the crust to high temperatures and it might take years to decades for the crust to come into thermal equilibrium with the core. Until this happens, the quiescent emission will be dominated by the thermal state of the crust and not that of the core. Rutledge et al. 
(2002) calculated crust cooling tracks for this source assuming different scenarios of the microphysics involved (the heat conductivity of the crust; standard vs. “enhanced” core cooling). Burderi et al. (2002) reported on a [*BeppoSAX*]{} observation of KS 1731–260 performed a few weeks before the [*Chandra*]{} observation. They detected KS 1731–260 at a luminosity of at most $\sim10^{33}$ [erg s$^{-1}$]{}. In addition to the cooling neutron star model, they discussed several alternative explanations for the observed quiescent emission (such as residual accretion or the onset of the radio pulsar mechanism). By considering those alternative models, they were able to set an upper limit of $1-4 \times 10^9$ Gauss on the magnetic field strength of the neutron star in KS 1731–260. Here we report on [*XMM-Newton*]{} observations of KS 1731–260 taken approximately half a year after the [*Chandra*]{} and [*BeppoSAX*]{} observations. With these [*XMM-Newton*]{} observations we are able to study the time evolution of the quiescent emission. Observation, analysis, and results ================================== We have analyzed [*XMM-Newton*]{} observations of KS 1731–260 performed on 13 September 2001 01:54–09:01 UTC and 13–14 September 2001 22:43–05:58 UTC. All instruments were active; here we only discuss the data as obtained with the three European Photon Imaging Camera (EPIC) instruments (due to the very low flux of the source, it was not detected in the RGS instrument). The two EPIC MOS cameras and the EPIC pn camera operated in full window mode with the thin optical blocking filter. To analyze the data, we used the Science Analysis System (SAS[^1]; version 5.2). We used the calibrated pipeline product data to extract images, light curves, and spectra using the tools available in SAS. 
Several background flares occurred during our observations; to minimize their effect on the quality of the X-ray spectra, we filtered them out before analyzing the data: we did not use those data during which the count rate exceeded 7 counts [s$^{-1}$]{} for the MOS cameras or 20 counts [s$^{-1}$]{} for the pn camera (both using 10-second time bins). These criteria resulted in a total good time of $\sim$23 ksec for the pn camera and $\sim$33 ksec for both MOS cameras. No difference in the count rates between the two [*XMM-Newton*]{} observations was observed, and, therefore, we combined the data of both observations to increase our sensitivity. We combined the data of the three EPIC cameras to create one image of the field of KS 1731–260, representing the most sensitive image of this region so far obtained. In Figure \[fig:images\], we show both the [*Chandra*]{}/ACIS-S (left) and the [*XMM-Newton*]{}/EPIC (right) images of KS 1731–260. The [*Chandra*]{} image was rebinned by a linear factor of 8 to obtain roughly the same pixel size as that of the [*XMM-Newton*]{} image (3.95$''$ for the [*Chandra*]{} image vs. 4.35$''$ for the [*XMM-Newton*]{} image) and both images have been smoothed using a Gaussian function with a width equal to the pixel size of the image. We clearly detected KS 1731–260 together with the nearby star 2MASSI J173412.7–260548 (Fig. \[fig:images\] right), both of which were also detected during the [*Chandra*]{} observation (Fig. \[fig:images\] left; Wijnands et al. 2001). To allow for a visual comparison, we used a scaling such that the appearance of this 2MASS star is very similar in both images (below we will show that the flux of this star is consistent with being constant between the two observations). A comparison of the images indicates that KS 1731–260 has decreased in luminosity between the [*Chandra*]{} and [*XMM-Newton*]{} observations. 
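The flare screening described here amounts to a simple threshold cut on the binned light curve. The following sketch (our own schematic bookkeeping, not the actual SAS procedure; only the thresholds and bin size come from the text) illustrates how the accepted exposure is accumulated:

```python
def good_time(rates, threshold, bin_seconds=10.0):
    """Sum the exposure of light-curve bins below the count-rate cut.

    rates: count rate (counts/s) in consecutive time bins.
    Returns the accepted exposure in seconds.
    """
    return sum(bin_seconds for r in rates if r <= threshold)

# Schematic light curve with a background flare in the middle
# (the count rates are made up for illustration):
pn_rates = [12.0, 15.0, 85.0, 90.0, 14.0, 13.0]

# pn screening threshold quoted in the text: 20 counts/s in 10-s bins
print(good_time(pn_rates, threshold=20.0))  # 40.0 (the two flare bins are dropped)
```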
In principle, systematic effects due to the difference in the energy response of the instruments and the different X-ray spectra of the two detected sources might be responsible for this dimming of KS 1731–260 relative to the 2MASS star. However, below we show that the decrease in luminosity as observed for KS 1731–260 is real.

The source spectra
------------------

The spectrum of KS 1731–260 in each EPIC camera was extracted using a circle of 15$''$ in radius centered on the source position. The background spectra were extracted from a circle with a radius of 50$''$ close to KS 1731–260 (different background regions gave very similar results) which did not contain any other point source (the standard practice of using an annulus around the source position as background could not be used because of the presence of the 2MASS source $\sim 30''$ away from KS 1731–260). The extracted spectra were rebinned using the FTOOLS routine GRPPHA into bins with a minimum of 10 counts per bin. We used the ready-made response matrices provided by the calibration team (available at http://xmm.vilspa.esa.es/ccf/epic/). We fitted the three spectra simultaneously using XSPEC version 11.1 (Arnaud 1996). We used several models to fit the data, and the neutral hydrogen column density $N_{\rm H}$ was either fixed to $1.1\times10^{22}$ cm$^{-2}$ (see, e.g., Barret et al. 1998 or Narita et al. 2001) or left as a free parameter. All single-component models resulted in acceptable fits. Currently, the two models most often used to fit the quiescent spectra of neutron star systems are the blackbody and the neutron star atmosphere models. Therefore, we concentrated on those models, with the neutron star atmosphere model being that described by Zavlin, Pavlov, & Shibanov 1996 (the non-magnetic case). 
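The GRPPHA-style grouping used above (a minimum of 10 counts per bin) can be sketched in a few lines. The channel counts below are invented for illustration; the real spectra come from the extraction described in the text.

```python
# Minimal sketch of GRPPHA-style grouping: merge consecutive spectral
# channels until each group holds at least `min_counts` counts, as done in
# the text with a minimum of 10 counts per bin. Channel counts are invented.

def group_channels(counts, min_counts=10):
    groups, current = [], 0
    for c in counts:
        current += c
        if current >= min_counts:
            groups.append(current)
            current = 0
    if current > 0 and groups:
        groups[-1] += current   # fold any remainder into the last group
    return groups

channels = [3, 4, 2, 6, 1, 9, 5, 8, 2, 7]
grouped = group_channels(channels)   # every group has >= 10 counts
```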
In certain systems, a power-law tail above a few keV was found, and although such a power-law component was not required by the data, we fitted the spectra with the above two models including a power-law component with a photon index of 1 or 2 to obtain an upper limit on this component. The spectral results are listed in Table \[tab:spectra\] and the pn spectrum is shown in Fig. \[fig:spectra\]. We have also plotted the spectrum obtained with [*Chandra*]{} (Wijnands et al. 2001), which again suggests that the source was fainter during our [*XMM-Newton*]{} observation than during the [*Chandra*]{} observation. When left free, $N_{\rm H}$ was consistent with the value previously obtained with other instruments, although for the atmosphere model a slightly higher value was preferred, resulting in a slightly higher unabsorbed flux compared to the fixed $N_{\rm H}$ case. When $N_{\rm H}$ was fixed, the atmosphere model yielded a flux similar to that of the blackbody model, $\sim5\times10^{-14}$ [erg cm$^{-2}$ s$^{-1}$]{} (unabsorbed and for 0.5–10 keV). To obtain the errors on the fluxes, we have calculated the $1\sigma$ error contours for the temperature and normalization, fixing $N_{\rm H}$ at the value in Table \[tab:spectra\] in each case, and obtained the fluxes associated with the circumference of the error ellipse. The temperature $kT$ and $N_{\rm H}$ are strongly correlated in the fits, and when both are free no useful constraints could be obtained on the unabsorbed flux. The best-fit temperature was in all cases $\sim$0.3 keV for the blackbody fits and $\sim$0.1 keV for the atmosphere model. In the latter model, the neutron star radius could not be constrained and was fixed to the best-fit radius of 15 km (at infinity; the other parameters are not very sensitive to its actual value). 
When including a power-law component in the fit, it could not be detected significantly and its 0.5–10 keV flux was constrained to be less than 25% of that obtained from the blackbody or atmosphere component. The images and spectra of KS 1731–260 both indicate that the flux decreased between the two observation epochs. To investigate whether the apparent flux decrease is statistically significant, we have fitted the [*Chandra*]{} and [*XMM-Newton*]{} data simultaneously. When all spectral parameters were tied between the two data sets, a blackbody fit was statistically unacceptable, with $\chi^2$ = 83 for 38 degrees of freedom, corresponding to a probability of only $3 \times 10^{-5}$ that the source did not change. We obtained a similar result when we used other models (e.g., atmosphere models) instead of a blackbody. When we did not tie the spectral parameters (except $N_{\rm H}$, which was assumed to be constant), we obtained acceptable fits. The fit results using a blackbody or an atmosphere model are listed in Table \[tab:spectra\]. In all cases, the flux difference between the [*Chandra*]{} and [*XMM-Newton*]{} data is significant at a 3 to 4 $\sigma$ level. Although this shows that the flux of KS 1731–260 decreased, this could conceivably be due to a calibration error in one or both of the instruments. Although this is unlikely (e.g., Ferrando et al. 2002; Weisskopf et al. 2002), we can perform a check on this in the same data set by analyzing the data from both instruments for the 2MASS star, assuming that the star has a constant spectrum. To this end, we have extracted the spectra of this source from the [*Chandra*]{} and [*XMM-Newton*]{} data. We fitted all obtained spectra of the 2MASS star simultaneously, keeping all spectral parameters tied between both instruments (note that due to low statistics the [*Chandra*]{} data alone did not allow us to constrain the source spectrum). 
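The quoted probability of $3\times10^{-5}$ for the tied-parameter fit ($\chi^2 = 83$ for 38 degrees of freedom) can be reproduced with a standard continued-fraction evaluation of the $\chi^2$ survival function, so no external statistics libraries are needed. This is a generic cross-check sketch, not part of the original XSPEC analysis.

```python
import math

# Chi^2 survival probability P(chi^2 >= x | dof) via the regularized upper
# incomplete gamma function Q(a, x), evaluated with the standard modified
# Lentz continued fraction (valid here since x/2 > dof/2 + 1).

def chi2_sf(chi2, dof):
    a, x = dof / 2.0, chi2 / 2.0
    tiny = 1e-300
    b = x + 1.0 - a
    c = 1.0 / tiny
    d = 1.0 / b
    h = d
    for i in range(1, 200):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < 1e-12:
            break
    return math.exp(-x + a * math.log(x) - math.lgamma(a)) * h

p = chi2_sf(83.0, 38)   # a few times 1e-5, consistent with the quoted 3e-5
```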
Either a blackbody or a power-law spectrum fit the data well, yielding a probability of only 0.12 (blackbody model) or 0.09 (power-law model) that the flux of the 2MASS star changed by the same factor (a factor of 3.5) as we observed for KS 1731–260. Furthermore, a recent cross calibration study between [*XMM-Newton*]{} and [*Chandra*]{} (Snowden 2002) indicates that, for sources with different intrinsic spectra, the measured fluxes of both instruments agree to within 10%. Both these results reinforce the idea that the flux decrease we observed in KS 1731–260 is real, and not due to calibration problems in either of the two instruments. In the cooling neutron star model, this flux decrease is due to a temperature decrease. To investigate this, we fitted the two data sets simultaneously with a blackbody model, letting $kT$ float between the two observations, but keeping the same $N_{\rm H}$ and emitting radius. This resulted in a $kT$ of $0.33^{+0.06}_{-0.05}$ and 0.27[$\pm$]{}0.04 keV for [*Chandra*]{} and [*XMM-Newton*]{}, respectively. The error ellipse of the two temperature parameters (Fig. \[fig:temp\]) excludes the line $kT_{\rm Chandra} = kT_{\rm XMM-Newton}$, which shows that in the constant-radius blackbody model a systematically lower $kT$ is preferred to fit the [*XMM-Newton*]{} spectrum than the [*Chandra*]{} one, suggesting that the temperature decreased between the two epochs. The resulting fluxes are 11[$\pm$]{}2 ([*Chandra*]{}) and 4.8[$\pm$]{}0.8 $\times10^{-14}$ [erg cm$^{-2}$ s$^{-1}$]{} ([*XMM-Newton*]{}). This results in a flux decrease between the two data sets of 6[$\pm$]{}2 $\times10^{-14}$ [erg cm$^{-2}$ s$^{-1}$]{}, which is significant at the 3$\sigma$ level.

Discussion\[section:discussion\]
================================

We have reported on [*XMM-Newton*]{} observations performed on 2001 September 13–14 of the neutron star X-ray transient KS 1731–260 when it was in quiescence. 
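The significance of the flux decrease follows directly from the two constant-radius blackbody fluxes quoted above, with the errors combined in quadrature; a quick numerical check:

```python
import math

# Fluxes in units of 1e-14 erg/cm^2/s, from the constant-radius blackbody
# fits: 11 +/- 2 (Chandra) and 4.8 +/- 0.8 (XMM-Newton).
f_chandra, e_chandra = 11.0, 2.0
f_xmm, e_xmm = 4.8, 0.8

delta_f = f_chandra - f_xmm               # ~6.2, i.e. the quoted 6 +/- 2
delta_err = math.hypot(e_chandra, e_xmm)  # errors added in quadrature, ~2.15
n_sigma = delta_f / delta_err             # ~2.9, i.e. the ~3 sigma level
```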
We detected the source at an unabsorbed 0.5–10 keV flux of $\sim 4 - 8 \times 10^{-14}$ [erg cm$^{-2}$ s$^{-1}$]{}, which for a distance of 7 kpc implies a 0.5–10 keV luminosity of $\sim 2-5 \times 10^{32}$ [erg s$^{-1}$]{}, depending on the model used to fit the data. This luminosity is lower than what has been reported for the source during the [*Chandra*]{} observation performed about half a year earlier (Wijnands et al. 2001; Rutledge et al. 2002). KS 1731–260 is not the only system for which X-ray variability in quiescence has been observed. Several other neutron star systems have also been found to be variable in quiescence by factors of 3 to 5 on time scales of days to years (see Ushomirsky & Rutledge 2001 for a summary of the observed variability). It is expected that at some level the neutron star in KS 1731–260 should emit X-rays due to the thermal cooling of the neutron star core. Our low X-ray flux provides an upper limit on the thermal flux from the core. If the crust of the neutron star has a higher temperature than the core (Rutledge et al. 2002 argued that the crust should be considerably hotter than the core due to the prolonged accretion episode of KS 1731–260) and/or if additional X-ray production mechanisms are at work in the system (e.g., residual accretion, radio pulsar mechanism), then the thermal flux related to the core will be even lower. Based on the Brown et al. (1998) model and assuming standard core cooling, Wijnands et al. (2001) already calculated that KS 1731–260 had to be in quiescence for over 1000 years between outbursts in order to emit at the low flux level measured with [*Chandra*]{} (see also Rutledge et al. 2002 or Burderi et al. 2002). However, for the factor of 2 to 4 lower quiescent luminosity we observed with [*XMM-Newton*]{}, this inferred cooling time increases by approximately the same factor. This would make the quiescent intervals of KS 1731–260 extremely long. 
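The flux-to-luminosity conversion quoted at the start of this section is the standard $L = 4\pi d^{2} F$ for an assumed distance of 7 kpc; a short numerical check of the quoted range:

```python
import math

# L = 4*pi*d^2*F for d = 7 kpc and the unabsorbed 0.5-10 keV flux range
# of ~4-8e-14 erg/cm^2/s measured with XMM-Newton.
KPC_CM = 3.086e21                 # centimetres per kiloparsec
d = 7.0 * KPC_CM

def luminosity(flux):
    return 4.0 * math.pi * d**2 * flux

L_low  = luminosity(4e-14)        # ~2e32 erg/s
L_high = luminosity(8e-14)        # ~5e32 erg/s
```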
However, if we assume that enhanced cooling takes place in the neutron star core (e.g., due to enhanced neutrino production), this inferred quiescent interval would decrease considerably, making it more similar to that of the ordinary transients. In the cooling neutron star model, the variability we observe would have to be explained by assuming that the neutron star surface has cooled between the [*Chandra*]{} and [*XMM-Newton*]{} observations. In the previous section we have presented evidence that the measured temperature decreased, supporting this interpretation. For KS 1731–260, Rutledge et al. (2002) calculated four crust cooling curves assuming different values of the crustal conductivity and the different cooling processes in the core of the neutron star (standard vs. enhanced cooling). A comparison of the observed decrease in luminosity with those cooling curves (see Figure 3 in Rutledge et al. 2002) suggests that our data are only consistent with a highly conductive crust, and likely also with enhanced core cooling. For a low heat conductivity in the crust, the X-ray luminosity of the system should remain constant or even increase slightly, in contrast to what we observed. For a highly conductive crust but only standard core cooling, a decrease in luminosity is also predicted, but by an amount that is less than we have observed. However, our uncertainties in the actual luminosity decrease are considerable and our data might still be consistent with this possibility. As explained above, the low measured flux of the system by itself already suggests that enhanced core cooling occurs. In order to calculate the cooling curves, Rutledge et al. (2002) assumed quiescent episodes for KS 1731–260 of 1500 years, which was calculated assuming standard core cooling. However, if enhanced core cooling occurs, the neutron star core can cool more rapidly than assumed and the system could have quiescent episodes of only years to decades. 
This has to be taken into account in the modeling of the cooling curves. Although the exact implications are unclear, our conclusion that the crust has to be highly conductive to explain the rapid cooling of the crust is unlikely to change. Therefore, within the cooling neutron star model, our new results indicate that the neutron star in KS 1731–260 has a highly conductive crust and enhanced cooling is likely to occur in its core. Colpi et al. (2001) suggested that when the mass of the neutron star exceeds $\sim1.6$ [M$_{\odot}$]{}, such enhanced core cooling might occur. A massive neutron star in KS 1731–260 is not unexpected because a significant amount of matter must have been accreted in order for the neutron star to be spinning rapidly. A fast spinning neutron star (with a spin frequency of $\sim$524 Hz) in KS 1731–260 has been inferred from the burst oscillations detected in this system (Smith, Morgan, & Bradt 1997). Alternative models explaining the quiescent emission in neutron star X-ray transients have to be considered as well. In models assuming that the emission is due to residual accretion onto the neutron star surface, either directly or via leakage through the magnetospheric barrier, or down to the magnetospheric radius, the detected luminosity decrease can be explained by assuming that the accretion rate has decreased considerably. These alternative models were discussed in detail by Burderi et al. (2002) and because the lower luminosity of KS 1731–260 does not strongly affect their conclusions (an upper limit on the magnetic field strength of the neutron star can be obtained that is a factor $\sim$2 lower), we will not discuss those models here in detail. 
Note that if the luminosity is indeed due to residual accretion, the decrease in accretion rate inferred from our luminosity decrease might cause a change in the X-ray production mechanism, because the magnetospheric radius might move outside the co-rotation radius or outside the light cylinder (see also Burderi et al. 2002). With further monitoring observations of KS 1731–260 in quiescence, the quiescent properties of this source and their time evolution will be better constrained. More detailed observations of other quiescent neutron star systems will help to understand how similar they are to KS 1731–260. So far, at least two other systems have been identified with similar properties: X 1732–304 (Wijnands, Heinke, & Grindlay 2002) and 4U 2129+47 (Wijnands 2002; Nowak, Heinz, & Begelman 2002). Those systems also have very long outburst durations, and from their quiescent properties it has been inferred that they should be in quiescence for hundreds of years if only standard neutron star core cooling occurs. This spurred Wijnands et al. (2002) to suggest that in the standard cooling scenario a correlation between the duration of the outburst episodes and that of the quiescent intervals might be required. However, such a correlation is difficult to understand in accretion disk instability models (Lasota 2001). This could indicate that enhanced cooling takes place in the neutron star cores of those systems (Wijnands et al. 2002). Our results, indicating that enhanced cooling may occur in the neutron star core of KS 1731–260, lend further support to this idea. We thank [*XMM-Newton*]{} project scientist Fred Jansen for scheduling the observations used in this [*Letter*]{}. RW was supported by NASA through Chandra Postdoctoral Fellowship grant number PF9-10010 awarded by CXC, which is operated by SAO for NASA under contract NAS8-39073. This research has made use of the data and resources obtained through the HEASARC online service, provided by NASA-GSFC. 
Arnaud, K. 1996, in G. Jacoby & J. Barnes (eds.), [*Astronomical Data Analysis Software and Systems V.*]{}, Vol. 101, p. 17, ASP Conf. Series. Asai, K., Dotani, T., Kunieda, H., Kawai, N. 1996, , 48, L27 Asai, K., Dotani, T., Hoshi, R., Tanaka, Y., Robinson, C. R., Terada, K. 1998, , 50, 611 Barret, D., Motch, C., & Predehl, P. 1998, , 329, 965 Brown, E. F., Bildsten, L., & Rutledge, R. E. 1998, , 504, L95 Burderi, L., et al. 2002, , in press (astro-ph/0201175) Campana, S., Stella, L., Mereghetti, S., Colpi, M., Tavani, M., Ricci, D., Dal Fiume, D., Belloni, T. 1998a, , 499, L65 Campana, S., Colpi, M., Mereghetti, S., Stella, L., Tavani, M. 1998b, , 8, 279 Campana, S. & Stella, L. 2000, , 541, 849 Campana, S., Stella, L., Mereghetti, S., Cremonesi, D. 2000, , 358, 583 Chen, W., Shrader, C. R., & Livio, M. 1997, , 491, 312 Colpi, M., Geppert, U., Page, D., Possenti, A. 2001, , 548, L175 Ferrando, P. et al. 2002 in ‘New Visions of the X-ray Universe in the XMM-Newton and Chandra era’, 26-30 November 2001, ESTEC, Noordwijk, The Netherlands (astro-ph/0202372) Lasota, J.-P. 2001, NewA Rev., 45, 449 Muno, M. P., Fox, D. W., Morgan, E. H., Bildsten, L. 2000, , 542, 1016 Narita, T., Grindlay, J. E., & Barret, D. 2001, , 547, 420 Nowak, M. A., Heinz, S., Begelman, M. C. 2002, , submitted Rutledge, R. E., Bildsten, L., Brown, E. F., Pavlov, G. G., Zavlin, V. E., Ushomirsky, G., 2002 , submitted (astro-ph/0108125) Smith, D. A., Morgan, E. H., & Bradt, H., 1997, , 479, L137 Snowden, S. L. 2002, in the proceedings of “New Visions of the X-ray Universe in the XMM-Newton and Chandra Era”, astro-ph/0203311 Ushomirsky, G. & Rutledge, R. E. 2001, , 325, 1157 Weisskopf, M. C., Brinkman, B., Canizares, C., Garmire, G., Murray, S., Van Speybroeck, L. P. 2002, PASP, 114, 1 Wijnands, R. 2002 To appear in “The High Energy Universe at Sharp Focus: Chandra Science”, proceedings of the 113th Meeting of the Astronomical Society of the Pacific. 16-18 July 2001, St. 
Paul, MN (astro-ph/0107600) Wijnands, R., Miller, J. M., Markwardt, C., Lewin, W. H. G., van der Klis, M. 2001, , 560, L159 Wijnands, R., Heinke, C. O., Grindlay, J. E. 2002, , in press (astro-ph/0111337) Zavlin, V. E., Pavlov, G. G., & Shibanov, Yu. A. 1996, , 315, 141

  Parameter                           Blackbody ($N_{\rm H}$ free)   Blackbody ($N_{\rm H}$ fixed)   Atmosphere ($N_{\rm H}$ free)   Atmosphere ($N_{\rm H}$ fixed)
  ---------------------------------- ------------------------------ ------------------------------- ------------------------------- --------------------------------
  $N_{\rm H}$ ($10^{22}$ cm$^{-2}$)   $1.1^{+0.6}_{-0.4}$            1.1 (fixed)                     $1.3^{+0.3}_{-0.4}$             1.1 (fixed)
  $kT$ (keV)                          $0.30^{+0.06}_{-0.05}$         $0.30^{+0.04}_{-0.03}$          $0.11^{+0.03}_{-0.04}$          $0.12^{+0.04}_{-0.02}$
  $F$                                 $4.8^{+1.0}_{-0.9}$            4.8[$\pm$]{}1.0                 $7.4^{+2.3}_{-1.6}$             $5.6^{+1.3}_{-1.1}$
  $\chi^2$/dof                        15.7/23                        15.8/24                         15.9/23                         16.0/24

  Parameter                           Blackbody ($N_{\rm H}$ free)   Blackbody ($N_{\rm H}$ fixed)   Atmosphere ($N_{\rm H}$ free)   Atmosphere ($N_{\rm H}$ fixed)
  ---------------------------------- ------------------------------ ------------------------------- ------------------------------- --------------------------------
  $N_{\rm H}$ ($10^{22}$ cm$^{-2}$)   $0.9^{+0.4}_{-0.3}$            1.1 (fixed)                     1.0[$\pm$]{}0.2                 1.1 (fixed)
  $kT_{\rm XMM-N}$ (keV)              0.31[$\pm$]{}0.05              $0.30^{+0.04}_{-0.03}$          $0.14^{+0.04}_{-0.05}$          $0.12^{+0.03}_{-0.02}$
  $kT_{\rm Chandra}$ (keV)            $0.29^{+0.06}_{-0.05}$         0.27[$\pm$]{}0.03               $0.11^{+0.03}_{-0.04}$          0.11[$\pm$]{}0.02
  $F_{\rm XMM-N}$                     $3.8^{+0.7}_{-0.5}$            5[$\pm$]{}1                     $5.0^{+1.2}_{-0.8}$             6[$\pm$]{}1
  $F_{\rm Chandra}$                   $13^{+3}_{-2}$                 $17^{+4}_{-3}$                  $18^{+4}_{-3}$                  $20^{+5}_{-4}$
  $\Delta F$                          $9^{+3}_{-2}$                  $12^{+4}_{-3}$                  $13^{+4}_{-3}$                  $14^{+5}_{-4}$
  $\chi^2$/dof                        23.8/36                        24.6/37                         23.2/36                         23.1/37

  : Spectral fits to KS 1731–260 (fits to the [*XMM-Newton*]{} data alone, top; simultaneous [*Chandra*]{} and [*XMM-Newton*]{} fits, bottom). Fluxes $F$ are unabsorbed 0.5–10 keV fluxes in units of $10^{-14}$ [erg cm$^{-2}$ s$^{-1}$]{}.[]{data-label="tab:spectra"}

[^1]: See http://xmm.vilspa.esa.es/user/sas\_top.html
---
abstract: '**T**he **IN**dia’s **TIN** (TIN.TIN) detector is under development in the search for neutrinoless double-$\beta$ decay (0$\nu\beta\beta$) using 90% enriched $^{124}$Sn isotope as the target mass. This detector will be housed in the upcoming underground facility of the **I**ndia based **N**eutrino **O**bservatory. We present the most important experimental parameters that enter the study of the sensitivity required for the TIN.TIN experiment to probe the neutrino mass hierarchy. The sensitivity of the TIN.TIN detector in the presence of the sole background from two-neutrino double-$\beta$ decay (2$\nu\beta\beta$) is studied at various energy resolutions. The most optimistic and pessimistic scenarios to probe the neutrino mass hierarchy at the 3$\sigma$ sensitivity level and 90% C.L. are also discussed.'
address:
- '$^{1}$ Department of Physics, Institute of Science, Banaras Hindu University, Varanasi 221005, India.'
- '$^{2}$ Institute of Physics, Academia Sinica, Taipei 11529, Taiwan.'
author:
- 'Manoj Kumar Singh,$^{1, 2} \mbox{\large$^{\ast}$}$ Lakhwinder Singh,$^{1, 2}$ Vivek Sharma,$^{1, 2}$ Manoj Kumar Singh,$^{1}$ Abhishek Kumar,$^{1}$ Akash Pandey,$^{1}$ Venktesh Singh,$^{1}\mbox{\large$^{\ast}$}$ Henry Tsz-King Wong$^{2}$'
title: 'Required sensitivity to search the neutrinoless double beta decay in $^{124}Sn$'
---

**Keywords**: Double Beta Decay, Nuclear Matrix Element, Neutrino Mass Hierarchy.

Introduction {#intro}
============

Neutrinoless double-$\beta$ decay (0$\nu\beta\beta$) is an interesting avenue to address the important question of whether neutrinos are Majorana or Dirac particles. During the last two decades, the discovery of non-zero neutrino mass and mixing from various sources has given new motivation for more sensitive searches for 0$\nu\beta\beta$. 
In fact, the observation of 0$\nu\beta\beta$ would not only establish the Majorana nature of neutrinos, but also provide a measurement of the effective neutrino mass and probe the neutrino mass hierarchy. Furthermore, this is the only proposed process with the potential to reach sensitivity to the absolute neutrino mass scale below 100 meV. There is no exact gauge symmetry associated with lepton number; therefore, there is no fundamental reason why lepton number should be conserved at all levels [@ADGO; @GSTE]. Lepton number is violated by two units in the case of 0$\nu\beta\beta$. This distinctive feature, together with CP (charge parity) violation, supports the exciting possibility that neutrinos play an important role in the matter-antimatter asymmetry of the early universe. The experimental search for 0$\nu\beta\beta$ is an attractive field of nuclear and particle physics. Among the isotopes that energetically allow the 0$\nu\beta\beta$ process, 35 are stable and of experimental importance [@RHEN]. Several experiments focus on different isotopes using various detector techniques, such as GERDA (GERmanium Detector Array) [@AGOS], MAJORANA (Majorana Demonstrator) [@AALS] and CDEX (China Dark matter EXperiment) with $^{76}$Ge-enriched high-purity Ge detectors [@WANG]; EXO (Enriched Xenon Observatory) [@ALBE] and KamLandZen (Kamioka Liquid Scintillator Antineutrino Detector) with liquid $^{136}$Xe time projection chambers [@GAND]; and CUORE (Cryogenic Underground Observatory for Rare Events) with $^{130}$Te bolometric detectors [@ALDU]. 
The next-generation experiments with tonne-scale detectors, such as LEGEND ($^{76}$Ge) (MAJORANA + GERDA) [@ABGR; @SCHW], nEXO ($^{136}$Xe) [@JBAL], NEXT ($^{136}$Xe) [@MART], CUPID ($^{130}$Te) [@GWAN], SuperNEMO ($^{82}$Se, $^{150}$Nd) [@RBPA], AMoRE ($^{100}$Mo) [@VALE], COBRA ($^{116}$Cd) [@JEBE], CANDLES-III ($^{48}$Ca) [@TIID], SNO$^{+}$ ($^{130}$Te) [@VLOZ], TIN.TIN ($^{124}$Sn) [@VNAN], MOON ($^{100}$Mo) [@TSHI] and LUMINEU ($^{100}$Mo) [@EARM], have been proposed. Some of them will start data taking over the next few years and others are in the construction phase. This large number of experiments reveals the enthusiasm of the scientists working in this field worldwide. The two-neutrino double-$\beta$ decay (2$\nu\beta\beta$) is a second-order weak process, in which two neutrons are simultaneously transformed into two protons by emitting two electrons and two anti-neutrinos within the same nucleus [@OCRE] $$^N_Z A_{\beta\beta} ~ \rightarrow ~ _{Z+2}^{N-2}A ~ + ~ 2 e^- ~ + ~ 2\bar{\nu}_{e}. ~~ \label{eq:2nubb}$$ The energy spectrum of the 2$\nu\beta\beta$ process is continuous, ending at a well-defined end point determined by the Q$_{\beta\beta}$-value of the process, as depicted in Fig. \[fig:Spectrum\]. The 2$\nu\beta\beta$ decay conserves lepton number and is allowed by the standard model [@OCRE]. In the case of the 0$\nu\beta\beta$ process, no neutrino is emitted and both electrons carry the full energy equal to the Q$_{\beta\beta}$-value of the transition, the energy of the recoiling nucleus being negligible due to its high mass. Therefore, the experimental signature of 0$\nu\beta\beta$ is a monoenergetic peak at the Q$_{\beta\beta}$-value, and its detection relies just on the two emitted electrons. 
$$^N_Z A_{\beta\beta} ~ \rightarrow ~ _{Z+2}^{N-2}A ~ + ~ 2 e^{-} ~~ \label{eq:0nubb}$$ ![Summed energy spectrum of two electrons emitted in 2$\nu\beta\beta$ and 0$\nu\beta\beta$ decay modes of $^{124}$Sn.[]{data-label="fig:Spectrum"}](fig1.pdf){width="8.5cm"} The TIN.TIN (**T**he **IN**dia’s **TIN**) detector is under development in the search for 0$\nu\beta\beta$ in the $^{124}$Sn isotope. The TIN.TIN detector will use the cryogenic bolometer technique in closely packed module structure arrays [@VNAL]. This experiment will be housed at the **I**ndia based **N**eutrino **O**bservatory, an upcoming underground laboratory [@VNAL; @MKSI]. Although the natural abundance of the $^{124}$Sn isotope is only $\sim$ 5.8%, its quite high Q$_{\beta\beta}$-value of 2287.7 keV makes it a good candidate for the search for 0$\nu\beta\beta$ [@JDAW; @VIVS]. A high Q$_{\beta\beta}$-value means that the search for the 0$\nu\beta\beta$ process will be less affected by natural radioactivity, which increases the sensitivity of the experiment [@GGER; @NEH3]. The High Energy Physics experimental group of the Tata Institute of Fundamental Research (TIFR), Mumbai, has tested cryogenic Sn bolometers (of mg-scale mass) and found that these bolometers perform very well, with very good energy resolution at sub-Kelvin temperatures [@VNAN; @VSIN]. The R&D on an approximately 1 kg $^{natural}$Sn prototype and the enrichment of $^{124}$Sn is in progress [@VNAN]. The sensitivity of an experiment is determined by the following five important parameters: (1) Energy resolution ($\Delta$) at Q$_{\beta\beta}$, (2) Exposure ($\beta\beta_{isotope}$ mass$\times$time) ($\Sigma$), (3) Background rate ($\Lambda$), (4) Isotopic abundance (IA), and (5) Signal detection efficiency ($\epsilon_{expt}$). 
The smearing of 2$\nu\beta\beta$ events (B$_{2\nu}$) ($\tau^{2\nu}_{\frac{1}{2}}$ = 0.8-1.2$\times$10$^{21}$ yr) [@VNAN] into the 0$\nu\beta\beta$ “Region of Interest” (ROI) is the irreducible background in the search for 0$\nu\beta\beta$. It can be minimized by using a detector with very good energy resolution. Therefore, cryogenic bolometers will be a powerful technique in the search for 0$\nu\beta\beta$ decay.

Neutrino parameters and 0$\nu\beta\beta$ half-life
==================================================

In the simplest case, 0$\nu\beta\beta$ decay is mediated by the virtual exchange of a light Majorana neutrino in the absence of right-handed currents. The half-life (${\tau_{\frac{1}{2}}^{0\nu}}$) of 0$\nu\beta\beta$ isotopes can be expressed as [@RGHR] $$\Big [\tau^{0\nu}_{\frac{1}{2}}\Big]^{-1} ~ = ~ G^{'0 \nu} ~ g_{A}^{4} ~ | M^{0 \nu} |^{2} ~ \Bigg[\frac{\langle m_{\beta\beta}\rangle ^{2}}{m_{e}^{2}}\Bigg] \equiv ~ G^{0 \nu} ~ | M^{0 \nu} |^{2} ~ \Bigg[\frac{\langle m_{\beta\beta}\rangle ^{2}}{m_{e}^{2}}\Bigg]. \label{eq:Core_Rel}$$ Here, G$^{'0 \nu}$ is the known phase space factor, G$^{0\nu}$ is the phase space factor combined with the weak axial vector coupling constant (g$_{A}$), $|M^{0\nu}|$ is the nuclear matrix element and $m_{e}$ is the mass of the electron. To avoid the ambiguity of g$_{A}$ in the nuclear medium, its free nucleon value (g$_{A}$ = 1.269) [@JENG] is adopted. The effective Majorana neutrino mass is given by [@ADUE]: $$\langle m_{\beta\beta}\rangle ~ = ~ \Big| \sum_{\gamma~=~1,2,3} e^{i\phi_{\gamma}}~|U_{e\gamma}|^{2}~m_{\gamma} \Big|, \qquad \phi_{1} = 0,~ \phi_{2} = \alpha,~ \phi_{3} = \beta, \label{eq:First_Mbb}$$ which depends on the neutrino masses (m$_{\gamma}$ for eigenstate $\nu_{\gamma}$), the Majorana phases ($\alpha$, $\beta$) and the PMNS (Pontecorvo-Maki-Nakagawa-Sakata) mixing matrix (U) [@RGHR; @ADUE; @GBEN]. Expansion of Eq. \[eq:First_Mbb\] 
will provide the $\langle$m$_{\beta\beta}$$\rangle$ as [@ADUE] $$\langle m_{\beta\beta}\rangle = \big|c_{12}^{2}~c_{13}^{2}~m_{1}~+~s_{12}^{2}~c_{13}^{2}~m_{2}~e^{i\alpha}~+~s_{13}^{2}~m_{3}~e^{i(\beta-2\delta)} \big|. \label{eq:Final_Mbb}$$ The value of $\langle$m$_{\beta\beta}$$\rangle$ depends on sines (s) and cosines (c) of the leptonic mixing angles $\theta_{ij}$, the mass eigenvalues ($m_{\gamma}$), Majorana Phases e$^{i\alpha}$ = e$^{i\beta}$ = $\pm$1 and the CP violating phase e$^{-i2\delta}$ = 1. The measurement of mass-squared splitting ($\delta$m$^{2}_{\odot}$ = $\Delta$m$^{2}_{21}$ and $\Delta$m$^{2}_{atm}$ = $\frac{1}{2}$$|\Delta$m$^{2}_{31}$ + $\Delta$m$^{2}_{32}|$) allows two hierarchy configurations for the mass eigenstates: either “Inverted Hierarchy” (IH) ($m_{3} <m_{1} <m_{2}$) or “Normal Hierarchy” (NH) ($m_{1} < m_{2} <m_{3}$)  [@WMAN; @GBEN]. The allowed range for $\langle$m$_{\beta\beta}$$\rangle$ as a function of the lightest neutrino mass m$_{min}$ can be constrained by the experimental measurements of the neutrino mixing parameters. The lower and upper range of $\langle$m$_{\beta\beta}$$\rangle$ is derived from the cutoff choice m$_{min}$ = 10$^{-5}$ eV $$\begin{aligned} \nonumber IH: 1.765550\times10^{-2} ~(eV) \leq |\langle m_{\beta\beta}\rangle| \leq 4.981276\times10^{-2} ~(eV) \\ NH: 1.363476\times10^{-3} ~(eV) \leq |\langle m_{\beta\beta}\rangle| \leq 4.093182\times10^{-3} ~(eV). \label{eq:hierarchy}\end{aligned}$$ The precise calculations of G$^{0\nu}$ and $|M^{0\nu}|$ are needed in order to translate the experimental values of the 0$\nu\beta\beta$ half-lives into $\langle$m$_{\beta\beta}$$\rangle$. With an uncertainty of approximately 7 %, G$^{0\nu}$ is well known [@PGUO]. On the other hand, the calculation of $|M^{0\nu}|$ is a difficult task involving the details of the underlying theoretical models. 
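The expanded expression for $\langle m_{\beta\beta}\rangle$ above can be evaluated numerically for both hierarchies, scanning the Majorana phase signs e$^{i\alpha}$, e$^{i(\beta-2\delta)}$ = $\pm$1 as in the text. The oscillation parameters in the sketch below are illustrative global-fit values, not the exact set used in the paper, so the resulting ranges only approximately reproduce the quoted IH and NH bounds.

```python
import math
import itertools

# Illustrative (assumed) oscillation parameters, in the notation of the text:
s12sq, s13sq = 0.307, 0.021          # sin^2(theta_12), sin^2(theta_13)
dm21sq, dmatmsq = 7.5e-5, 2.4e-3     # mass-squared splittings in eV^2

def mbb_range(m_min, inverted):
    """Min/max of |c12^2 c13^2 m1 + s12^2 c13^2 m2 e^{i a} + s13^2 m3 e^{i b}|
    over the phase choices e^{i a}, e^{i b} = +/-1, for lightest mass m_min."""
    if inverted:                      # IH: m3 < m1 < m2
        m3 = m_min
        m1 = math.sqrt(m3**2 + dmatmsq)
        m2 = math.sqrt(m1**2 + dm21sq)
    else:                             # NH: m1 < m2 < m3
        m1 = m_min
        m2 = math.sqrt(m1**2 + dm21sq)
        m3 = math.sqrt(m1**2 + dmatmsq)
    c13sq = 1.0 - s13sq
    terms = [(1.0 - s12sq) * c13sq * m1, s12sq * c13sq * m2, s13sq * m3]
    vals = [abs(terms[0] + p1 * terms[1] + p2 * terms[2])
            for p1, p2 in itertools.product((1, -1), repeat=2)]
    return min(vals), max(vals)

# Cutoff choice m_min = 1e-5 eV, as in the text:
nh_lo, nh_hi = mbb_range(1e-5, inverted=False)   # ~1.6e-3 ... 3.6e-3 eV
ih_lo, ih_hi = mbb_range(1e-5, inverted=True)    # ~1.8e-2 ... 4.8e-2 eV
```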
Several different theoretical models have been used to compute $|M^{0\nu}|$ for the different A$_{\beta\beta}$, such as the interacting shell model (ISM) [@MJNE], the quasiparticle random phase approximation (QRPA) (and its variants) [@FEDO; @DLFG], the interacting boson model (IBM-2) [@JBAR], the angular-momentum-projected Hartree-Fock-Bogoliubov method (PHFB) [@PKRT], the generating coordinate method (GCM) and the energy density functional method (EDF) [@JENG; @RRTO]. Deviations among their results are the main sources of theoretical uncertainties in the required sensitivity.

  [Theoretical Model (Scheme)]{}                 [$|M^{0\nu}|$]{}
  --------------------------------------------- ------------------
  [Projected Hartree-Fock-Bogoliubov (PHFB)]{}   6.04
  [Generating coordinate method (GCM)]{}         4.81
  [Interacting boson model (IBM)]{}              3.53
  [Shell Model (SM)]{}                           2.62

  : Nuclear matrix elements for the $^{124}$Sn isotope, extracted from references [@VNAN; @RGHR; @JKFI].[]{data-label="table:NME"}

For the $^{124}$Sn isotope, $|M^{0\nu}|$ along with the corresponding theoretical models are listed in Table \[table:NME\]. In the given range of $|M^{0\nu}|$, the PHFB and SM values represent the most optimistic and most conservative scenarios, respectively. Therefore, the required sensitivities corresponding to the other $|M^{0\nu}|$ values lie within this range. Using the range of $\langle$m$_{\beta\beta}$$\rangle$ from Eq. \[eq:hierarchy\] and $|M^{0\nu}|$ from Table \[table:NME\], with the help of Eq. \[eq:Core_Rel\], the corresponding benchmark sensitivities can be calculated in terms of ${\tau_{\frac{1}{2}}^{0\nu}}$. The values of the combined function ($F_{n}$) for the PHFB and SM models are adopted from Ref. [@VNAN] $$\begin{aligned} \nonumber F_{n} = G^{0\nu}\cdot|M^{0\nu}|^{2} = 8.569\times10^{-13}~{\rm yr^{-1}~(PHFB)} \\ = 1.382\times10^{-13}~{\rm yr^{-1}~(SM)}. \label{eq:NME}\end{aligned}$$ Using Eqns. 
\[eq:Core_Rel\], \[eq:hierarchy\] and \[eq:NME\], the required sensitivities in the form of ${\tau_{\frac{1}{2}}^{0\nu}}$ are $$\begin{aligned} \nonumber PHFB \equiv IH:~ 1.228091\times10^{26} (yr) < \tau_{\frac{1}{2}}^{0\nu} < 9.776263\times10^{26} (yr)\\\nonumber ~~~~~~~~~~~~~NH:~ 1.818819\times10^{28} (yr) < \tau_{\frac{1}{2}}^{0\nu} < 1.639143\times10^{29} (yr) \\\nonumber SM \equiv IH:~ 7.614697\times10^{26} (yr) < \tau_{\frac{1}{2}}^{0\nu} < 6.061708\times10^{27} (yr) \\ ~~~~~~~~NH:~ 1.127747\times10^{29} (yr) < \tau_{\frac{1}{2}}^{0\nu} < 1.016340\times10^{30} (yr). \label{eq:Hierarchy_HF}\end{aligned}$$ The current generation of oscillation experiments may reveal Nature’s choice between the two hierarchy options. Moreover, the combined cosmology data may provide a measurement of the sum of the $m_{i}$ [@FCAP; @AGIU]. Thus, it can be expected that the ranges of parameter space in 0$\nu\beta\beta$ searches will be further constrained. From the experimental point of view, the measurement of the half-life ${\tau_{\frac{1}{2}}^{0\nu}}$ of 0$\nu\beta\beta$ relies just on the observed signal ($S_{0\nu}$ (0$\nu\beta\beta$-events)). The relationship between ${\tau_{\frac{1}{2}}^{0\nu}}$ and the observed $S_{0\nu}$ can be derived from the law of radioactive decay $$\Big[ \tau^{0\nu}_{\frac{1}{2}}\Big]^{-1} ~ = ~ \big[ {\rm log_e 2}\big]^{-1} ~ \bigg[ \frac{A}{N_{A}} \bigg] ~ \bigg[ \frac{1}{\Sigma} \bigg] ~ \bigg[ \frac{S_{0\nu}}{\varepsilon_{ROI}} \bigg], \label{eq:Formula}$$ where A is the molar mass of the source $A_{\beta\beta}$, $N_{A}$ is the Avogadro number and $\varepsilon_{ROI}$ is the efficiency of the selected ROI. In the search for 0$\nu\beta\beta$ decay, the ROI around the Q$_{\beta\beta}$ value can be symmetric or asymmetric. The symmetric FWHM ROI at the Q$_{\beta\beta}$ value is the most common choice of experiments. The ROI in the current study is taken to be the FWHM window centered at Q$_{\beta\beta}$, such that the efficiency $\varepsilon_{ROI}$ = 76.1 %. 
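The half-life relation above can be inverted to give the expected number of 0$\nu\beta\beta$ signal events for a given exposure; the following sketch uses the FWHM ROI efficiency of 76.1% quoted in the text, with an illustrative half-life and exposure (the specific numbers are assumptions for the example, not results of the paper):

```python
import math

# S_0nu = ln(2) * (N_A / A) * Sigma * eps_ROI / tau_half, the inversion of
# the radioactive-decay relation for the observed signal counts.
N_A = 6.022e23                      # Avogadro number
A = 124.0                           # molar mass of 124Sn in g/mol

def expected_signal(tau_half_yr, sigma_tonne_yr, eff_roi=0.761):
    n_atoms_per_tonne = 1e6 / A * N_A        # 1 tonne = 1e6 g of 124Sn
    return math.log(2) * n_atoms_per_tonne * sigma_tonne_yr / tau_half_yr * eff_roi

# e.g. tau = 1e26 yr with a 1 tonne-year exposure of pure 124Sn:
s = expected_signal(1e26, 1.0)      # roughly 26 events
```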
Every experiment needs to use an enriched isotope to obtain better sensitivity. For simplicity, and so that the results can easily be rescaled, both the IA of the 0$\nu\beta\beta$ isotopes in the target and the other experimental efficiencies ($\varepsilon_{expt}$) are taken to be 100 %. In practice, the required combined exposure $\Sigma^{'}$ of $A_{\beta\beta}$ can be obtained from the ideal $\Sigma$ of the present work via $\Sigma^{'}$ = $\Sigma$/(IA $\cdot$ $\varepsilon_{expt}$).

Experimental constraints on sensitivity
=======================================

Background events are always present in realistic experiments and degrade the sensitivity for identifying spectral peaks at Q$_{\beta\beta}$. The sources of background in the search for 0$\nu\beta\beta$ can be divided into two categories: intrinsic and ambient. The ambient background is mostly induced by external $\gamma$-rays, especially from trace radioactivity present in the experimental hardware and from cosmogenically activated isotopes in the vicinity of the target volume. The total ambient background count N$_{a}$ in the 0$\nu\beta\beta$ ROI can be obtained from the expression $$N_{a}~ =~\Lambda_{a}\,\Sigma\,[\Delta\, Q_{\beta\beta}], \label{eq:Ambient}$$ where $\Lambda_{a}$ is the flat ambient background rate in units of counts/tonne-year-keV (/tyk). The intrinsic background in the 0$\nu\beta\beta$ search comes from the 2$\nu\beta\beta$ decay process. It is therefore inherently associated with the A$_{\beta\beta}$ and directly proportional to $\Sigma$. The finite detector resolution leads to irreducible B$_{2\nu}$ events that contaminate the 0$\nu\beta\beta$ ROI. The sum (B$_{0}$ = B$_{2\nu}$+N$_{a}$) is therefore the total background count in the selected ROI. Even if the ambient background is reduced to a minimal level (N$_{a}$ = 0), the irreducible background B$_{2\nu}$ remains in the ROI. The contamination from B$_{2\nu}$ depends mainly on the detector $\Delta$ and on $\Sigma$.
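The ambient contribution is a simple product of rate, exposure and ROI width, as the expression above shows. The snippet below evaluates it for a benchmark configuration; the Q$_{\beta\beta}$ value of $^{124}$Sn used for the ROI width ($\approx$2293 keV) is an assumption made here for illustration, not a number quoted in this section.

```python
Q_BB = 2293.0   # Q-value of 124Sn [keV]; assumed here for illustration

def ambient_counts(rate_tyk, exposure_ty, resolution):
    """N_a = Lambda_a * Sigma * (Delta * Q_bb): flat ambient background
    integrated over an ROI of width Delta * Q_bb [keV]."""
    roi_width_kev = resolution * Q_BB
    return rate_tyk * exposure_ty * roi_width_kev

# Lambda_a = 0.1 counts/tyk, Sigma = 1 ty, Delta_0 = 0.5%
print(ambient_counts(0.1, 1.0, 0.005))   # ~1.15 counts in the ROI
```

Even a modest 0.1/tyk rate thus contributes of order one count per tonne-year in a 0.5 % FWHM window, which is why the background rate dominates the exposure requirements discussed below.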
The lower limit on $\tau^{0\nu}_{\frac{1}{2}}$ due to B$_{2\nu}$ alone, as a function of $\Delta$ for $\Sigma$ = 0.1 and 1.0 tonne-year (ty), is represented by the continuous and dotted lines respectively in Fig. . The IH and NH bands corresponding to the SM and PHFB $|M^{0\nu}|$ (from Eqns.  and ) are superimposed to show the impact of B$_{2\nu}$ on future $^{124}$Sn-based experiments.

![image](fig2.pdf){width="\textwidth"}

![image](fig3.pdf){width="\textwidth"}

The conversion of $\tau^{0\nu}_{\frac{1}{2}}$ into a $\langle$m$_{\beta\beta}$$\rangle$ sensitivity faces the theoretical uncertainty of $|M^{0\nu}|$ (Eqn. ). Therefore, the $\langle$m$_{\beta\beta}$$\rangle$ sensitivity due to B$_{2\nu}$ forms a band (apart from the IH and NH bands) in the $\langle$m$_{\beta\beta}$$\rangle$ vs $\Delta$ parameter space, as shown in Fig. . The upper and lower lines of the $|M^{0\nu}|$ uncertainty band arise from the SM and PHFB values of $|M^{0\nu}|$, respectively. Thus the SM value of $|M^{0\nu}|$ imposes more severe requirements on the experimental sensitivity than the PHFB one. Taking the maximum range of the $|M^{0\nu}|$ uncertainty at $\Sigma_{0}$ = 1.0 ty, the zone safe from B$_{2\nu}$ events for covering the NH begins at $\Delta$ $<$ 1.61% for SM and $\Delta$ $<$ 2.19% for PHFB. For the IH case the safe zone begins at $\Delta$ $<$ 3.88% for SM and $\Delta$ $<$ 5.34% for PHFB. Consequently, the TIN.TIN experiment would be only marginally affected by B$_{2\nu}$ events ($\sim$ 3.08 $\times10^{-6}$ counts) if it reaches an energy resolution of $\Delta_{0}$ = 0.5% at Q$_{\beta\beta}$, which is close to the energy resolution of 0.31% at Q$_{\beta\beta}$ achieved by the CUORE experiment (a bolometric detector using $^{130}$Te) [@ALDU].

Statistical significance of signal
==================================

Rare-event searches such as 0$\nu\beta\beta$ and dark matter naturally demand very-low-background experiments [@NEH1].
Understanding and suppressing the background would significantly improve the experimental sensitivity. In the design stage of an experiment, the averaged N$_{a}$ and B$_{2\nu}$ can be estimated precisely from prior knowledge of the most relevant background sources and from simulation studies [@NEH2; @VSNG; @NEH3; @NEH4]. The low background counts in the ROI are subject to Poisson fluctuations, so an excess of counts over the expected background may originate from upward fluctuations of the background channels. The discovery potential (D.P.) and sensitivity level (S.L.) can be expressed in terms of this background fluctuation. The signal counts are calculated at the 3$\sigma$ S.L. to claim strong evidence, while 5$\sigma$ is required for the D.P.

![Variation of S$_{0\nu}$ corresponding to B$_{0}$ under the 3$\sigma$ S.L., 5$\sigma$ D.P. and 90% C.L. schemes of signal identification.[]{data-label="fig:S_vs_B"}](fig4.pdf){width="8.5cm"}

The Poisson distribution is discrete and provides significance levels only at certain values. A continuous representation of the Poisson distribution is obtained from the normalized upper incomplete gamma function, which gives the probability [@SHAB] $$F(k) = \frac{\Gamma(k+1, \lambda)}{\Gamma(k+1)}, \quad k>0, \label{eq:GAMMA}$$ with $$\Gamma(k, \lambda) = \int_{\lambda}^{\infty} e^{-t}~t^{k-1}~dt \quad \mathrm{and} \quad \Gamma(k) = \int_{0}^{\infty} e^{-t}~t^{k-1}~dt, \label{eq:_Final_Gamma}$$ where $\lambda$ is the mean value of the distribution, k is the number of counts, $\Gamma(k, \lambda)$ is the upper incomplete gamma function and $\Gamma(k)$ is the ordinary gamma function. With Eq. , the variation in sensitivity becomes free of discrete steps. For completeness, the signal counts at 90% C.L. are also calculated from the Poisson distribution, as illustrated in Fig. . For a very low expected background, the required S$_{0\nu}$ for an experiment is chosen to be 1 event.
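The threshold logic described here (Poisson background fluctuations mapped to Gaussian-equivalent significances) can be sketched with the standard library alone. This is a discrete-count sketch using the exact Poisson tail rather than the continuous gamma-function form of Eq. , and the floor at 1 event implements the background-free criterion mentioned above; the function names are illustrative.

```python
import math

def tail_prob(n_sigma):
    # one-sided Gaussian tail probability for an n-sigma significance
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

def poisson_sf(n, lam):
    # P(X >= n) for X ~ Poisson(lam), integer n >= 0
    cdf = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(n))
    return 1.0 - cdf

def required_signal(lam, n_sigma):
    """Smallest signal S (floored at 1 event, the background-free
    criterion) such that a pure-background upward fluctuation to
    lam + S or more is less probable than the n-sigma tail."""
    p_target = tail_prob(n_sigma)
    n = 1
    while poisson_sf(n, lam) >= p_target:
        n += 1
    return max(n - lam, 1)

print(required_signal(0.0, 3))   # background-free: 1 event suffices
print(required_signal(1.0, 3))   # B0 = 1 count needs S_3sigma = 5 events
```

The sketch reproduces the qualitative behaviour of the figure: the required signal rises with the expected background and saturates at 1 event in the background-free limit.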
This corresponds to the sensitivity at the background-free level. The background-free criterion depends on the chosen statistical scheme: the background-free scenario is indicated in Fig.  by the horizontal line at S$_{0\nu}$ = 1 event. As the background decreases, the significance of S$_{0\nu}$ increases; this increase in significance appears in Fig.  as the flattening of the curves. On reaching the background-free criterion, the 90% C.L. curve extends up to the 2$\sigma$ level, while the 3$\sigma$ S.L. curve extends up to the 5$\sigma$ D.P. Using these schemes for the identification of S$_{0\nu}$, the ${\tau_{\frac{1}{2}}^{0\nu}}$ sensitivity of Eq.  takes the following form: $$\Big[\tau^{0\nu}_{\frac{1}{2}}\Big]^{-1} ~ = ~ \big[{\rm log_e 2}\big]^{-1} ~ \bigg[\frac{A}{N_{A}} \bigg] ~ \bigg[\frac{1}{\Sigma} \bigg] ~ \bigg[\frac{S_{90\%}~|~S_{3\sigma}~|~S_{5\sigma}}{\varepsilon_{ROI}} \bigg], \label{eq:Final_Formula}$$ where S$_{0\nu}$ of Eq.  is replaced by S$_{90\%}$, S$_{3\sigma}$ or S$_{5\sigma}$ to obtain the ${\tau_{\frac{1}{2}}^{0\nu}}$ sensitivity at the 90% C.L., 3$\sigma$ S.L. or 5$\sigma$ D.P. level, respectively. Under these schemes, the required sensitivity for the $^{124}$Sn isotope is studied in terms of the required $\Lambda$ and $\Sigma$ at $\Delta_{0}$ = 0.5% at Q$_{\beta\beta}$. These sensitivities are calculated with the aim of reaching the most conservative (min.) and most optimistic (max.) regimes of the IH and NH (see Eq. ).

0$\nu\beta\beta$ half-life sensitivity as a function of $\Sigma$ and $\Lambda$ at $\Delta_{0}$
==============================================================================================

The physics accessible with 0$\nu\beta\beta$ experiments is the effective Majorana neutrino mass $\langle$m$_{\beta\beta}$$\rangle$, which is a linear combination of the neutrino mass eigenstates. The minimum desired sensitivity of the TIN.TIN experiment is therefore to probe the IH mass region.
$\tau^{0\nu}_{\frac{1}{2}}$ is inversely proportional to the square of $\langle$m$_{\beta\beta}$$\rangle$. The variation of $\tau^{0\nu}_{\frac{1}{2}}$ at 3$\sigma$ S.L. and 90% C.L. as a function of $\Sigma$, at fixed $\Delta_{0}$ and for various background rates ($\Lambda$), is depicted in Figs.  and  respectively. The hierarchy bands arising from the uncertainty of $|M^{0\nu}|$ and the range of $\langle$m$_{\beta\beta}$$\rangle$ (Eqns.  and ) are superimposed.

![image](fig5.pdf){width="\textwidth"}

![image](fig6.pdf){width="\textwidth"}

The required sensitivity in terms of the exposure $\Sigma$, for benchmark background rates $\Lambda$ = (0, 0.1, 1.0, 10.0)/tyk at $\Delta_{0}$, to just enter each hierarchy is summarized in Table . In order to enter the IH$_{PHFB}$ mass region with $\Lambda$ = 0.1/tyk, the TIN.TIN experiment must have $\Sigma$ = 0.12 ty at 3$\sigma$ S.L. ($\Sigma$ = 4.80$\times10^{-2}$ ty at 90% C.L.), while the IH$_{SM}$ mass region requires $\Sigma$ = 1.74 ty at 3$\sigma$ S.L. ($\Sigma$ = 0.62 ty at 90% C.L.). Similarly, at the same background rate of $\Lambda$ = 0.1/tyk, entering the NH requires $\Sigma$ = 5.45$\times10^{2}$ ty at 3$\sigma$ S.L. (1.67$\times10^{2}$ ty at 90% C.L.) for NH$_{PHFB}$ and 2.01$\times10^{4}$ ty at 3$\sigma$ S.L. (6.04$\times10^{3}$ ty at 90% C.L.) for NH$_{SM}$. The uncertainty in $|M^{0\nu}|$ translates into an uncertainty in the required sensitivity; a precise calculation of $|M^{0\nu}|$ across the different models is therefore essential.
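The background-free entries in these requirements (e.g. $\Sigma$ = 4.80$\times10^{-2}$ ty for IH$_{PHFB}$ at 90% C.L.) follow from inverting Eq.  with S$_{0\nu}$ = 1 event. A minimal sketch under the same ideal assumptions (100 % IA and $\varepsilon_{expt}$, illustrative function names):

```python
import math

N_A, A, EPS_ROI = 6.022e23, 124.0, 0.761   # Avogadro number, 124Sn molar mass, ROI efficiency

def required_exposure_ty(tau_target_yr, signal_counts=1):
    """Exposure Sigma [tonne-year] at which `signal_counts` 0nbb events
    are expected for a target half-life tau_target_yr (background-free)."""
    atoms_per_ty = (N_A / A) * 1.0e6        # source atoms per tonne, times 1 yr
    return tau_target_yr * signal_counts / (math.log(2) * atoms_per_ty * EPS_ROI)

# lower edge of the IH band for the PHFB matrix element (Eq. for the hierarchies)
print(f"{required_exposure_ty(1.228091e26):.2e} ty")  # ~4.8e-2 ty, i.e. Sigma_min for IH_PHFB
```

This reproduces the minimum-exposure figure quoted above and in Table , confirming that the 90% C.L. requirement at $\Lambda$ = 0.1/tyk is already background-free for IH$_{PHFB}$.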
  [Model (C.L.)]{}            IH: $\Lambda$=0.0     $\Lambda$=0.1         $\Lambda$=1.0        $\Lambda$=10.0       NH: $\Lambda$=0.0    $\Lambda$=0.1        $\Lambda$=1.0        $\Lambda$=10.0
  ------------------------- --------------------- --------------------- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
  [SM (3$\sigma$ S.L.)]{}    0.30                  1.74                  10.15                92.12                44.03                2.01$\times10^{4}$   2.18$\times10^{5}$   2.47$\times10^{6}$
  [SM (90% C.L.)]{}          0.30                  0.62                  3.18                 27.84                44.03                6.04$\times10^{3}$   6.28$\times10^{4}$   7.04$\times10^{5}$
  [PHFB (3$\sigma$ S.L.)]{}  4.80$\times10^{-2}$   0.12                  0.38                 2.55                 7.11                 5.45$\times10^{2}$   5.27$\times10^{3}$   5.82$\times10^{4}$
  [PHFB (90% C.L.)]{}        4.80$\times10^{-2}$   4.80$\times10^{-2}$   0.13                 0.78                 7.11                 1.67$\times10^{2}$   1.56$\times10^{3}$   1.67$\times10^{4}$

  : Required exposure $\Sigma$ (ty) to just enter the IH and NH regions at $\Delta_{0}$, for benchmark background rates $\Lambda$ (in /tyk).

The potential for background improvement is explored in the parameter space of $\Sigma$ and $\Lambda$ at $\Delta_{0}$, in conjunction with the $|M^{0\nu}|$ uncertainty bands for both the IH and NH (Figs.  and ). Reducing the background relaxes the requirement imposed on $\Sigma$; background improvement is therefore a necessity for the experiment and plays a crucial role in covering the hierarchy regions completely. The required sensitivity in terms of $\Sigma$ at $\Delta_{0}$ to completely cover each hierarchy is summarized in Table  for both $|M^{0\nu}|$ values at 3$\sigma$ S.L. and 90% C.L.

![image](fig7.pdf){width="\textwidth"}

![image](fig8.pdf){width="\textwidth"}

At the previously chosen background rate $\Lambda$ = 0.1/tyk, covering the IH$_{PHFB}$ requires $\Sigma$ = 2.61 ty at 3$\sigma$ S.L. ($\Sigma$ = 0.91 ty at 90% C.L.), while for IH$_{SM}$ this requirement becomes $\Sigma$ = 65.88 ty at 3$\sigma$ S.L. ($\Sigma$ = 20.82 ty at 90% C.L.). Similarly, covering the NH$_{PHFB}$ at 3$\sigma$ S.L. requires $\Sigma$ = 4.27$\times10^{4}$ ty ($\Sigma$ = 1.27$\times10^{4}$ ty at 90% C.L.), and covering the NH$_{SM}$ demands an exposure of $\Sigma$ = 1.76$\times10^{6}$ ty at 3$\sigma$ S.L. ($\Sigma$ = 5.08$\times10^{5}$ ty at 90% C.L.).
  [Model (C.L.)]{}            IH: $\Lambda$=0.0    $\Lambda$=0.1        $\Lambda$=1.0        $\Lambda$=10.0       NH: $\Lambda$=0.0    $\Lambda$=0.1        $\Lambda$=1.0        $\Lambda$=10.0
  ------------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
  [SM (3$\sigma$ S.L.)]{}    2.37                 65.88                5.86$\times10^{2}$   6.11$\times10^{3}$   3.97$\times10^{2}$   1.76$\times10^{6}$   2.00$\times10^{7}$   2.28$\times10^{8}$
  [SM (90% C.L.)]{}          2.37                 20.82                1.77$\times10^{2}$   1.77$\times10^{3}$   3.97$\times10^{2}$   5.08$\times10^{5}$   5.69$\times10^{6}$   6.42$\times10^{7}$
  [PHFB (3$\sigma$ S.L.)]{}  0.38                 2.61                 16.36                1.51$\times10^{2}$   64.05                4.27$\times10^{4}$   4.70$\times10^{5}$   5.34$\times10^{6}$
  [PHFB (90% C.L.)]{}        0.38                 0.91                 5.09                 45.68                64.05                1.27$\times10^{4}$   1.35$\times10^{5}$   1.52$\times10^{6}$

  : Required exposure $\Sigma$ (ty) to completely cover the IH and NH regions at $\Delta_{0}$, for benchmark background rates $\Lambda$ (in /tyk).

The minimum exposure $\Sigma_{min}$ corresponding to 1 S$_{0\nu}$ event is obtained at very low background (close to $\Lambda$ = 0/tyk) and is shown by the flattened region on the left of Figs.  and . $\Sigma_{min}$ is an important parameter that every related experiment aims to approach by improving the achieved background rate $\Lambda_{0}$ $\rightarrow$ $\Lambda$ = 0/tyk; its value gives a clear indication of how the required exposure $\Sigma$ grows with $\Lambda$ in the experiment. Explicitly, just entering the IH$_{PHFB}$ needs $\Sigma_{min}$ = 4.80$\times10^{-2}$ ty, and for IH$_{SM}$ the value is $\Sigma_{min}$ = 0.30 ty. Similarly, entering the NH$_{PHFB}$ requires $\Sigma_{min}$ = 7.11 ty, and for NH$_{SM}$ this value becomes 44.03 ty. Covering the IH$_{PHFB}$ requires $\Sigma_{min}$ = 0.38 ty, and covering the IH$_{SM}$ requires 2.37 ty. Covering the NH$_{PHFB}$ demands $\Sigma_{min}$ = 64.05 ty, and for NH$_{SM}$ this requirement reaches 3.97$\times10^{2}$ ty.

Summary and prospects
=====================

The next-generation neutrinoless double-beta decay experiments such as TIN.TIN have the primary aim of probing the IH region.
We have investigated the experimental parameters, namely the energy resolution, exposure and background rate, needed to meet this goal in terms of the background-fluctuation sensitivity at 3$\sigma$ S.L. and 90% C.L. This background-fluctuation sensitivity study can be straightforwardly extended to the discovery potential of any experiment. Our study shows that an energy resolution of 0.5% at Q$_{\beta\beta}$ for the TIN.TIN detector is good enough to overcome the two-neutrino double-beta decay background events when probing the IH. When probing the NH region, however, the two-neutrino double-beta decay events start contributing to the total background, and the detector resolution must be improved to diminish the 2$\nu\beta\beta$ contribution. The ambiguity in the nuclear matrix elements leads to a severe uncertainty in the required experimental sensitivity. We observe that with the PHFB model the required sensitivity in terms of energy resolution, exposure and background rate is more optimistic than with the SM model. Accurate knowledge of the nuclear matrix element is required to minimize the uncertainty in the required sensitivity; furthermore, it is the essential parameter for determining the effective Majorana neutrino mass once the 0$\nu\beta\beta$ process is observed. For both nuclear matrix elements, at 3$\sigma$ S.L. and 90% C.L., the optimistic region of the required sensitivity in terms of the background rate to enter the hierarchies corresponds to $\Lambda$ $\leq$ 0.1/tyk, and the pessimistic region to $\Lambda$ $>$ 0.1/tyk. Although entering the IH$_{PHFB}$ can tolerate a background rate of up to $\Lambda$ = 10/tyk at 90% C.L., the NH$_{PHFB}$ requires $\Lambda$ $\ll$ 0.1/tyk.
The TIN.TIN experiment, at an energy resolution of 0.5% at Q$_{\beta\beta}$, needs a minimum exposure of $\Sigma_{min}$ = 0.38 ty to cover the IH$_{PHFB}$ completely, while in the conservative scenario covering the IH$_{SM}$ requires $\Sigma_{min}$ = 2.37 ty. Similarly, covering the NH$_{PHFB}$ requires $\Sigma_{min}$ = 64.05 ty, whereas the NH$_{SM}$ needs $\Sigma_{min}$ = 3.97$\times10^{2}$ ty. This $\Sigma_{min}$ is the exposure necessary to observe a minimum of 1 signal event at the background-free level. Although $\Sigma_{min}$ corresponds to an ideal case, it provides the limiting value of the required exposure.

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors are grateful to the collaborators of the TEXONO Program. This work is supported by the Academia Sinica Investigator Award AS-IA-106-M02. Author M. K. Singh acknowledges the University Grants Commission (UGC), India, for providing financial support.

References {#references .unnumbered}
==========

[99]{}
--- abstract: 'The cascade rate of a passive scalar and Batchelor's constant in scalar turbulence are calculated using the flux formula. This calculation is done to first order in the perturbation series. Batchelor's constant in three dimensions is found to be approximately 1.25. In higher dimensions, the constant increases as $d^{1/3}$.' author: - 'Mahendra K. Verma [^1]' date: 24 August 2001 title: Field theoretic calculation of scalar turbulence ---

Introduction
============

Perturbative field-theoretic techniques have been very useful in turbulence research. One of the celebrated field-theoretic methods, the renormalization group (RG), has been applied to fluid turbulence [@FNS; @YakhOrsz; @McCo:book], scalar turbulence [@YakhOrsz; @Zhou:scalar], MHD turbulence [@MKV:MHD_PRE], etc. In RG analysis one calculates the renormalized parameters at large length scales. In addition to RG, field-theoretic techniques can also be applied to calculate turbulent cascade rates [@Lesl:book; @MKV:MHD_PRE]. In this paper we will calculate the cascade rates of a passive scalar using perturbative techniques. From this calculation we can also obtain Batchelor's constant, which is very important for large-eddy simulations. The study of passive scalars is one of the important areas in turbulence research. It finds application in the evolution of temperature fields, pollutant diffusion, etc. The phenomenology of the passive scalar is well developed [@Lesl:book], and its predictions are in agreement with the experimental results. According to the phenomenology, the energy spectra of both the velocity field ${\bf u}$ and the scalar field $\psi$ in the inertial-convective range are proportional to $k^{-5/3}$. Note that in the inertial-convective range both the nonlinear terms ${\bf u \cdot \nabla u}$ and ${\bf u \cdot \nabla \psi}$ dominate the viscous term. However, there exist two other ranges depending on the value of the Prandtl number (the ratio of viscosity and diffusivity).
In this paper we will only focus on the inertial-convective range. Regarding the calculation of the renormalized viscosity and diffusivity for a passive scalar admixture, Yakhot and Orszag [@YakhOrsz] adopted an $\epsilon$-expansion, while Zhou and Vahala [@Zhou:scalar] and Lin et al. [@Lin] follow a recursive procedure based on the original idea of McComb and his group ([@McCo:book] and references therein). Adzhemyan et al. [@Adzh] applied De Dominicis and Martin's [@DeDo] procedure for fluid turbulence to passive scalars and computed the renormalized parameters. Earlier, Wyld [@Wyld] had given a perturbative expansion of the Navier-Stokes equation. Canuto and Dubovikov [@Canu1] and Canuto et al. [@Canu3] started with Wyld's formalism and computed the renormalized diffusivity for the passive scalar; they also computed Batchelor's constant. Turbulent cascade rates, or fluxes, play an important role in turbulence calculations. A flux measures the transfer of a given quantity from inside a wavenumber sphere to the outside. In fluid turbulence, Kraichnan [@Krai:59] applied the direct interaction approximation and calculated the flux. Later, cascade rates were calculated by many researchers using various techniques, e.g., the Eddy-Damped Quasi-Normal Markovian (EDQNM) closure scheme, RG, etc. Here we are interested in the cascade rate of the passive scalar, which quantifies how scalar fluctuations at large length scales diffuse to small length scales. In this paper we apply perturbative techniques to calculate this cascade rate; in our scheme it is calculated using the flux formula and the renormalized parameters. In section 2 we recapitulate the earlier RG calculation [@Zhou:scalar] and extend its results to higher dimensions. In the subsequent section we apply the perturbative technique and calculate the cascade rate in the inertial-convective range to first order.
The final expression involves the energy spectrum, for which we substitute the $k^{-5/3}$ form obtained from the phenomenology. From this procedure we also calculate Batchelor's constant. We have extended our calculations to higher space dimensions, because higher-dimensional field theory usually provides important insights into the nature of the nonlinear interactions [@Nelk]. The outline of the paper is as follows: in section 2 we provide the definitions and a recapitulation of the renormalization procedure for the passive scalar. In section 3 we carry out the calculation of the flux of the passive scalar and of Batchelor's constant. Section 4 contains the conclusions.

Renormalization of viscosity and diffusivity revisited
======================================================

Earlier calculations of renormalization in scalar turbulence have been carried out by Yakhot and Orszag [@YakhOrsz], Zhou and Vahala [@Zhou:scalar], and Lin et al. [@Lin]. In this section we recapitulate very briefly Zhou and Vahala's calculation for the passive scalar and extend their results to higher dimensions. Zhou and Vahala's calculation is based on the recursive scheme proposed by McComb and his coworkers ([@McCo:book] and references therein). The equations for the velocity ${\bf u}$ and passive scalar $\psi$ fields in Fourier space are $$\left( -i\omega + \nu k^2 \right) u_i (\hat{k}) = - \frac{i}{2} P^{+}_{ijm}({\bf k}) \int d\hat{p}\; u_j (\hat{p})\, u_m (\hat{k}-\hat{p}) \label{eqn:NSk}$$ $$\left( -i\omega + \kappa k^2 \right) \psi (\hat{k}) = -i k_j \int d\hat{p}\; u_j (\hat{p})\, \psi (\hat{k}-\hat{p}) \label{eqn:scalark}$$ with $$\begin{aligned} P^{+}_{ijm}({\bf k}) & = & k_j P_{im}({\bf k}) + k_m P_{ij}({\bf k});\\ P_{im}({\bf k}) & = & \delta_{im}-\frac{k_i k_m}{k^2};\\ \hat{k} & = & ({\bf k},\omega);\\ \int d\hat{p} & = & \int \frac{d {\bf p}\, d\omega'}{(2 \pi)^{d+1}}.\end{aligned}$$ Here $\nu$ and $\kappa$ are the viscosity and diffusivity respectively, and $d$ is the space dimension; the pressure term has been eliminated using the incompressibility condition $k_i u_i({\bf k}) = 0$. In the recursive RG procedure the wavenumber range $(k_N,k_0)$ is divided logarithmically into N shells.
The effective parameters are obtained by eliminating the high-wavenumber shells iteratively. We denote the high-wavenumber shells by $k^>$ and the remaining wavenumber region by $k^<$. In this procedure the field variables $u^>_i(\hat{k})$ and $\psi^>(\hat{k})$ are assumed to be gaussian with zero mean, with $$\la u_i^{>} (\hat{p})\, u_j^{>} (\hat{q}) \ra = P_{ij}({\bf p})\, C^{u} (\hat{p})\, \delta(\hat{p}+\hat{q}) \label{eqn:Cu}$$ $$\la \psi^{>} (\hat{p})\, \psi^{>} (\hat{q}) \ra = C^{\psi} (\hat{p})\, \delta(\hat{p}+\hat{q}) \label{eqn:Cpsi}$$ where $C^{u}(\hat{p})$ and $C^{\psi}(\hat{p})$ are the velocity and scalar correlation functions respectively. If we denote by $\nu_{(n)}$ and $\kappa_{(n)}$ the viscosity and diffusivity after the elimination of $n$ shells, then the elimination of the next shell yields the following equations to first order in perturbation theory: $$\left( -i\omega + \nu_{(n)} k^2 + \delta\nu_{(n)} k^2 \right) u_i^{<}(\hat{k}) = - \frac{i}{2} P^{+}_{ijm}({\bf k}) \int d\hat{p}\; u_j^{<} (\hat{p})\, u_m^{<} (\hat{k}-\hat{p})$$ $$\left( -i\omega + \kappa_{(n)} k^2 + \delta\kappa_{(n)} k^2 \right) \psi^{<}(\hat{k}) = -i k_j \int d\hat{p}\; u_j^{<} (\hat{p})\, \psi^{<} (\hat{k}-\hat{p})$$ where the corrections $\delta\nu_{(n)}$ and $\delta\kappa_{(n)}$ are given by one-loop Feynman diagrams. In these diagrams, the solid, wiggly (photon), and curly (gluon) lines represent the correlation function $\la u_i u_j \ra$ and the Green functions $G^u$ and $G^{\psi}$ respectively. The filled circle represents the $(-i/2) P^{+}_{ijm}$ vertex, while the empty circle represents the $-i k_j$ vertex. The RG procedure adopted here is the same as that of Zhou and Vahala [@Zhou:scalar]. Some of the notation used here is close to that of the MHD turbulence calculation of Verma [@MKV:MHD_PRE; @MKV:MHDRG]. The frequency dependence of the correlation functions is taken as $C^{u}(k,\omega)=2 C^{u}(k) \Re(G^{u}(k,\omega))$ and $C^{\psi}(k,\omega)=2 C^{\psi}(k) \Re(G^{\psi}(k,\omega))$.
With this assumption, the expressions corresponding to the Feynman diagrams are $$\delta\nu_{(n)}(k) = \frac{1}{(d-1)k^2} \int^{\Delta}_{{\bf p+q=k}} \frac{d{\bf p}}{(2\pi)^d}\; S_1(k,p,q)\, \frac{C^{u}(q)}{\nu_{(n)}(p)p^2 + \nu_{(n)}(q)q^2} \label{eqn:nu}$$ $$\delta\kappa_{(n)}(k) = \frac{1}{k^2} \int^{\Delta}_{{\bf p+q=k}} \frac{d{\bf p}}{(2\pi)^d}\; S_2(k,p,q)\, \frac{C^{u}(q)}{\kappa_{(n)}(p)p^2 + \nu_{(n)}(q)q^2} \label{eqn:kappa}$$ (the integrals run over the shell $\Delta$ being eliminated) with $$\begin{aligned} S_1(k,p,q) & = & kp \left( (d-3)z+2 z^3+(d-1)xy \right)\\ S_2(k,p,q) & = & k p \left( z+x y \right)\end{aligned}$$ The quantities $x, y,$ and $z$ are the cosines of the angles facing the sides $k$, $p$ and $q$ of the triangle ${\bf k = p+q}$: $$x= - \frac{{\bf p \cdot q}}{pq}; \quad y=\frac{{\bf k \cdot q}}{kq}; \quad z=\frac{{\bf k \cdot p}}{kp}.$$ The effective viscosity and diffusivity after the elimination of the $(n+1)$-th shell are $$(\nu,\kappa)_{(n+1)} (k) = (\nu,\kappa)_{(n)} (k) + \delta(\nu,\kappa)_{(n)} (k) \label{eqn:nukappa_n}$$ The spectrum $C^{u}(k)$ can be written in terms of the one-dimensional energy spectrum $E^{u}(k)$ as $$C^{u}(k) = \frac{2 (2\pi)^d}{S_d (d-1)}\, k^{-(d-1)} E^{u}(k) \label{eqn:Cu_k}$$ where $S_d$ is the surface area of the $d$-dimensional unit sphere. It is known that $E^u(k)$ follows Kolmogorov's spectrum, i.e., $$E^u(k) = K^u (\Pi^u)^{2/3} k^{-5/3} \label{eqn:Eu_k}$$ where $\Pi^u$ is the kinetic-energy flux, and $K^u$ is Kolmogorov's constant for fluid turbulence. Using dimensional arguments we find that $\nu_{(n)}$ and $\kappa_{(n)}$ have the following forms: $$(\nu,\kappa)_{(n)} (k_n k') = (K^u)^{1/2} (\Pi^u)^{1/3} k_n^{-4/3} (\nu,\kappa)_{(n)}^{*} (k')$$ with $k=k_{n+1}k'\ (k' < 1)$. The large-$n$ limits of $\nu_{(n)}^* (k')$ and $\kappa_{(n)}^* (k')$ are expected to be universal functions in the RG sense. We solve for $\nu_{(n)}^*(k')$ and $\kappa_{(n)}^*(k')$ iteratively using Eqs. (\[eqn:nu\], \[eqn:kappa\], \[eqn:nukappa\_n\]). We take $h=0.7$, and start with constant $\nu_{(0)}^*$ and $\kappa_{(0)}^*$. We iterate the process till $\nu^*_{(n+1)}(k') \approx \nu^*_{(n)}(k')$ and $\kappa^*_{(n+1)}(k') \approx \kappa^*_{(n)}(k')$, that is, till they converge. We find that the iteration process converges; the limiting values $\nu^*$ and $\kappa^*$ are shown in Table \[tab:scalar\]. We can draw many interesting conclusions from the above results. Since the scalar does not appear in the equation for $u$, the $\nu^*$ computed here is the same as that obtained for fluid turbulence.
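As a quick arithmetic cross-check of the converged values, the turbulent Prandtl number quoted in Table \[tab:scalar\] should equal the ratio $\nu^*/\kappa^*$ of the fixed-point amplitudes. The rows below are copied from the table; only the low-dimension rows are checked, since the two-digit rounding of $\nu^*$ and $\kappa^*$ distorts the ratio at larger $d$.

```python
# (d, nu*, kappa*, quoted Pr_turb) rows copied from Table [tab:scalar]
rows = [
    (3, 0.36, 0.85, 0.42),
    (4, 0.42, 0.69, 0.61),
    (7, 0.38, 0.48, 0.80),
    (10, 0.34, 0.39, 0.87),
]

for d, nu_star, kappa_star, pr_quoted in rows:
    pr = nu_star / kappa_star
    # quoted Pr_turb agrees with nu*/kappa* within the table's rounding
    assert abs(pr - pr_quoted) < 0.01, (d, pr, pr_quoted)
    print(d, f"{pr:.3f}")
```

The ratios also show the trend discussed below: $Pr_{turb}$ rises toward unity as the dimension grows.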
In Table \[tab:scalar\] we have listed the renormalized diffusivity $\kappa^*$ and the turbulent Prandtl number $Pr_{turb}$. For $d=3$, $\kappa^*=0.85$ and $Pr_{turb} = \nu^*/\kappa^*=0.42$. These quantities vary a bit with $h$, but remain roughly in the same range; the error in our estimates of the parameters is of the order of 0.1. Our results are in the same range as those obtained by Zhou and Vahala [@Zhou:scalar]. We have also carried out the above analysis for higher space dimensions; the calculated $\kappa^*$ and $Pr_{turb}$ are listed in Table \[tab:scalar\]. For large $d$, $\nu^* \approx \kappa^* \propto d^{-1/2}$. This $d$ dependence is in agreement with the finding of Fournier and Frisch for fluid turbulence [@FourFris], and it also implies that $Pr_{turb} \approx 1$ for large $d$. In two dimensions the scalar is not constrained by a dual energy-enstrophy conservation, unlike the velocity field; the RG analysis for two-dimensional scalar turbulence is beyond the scope of this paper.

Calculation of cascade rates
============================

In this section we compute the cascade rates of $u$ and $\psi$, and Batchelor's constant. To this end we use the flux formulas together with the renormalized parameters computed in the previous section. The time evolution of the equal-time correlation functions $C^u$ and $C^{\psi}$ (defined by Eqs. \[\[eqn:Cu\], \[eqn:Cpsi\]\]) is given by [@Lesl:book; @Stan:book; @MKV:MHDflux; @Dar:flux] $$\left( \frac{d}{dt} + 2 \nu k^2 \right) C^{u}({\bf k},t,t) = \int_{{\bf k'+p+q=0}} \frac{d{\bf p}}{(2\pi)^d} \left[S^{uu}({\bf k'|p|q})+S^{uu}({\bf k'|q|p})\right] \label{eqn:Cu_t}$$ $$\left( \frac{d}{dt} + 2 \kappa k^2 \right) C^{\psi}({\bf k},t,t) = \int_{{\bf k'+p+q=0}} \frac{d{\bf p}}{(2\pi)^d} \left[S^{\psi\psi}({\bf k'|p|q})+S^{\psi\psi}({\bf k'|q|p})\right] \label{eqn:Cpsi_t}$$ where $$S^{uu}({\bf k'|p|q}) = -\Im\left( \left[{\bf k' \cdot u(q)}\right] \left[{\bf u(k') \cdot u(p)}\right] \right) \label{eq:Sukup_def}$$ $$S^{\psi\psi}({\bf k'|p|q}) = -\Im\left( \left[{\bf k' \cdot u(q)}\right] \psi({\bf k'})\, \psi({\bf p}) \right) \label{eq:Spsikpsip_def}$$ Here $\Im$ stands for the imaginary part of the argument. Note that Eqs.
(\[eqn:Cu\_t\], \[eqn:Cpsi\_t\]) have been discussed in the earlier literature, e.g., Lesieur [@Lesl:book] and Stanisić [@Stan:book]. However, the reinterpretation by Dar et al. [@Dar:flux] of the term $S({\bf k|p|q})$ as the energy transfer from mode ${\bf p}$ (the second argument of $S$) to mode ${\bf k}$ (the first argument of $S$), with mode ${\bf q}$ (the third argument of $S$) acting as a mediator, makes the formalism more transparent and simple. Moreover, some quantities that were impossible to calculate in the earlier formalism can now be computed [@Dar:flux]. This interpretation of Dar et al. is consistent with the earlier formalism. The energy fluxes $\Pi^u$ and $\Pi^{\psi}$ out of a wavenumber sphere of radius $k_0$ are [@Dar:flux] $$\Pi^{u}(k_0) = \int_{k'>k_0} \frac{d{\bf k'}}{(2\pi)^d} \int_{p<k_0} \frac{d{\bf p}}{(2\pi)^d}\; S^{uu}({\bf k'|p|q}) \label{eqn:u_flux}$$ $$\Pi^{\psi}(k_0) = \int_{k'>k_0} \frac{d{\bf k'}}{(2\pi)^d} \int_{p<k_0} \frac{d{\bf p}}{(2\pi)^d}\; S^{\psi\psi}({\bf k'|p|q}) \label{eqn:psi_flux}$$ Note that there is no cross-transfer between the $u$ and $\psi$ energies. It is also important to note that both $C^u$ and $C^{\psi}$ are conserved in every triad interaction, i.e., $$S^{uu}({\bf k'|p|q}) + S^{uu}({\bf k'|q|p}) + S^{uu}({\bf p|k'|q}) + S^{uu}({\bf p|q|k'}) + S^{uu}({\bf q|k'|p}) + S^{uu}({\bf q|p|k'}) = 0$$ $$S^{\psi\psi}({\bf k'|p|q}) + S^{\psi\psi}({\bf k'|q|p}) + S^{\psi\psi}({\bf p|k'|q}) + S^{\psi\psi}({\bf p|q|k'}) + S^{\psi\psi}({\bf q|k'|p}) + S^{\psi\psi}({\bf q|p|k'}) = 0$$ These are the statements of "detailed conservation of energy" in a triad interaction (when $\nu=\kappa=0$) [@Lesl:book]. The energy fluxes can be calculated using Eqs. (\[eqn:u\_flux\], \[eqn:psi\_flux\]) by taking the ensemble averages of $S^{uu}$ and $S^{\psi \psi}$. It is easy to check that $\la S^{uu} \ra = \la S^{\psi \psi} \ra =0$ to zeroth order, but they are nonzero to first order. The field-theoretic calculation performed here is very similar to Verma's MHD flux calculation [@MKV:MHDflux]; please refer to that paper for further details.
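The six-term detailed-conservation identities above follow from incompressibility and ${\bf k' + p + q = 0}$ alone, so they can be checked numerically for a random triad. The sketch below implements the mode-to-mode expression $S^{uu}({\bf k'|p|q}) = -\Im\{[{\bf k'\cdot u(q)}][{\bf u(k')\cdot u(p)}]\}$ for random complex transverse amplitudes; any overall normalization factor drops out of the sum, and the field construction is illustrative.

```python
import random

def dot(a, b):
    # bilinear (unconjugated) dot product, as used in S(k'|p|q)
    return sum(x * y for x, y in zip(a, b))

def incompressible_mode(k):
    # random complex amplitude projected transverse to the real wavevector k,
    # so that k . u(k) = 0 (incompressibility)
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3)]
    c = dot(k, v) / dot(k, k)
    return [vi - c * ki for vi, ki in zip(v, k)]

def S_uu(kv, pv, qv, uk, up, uq):
    # mode-to-mode kinetic-energy transfer S^uu(k'|p|q)
    return -(dot(kv, uq) * dot(uk, up)).imag

random.seed(1)
kv = [random.gauss(0, 1) for _ in range(3)]
pv = [random.gauss(0, 1) for _ in range(3)]
qv = [-(a + b) for a, b in zip(kv, pv)]        # enforce k' + p + q = 0
uk, up, uq = (incompressible_mode(v) for v in (kv, pv, qv))

total = (S_uu(kv, pv, qv, uk, up, uq) + S_uu(kv, qv, pv, uk, uq, up)
         + S_uu(pv, kv, qv, up, uk, uq) + S_uu(pv, qv, kv, up, uq, uk)
         + S_uu(qv, kv, pv, uq, uk, up) + S_uu(qv, pv, kv, uq, up, uk))
print(abs(total))   # numerically zero: detailed conservation within the triad
```

Pairing the six terms by their common $u\cdot u$ factor shows why the sum vanishes: each pair is proportional to $({\bf k'+p})\cdot{\bf u(q)} = -{\bf q \cdot u(q)} = 0$, and cyclically for the other two pairs.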
The one-loop Feynman diagrams for $\la S \ra$ involve solid, dashed, wiggly (photon), and curly (gluon) lines representing $\la u_i u_j \ra$, $\la \psi \psi \ra$, $G^u$, and $G^{\psi}$ respectively. In all the diagrams, the left vertex denotes $k_i$, while the filled and empty circles at the right vertex represent the $(-i/2) P^{+}_{ijm}$ and $-i k_j$ vertices respectively. Algebraically, $$\begin{aligned} \la S^{uu}(k|p|q) \ra & = & \int_{-\infty}^t dt' \left[ T_{1}(k,p,q)\, G^{u}(k,t-t')\, C^{u}(p,t,t')\, C^{u}(q,t,t') \right. \nonumber\\ & & + T_{2}(k,p,q)\, G^{u}(p,t-t')\, C^{u}(k,t,t')\, C^{u}(q,t,t') \nonumber\\ & & \left. + T_{3}(k,p,q)\, G^{u}(q,t-t')\, C^{u}(k,t,t')\, C^{u}(p,t,t') \right] \label{eqn:Suu}\end{aligned}$$ $$\begin{aligned} \la S^{\psi\psi}(k|p|q) \ra & = & \int_{-\infty}^t dt' \left[ T_{4}(k,p,q)\, G^{\psi}(k,t-t')\, C^{\psi}(p,t,t')\, C^{u}(q,t,t') \right. \nonumber\\ & & \left. + T_{5}(k,p,q)\, G^{\psi}(p,t-t')\, C^{\psi}(k,t,t')\, C^{u}(q,t,t') \right] \label{eqn:Spsipsi}\end{aligned}$$ where the $T_i(k,p,q)$ are given by $$\begin{aligned} T_1(k,p,q) & = & -kp \left( (d-3)z + (d-2)xy +2 z^3+ 2 x y z^2 + x^2 z \right)\\ T_2(k,p,q) & = & kp \left( (d-3)z + (d-2)xy +2 z^3+ 2 x y z^2 + y^2 z \right)\\ T_3(k,p,q) & = & kq \left(x z - 2 x y^2 z - y z^2 \right)\\ T_4(k,p,q) & = & k^2 \left(1 - y^2 \right)\\ T_5(k,p,q) & = & -k p \left( z + xy \right)\end{aligned}$$ We assume the relaxation times for $C^u(k)$ and $C^{\psi}(k)$ to be $(\nu(k) k^2)^{-1}$ and $(\kappa(k) k^2)^{-1}$ respectively, i.e., $$\begin{aligned} C^{u}(k,t,t') & = & \exp\left(- \nu(k) k^2 (t-t') \right) C^{u}(k,t,t)\\ C^{\psi}(k,t,t') & = & \exp\left(- \kappa(k) k^2 (t-t') \right) C^{\psi}(k,t,t)\end{aligned}$$ With this assumption, Eqs. (\[eqn:Suu\], \[eqn:Spsipsi\]) reduce to $$\begin{aligned} \Pi^{u}(k_0) & = & \int_{k>k_0} \frac{d{\bf k}}{(2\pi)^d} \int_{p<k_0} \frac{d{\bf p}}{(2\pi)^d}\, \frac{1}{\nu(k)k^2+\nu(p)p^2+\nu(q)q^2} \nonumber\\ & & \times \left[ T_{1}(k,p,q) C^{u}(p) C^{u}(q) +T_{2}(k,p,q) C^{u}(k) C^{u}(q) +T_{3}(k,p,q) C^{u}(k) C^{u}(p) \right] \label{eqn:Pi_u}\end{aligned}$$ $$\begin{aligned} \Pi^{\psi}(k_0) & = & \int_{k>k_0} \frac{d{\bf k}}{(2\pi)^d} \int_{p<k_0} \frac{d{\bf p}}{(2\pi)^d}\, \frac{1}{\kappa(k)k^2+\kappa(p)p^2+\nu(q)q^2} \nonumber\\ & & \times \left[ T_{4}(k,p,q) C^{\psi}(p) C^{u}(q) +T_{5}(k,p,q) C^{\psi}(k) C^{u}(q) \right] \label{eqn:Pi_psi}\end{aligned}$$ For $C^u(k)$ we substitute Eqs.
(\[eqn:Cu\_k\], \[eqn:Eu\_k\]), while for $C^{\psi}$ we substitute [@Lesl:book] $$C^{\psi}(k) = \frac{2 (2\pi)^d}{S_d}\, k^{-(d-1)} E^{\psi}(k), \label{eqn:Cpsi_k}$$ $$E^{\psi}(k) = K^{\psi}\, \Pi^{\psi} (\Pi^u)^{-1/3} k^{-5/3} \label{eqn:Epsi_k}$$ where $K^{\psi}$ is called Batchelor's constant. The renormalized viscosity and diffusivity in the inertial range are $$\nu(k) = (K^u)^{1/2} (\Pi^u)^{1/3} k^{-4/3} \nu^* \label{eqn:nuk}$$ $$\kappa(k) = (K^u)^{1/2} (\Pi^u)^{1/3} k^{-4/3} \kappa^*. \label{eqn:kappak}$$ The substitution of the above quantities, and the change of variables $$k=\frac{k_0}{u}; \quad p= \frac{k_0}{u}\, v; \quad q= \frac{k_0}{u}\, w$$ yield the following nondimensional versions of the flux equations [@FourFris]: $$1 = (K^u)^{3/2}\, I^u \label{eqn:Piu}$$ $$1 = K^{\psi} (K^u)^{1/2}\, I^{\psi} \label{eqn:Pipsi}$$ where $I^u$ and $I^{\psi}$ are dimensionless integrals over $v$, $w$ and the angle $\alpha$ between the vectors ${\bf p}$ and ${\bf q}$, whose integrands involve $$\begin{aligned} F^{u}(v,w) & = & t_{1}(v,w)\, (v w)^{-d-2/3} +t_{2}(v,w)\, w^{-d-2/3} +t_{3}(v,w)\, v^{-d-2/3}\\ F^{\psi}(v,w) & = & t_{4}(v,w)\, (v w)^{-d-2/3} +t_{5}(v,w)\, w^{-d-2/3}\end{aligned}$$ Here $t_i(v,w) = T_i(k,kv,kw)/k^2$. We compute the integrals $I^{u,\psi}$ using gaussian quadrature; they converge for all dimensions $d \ge 2$. Once the integrals are known, Kolmogorov's and Batchelor's constants ($K^u$ and $K^{\psi}$ respectively) can be computed. The computed values are given in Table \[tab:scalar\]. In our calculation Batchelor's constant $K^{\psi}$ in three dimensions is 1.25. Due to the uncertainties in the values of $\nu^*$ and $\kappa^*$, the error in the constant could be of the order of 0.1. Earlier, Kraichnan had estimated the constant to be 0.2. Yakhot and Orszag [@YakhOrsz] obtained $K^{\psi}=1.16$ from their $\epsilon$-based renormalization group analysis. Canuto and Dubovikov [@Canu1] and Canuto et al. [@Canu3] estimated $K^{\psi} = (5/3)\times0.72 = 1.2$ using their RG calculation. Lin et al. [@Lin] find the constant to be close to 0.3.
Our result is in very good agreement with the theoretical predictions of Yakhot and Orszag [@YakhOrsz] and Canuto et al. [@Canu3], as well as with the experimental values ($\approx 1.2-1.4$, see Monin and Yaglom [@MoniYagl2:book]). It is also interesting to note that both $K^{u,\psi}$ are proportional to $d^{1/3}$, consistent with the predictions of Fournier and Frisch [@FourFris] for fluid turbulence. This result implies that the cascade rates $\Pi^{u,\psi}$ will decrease with dimension as $d^{-1/2}$.

Conclusions
===========

In this paper we employed field-theoretic techniques to calculate the cascade rates of scalar turbulence. Our calculation is to first order. From this formalism we also calculate Batchelor's constant. In three dimensions, we find Batchelor's constant to be 1.25, which is in very good agreement with the theoretical predictions of Yakhot and Orszag [@YakhOrsz] and Canuto et al. [@Canu3], and the experimental values. In higher space dimensions the constant varies as $d^{1/3}$. Our calculation of the cascade rate requires the renormalized viscosity and diffusivity. We have extended the RG calculations of Zhou and Vahala [@Zhou:scalar] to higher dimensions. Our calculations show that for higher dimensions, the renormalized viscosity and diffusivity vary with dimension as $d^{-1/2}$, and the turbulent Prandtl number approaches unity.

[10]{} D. Forster, D. R. Nelson, and M. J. Stephen, Phys. Rev. A [**16**]{}, 732 (1977). V. Yakhot and S. A. Orszag, J. Sci. Comput. [**1**]{}, 3 (1986). W. D. McComb, [*The Physics of Fluid Turbulence*]{} (Clarendon, Oxford University Press, 1990). Y. Zhou and G. Vahala, Phys. Rev. E [**48**]{}, 4387 (1993). M. K. Verma, Phys. Rev. E [**64**]{}, 26305 (2001). D. C. Leslie, [*Developments in the Theory of Turbulence*]{} (Clarendon, Oxford University Press, 1973). B.-S. Lin, C. C. Chang, and C.-T. Wang, Phys. Rev. E [**63**]{}, 16304 (2000). L. T. Adzhemyan, A. N. Vasil'ev, and M. Gnatich, Theor. Math. Phys.
(USSR) [**58**]{}, 47 (1984). C. DeDominicis and P. C. Martin, Phys. Rev. A [**19**]{}, 419 (1979). H. W. Wyld, Ann. Phys. [**14**]{}, 143 (1961). V. M. Canuto and M. S. Dubovikov, Phys. Fluids [**8**]{}, 571 (1996). V. M. Canuto and M. S. Dubovikov, Phys. Fluids [**8**]{}, 599 (1996). R. H. Kraichnan, J. Fluid Mech. [**5**]{}, 497 (1959). M. Nelkin, nlin.CD/0103046 (2001). M. K. Verma, Phys. Plasmas [**8**]{}, 3945 (2001). J. D. Fournier and U. Frisch, Phys. Rev. A [**17**]{}, 747 (1979). M. M. Stanišić, [*The Mathematical Theory of Turbulence*]{} (Springer-Verlag, New York, 1988). M. K. Verma, Phys. Plasmas (submitted), nlin.CD/0103033 (2001). G. Dar, M. K. Verma, and V. Eswaran, Physica D [**157**]{}, 207 (2001). A. S. Monin and A. M. Yaglom, [*Statistical Fluid Mechanics: Mechanics of Turbulence*]{} (MIT Press, Cambridge, 1975), Vol. 2.

  $d$   $\nu^*$   $\kappa^*$   $Pr_{turb}$   $K^u$   $K^{\psi}$
  ----- --------- ------------ ------------- ------- ------------
  3     0.36      0.85         0.42          1.53    1.25
  4     0.42      0.69         0.61          1.60    1.39
  7     0.38      0.48         0.80          1.76    1.65
  10    0.34      0.39         0.87          1.94    1.83
  25    0.22      0.24         0.94          2.43    2.44
  50    0.16      0.16         1.0           3.1     3.0
  100   0.093     0.095        0.98          3.4     3.4

  : The computed values of the renormalized viscosity $\nu^*$, diffusivity $\kappa^*$, turbulent Prandtl number $Pr_{turb}$, Kolmogorov's constant $K^u$ and Batchelor's constant $K^{\psi}$ for various space dimensions $d$.[]{data-label="tab:scalar"}

[^1]: email: mkv@iitk.ac.in
--- abstract: 'Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose [<span style="font-variant:small-caps;">BigBird</span>]{}, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that [<span style="font-variant:small-caps;">BigBird</span>]{} is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, [<span style="font-variant:small-caps;">BigBird</span>]{} drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.' author: - | Manzil Zaheer manzilz@google.com\ Guru Guruganesh gurug@google.com\ Google Research,\ Mountain View, CA, USA Avinava Dubey avinavadubey@google.com\ Joshua Ainslie, Chris Alberti, Santiago Ontanon,\ Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang,\ Amr Ahmed\ Google Research, USA bibliography: - 'ref.bib' title: 'Big Bird: Transformers for Longer Sequences' ---
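To make the attention pattern described in the abstract concrete, here is a toy mask builder (an illustrative sketch in Python, not the released BigBird code; all component sizes are arbitrary choices): a handful of global tokens, a sliding window around the diagonal, and a few random keys per query, so that the number of nonzeros per row stays bounded as the sequence grows.

```python
import numpy as np

# Illustrative sketch of a BigBird-style attention mask combining three
# components: global tokens that attend everywhere (and are attended by
# everyone), a sliding window around the diagonal, and random keys per query.
# Nonzeros per row stay O(1) in the sequence length, which is what turns the
# O(n^2) memory of full attention into O(n).

def bigbird_mask(n, num_global=2, window=3, num_random=2, seed=0):
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    # Global tokens: full rows and full columns.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    # Sliding window around the diagonal.
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        mask[i, lo:hi] = True
    # A few random keys for each query.
    for i in range(n):
        mask[i, rng.choice(n, size=num_random, replace=False)] = True
    return mask

mask = bigbird_mask(64)
density = mask.sum() / mask.size  # shrinks like O(1/n): linearly many nonzeros
```

In an actual implementation the mask is never materialized densely; blocks of the pattern are gathered so that only the allowed key/value pairs are ever computed.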
--- abstract: 'We discuss some aspects of Hořava-Lifshitz cosmology with different matter components considered dominant at different stages of the cosmic evolution (each stage represented by an equation of state pressure/density = constant). We compare cosmological solutions of this theory with their counterparts in General Relativity (Friedmann cosmology). At early times, Hořava-Lifshitz cosmology contains a curvature-dependent dominant term which is reminiscent of stiff matter, and this fact motivates us to discuss this term in some detail alongside the usual stiff matter component (pressure = density), given the role that such a fluid could have played at early times in the framework of holographic cosmology. Nevertheless, we show that an early stiff matter component is of little relevance in Hořava-Lifshitz cosmology.' address: 'Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4950, Valparaíso, Chile.' author: - Samuel Lepe and Joel Saavedra title: 'On Hořava-Lifshitz Cosmology' ---

Introduction
============

Searching for a quantum theory of gravity has been a fruitful field for theoretical physics in the last century. In this spirit, Hořava proposed in Ref. [@Horava:2008jf] a new quantizable theory of gravity, using ideas from solid state physics. This theory, originally called Hořava-Lifshitz quantum gravity, is a power-counting renormalizable theory with consistent ultraviolet (UV) behavior and, on the other hand, has a fixed point in the infrared (IR) limit, namely General Relativity (GR) [@Horava:2008jf; @Horava:2008ih; @Horava:2009if; @Horava:2009uw]. Thus, this theory leads to a modification of Einstein's general relativity at high energies, producing interesting features in cosmology. The first ideas on this subject were presented in Refs.
[@Calcagni:2009ar; @Kiritsis:2009sh; @Takahashi:2009wc], where the cosmological consequences of Hořava-Lifshitz (HL) cosmology were studied in great detail; see also Refs. [@Nojiri:2010wj; @Clifton:2011jh; @Brandenberger:2009yt; @Mukohyama:2009gg; @Cai:2009dx; @Saridakis:2009bv; @Mukohyama:2009zs; @Mukohyama:2009mz; @Cai:2009in; @Wang:2009azb; @Leon:2009rc; @Minamitsuji:2009ii; @Carloni:2009jc; @Gao:2009wn; @Kobayashi:2010eh; @Maeda:2010ke; @Saridakis:2011pk; @Bertolami:2011ka]. The new theory has virtues and defects, and its basic ideas are not free of controversy [@Henneaux:2009zb]. In this article we would like to address an interesting point concerning HL cosmology that is still an open problem. In particular, we examine the behavior of HL cosmology for different kinds of matter content of the universe, such as dust, a cosmological constant and phantom matter. The present paper is organized as follows. In Sec. I we discuss HL cosmology for different kinds of matter and present general settings and constraints of the theory. In Sec. II we analyze the role of stiff-like matter taken as the dominant component at early times. Finally, in Sec. III we give some final remarks.

Hořava-Lifshitz Cosmology
=========================

In this section we review the main results of HL cosmology and its implications for different kinds of matter, and compare the behavior of HL and GR cosmologies. Let us first introduce the cosmological model under consideration, based on Ref. [@Mukohyama:2010xz]. It is well known that in Hořava-Lifshitz gravity we do not have the Hamiltonian constraint and therefore, from the formal cosmological point of view, we do not have a Friedmann equation as in standard cosmology.
In this theory the starting point is the dynamical equation for a flat Friedmann-Robertson-Walker (FRW) spacetime $$\eta \left( 2\dot{H}+3H^{2}\right) =-p, \label{eq.1}$$where we introduced the parameter $$\eta =\frac{1}{2}\left( 3\lambda -1\right) , \label{eq.2}$$and $\lambda $ is a dimensionless constant. In four-dimensional general relativity the value of $\lambda $ is fixed by diffeomorphism invariance. At this point, we would like to emphasize that in HL cosmology any value of the constant $\lambda$ is consistent with the foliation-preserving diffeomorphism invariance. For example, it was shown in Ref. [@Gumrukcuoglu:2011xg] that for $1/3<\lambda <1$ (or equivalently $0<\eta <1$) the scalar graviton is a ghost, and the theory is ghost-free if $\lambda <1/3$ (that is, $\eta <0$) or $\lambda >1$ (i.e., $\eta >1$). The matter field satisfies a non-conservation equation which, in the high-energy (early-time) limit, can be written in the following form $$\dot{\rho}+3H(\rho +P)=-Q, \label{eq.3}$$ where $Q$ represents the rate of energy non-conservation; the low-energy limit is recovered only if $Q\rightarrow 0 $. We can conclude that the energy non-conservation is an effect of high-energy physics. Now, equations (\[eq.1\]) and (\[eq.3\]), together with the equation of state (EoS) $p=\omega \rho $, contain enough information to describe the cosmological evolution of the spacetime. Indeed, using the dynamical equation plus the non-conservation equation we can obtain a first integral of the form $$3H^{2}=\frac{1}{\eta }\rho +\frac{C\left( t\right) }{a^{3}}, \label{eq.4}$$and this result can be interpreted as a Friedmann equation in HL gravity where the integration "constant" is given by $$C\left( t\right) =C_{0}+\frac{1}{\eta }\int_{t_{0}}^{t}d\tau a^{3}\left( \tau \right) Q\left( \tau \right) .
\label{eq.c}$$In order to obtain a cosmological description from theories of this kind, we first discuss some well-known results from GR in the context of HL cosmology. The subscript $0$ indicates quantities today ($t_{0}$).

**$\bullet$ Dust matter: $p=0\longleftrightarrow \omega =0$**

In this case from (\[eq.1\]) we find exactly the same solution for the Hubble parameter as in GR, i.e., $$H\left( t\right) =H_{0}\left[ 1+\frac{3}{2}H_{0}\left( t-t_{0}\right) \right] ^{-1}, \label{eq.5}$$and the cosmic scale factor reads $$a\left( t\right) =a_{0}\left[ 1+\frac{3}{2}H_{0}\left( t-t_{0}\right) \right] ^{2/3}. \label{eq.6}$$It is well known that in GR, when $\omega =0$, the evolution is described by an energy density proportional to $a^{-3}$ (this result follows from the energy conservation equation $\dot{\rho}+3H(\rho +P)=0$), and the formal solution of the cosmological evolution in GR is identical to Eqs. (\[eq.5\]) and (\[eq.6\]) obtained for HL cosmology. Although the formal solutions for the Hubble parameter and the scale factor in HL and GR are identical in this case, the dynamical behavior of the energy density is completely different, because in HL cosmology the energy density is not fixed: it obeys the non-conservation equation (\[eq.3\]). Finally, for the dust matter case we would like to mention that (\[eq.5\]) and (\[eq.6\]) are also solutions of (\[eq.1\]) if we set $\rho =0$ and $\eta $ is finite. Thus, these equations can also be seen as describing a self-decelerated evolution. In a nutshell, both $\omega =0$ and $\rho =0$ lead to the same solutions described by Eqs. (\[eq.5\]) and (\[eq.6\]). Therefore, in the case of dust matter we find that the effect of high-energy physics is reflected in the fact that during the early stages the rate of energy non-conservation does not vanish, $Q(t)\neq 0$, and we have an interchange of energy between the matter content of the universe and some unknown source.
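As a quick numerical sanity check (a Python sketch with arbitrary units, not part of the original analysis), one can verify that the dust solution (\[eq.5\]) satisfies $2\dot{H}+3H^{2}=0$, which is Eq. (\[eq.1\]) with $p=0$.

```python
import numpy as np

# Sketch: check that H(t) = H0 / (1 + (3/2) H0 (t - t0)), Eq. (eq.5), gives a
# vanishing residual 2*Hdot + 3*H^2, i.e. solves eta*(2*Hdot + 3*H^2) = -p
# with p = 0 for any finite eta. Units are arbitrary.

H0, t0 = 70.0, 0.0

def H(t):
    return H0 / (1.0 + 1.5 * H0 * (t - t0))

def Hdot(t, eps=1e-6):
    # central finite difference
    return (H(t + eps) - H(t - eps)) / (2.0 * eps)

ts = np.linspace(0.01, 1.0, 50)
residual = np.max(np.abs(2.0 * Hdot(ts) + 3.0 * H(ts) ** 2))
```

The residual is limited only by the finite-difference accuracy, confirming the dust solution independently of $\eta$.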
**$\bullet$ Cosmological constant: *$\omega =-1$.***

In this case the conservation law reads $$\dot{\rho}=-Q, \label{constant1}$$ and therefore the behavior of the cosmological constant in the early universe differs from its counterpart in GR. The interchange of energy between the matter (described in this case by a fluid of cosmological constant type) and the unknown source gives a non-constant behavior for the corresponding energy density. We note that the GR limit is recovered if we take $Q\left( a\rightarrow \infty \right) \rightarrow 0$, i.e., $\rho =const.$ Thus the "role" of the cosmological constant in HL gravity is different from the respective one in GR.

**$\bullet$ Phantom Evolution.**

In this case, from (\[eq.1\]) we can write the following formal solution for the Hubble parameter $$H^{2}\left( a\right) =\frac{1}{a^{3}}\left( a_{0}^{3}H_{0}^{2}-\frac{\omega }{\eta }\int_{a_{0}}^{a}daa^{2}\rho \left( a\right) \right) , \label{eq.16s}$$and if we take a phantom Ansatz for the energy density given by $\rho \left( a\right) =\rho _{0}\left( a/a_{0}\right) ^{\beta }$ with $\beta >1$, we get the explicit form of the formal solution $$H^{2}\left( a\right) =H_{0}^{2}\left( \frac{a_{0}}{a}\right) ^{3}\left[ 1-\frac{\omega \rho _{0}/3H_{0}^{2}}{\eta \left( 1+\beta /3\right) }\left[ \left( \frac{a}{a_{0}}\right) ^{\beta +3}-1\right] \right] , \label{eq.18s}$$and the future behavior of the Hubble parameter reads as follows $$H^{2}\left( a\rightarrow \infty \right) =-\frac{\omega \rho _{0}/3}{\eta \left( 1+\beta /3\right) }\left( \frac{a}{a_{0}}\right) ^{\beta }.$$ A realistic model implies the following restriction for the barotropic index: $\omega <0$.
If we use the following setting $$\left\vert \omega \right\vert \rho _{0}/3H_{0}^{2}=\left( 1+\beta /3\right) \eta , \label{eq.19s}$$we obtain a phantom solution for the scale factor given by $$a\left( t\right) =a_{0}\left( \frac{2}{\beta H_{0}}\right) ^{2/\beta }\left( t_{s}-t\right) ^{-2/\beta }\text{ \ \ with\ \ }t_{s}=t_{0}+\frac{2}{\beta H_{0}}. \label{eq.20s}$$It is worth noticing that the solution (\[eq.20s\]) opens a new region for the phantom evolution, with barotropic index $\omega <0$ instead of the usual, more restrictive condition $\omega <-1$ that appears in GR. We also notice that if $\eta \rightarrow \infty$, then $\beta \rightarrow -3$ and we can write $$H^{2}\left( a\right) _{\eta \rightarrow \infty }\rightarrow H_{0}^{2}\left( \frac{a_{0}}{a}\right) ^{3},$$ so that the phantom approach loses its meaning. According to $$\eta =\left\vert \omega \right\vert \left( \rho _{0}/3H_{0}^{2}\right) \left( 1+\beta /3\right) ^{-1}>1, \label{lol}$$ or $$\left\vert \omega \right\vert \left( \rho _{0}/3H_{0}^{2}\right) >1+\beta /3,$$ we conclude that the observational data do not allow the last inequality to be fulfilled. On the other hand, if $0<\eta <1$ (ghost scalar graviton) we have $\left\vert \omega \right\vert \left( \rho _{0}/3H_{0}^{2}\right) <1+\beta /3$, and in this case we can have a phantom phase provided we accept the existence of a ghost scalar graviton.

**$\bullet$ General Settings.**

We are now interested in the limit in which the matter sector is decoupled from the gravity sector and dark matter, as an integration constant, dominates the evolution of the universe, that is, $\eta \rightarrow \infty \left( \lambda \rightarrow \infty \right) $ [@Gumrukcuoglu:2011xg]. In this limit we can distinguish several situations.
For instance, if $p$ is finite and $\eta \rightarrow \infty $, from (\[eq.1\]) we obtain exactly the same solution given by (\[eq.5\]) and (\[eq.6\]), and in this limit the Hubble parameter obeys $$3H^{2}=\frac{C_{0}}{a^{3}}, \label{heta}$$ i.e., a dust-like evolution even though $p$ and the barotropic index $\omega$ do not vanish. We also note that if $t\rightarrow \infty $ and $\eta $ is finite, then $$3H^{2}\left( t\rightarrow \infty \right) \rightarrow \frac{1}{\eta }\rho \left( t\rightarrow \infty \right) +\frac{1}{a^{3}\left( t\rightarrow \infty \right) }\left[ C_{0}+\frac{1}{\eta }\int_{t_{0}}^{\infty }d\tau a^{3}\left( \tau \right) Q\left( \tau \right) \right] =\frac{1}{\eta }\rho \left( t\rightarrow \infty \right) , \label{hgrand}$$ provided $Q\left( t\right) $ decreases at least as fast as $a^{-4}\left( t\right) $ when $a\left( t\rightarrow \infty \right) \rightarrow \infty $. This condition on $Q$ is consistent with the recovery of local invariance for the matter sector, where it is demanded that $Q\left( a\rightarrow \infty \right) \rightarrow 0$ and $C\left( t\rightarrow \infty \right) \rightarrow C_{0}$ (see [@Gumrukcuoglu:2011xg]). So, we have the following relation between observational parameters $$3H_{0}^{2}=\frac{1}{\eta }\rho _{0}+\frac{C_{0}}{a_{0}^{3}}.$$ On the other hand, from (\[eq.1\]) and (\[eq.4\]) we can write $$\dot{H}=-\frac{1}{2}\frac{C_{0}}{a^{3}}-\frac{1}{2\eta }\left[ \left( 1+\omega \right) \rho +\frac{1}{a^{3}}\int_{t_{0}}^{t}d\tau a^{3}\left( \tau \right) Q\left( \tau \right) \right] , \label{hdot}$$ so that if we take the limit $\eta \rightarrow \infty $ and keep the second term in (\[hdot\]) finite, we obtain $$H\left( a\right) =H_{0}\sqrt{1+\frac{C_{0}a_{0}^{-3}}{3H_{0}^{2}}\left[ \left( \frac{a_{0}}{a}\right) ^{3}-1\right] }. \label{hdot2}$$ Furthermore, if $C_{0}a_{0}^{-3}/3H_{0}^{2}<1$, we find $$H\left( a\rightarrow \infty \right) \rightarrow H_{0}\sqrt{1-\frac{C_{0}a_{0}^{-3}}{3H_{0}^{2}}}<H_{0}.
\label{hlimit}$$ To conclude, in the case $\eta \rightarrow \infty $ we find a de Sitter phase at late times. We also notice that this behavior can be obtained if we choose $\omega =-1$ but keep $\eta $ finite, that is, $$\dot{H}=-\frac{1}{2}\frac{C\left( t\right) }{a^{3}}\rightarrow \dot{H}\left( t\rightarrow \infty \right) \rightarrow 0,$$ given that in this limit $C\left( t\rightarrow \infty \right) \rightarrow C_{0}$. In other words, taking either $\eta \rightarrow \infty $ for all $t$, or $t\rightarrow \infty $ with $\eta $ finite, we arrive at a de Sitter phase at late times. To complete this point we would like to comment on the acceleration of the universe. By using the expression for $\dot{H}$, as well as (\[eq.4\]), we can write $$\dot{H}+H^{2}=-\frac{1}{6\eta }\left[ \left( 1+3\omega \right) \rho +\eta \frac{C\left( t\right) }{a^{3}}\right] ,$$ and it is straightforward to check that $\eta =1$ together with $C\left( t\right) =0$ implies the standard expression for the acceleration in GR. Also, if $\dot{H}+H^{2}>0$ we must have $\omega <-1/3$ in order not to violate the weak energy condition (WEC) $\rho >0$. Then, we can have quintessence, cosmological constant or phantom schemes. Now, if $\dot{H}+H^{2}<0$, then $\omega >-1/3$, the WEC is fulfilled, and we have an evolution driven by dark matter. Finally, when $\dot{H}+H^{2}=0$ we obtain the solution $$H\left( t\right) =H_{0}\left[ 1+H_{0}\left( t-t_{0}\right) \right] ^{-1},$$ i.e., $\omega =-1/3$ (string gas) as in GR.
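The late-time limit (\[hlimit\]) of Eq. (\[hdot2\]) is easy to confirm numerically (a sketch with illustrative parameter values, not data from the paper):

```python
import math

# Sketch: evaluate H(a) = H0*sqrt(1 + x0*((a0/a)^3 - 1)) from Eq. (hdot2),
# with x0 = C0/(3*H0^2*a0^3) < 1, at a very large scale factor; the result
# should approach H0*sqrt(1 - x0) < H0, i.e. a de Sitter phase.

H0, a0, x0 = 1.0, 1.0, 0.4  # illustrative values with x0 < 1

def H_of_a(a):
    return H0 * math.sqrt(1.0 + x0 * ((a0 / a) ** 3 - 1.0))

H_inf = H_of_a(1e6)  # proxy for a -> infinity
```

The computed asymptote matches $H_0\sqrt{1-x_0}$ to machine precision, and is strictly smaller than $H_0$, as claimed.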
As a curiosity we notice that $$\left( \dot{H}+H^{2}\right) _{\omega =-1/3}=\frac{1}{3}\left( \dot{H}\right) _{\omega =-1}.$$ Finally, by setting $\dot{H}+H^{2}=0$, we can write the following expression for the energy density $$\rho \left( t\right) =\rho _{0}\left( a/a_{0}\right) ^{-3}+\frac{1}{\eta \left\vert 1+3\omega \right\vert }\frac{1}{a^{3}}\int_{t_{0}}^{t}d\tau a^{3}\left( \tau \right) Q\left( \tau \right) ,$$ where $C_{0}=\left\vert 1+3\omega \right\vert \rho _{0}a_{0}^{-3}$ and $\omega <-1/3$, and we find that $\rho \left( t\rightarrow \infty \right) \rightarrow \rho _{0}\left( a/a_{0}\right) ^{-3}$, i.e., a dust-like behavior at late times driven by an $\omega $ that is dark energy-like.

Stiff-like matter as the dominant term at early times
=====================================================

When the higher-curvature terms are included in the cosmological evolution, the dynamical equation for the Hubble parameter is given by [@Mukohyama:2010xz] $$\eta \left( 2\dot{H}+3H^{2}\right) =-\omega \rho +\frac{\alpha k^{3}}{a^{6}} +\frac{\alpha' k^{2}}{a^{4}}-\frac{k}{a^{2}}+ \Lambda, \label{eq234}$$ where $\alpha$ and $\alpha'$ are constants and $k$ is the spatial curvature. If we consider the early-time limit, the dominant contribution is the term of the form $\sim a^{-6}$, which can be interpreted as stiff-matter-like; this also fits the supposition that stiff matter could be an important matter component in the very early universe [@Banks:2004eb; @Banks:2008ep]. We write (\[eq234\]) in the form $$\eta \left( 2\dot{H}+3H^{2}\right) =-\omega \rho +\frac{\alpha k^{3}}{a^{6}}\left[ 1+\frac{a^{2}}{\alpha }\left( \alpha' k-a^{2}\right) \right] , \label{eq234b}$$and the incorporation of $\Lambda $ will be discussed later. Therefore, we have a new equation to handle, namely $$3\eta H^{2}=\rho -\frac{\alpha k^{3}}{a^{6}}. \label{eq.32ss}$$ As before, the parameter $\eta $ can take any value.
The dynamical equation then can be written as $$\eta \left( 2\dot{H}+3H^{2}\right) =-\omega \rho +\frac{\alpha k^{3}}{a^{6}}, \label{eq.33ss}$$The early-time approximation is justified if the scale factor is kept around $a^{2}\sim \left\vert \alpha' \right\vert $ for $k=1$; for $k=-1$ we have the more restrictive condition $a^{2}\left( \alpha' +a^{2}\right) \ll \alpha $. From Eqs. (\[eq.32ss\]) and (\[eq.33ss\]) we obtain the conservation law (the term proportional to $C\left( t\right) /a^{3}$, i.e., proportional to $Q$, is not present here given that we keep only the dominant contribution $\sim a^{-6}$) $$\dot{\rho}+3H\left( 1+\omega \right) \rho =0, \label{eq.34ss}$$as well as the equation $$\dot{H}+3H^{2}=\frac{1}{2\eta }\left( 1-\omega \right) \rho, \label{eq.35ss}$$and from these last equations we note that if we set $\omega =1$, we obtain the usual behavior as in GR, i.e., $\rho \sim a^{-6}$. We find the following solutions for the Hubble parameter and the scale factor, respectively, $$H\left( t\right) =H_{0}\left[ 1+3H_{0}\left( t-t_{0}\right) \right] ^{-1}, \label{eq.36ss}$$and $$a\left( t\right) =a_{0}\left[ 1+3H_{0}\left( t-t_{0}\right) \right] ^{1/3}. \label{eq.37ss}$$ In order to see the implications of this result, we compare it to standard GR, where the equations $3H^{2}=\rho $ and $\dot{\rho}+6H\rho =0$ ($\omega =1$) lead to (\[eq.36ss\]) and (\[eq.37ss\]). Thus, replacing (\[eq.36ss\]) and (\[eq.37ss\]) in (\[eq.32ss\]), we obtain for $\omega =1$ $$3\eta H^{2}=\left( \rho _{0}-\frac{\alpha k^{3}}{a_{0}^{6}}\right) \left( \frac{a_{0}}{a}\right) ^{6}, \label{eq.38ss}$$and for $k=1$ we must have $\rho _{0}a_{0}^{6}>\alpha $ and $\eta >0$ (do not forget that if $0<\eta <1$ we have a ghost scalar graviton, and if $\eta >1$ we are ghost-free). If $\rho _{0}a_{0}^{6}<\alpha $, then $\eta <0$ and we are ghost-free. For $k=-1$ we must have $\eta >0$.
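These stiff-matter solutions can be checked numerically (a sketch with illustrative units): (\[eq.36ss\]) should satisfy $\dot{H}+3H^{2}=0$, the $\omega=1$ case of (\[eq.35ss\]), and $a^{3}H$ should stay constant, so that $3\eta H^{2}$ indeed scales as $a^{-6}$, as in (\[eq.38ss\]).

```python
import numpy as np

# Sketch: verify that H(t) = H0/(1 + 3 H0 (t - t0)) and
# a(t) = a0*(1 + 3 H0 (t - t0))^{1/3} obey Hdot + 3 H^2 = 0, and that
# a^3 * H = a0^3 * H0 is conserved (equivalent to 3*eta*H^2 ~ a^{-6}).

H0, a0, t0 = 1.0, 1.0, 0.0

def H(t):
    return H0 / (1.0 + 3.0 * H0 * (t - t0))

def a(t):
    return a0 * (1.0 + 3.0 * H0 * (t - t0)) ** (1.0 / 3.0)

def Hdot(t, eps=1e-6):
    return (H(t + eps) - H(t - eps)) / (2.0 * eps)

ts = np.linspace(0.0, 5.0, 100)
residual = np.max(np.abs(Hdot(ts) + 3.0 * H(ts) ** 2))
drift = np.max(np.abs(a(ts) ** 3 * H(ts) - a0 ** 3 * H0))
```

Both the equation residual and the drift of $a^3 H$ stay at the level of the finite-difference and floating-point accuracy.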
We now discuss an early universe that contains a mixture of stiff-matter-like fluid and a cosmological constant, $$3\eta H^{2}=\rho -\frac{\alpha k^{3}}{a^{6}}+\Lambda , \label{s0}$$and $$\eta \left( 2\dot{H}+3H^{2}\right) =-\omega \rho +\frac{\alpha k^{3}}{a^{6}}+\Lambda . \label{s1}$$Then we obtain the $k$-independent equation $$\dot{H}+3H^{2}=\frac{1}{2\eta }\left[ \left( 1-\omega \right) \rho +2\Lambda \right] , \label{s2}$$and for $\omega =1$ and $\eta >0$, the formal solution is $$H\left( t\right) =\sqrt{\Lambda /3\eta }\left( \frac{1+\Delta _{0}\exp \left[ -2\sqrt{3\Lambda /\eta }\left( t-t_{0}\right) \right] }{1-\Delta _{0}\exp \left[ -2\sqrt{3\Lambda /\eta }\left( t-t_{0}\right) \right] }\right) , \label{s3}$$where we have denoted $$\Delta _{0}=\left( H_{0}-\sqrt{\Lambda /3\eta }\right) \left( H_{0}+\sqrt{\Lambda /3\eta }\right) ^{-1}, \label{s4}$$and we observe that $\Delta _{0}=0\rightarrow H=\sqrt{\Lambda /3\eta }$, i.e., a usual early de Sitter phase (old inflation-like). If $\Delta _{0}\neq 0$, the solution (\[s3\]) is a reasonable solution for $t\ll t_{0}$ (very early times), i.e., $H\left( t\ll t_{0}\right) \rightarrow $ constant, and we note also that there is no singularity in $H\left( t\right) $ for $\Delta _{0}>0$. For completeness, with $\omega =1$ and $\eta <0$, Eq. (\[s2\]) has the solution $$H\left( t\right) =\sqrt{\Lambda /3\left\vert \eta \right\vert }\tan \left[ \arctan \left( \frac{H_{0}}{\sqrt{\Lambda /3\left\vert \eta \right\vert }}\right) -3\sqrt{\Lambda /3\left\vert \eta \right\vert }\left( t-t_{0}\right) \right] , \label{s5}$$and this solution is reasonable only if $t\ll t_{0}$ (very early times and, for instance, $0<\arctan \left( H_{0}/\sqrt{\Lambda /3\left\vert \eta \right\vert }\right) +3\sqrt{\Lambda /3\left\vert \eta \right\vert }t_{0}<\pi /2\Longrightarrow H\left( t\right) >0$), i.e., $H\left( t\ll t_{0}\right) \rightarrow $ constant.
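A direct numerical check (a sketch with illustrative positive parameter values) confirms that (\[s3\]) solves (\[s2\]) with $\omega=1$, i.e., $\dot{H}+3H^{2}=\Lambda/\eta$.

```python
import math

# Sketch: with omega = 1 and eta > 0, Eq. (s2) reduces to
# Hdot + 3*H^2 = Lambda/eta. Plugging in the solution (s3),
# H(t) = c*(1 + D0*e^{-6c(t-t0)})/(1 - D0*e^{-6c(t-t0)}) with
# c = sqrt(Lambda/(3*eta)), the residual should vanish.

Lam, eta, H0, t0 = 0.9, 1.2, 1.0, 0.0   # illustrative values
c = math.sqrt(Lam / (3.0 * eta))
D0 = (H0 - c) / (H0 + c)

def H(t):
    e = D0 * math.exp(-2.0 * math.sqrt(3.0 * Lam / eta) * (t - t0))
    return c * (1.0 + e) / (1.0 - e)

def Hdot(t, eps=1e-6):
    return (H(t + eps) - H(t - eps)) / (2.0 * eps)

residual = max(abs(Hdot(t) + 3.0 * H(t) ** 2 - Lam / eta)
               for t in (0.1, 0.5, 1.0, 2.0))
```

The residual is at the level of the finite-difference accuracy, and $H(t_0)=H_0$ is reproduced by construction of $\Delta_0$.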
Thus, even in the presence of $\omega =1$ matter, the cosmological constant is the dominant component at early times. In other words, even accepting the presence of stiff matter at early times, in HL cosmology it is of little relevance (nevertheless, a fluid for which $p=\rho $ could play a relevant role at early times if one aims to build a holographic approach to cosmology, see \[29\] and \[30\]). Finally, we compare the deceleration parameter as given in GR and HL cosmology. This parameter is defined by $q=-\left( 1+\dot{H}/H^{2}\right) $ so that in GR it reads $$q=\frac{1}{2}\left( 1+3\omega \right) -\frac{1}{2}\left( 1+\omega \right) \frac{\Lambda }{H^{2}}, \label{sl1}$$ and in HL cosmology $$q=\frac{1}{2}\left( 1+3\omega \right) -\left( 1-\omega \right) \frac{\alpha k^{3}}{\eta H^{2}a^{6}}-\frac{1}{2}\left( 1+\omega \right) \frac{\Lambda }{\eta H^{2}}, \label{sl2}$$ and for $\omega =1$ we have $q=2-\Lambda /H^{2}$ and $q=2-\Lambda /\eta H^{2} $, respectively; we note that when $\omega =1$, the curvature term $\sim \alpha k^{3}$ disappears.

Final Remarks
=============

We have studied some aspects of HL cosmology, with emphasis on cosmological solutions that are also present in GR; in particular, an evolution driven by dust is the same in both theories (flat case). A possible phantom stage has been discussed, where we have found a less restrictive condition on the $\omega $-parameter ($\omega <0$, not $\omega <-1$). At late times, the energy density exhibits a dust-like behavior even though the $\omega $-parameter satisfies the inequality $\omega <-1/3$. If we send to infinity the parameter which preserves the diffeomorphism invariance in the present theory, a late de Sitter phase can be obtained without considering the usual scheme $\omega =-1$.
The combined effect of a curvature-dependent term, which is reminiscent of stiff matter and dominant at early times, together with the usual stiff matter component ($\omega =1$, with a possibly important role in the early evolution) and a cosmological constant has been discussed, and solutions have been found which exhibit an early de Sitter phase; this fact shows that in HL cosmology the role of stiff matter is of little relevance.

This work has been supported by COMISIÓN NACIONAL DE CIENCIAS Y TECNOLOGÍA through FONDECYT Grants 1110076 (JS and SL), 1090613 and 1110230 (JS). This work was also partially supported by PUCV-VRIEA grant No. 037.492/2013 (SL) and PUCV grant No. 123.713/2012 (JS).

[1104.2087]{} P. Horava, Phys. Lett. B [**694**]{}, 172 (2010) \[arXiv:0811.2217 \[hep-th\]\]. P. Horava, JHEP [**0903**]{}, 020 (2009) \[arXiv:0812.4287 \[hep-th\]\]. P. Horava, Phys. Rev. Lett. [**102**]{}, 161301 (2009) \[arXiv:0902.3657 \[hep-th\]\]. P. Horava, Phys. Rev. D [**79**]{}, 084008 (2009) \[arXiv:0901.3775 \[hep-th\]\]. G. Calcagni, JHEP [**0909**]{}, 112 (2009) \[arXiv:0904.0829 \[hep-th\]\]. E. Kiritsis and G. Kofinas, Nucl. Phys. B [**821**]{}, 467 (2009) \[arXiv:0904.1334 \[hep-th\]\]. T. Takahashi and J. Soda, Phys. Rev. Lett. [**102**]{}, 231301 (2009) \[arXiv:0904.0554 \[hep-th\]\]. S. 'i. Nojiri and S. D. Odintsov, Phys. Rept. [**505**]{}, 59 (2011) \[arXiv:1011.0544 \[gr-qc\]\]. T. Clifton, P. G. Ferreira, A. Padilla and C. Skordis, Phys. Rept. [**513**]{}, 1 (2012) \[arXiv:1106.2476 \[astro-ph.CO\]\]. R. Brandenberger, Phys. Rev. D [**80**]{}, 043516 (2009) \[arXiv:0904.2835 \[hep-th\]\]. S. Mukohyama, JCAP [**0906**]{}, 001 (2009) \[arXiv:0904.2190 \[hep-th\]\]. R. -G. Cai, B. Hu and H. -B. Zhang, Phys. Rev. D [**80**]{}, 041501 (2009) \[arXiv:0905.0255 \[hep-th\]\]. E. N. Saridakis, Eur. Phys. J. C [**67**]{}, 229 (2010) \[arXiv:0905.3532 \[hep-th\]\]. S. Mukohyama, K. Nakayama, F. Takahashi and S. Yokoyama, Phys. Lett.
B [**679**]{}, 6 (2009) \[arXiv:0905.0055 \[hep-th\]\]. S. Mukohyama, Phys. Rev. D [**80**]{}, 064005 (2009) \[arXiv:0905.3563 \[hep-th\]\]. Y. -F. Cai and E. N. Saridakis, JCAP [**0910**]{}, 020 (2009) \[arXiv:0906.1789 \[hep-th\]\]. A. Wang, D. Wands and R. Maartens, JCAP [**1003**]{}, 013 (2010) \[arXiv:0909.5167 \[hep-th\]\]. G. Leon and E. N. Saridakis, JCAP [**0911**]{}, 006 (2009) \[arXiv:0909.3571 \[hep-th\]\]. M. Minamitsuji, Phys. Lett. B [**684**]{}, 194 (2010) \[arXiv:0905.3892 \[astro-ph.CO\]\]. S. Carloni, E. Elizalde and P. J. Silva, Class. Quant. Grav.  [**27**]{}, 045004 (2010) \[arXiv:0909.2219 \[hep-th\]\]. X. Gao, Y. Wang, W. Xue and R. Brandenberger, JCAP [**1002**]{}, 020 (2010) \[arXiv:0911.3196 \[hep-th\]\]. T. Kobayashi, Y. Urakawa and M. Yamaguchi, JCAP [**1004**]{}, 025 (2010) \[arXiv:1002.3101 \[hep-th\]\]. K. -i. Maeda, Y. Misonoh and T. Kobayashi, Phys. Rev. D [**82**]{}, 064024 (2010) \[arXiv:1006.2739 \[hep-th\]\]. E. N. Saridakis, Int. J. Mod. Phys. D [**20**]{}, 1485 (2011) \[arXiv:1101.0300 \[astro-ph.CO\]\]. O. Bertolami and C. A. D. Zarro, Phys. Rev. D [**84**]{}, 044042 (2011) \[arXiv:1106.0126 \[hep-th\]\]. M. Henneaux, A. Kleinschmidt and G. Lucena Gomez, Phys. Rev. D [**81**]{}, 064002 (2010) \[arXiv:0912.0399 \[hep-th\]\]. S. Mukohyama, Class. Quant. Grav. **27**, 223101 (2010) \[arXiv:1007.5199 \[hep-th\]\]. A. E. Gumrukcuoglu and S. Mukohyama, Phys. Rev. D [**83**]{}, 124033 (2011) \[arXiv:1104.2087 \[hep-th\]\]. T. Banks and W. Fischler, hep-th/0412097. T. Banks, J. Phys. A [**42**]{}, 304002 (2009) \[arXiv:0809.3951 \[hep-th\]\].
--- abstract: 'The nascent field of compressed sensing is founded on the fact that high-dimensional signals with “simple structure” can be recovered accurately from just a small number of randomized samples. Several specific kinds of structures have been explored in the literature, from sparsity and group sparsity to low-rankness. However, two fundamental questions have been left unanswered, namely: What are the general abstract meanings of “structure” and “simplicity”? And do there exist universal algorithms for recovering such simple structured objects from fewer samples than their ambient dimension? In this paper, we address these two questions. Using algorithmic information theory tools such as the Kolmogorov complexity, we provide a unified definition of structure and simplicity. Leveraging this new definition, we develop and analyze an abstract algorithm for signal recovery motivated by Occam’s Razor. Minimum complexity pursuit (MCP) requires just $O(3 \kappa)$ randomized samples to recover a signal of complexity $\kappa$ and ambient dimension $n$. We also discuss the performance of MCP in the presence of measurement noise and with approximately simple signals.' author: - 'Shirin Jalali, Arian Maleki, Richard G. Baraniuk[^1][^2]' bibliography: - 'myrefs.bib' title: | **Minimum Complexity Pursuit\ for Universal Compressed Sensing[^3]** --- Introduction {#sec:intro} ============ Compressed sensing (CS) refers to a body of techniques that undersample high-dimensional signals, and yet recover them accurately by exploiting their intrinsic “structure” or “compressibility" [@Donoho1; @CaRoTa06]. This leads to more efficient sensing systems that have proved to be valuable in many applications, including cameras [@DuDaTaLaTiKeBa08], magnetic resonance imaging (MRI) [@LuDoSaPa08] and radar [@BaSt07; @HeSt09; @AnMaBa12], to name a few. 
While the promise of CS has been to undersample “structured” signals, its premise is still limited to specific instances of “structure” such as sparsity and low-rankness. These notions are important in their own right. However, the concepts of “structure” and “compressibility” are of course much more general than these specific instances. Several interesting extensions of sparsity and low-rankness that have been proposed in the last several years are testimonies to this claim [@RichModelbasedCS; @ChRePaWi10; @VeMaBl02; @ReFaPa10; @ShCh11; @HeBa12; @HeBa11; @DoKaMe06]. The goal of this paper is to develop a general and fundamental notion of structure for recovering signals from an undersampled set of linear measurements. In particular, we aim to answer the following question: Can we recover a given “structured” signal $\mathbf{x} \in \mathbb{R}^n$ from an undersampled set of linear measurements? Note that, unlike other work in CS, the structure of the signal has not been specified. Therefore, to answer this question we introduce a [*universal*]{} notion of structure that distinguishes between “structured” and “unstructured” signals without employing any specific signal model. Towards this end, we use [*Kolmogorov complexity*]{}, which is a measure of complexity for finite-alphabet sequences introduced by Solomonoff [@Solomonoff], Kolmogorov [@KolmogorovC] and Chaitin [@Chaitin:66]. We argue that Kolmogorov complexity, if employed directly for real-valued signals, is a restricted notion of complexity and does not cover well-known structures such as sparsity. Hence, based on Kolmogorov complexity, we define the *Kolmogorov information dimension* (KID) of a real-valued signal as the growth rate of the complexity of its quantized version as the quantization becomes finer. Note that, similar to Kolmogorov complexity, KID is defined for individual sequences and is free of any signal modeling assumptions. Therefore, it provides a [*universal*]{} notion of structure.
We prove that if the KID of a signal is much smaller than its ambient dimension, then it can be recovered from fewer measurements than its ambient dimension. Furthermore, we show that the KID of many well-studied structured signals is small compared to their ambient dimension, while the KID of well-known unstructured signals is “close” to their ambient dimension. To demonstrate that approximate recovery of such structured signals is possible, we propose the *minimum complexity pursuit* (MCP) recovery algorithm. Based on Occam’s razor [@occam], MCP approximates the simplest object (in the Kolmogorov complexity sense) that satisfies the measurement constraints. Roughly speaking, we prove that MCP is able to recover a signal with “complexity” $\kappa$ using no more than $3 \kappa$ measurements. Finally, we establish the robustness of MCP to noise on both the measurements and the signal. The structure of the paper is as follows. Section \[sec:def\] describes the notation used in the paper and introduces the KID. Section \[sec:contrib\] summarizes our main contributions and their implications. Section \[sec:examp\] bounds the KIDs of several popular classes of signals in CS. Section \[sec:related\] compares our work with related papers in the literature. Section \[sec:proofs\] provides the proofs of our main results. Finally, Section \[sec:conclusion\] concludes the paper. Background {#sec:def} ========== Notation -------- Calligraphic letters such as $\Ac$ and $\Bc$ denote sets. For a set $\Ac$, $|\Ac|$ and $\Ac^c$ denote its size and its complement, respectively. For a sample space $\Omega$ and an event set $\Ac\subseteq \Omega$, $\ind_{\Ac}$ denotes the indicator function of the event $\Ac$. Boldfaced letters denote vectors. Throughout the paper, $\triangleq$ denotes equality by definition.
For a vector $\xv \in \mathds{R}^n$, $x_i$, $\|{{\bf x}}\|_p\triangleq (\sum_{i=1}^n|x_i|^p)^{1/p}$, and $\|{{\bf x}}\|_{\infty}\triangleq\max_{i}|x_i|$ denote the $i^{\rm th}$ component, $\ell_p$ norm and $\ell_{\infty}$ norm of $\xv$, respectively. For $1\leq i\leq j \leq n$, $x_i^j \triangleq (x_i,x_{i+1},\ldots,x_j)$. Also, to simplify the notation, $x^j$ denotes $x_1^j$. Uppercase letters are used for both matrices and random variables, and hence their usage will be clear from the context. For integer $n$, $I_n$ denotes the $n\times n$ identity matrix. Let $\{0,1\}^*$ denote the set of all finite-length binary sequences, $\{0,1\}^*\triangleq\cup_{n\geq 1}\{0,1\}^n$. Similarly, $\{0,1\}^{\infty}$ denotes the set of infinite-length binary sequences. For a real number $x\in[0,1]$, let $[x]_m$ denote its $m$-bit approximation that results from taking the first $m$ bits in the binary expansion of $x$. In other words, if $x=\sum_{i=1}^{\infty}2^{-i}(x)_i$, where $(x)_i\in\{0,1\}$, then $$[x]_m\triangleq\sum_{i=1}^{m}2^{-i}(x)_i.$$ Similarly, for a vector ${{\bf x}}\in[0,1]^n$, define $$[\xv]_m\triangleq ([x_1]_m,\ldots,[x_n]_m).$$ Throughout the paper, the basis of the logarithms is assumed to be ${{\rm e}}$ unless otherwise specified. Kolmogorov complexity {#sec:kolm} --------------------- The prefix Kolmogorov complexity of a finite-length binary sequence $\xv$ with respect to a universal computer ${\tt U}$ is defined as the minimum length over all programs that print $\xv$ and halt.[^4] For ${{\bf x}}\in\{0,1\}^*$, let $K_{\tt U}(\xv)$ denote the Kolmogorov complexity of sequence $\xv$ with respect to the universal computer ${\tt U}$. Given an optimal universal computer ${\tt U}$ and any computer ${\tt A}$, there exists a constant $c_{\tt A}$ such that $K_{\tt U}({{\bf x}})\leq K_{\tt A}({{\bf x}})+c_{\tt A}$, for all strings ${{\bf x}}\in\{0,1\}^*$ [@book_vitanyi; @cover]. 
This result is known as the [*invariance theorem*]{} in the field of algorithmic complexity. Note that the constant $c_{{\tt A}}$ is independent of the length of the sequence, $n$, and hence can be neglected for sufficiently long $\xv$. As suggested in [@cover], we drop the subscript ${\tt U}$, and let $K({{\bf x}})$ denote the Kolmogorov complexity of the binary string $\xv$ . For two finite alphabet sequences $\xv $ and $\yv $, $K(\xv \ |\ \yv )$ is defined as the length of the shortest program that prints $\xv $ and halts, given that the universal computer ${\tt U}$ has access to the sequence $\yv $.[^5] Similarly, the Kolmogorov complexity of an integer $n\in\mathds{N}$, $K(n)$, is defined as the Kolmogorov complexity of its binary representation. The following theorem summarizes some of the properties of the Kolmogorov complexity that will be used throughout the paper. Define $$\log^*n \triangleq \lceil \log_2 n\rceil + 2\log_2 \max(\lceil \log_2 n\rceil ,1).$$ \[thm:properties\] Let $\xv , \yv $ be binary strings of lengths $\ell({{\bf x}})$ and $\ell({{\bf y}})$, respectively. Furthermore, let $m,n \in \mathds{N}$. The Kolmogorov complexity satisfies the following properties: - $K({{\bf x}}\, | \, \ell({{\bf x}}) ) \leq \ell({{\bf x}}) + c $, - $K({{\bf x}},{{\bf y}}) \leq K({{\bf x}})+ K({{\bf y}})+c$, - $K({{\bf x}}\ | \; {{\bf y}}) \leq K(\xv)+c$, - $K({{\bf x}}) \leq K({{\bf x}}\ | \ \ell({{\bf x}}))+K(\ell({{\bf x}})) + c$, - $K(n) \leq \log^* n+c$, - $K(n+m) \leq K(n)+ K(m)+c$, where $c$ is a constant independent of ${{\bf x}}, {{\bf y}}, n$ and $m$, but might be different from one appearance to another. While the proofs of different parts of this theorem can be found in [@book_vitanyi; @cover], for the sake of completeness, we present a brief summary of the proofs in Appendix \[app:proof\_thmprop\]. Kolmogorov complexity provides a universal measure for compressibility of sequences. 
It can be proved that an infinite length binary sequence $\xv$ is “random” if and only if there exists a constant $c$ such that $$K(x_1, x_2, \ldots,x_n)> n-c$$ for all $n$. (See [@book_vitanyi] and its Theorem 3.6.1 for the exact definition of randomness and the proof of this result.) Furthermore, if the Kolmogorov complexity of $\xv$ is smaller than the ambient dimension, then it means that we can compress $\xv$ (represent it with fewer bits); the encoder returns the shortest program that has generated $\xv$ and the decoder is the universal Turing machine that generates $\xv$, from this short program. Problem statement ================= Compressed sensing versus compression ------------------------------------- Algorithmic information theory is mainly concerned with finding the shortest description of binary (or finite alphabet) sequences with respect to a universal computer. Similarly, in data compression the goal is to provide “efficient” representations of sequences, such that a decoder can recover them from their descriptions. However, in this paper we are interested in the problem of CS, where the goal is to reconstruct a signal ${{\bf x}}_o \in \mathds{R}^n$ from its lower dimensional linear projections $\yv_o = A \xv_o$, where $A \in \mathds{R}^{d \times n}$ with $d < n$. This problem has two distinguishing features. First, since the system of equations is underdetermined, perfect reconstruction is not always possible. Therefore some knowledge of the structure of $\xv_o$ is required for recovering it from the measurements $\yv_o$. Second, the problem is different from the traditional problem of algorithmic information theory that considers the compression in terms of bits. Hence, this problem requires a new perspective on the Kolmogorov complexity of real-valued signals. 
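Kolmogorov complexity itself is not computable, but any lossless compressor yields a computable upper bound on description length. The toy sketch below (our illustration, using Python's `zlib` as a crude stand-in for $K(\cdot)$; it is not part of the paper's formal development) shows a highly structured bit string compressing to a tiny fraction of its length, while a pseudorandom one needs roughly one bit per symbol, in line with the $K(x_1,\ldots,x_n) > n - c$ characterization of randomness quoted above:

```python
import random
import zlib

def proxy_complexity_bits(bits):
    """Compressed length in bits: a computable upper-bound stand-in for K(x)."""
    return 8 * len(zlib.compress(bits.encode("ascii")))

structured = "01" * 5000                      # length 10000, highly structured
random.seed(0)
unstructured = "".join(random.choice("01") for _ in range(10000))

# The structured string compresses far below its length; the pseudorandom one
# cannot be compressed much below one bit per symbol.
assert proxy_complexity_bits(structured) < 1000
assert proxy_complexity_bits(unstructured) > 8000
```

The compressed length only upper-bounds $K(\cdot)$ (up to an additive constant), which is exactly the direction needed for the KID bounds developed next.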
Kolmogorov information dimension -------------------------------- Following the ideas in algorithmic information theory, one can consider the “structure” of a binary sequence to be the shortest program that generates it [@DoKaMe06]. The shorter the program, the more structured the signal. Consider ${{\bf x}}_o \in [0,1]^n$, and define the Kolmogorov complexity of ${{\bf x}}_o$ as the Kolmogorov complexity of the binary sequence derived from the concatenation of binary expansions of the components of ${{\bf x}}_o$. Using this definition, except for a set of measure zero, all signals in $ [0,1]^n$ have infinite Kolmogorov complexity. Therefore, this notion does not capture many well-known structures for real-valued signals such as sparsity. The first step to remedy this issue is to calculate the Kolmogorov complexity of a “quantized” version of $\xv_o $. For $\xv=(x_1,x_2,\ldots,x_n)\in [0,1]^n$, define the Kolmogorov complexity of $\xv$ at resolution $m$ as $$\begin{aligned} \label{eq:quantizedKol} K^{[\cdot]_m}(\xv) \triangleq \inf_{\uv\in[0,1]^n} \left\{ K(\uv \ | \ n,\, m) \ | \ \|\xv- \uv \|_{\infty} \leq 2^{-m}\right\}.\end{aligned}$$ We can provide an upper bound for $K^{[\cdot]_m}(\xv)$ by considering certain instances of $\mathbf{u}$. For example, $ \|\xv- [\xv]_m \|_{\infty} \leq 2^{-m}$, therefore, $$K^{[\cdot]_m}(\xv) \leq K([\xv]_m \ | \ m,n).$$ Note that $K^{[\cdot]_m}(\xv)$ is defined as the Kolmogorov complexity of the “quantized” version of $\xv$ conditioned on $m$ and $n$, because it is natural to assume that the encoder and decoder have access to both the ambient dimension $n$ and the quantization level $m$. For most real-valued signals this quantity goes to infinity as $m$ approaches infinity, but the growth rate is proportional to $m$. Therefore, in this paper we consider a normalized version of the Kolmogorov complexity.
The [*Kolmogorov information dimension (KID)*]{} of $(x_1, x_2, \ldots, x_n)\in[0,1]^n$ at resolution $m$ is defined as $$\kappa_{m,n}({{\bf x}}) \triangleq \frac{K^{[\cdot]_{m}}(x_1,x_2, \ldots, x_n)}{m}.$$ In general the number of quantization levels $m$ may depend on the ambient dimension $n$. The division of $K^{[\cdot]_m}(\xv)$ by the resolution level $m$ ensures that for a fixed value of $n$ this quantity is always finite. \[lem:upperkol\] Let ${{\bf x}}\in [0,1]^n$. Then we have $$\kappa_{m,n}({{\bf x}}) \leq n + \frac{c}{m},$$ where $c$ is a positive constant independent of $m$, $n$, and $\xv$. In particular, $$\limsup_{m \rightarrow \infty} \kappa_{m,n}({{\bf x}}) \leq n.$$ [*Proof:*]{} We first note that $$\begin{aligned} K^{[\cdot]_m}(\xv) &=& \inf_{\uv\in[0,1]^n} \left\{ K(\uv \; | \; n,\, m) \ | \ \|\xv- \uv \|_{\infty} \leq 2^{-m} \right\} \nonumber \\ & \leq & K\left([\xv]_m|m,n\right).\end{aligned}$$ Now, we derive an upper bound on $K([\xv]_m|n,m)$ by providing a program that describes $[\xv]_m$ conditioned on knowing $m$ and $n$. Consider the program that first explains the structure of the sequence as consisting of $n$ $m$-bit subsequences and then identifies the bits. Since the computer has access to $m$ and $n$, a constant number of bits (independent of $m$ or $n$) is sufficient for specifying the structure, and it then requires $mn$ more bits ($m$ per component) to specify the components $[x_i]_m$. Therefore, overall $$\begin{aligned} \kappa_{m,n}({{\bf x}}) & \leq \frac{K([x_1]_m, [x_2]_m, \ldots, [x_n]_m \, | \, m,n)}{m} \leq \frac{nm+ c}{m}. \end{aligned}$$ The second part of the theorem follows directly from the first part. $\hfill \Box$ \[remark:1\] Note that the existence of a finite upper bound on $K^{[\cdot]_m}({{\bf x}})$ ensures that the infimum in \[eq:quantizedKol\] is achieved. This is due to the fact that the number of sequences $(u_1, u_2, \ldots, u_n)$ that have $K(u_1,u_2, \ldots, u_n) \leq mn+c$ is finite.
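The quantization behind $K^{[\cdot]_m}$ and the KID is plain bit truncation. Below is a minimal sketch (an illustration only; the function names are ours, not the paper's) of the $m$-bit approximation $[x]_m$ together with a check of the accuracy guarantee $\|\xv-[\xv]_m\|_\infty \leq 2^{-m}$:

```python
import math

def truncate_bits(x, m):
    """[x]_m: keep the first m bits of the binary expansion of x in [0, 1)."""
    return math.floor(x * 2 ** m) / 2 ** m

def truncate_vec(xs, m):
    """Componentwise m-bit truncation of a vector in [0, 1)^n."""
    return [truncate_bits(x, m) for x in xs]

# 0.625 = 0.101 in binary; keeping two bits gives 0.10 = 0.5.
assert truncate_bits(0.625, 2) == 0.5
# The truncation error always lies in [0, 2^-m), so the sup-norm bound holds.
assert all(0 <= x - truncate_bits(x, 8) < 2 ** -8 for x in [0.1, 0.7, 0.999])
```

In the lemma's counting argument, describing `truncate_vec(xs, m)` verbatim costs exactly $m$ bits per component, which is where the $nm$ term in the upper bound comes from.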
In the rest of the paper we denote the minimizing vector by $\phi_m({{\bf x}})$, i.e., $$\begin{aligned} \phi_m({{\bf x}}) \triangleq \arg \min_{\uv \in [0,1]^n} \left\{ K(\uv \ | \ n,\, m) \ | \ \|\xv- \uv \|_{\infty} \leq 2^{-m}\right\}.\label{eq:def-of-phi-m} \end{aligned}$$ The following examples clarify some of the properties of the KID. \[ex:sparse\] (Sparse signals) Consider a $k$-sparse signal ${{\bf x}}\in [0,1]^n$. That is, $\xv$ has at most $k$ nonzero coefficients. For any given $\d >0$, the KID of $\xv$ at resolution $m$, for large enough values of $m$, is upper bounded by $2k(1 + \delta)$. See Section \[sec:sparsity\] for the proof of this claim. \[ex:lowrank\] (Low-rank matrices) Let $X$ denote an $M\times N$ real-valued matrix of rank at most $r$ such that $\sigma_{\rm max}(X)\leq 1$.[^6] For any given $\delta>0$, the KID of $X$ at resolution $m$ is upper bounded by $r(M+N+1)(1+ \delta)$, for sufficiently large values of $m$. See Section \[sec:lowrank\] for the proof of this claim. Let $U[a,b]$ denote the uniform distribution between $a$ and $b$. Also, let $X \sim {\rm Bern}(p)$ represent a Bernoulli random variable with $\P(X=1) = 1- \P(X=0)=p$. The following proposition lets us construct the third example that represents an unstructured signal. \[prop:uniform\] Let $\{X_i\}_{i=1}^{\infty} \overset{i.i.d.}{\sim} U[0,1]$. Then, for any $n\geq 1$, $$\lim_{m\to\infty}\frac{1}{mn}K^{[\cdot]_m}(X_1, X_2, \ldots, X_n) = 1$$ in probability. [*Proof:*]{} For $i\in\{1,2,\ldots\}$, let $X_i=\sum_{j=1}^{\infty} (X_i)_j2^{-j}$, where $(X_i)_j\in\{0,1\}$. Then $\{(X_i)_j\}_{j=1}^{\infty} \overset{i.i.d.}{\sim} \Bern(1/2)$ [@Marsaglia12]. Let $U^n\triangleq\phi_m(X^n)$. Since $|U_{i}-X_i|\leq2^{-m}$, we have $(X_i)_j = (U_i)_j$ for $j<m-1$.
Therefore, $$\begin{aligned} \label{eq:Kl1} \frac{K(U^n\ | \ m, \, n )}{m} & \geq \frac{K({\{((U_i)_1,\ldots,(U_i)_m) \}_{i=1}^{n}} \ | \ m, \, n )-c}{m} \nonumber \\ & = \frac{K({\{((X_i)_1,\ldots,(X_i)_m) \}_{i=1}^{n}} \ | \ m, \, n )-c}{m}.\end{aligned}$$ Theorem 14.5.3 in [@cover] states that the normalized Kolmogorov complexity of a sequence of i.i.d. $\Bern(1/2)$ bits converges to $1$ in probability. In other words, $$\begin{aligned} \lim_{m\to\infty}{K(\{(X_i)_1,(X_i)_2,\ldots,(X_i)_m\}_{i=1}^n\, | \, m, n) \over mn}= 1,\label{eq:K1}\end{aligned}$$ in probability. Therefore, combining \[eq:Kl1\], Lemma \[lem:upperkol\] and \[eq:K1\] yields the desired result. $\hfill \Box$\ \[example:uniform\] If the random variables $\{X_i\}_{i=1}^{n} \overset{i.i.d.}{\sim} U[0,1]$, then $$\lim_{m \rightarrow \infty} \frac{K^{[\cdot]_m}(X_1, X_2, \ldots, X_n)}{m} = n$$ in probability. The proof follows directly from Proposition \[prop:uniform\]. These examples demonstrate that, at least in cases where the ambient dimension is fixed and the quantization levels grow without bound, the KID is much smaller than the ambient dimension for the two well-known structured signals in Examples \[ex:sparse\] and \[ex:lowrank\], and is equal to the ambient dimension for the unstructured signal in Example \[example:uniform\]. We present more examples of structured signals and the corresponding upper bounds on their KID in Section \[sec:examp\]. Minimum complexity pursuit -------------------------- Consider the problem of recovering a structured real-valued signal $\xv_o=(x_{o,1},x_{o,2},\ldots)$ with $\kappa_{m,n}(x_o^n) = O(n^{1-\alpha})$, for some $\alpha>0$ and proper choice of $m$, from an underdetermined set of linear equations $\yv_o = A\xv_o $, where $\yv_o \in \mathds{R}^d$ and $d<n$.
We follow Occam’s Razor and among all the solutions of $\yv_o = A\xv_o $, seek the solution that has the minimum complexity, i.e., $$\begin{aligned} &&\arg \min \quad K^{[\cdot]_m}(\xv)\nonumber \\ &&{\rm s.t.}\quad \ \ \;\;\;\; A\xv = \yv_o.\label{eq:alg}\end{aligned}$$ We call this algorithm [*minimum complexity pursuit*]{} or MCP. MCP has a free parameter $m$ whose effect on the performance of the algorithm will be discussed in detail later. We will show that MCP can recover $\xv_o $ from fewer measurements than the ambient dimension of the signal. This result extends the scope of CS from the class of sparse signals or the class of low-rank matrices to the class of all signals with small KID. In this paper we ignore the practical issues of approximating the MCP algorithm. In an independent work, [@BaDu11; @BaDu12] have considered a practical version of this algorithm and provided promising results in that direction. Note that the model that is considered in [@BaDu11; @BaDu12] is restricted to stochastic signals that are drawn from an unknown distribution. Such restrictions might be required for obtaining practical algorithms. Further investigation of the practical issues is left as an avenue for future research. Our contributions {#sec:contrib} ================= Recovery in the noiseless setting {#sec:A} --------------------------------- Suppose that $A\in\mathds{R}^{d\times n}$, $\xv_o\in\mathds{R}^n$ and $\yv_o=A\xv_{o}$. We are interested in recovering $\xv_o$ from its linear measurements $\yv_o$. Let ${\mathbf{\hat{x}}}_o={\mathbf{\hat{x}}}_o({\yv }_o,A)$ denote the output of \[eq:alg\] to the inputs $\yv_o$ and $A$. The following theorem states that, given enough measurements, \[eq:alg\] succeeds in recovering $\xv_o$. \[thm:1\] Let ${{\bf x}}_o\in[0,1]^n$, and let $\kappa_{m,n}=\kappa_{m,n}({{\bf x}}_o)$ denote the information dimension of ${{\bf x}}_o$ at resolution $m$.
Also, let ${{\bf \hat{x}}}_{o}$ denote the solution of \[eq:alg\] applied to $\yv_o=A{{\bf x}}_o$, where $A_{ij}$ are i.i.d. $\Nc(0,1)$. Then, for any $t\in(0,1)$, we have $$\begin{aligned} \P &\left(\| {{\bf x}}_{o}-{\mathbf{\hat{x}}}_{o}\|_2> \left({1\over\sqrt{1-t}}\left(\sqrt{n\over d}+2\right) +1\right){\sqrt{n}\over 2^m}\right) \nonumber \\ &\leq 2^{ \kappa_{m,n} m} {\rm e}^{\frac{d}{2} (t +\log(1-t) )} + {\rm e}^{- \frac{d}{2} }. \end{aligned}$$ The proof is presented in Section \[sec:proofthm1\]. Note that $\kappa_{m,n}$ in Theorem \[thm:1\] is a function of both $m$ and $n$. Next, we consider several interesting corollaries of this theorem for high dimensional problems. \[cor:noiseless\_unnormerror\] Assume that ${{\bf x}}_o\in[0,1]^{n}$ and $m = \lceil \log n \rceil$. Let $\kappa_n\triangleq \kappa_{m,n}(x_o^n)$ and $d = \lceil \kappa_n \log n\rceil$. Assume that $d\leq n$. Then, $$\P\Big(\|{{\bf x}}_{o}-{\mathbf{\hat{x}}}_{o}\|_2> {20\over \sqrt{d}} \Big) < 2{{\rm e}}^{-{d\over 2}}.$$ [*Proof:*]{} For $m= \lceil \log n \rceil$, $2^{-m}\sqrt{n}\leq n^{-0.5}$. Choosing $t=0.965$, we get $$\begin{aligned} \left((1-t)^{-0.5}\left(\sqrt{nd^{-1}}+2\right) +1\right)2^{-m}\sqrt{n} &\leq {1\over \sqrt{(1-t)d}}+{1+2(1-t)^{-0.5}\over \sqrt{n}} \nonumber\\ &\leq {20\over \sqrt{d}},\end{aligned}$$ where the last step follows since $d\leq n$. Therefore, by Theorem \[thm:1\], $$\begin{aligned} \P\left(\|x_{o}^n-\xh_{o}^n\|_2> {20 \over \sqrt{d}} \right) &\leq 2^{ \kappa_{m,n} m} {\rm e}^{\frac{d}{2} (t +\log(1-t) )} + {{\rm e}}^{-{d\over 2}}\nonumber\\ &\leq 2 {\rm e}^{- \frac{d}{2}}.\end{aligned}$$ $\hfill \Box$ According to Corollary \[cor:noiseless\_unnormerror\], if the complexity of the signal is less than $\kappa$, then the number of linear measurements required for its asymptotically perfect recovery is, roughly speaking, of order $\kappa \log n$.
In other words, the number of measurements is proportional to the complexity of the signal and only logarithmically proportional to its ambient dimension. \[cor:noiseless\_normerror\] Assume that ${{\bf x}}_o\in[0,1]^{n}$, $m =2 \lceil \log n \rceil$ and $\kappa_n=\kappa_{m,n}(x_o^n)$. Then, for $d = \lceil 3 \kappa_n \rceil$, we have $$\P\left(\|{{\bf x}}_{o}-{{\bf \hat{x}}}_{o}\|_2> {4\over d} \right) <{{\rm e}}^{-0.1 \kappa_n\log n}+{{\rm e}}^{-0.5d} .$$ [*Proof:*]{} Setting $t=1-{1\over n}$, $m = 2\lceil \log n \rceil$, and $d=\lceil 3 \kappa_n \rceil$, we have $$\begin{aligned} 2^{ \kappa_{m,n} m} {\rm e}^{\frac{d}{2} (t +\log(1-t) )}&\leq 2^{2 \kappa_{n} \log n} {\rm e}^{1.5 \kappa_n(1-\log n )}\nonumber\\ &<{{\rm e}}^{-0.1 \kappa_n\log n},\end{aligned}$$ for $n$ large enough. Also, $$\begin{aligned} \left((1-t)^{-0.5}\left(\sqrt{nd^{-1}}+2\right) +1\right)2^{-m}\sqrt{n} &\leq {1+\sqrt{n}(2+\sqrt{n/d})\over n\sqrt{n}}\nonumber\\ &<{3\over n} + {1\over \sqrt{nd}} <{4\over d}.\end{aligned}$$ $\hfill \Box$ It is worth noting that, while $m$ is set to $O(\log n)$ in Corollaries \[cor:noiseless\_unnormerror\] and \[cor:noiseless\_normerror\], it can be considered as a free parameter of the MCP algorithm. Theorem \[thm:1\] describes the trade-off between the parameters. If we fix all the other parameters in Theorem \[thm:1\], then increasing $m$ decreases the reconstruction mean square error, but it also decreases the probability of correct recovery. Recovery in the presence of Gaussian noise in measurements {#ssec:stochnoise} ---------------------------------------------------------- In the previous section, we considered the case of recovering low-complexity signals from their noise-free linear measurements. In this section, we extend these results to the case of noisy measurements, where $\yv_o = A\xv_o + {\mathbf{w}}$, with ${\mathbf{w}}\sim \Nc(0, \sigma^2 I_d)$.
Assuming that the complexity of the signal is known at the reconstruction stage, we consider the following reconstruction algorithm: $$\begin{aligned} \label{eq:recover_noisy} && \arg \min \;\;\; \|A{{\bf x}}-\yv_o\|_2, \nonumber \\ &&{\rm s.t.}\;\;\;\;\;\;\;\;\; \ K^{[\cdot]_{m}}({{\bf x}}) \leq \kappa_{m,n} m.\end{aligned}$$ Note that $\kappa_{m,n} m$ is an upper bound on the Kolmogorov complexity of $\xv_o $ at resolution $m$. We call this algorithm [*low-complexity least squares*]{} (LLS). Our quest in this section is to find the number of measurements required to make the LLS algorithm specified by \[eq:recover\_noisy\] robust to noise. \[thm:noisysetting\] Consider ${{\bf x}}_o\in[0,1]^{n}$. Let $m=\lceil\log n\rceil $, $\kappa_n=\kappa_{m ,n}(x_o^n)$ and $d=\lceil8r\kappa_{n}m\rceil$, where $r>1$. Also let ${{\bf \hat{x}}}_o$ denote the solution of LLS to input $\yv_o=A\xv_o+{\mathbf{w}}$, where $\{A_{ij}\}_{i,j}$ are i.i.d. distributed as $\Nc(0,1)$ and $\{w_i\}_i$ are i.i.d. distributed as $\Nc(0,\sigma^2)$. Then, $$\begin{aligned} \label{eq:mse_noisy} \P\left(\|{{\bf x}}_{o}-{{\bf \hat{x}}}_{o}\|_2^2 > {9\sigma^2\over r} \right) < 6{{\rm e}}^{-0.01d}+ {{\rm e}}^{-0.3m\kappa_n},\end{aligned}$$ for $d$ and $n$ large enough and $\sigma>0$. The proof is presented in Section \[sec:proofthmnoisy\]. Note that, since the elements of the matrix $A$ are i.i.d. $\Nc(0,1)$, as the ambient dimension $n$ grows, so does the signal-to-noise ratio (SNR) per measurement. In order to have a fixed SNR per measurement, one can draw the elements of $A$ i.i.d. from $\Nc(0, 1/n)$. In this case, it is not difficult to see that the normalized mean square error $\|{{\bf x}}_{o}-{{\bf \hat{x}}}_{o}\|_2^2/n \leq {9\sigma^2 \over r}$, in probability.
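On a toy discrete problem, the LLS optimization above can be carried out by brute force: minimize the residual over all candidates whose complexity stays within the budget. The sketch below is an illustration only (all names are ours, and the number of nonzero entries is a crude stand-in for the uncomputable $K^{[\cdot]_m}$):

```python
import itertools
import math

def lls_toy(A, y, alphabet, complexity, budget):
    """Brute-force 'low-complexity least squares' on a tiny grid:
    minimize ||A x - y||_2 over all x with complexity(x) <= budget."""
    n = len(A[0])
    def residual(x):
        return math.sqrt(sum((sum(a * v for a, v in zip(row, x)) - yi) ** 2
                             for row, yi in zip(A, y)))
    candidates = (x for x in itertools.product(alphabet, repeat=n)
                  if complexity(x) <= budget)
    return min(candidates, key=residual)

A = [[1, 1, 0, 0],
     [0, 1, 1, 1]]
y = [3.1, 2.9]   # noisy measurements of x = (0, 3, 0, 0)

# Nonzero count stands in for the (uncomputable) Kolmogorov complexity.
x_hat = lls_toy(A, y, alphabet=range(4),
                complexity=lambda x: sum(v != 0 for v in x), budget=1)
assert x_hat == (0, 3, 0, 0)
```

Despite only $d=2$ measurements of an $n=4$ signal, the complexity constraint singles out the true sparse vector; with the constraint removed, the underdetermined system would admit many consistent candidates.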
Recovery in the presence of deterministic noise {#sec:result_deterministic} ----------------------------------------------- Consider again the measurement system we introduced in the last section: $\yv_o = A\xv_o + {\mathbf{w}}$, where ${\mathbf{w}}$ represents measurement noise. Unlike the previous section, assume that the noise is deterministic and has bounded $\ell_2$-norm, i.e., $\|{\mathbf{w}}\|_2 \leq e$. This type of noise provides a good model for quantization noise on the measurements, among other practical nonidealities. Note that unlike the case of stochastic noise, deterministic noise can be adversarial. We prove that the LLS algorithm provides a sufficiently accurate estimate of $\xv_o $ even in the presence of such noise. \[thm:3\] Let ${{\bf x}}_o=(x_{o,1},\ldots,x_{o,n})\in[0,1]^n$ and $\yv_o = A\xv_o + {\mathbf{w}}$, where $\|{\mathbf{w}}\|_2 \leq e$. Let $\kappa_{m,n}=\kappa_{m,n}({{\bf x}}_o)$ denote the information dimension of ${{\bf x}}_o$ at resolution $m$. Then, for any $t\in(0,1)$, we have $$\begin{aligned} \lefteqn{\P \left(\|{{\bf x}}_{o}-{\bf \xh}_{o}\|_2> \left({1\over \sqrt{1-t}}\Big(\sqrt{n\over d}+2\Big) +1\right)2^{-m}\sqrt{n}+ \frac{e}{\sqrt{(1-t)d}}\right)} \nonumber \\ &\leq& 2^{ \kappa_{m,n} m} {\rm e}^{\frac{d}{2} (t +\log(1-t) )} + {\rm e}^{- \frac{d}{2} }. \hspace{4cm} \nonumber\end{aligned}$$ Since the proof of this theorem is very similar to the proof of Theorem \[thm:1\], it is not included in the paper. Here the probability of accurate recovery is the same as in Theorem \[thm:1\], and under similar conditions this probability converges to one. The reconstruction error has two terms. The first term is again similar to Theorem \[thm:1\] and under similar conditions converges to zero. The second term in the error, $\frac{e}{\sqrt{(1-t) d}}$, is due to the noise in the measurements. As the number of measurements increases, $\frac{e}{ \sqrt{(1-t) d}}$ converges to zero. 
This is due to the fact that, since $A_{i,j} \sim \Nc(0,1)$, the energy of the signal per measurement stays fixed as we increase the number of measurements, while the total energy of the noise is held constant, so the average noise per measurement decreases like $1/\sqrt{d}$. Recovery of approximately low-complexity signals {#sec:result_approx} ------------------------------------------------ In Sections \[sec:A\]-\[sec:result\_deterministic\], we considered recovering “low-complexity” signals from their linear (noisy or noise-free) projections. However, most applications feature signals that are not exactly of low complexity but rather are “close” to low-complexity signals. An example is the class of power-law “compressible” signals, discussed in Section \[sec:power\], which are a popular model in the CS literature and are more realistic than sparse signal models. In this section, we discuss this more general setting. Assume that the original signal $\xv_o $ is not low-complexity but is close to the low-complexity signal ${\mathbf{\tilde{x}}}$, i.e., $\|\xv_o - {\mathbf{\tilde{x}}}\|_2 \leq \epsilon_n$ with $\epsilon_n = o(1)$. Again, let $\yv_o=A\xv_o$. Consider the following reconstruction algorithm for recovering $\xv_o $ from its linear measurements $\yv_o$: $$\begin{aligned} &&\min\quad \|\yv_o-A{{\bf x}}\|_2^2 \nonumber\\ &&{\rm s.t.}\quad \ \ K^{[\cdot]_m}(\xv) \leq \kappa_{m,n} m .\label{eq:alg_model_mismatch}\end{aligned}$$ Assume that $A\in\mathds{R}^{d\times n}$ and $A_{ij}$ are i.i.d. $\Nc(0,1)$. Let ${\mathbf{\hat{x}}}_o={\mathbf{\hat{x}}}_o(\yv_o,A)$ denote the solution of \[eq:alg\_model\_mismatch\]. \[thm:4\] Assume that there exists ${\mathbf{\tilde{x}}}_o\in\mathds{R}^n$ such that $\|\xv_o -{\mathbf{\tilde{x}}}_o\|_2 \leq \epsilon_n$, and ${K^{[\cdot]_m}({\mathbf{\tilde{x}}}_o)}\leq \kappa_{m,n}m$. Let $\yv=A\xv_o$, where $A$ is a $d\times n$ matrix with i.i.d. $\Nc(0,1)$ entries, and let ${\mathbf{\hat{x}}}_o$ denote the minimizer of \[eq:alg\_model\_mismatch\].
Then, for any $0<t<1$, $$\begin{aligned} \P& \Big(\|{{\bf x}}_{o}-{\mathbf{\hat{x}}}_{o}\|_2> {1\over \sqrt{1-t}}(\sqrt{n\over d}+2)(2^{-m}\sqrt{n}+2\e_n) +2^{-m}\sqrt{n}\Big) \nonumber \\ &\leq 2^{ \kappa_{m,n} m} {\rm e}^{\frac{d}{2} (t +\log(1-t) )} + {\rm e}^{- \frac{d}{2} }. \hspace{4cm} \label{eq:8}\end{aligned}$$ The proof is presented in Section \[sec:proof2\]. There are two main error terms in \[eq:8\]. The first one is the reconstruction error due to the quantization performed in the calculation of Kolmogorov complexity. The second term is due to the fact that the signal $\xv_o $ is not exactly of low complexity. The following corollary simplifies the statement of the theorem for some special useful cases. \[cor:approxsparse\] Consider ${{\bf x}}_o \in [0,1]^n$ and assume that there exists ${\mathbf{\tilde{x}}}_o \in [0,1]^{n}$, such that $\|{{\bf x}}_o-{\mathbf{\tilde{x}}}_o\|\leq\e_n$. Let $m= \lceil \log n\rceil$ and $\kappa_n=\kappa_{m,n}({\mathbf{\tilde{x}}}_o)$, $d = \lceil \kappa_n \log n \rceil$, $\yv_o=A{{\bf x}}_o$, where $A$ is a $d\times n$ matrix with i.i.d. $\Nc(0,1)$ entries, and ${{\bf \hat{x}}}_o=\operatorname*{arg\,min}_{K^{[\cdot]_m}(\xv) \leq \kappa_{n} m}\|\yv_o-A\xv\|$. Then, $$P\Big(\|{{\bf x}}_o - {{\bf \hat{x}}}_o\|_2 > 25\e_n\sqrt{n\over d}\;\Big) < 2{{\rm e}}^{-0.5d},$$ for $d<n$ large enough. [*Proof:*]{} Setting $t=0.965$, $$\begin{aligned} 2^{ \kappa_{m,n} m} {\rm e}^{\frac{d}{2} (t +\log(1-t) )} + {\rm e}^{- \frac{d}{2} } &< 2^{d} {\rm e}^{\frac{d}{2} (0.965 +\log 0.035 )} + {\rm e}^{- \frac{d}{2} } \nonumber\\ &< 2{{\rm e}}^{-0.5d}. \end{aligned}$$ Also, for the same value of $t$ and $m=\lceil\log n\rceil$, $$\begin{aligned} {1\over \sqrt{1-t}}(\sqrt{n\over d}+2)(2^{-m}\sqrt{n}+2\e_n) +2^{-m}\sqrt{n}& < 6(\sqrt{n\over d}+2)({1\over\sqrt{n}}+2\e_n) +{1\over \sqrt{n}}\nonumber\\ &< 25\e_n\sqrt{n\over d},\end{aligned}$$ where in the last step we used that $d<n$ and that both are large enough.
$\hfill \Box$ Other measurement matrices -------------------------- For the sake of clarity, the results presented so far have focused on i.i.d. Gaussian measurement matrices. However, the results can be extended to the more general class of i.i.d. subgaussian matrices. A random variable $X$ is called [*subgaussian*]{} if and only if there exist two constants $c_1, c_2>0$ such that, for all $t>0$, $$\P(|X|> t) \leq c_1 {\rm e}^{-c_2 t^2}.$$ Such a random variable is denoted by $ {\rm SG}(c_1,c_2)$. Our goal in this section is to show how our results can be extended to the problem of CS with i.i.d. subgaussian measurement matrices. Our main conclusion is that the results presented for Gaussian matrices continue to hold for subgaussian matrices except for slight changes in the constants. However, as will be discussed later in Section \[sec:proofsubgauss\], the proof techniques are different from those for Gaussian matrices. To show these differences we extend the result of Theorem \[thm:1\] to subgaussian matrices. Similar arguments can be used for other extensions. As before, we consider the problem of recovering $\xv_o $ from linear measurements $\yv_o = A \xv_o $, where the elements of the matrix are i.i.d. ${\rm SG}(c_1,c_2)$. \[thm:sub-Gaussian\] Let ${{\bf x}}_o\in[0,1]^{n}$. For integers $m$ and $n$, let $\kappa_{m,n}=\kappa_{m,n}({{\bf x}}_o)$. Assume that $\yv_o=A{{\bf x}}_o$, where $A$ is a $d\times n$ matrix, such that its entries are i.i.d. distributed as ${\rm SG}(c_1,c_2)$, and $\E[A_{ij}]=0$ and $\E[A_{ij}^2]=1$. Then, there exist three constants $c'_1$, $c'_2$, and $c_3$ depending only on $c_1$ and $c_2$ such that for any $1- \frac{c_3}{c_2}<\tau <1$ $$\begin{aligned} \lefteqn{\P \left(\|{{\bf x}}_{o}-{\mathbf{\hat{x}}}_{o}\|_2> ({ \tau^{-1}(\sqrt{ (c'_2+1){n}/{d}}+1) +1} )2^{-m}\sqrt{n}\right)} \nonumber \\ &\leq& 2^{2 \kappa_{m,n} m} {\rm e}^{- \frac{dc_2^2(\tau^2-1)^2}{16 c_3}} +{\rm e}^{- c'_1 n}.
\hspace{4cm} \nonumber\end{aligned}$$ Theorem \[thm:sub-Gaussian\] shows that, by choosing $m = \lceil \log n \rceil$, $O(\kappa_{m,n} \log n)$ measurements remain sufficient for asymptotically accurate recovery. But, as expected, the constants might be different from those in Theorem \[thm:1\]. Discussion {#sec:discussion} ---------- The LLS algorithms proposed in and , corresponding to the cases when noise is present either in the signal or in the measurements, both assume the knowledge of an upper bound on the complexity of the signal. While such knowledge might be available or estimated in some applications, in many cases it is not straightforward to acquire it. In those cases, one might change the formulation of the MCP as follows: $$\begin{aligned} &&\operatorname*{arg\,min}\quad K^{[\cdot]_m}({{\bf x}})\nonumber \\ &&{\rm s.t.}\quad \ \ \;\;\;\; \|A{{\bf x}}- {\yv }_o\|_2 \leq z_n.\label{eq:MCP-zn}\end{aligned}$$ We call this new algorithm [*relaxed MCP*]{} or R-MCP. In this new optimization problem the challenge is to set parameter $z_n$ properly. The value of this parameter should be set according to the noise level present in the system. For instance, if we employ $z_{n} = (\sqrt{n}+ (t+1)\sqrt{d})\epsilon_n$ and $z_n = e$ for the approximately low-complexity signals case (corresponding to Section \[sec:result\_approx\]) and exactly sparse signal in the presence of deterministic noise (corresponding to Section \[sec:result\_deterministic\]), respectively, then we obtain results that are exactly the same as those stated in Theorems \[thm:3\] and \[thm:4\]. Since the proofs are very similar to the proofs of Theorems \[thm:3\] and \[thm:4\], we skip them here. In the case of stochastic noise (corresponding to Section \[ssec:stochnoise\]), it is not clear if this new formulation provides a bound similar to Theorem \[thm:noisysetting\]. This problem is deferred to future research. 
Kolmogorov dimension of certain classes of functions {#sec:examp} ==================================================== In this section, we explore the implications of our results for several signal classes to which CS has been successfully applied. We show that the number of measurements MCP requires for accurate recovery is of the same order as that of other well-known recovery algorithms. To achieve this goal, we need to calculate the KID of certain signals. It is well known that the Kolmogorov complexity of a sequence is not computable (see [@cover], Section 14.7). However, it is often possible to provide upper bounds on the Kolmogorov complexity. In this section, we consider several standard classes of functions and provide upper bounds on their KID. Based on these upper bounds, one can use Theorems \[thm:1\] and \[thm:4\] to calculate the number of linear measurements required by the MCP to recover them. These examples demonstrate the connection between the results of Section \[sec:contrib\] and the CS framework explained in the Introduction. Sparse signals {#sec:sparsity} -------------- A class of signals that has played a key role in CS is the class of $k$-sparse signals. The following proposition provides an upper bound on the KID of such signals. \[prop:sparse\] Let the signal $\xv_o =(x_{o,1}, x_{o,2}, \ldots,x_{o,n})$ be $k$-sparse, $\|\xv_o\|_0\leq k$. Then $$\kappa_{m,n}(\xv_o ) \leq k+ \frac{nh({k\over n})+0.5\log n+c}{m}.$$ [*Proof:*]{} Consider the following program for describing $[\xv_o ]_m$. First, use a program of constant length to describe the structure of the signal as “sparse”, the ordering of the rest of the information, the length of the sequence, and the resolution.[^7] Next, spend $nh({k\over n})+0.5\log n+c'$ bits, where for $\a\in[0,1]$, $h(\a)\triangleq -\a\log_2\a-(1-\a)\log_2(1-\a)$, to code a string of length $n$ that contains the locations of the $k$ non-zero elements [@cover].
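The entropy term in this count can be sanity-checked against the exact support-coding cost $\log_2 \binom{n}{k}$. The snippet below is only an illustrative check of the standard bound $\log_2\binom{n}{k} \leq nh(k/n)$; the remaining $0.5\log n + c'$ bits absorb lower-order terms:

```python
import math

def h(a):
    """Binary entropy function h(a) in bits."""
    if a in (0.0, 1.0):
        return 0.0
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

# Listing which k of the n coordinates are non-zero takes
# ceil(log2 C(n, k)) bits, and log2 C(n, k) <= n h(k/n).
for n, k in [(100, 5), (10**4, 100), (10**6, 50)]:
    support_bits = math.log2(math.comb(n, k))
    assert support_bits <= n * h(k / n)
```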
Finally, use $km$ more bits to describe the quantized magnitudes of the non-zero coefficients. Therefore, overall, we have $$\begin{aligned} \label{eq:kc_poly} \lefteqn{K^{[\cdot]_{m}}(x_{o,1},x_{o,2},\ldots,x_{o,n}) }\nonumber \\ &\leq& km+ {nh\Big({k\over n}\Big)+0.5\log n+c},\end{aligned}$$ where $c$ is a constant independent of $\xv_o$, $m$ and $n$. $\hfill \Box$ In most of our analysis in this paper we consider the case of $m = \lceil \log n\rceil$. It is straightforward to confirm that in this case, for $n,k$ sufficiently large and $k \ll n$ we have $$\kappa_{m,n}(\xv_o ) \leq k+{n\over \log n}h\Big({k\over n}\Big)+1 \leq 2k(1+ \delta),$$ where $\delta$ is a small fixed number. It is straightforward to plug this upper bound in Corollary \[cor:noiseless\_normerror\] and prove that, for large values of $n$, $6k(1+ \delta)$ measurements are sufficient for the “successful” recovery of $k$-sparse signals. This is still larger than $2k$ measurements required by the $\ell_0$ minimization. However, the source of the discrepancy is not clear to the authors at this point. Power law compressible signals {#sec:power} ------------------------------ While sparse signals have played an important role in the theory of CS, it is well-known that they rarely occur in practice. More accurate models assume that either the signal’s coefficients decay at a specified rate, or the signal belongs to an $\ell_p$ ball with $p<1$ [@Donoho1], i.e., the signal belongs to the set $$\mathcal{B}_p^n \triangleq \left\{ {{\bf x}}\in \mathds{R}^n \ : \ \| {{\bf x}}\|_p \leq 1 \right\}.$$ For ${{\bf x}}_o\in \mathcal{B}_p^n$, let $(x_{o,(1)},x_{o,(2)}, \ldots,x_{o,(n)})$ denote the permuted version of ${{\bf x}}_o$ such that $|x_{o,(1)}|\geq |x_{o,(2)}|\geq \ldots \geq |x_{o,(n)}|$. It is straightforward to show that $|x_{o,(i)}| \leq i^{-\frac{1}{p}}$, i.e., it is power law compressible. 
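The decay claim is a one-line consequence of $i\,|x_{o,(i)}|^p \le \sum_j |x_{o,j}|^p \le 1$, and it can be verified numerically. A small sketch (a random vector rescaled onto the $\ell_p$ sphere; the parameters are illustrative):

```python
import math
import random

random.seed(0)
p, n = 0.5, 1000

# Rescale a random vector so that its ell_p quasi-norm equals 1.
x = [random.uniform(-1, 1) for _ in range(n)]
scale = sum(abs(v) ** p for v in x) ** (1 / p)
x = [v / scale for v in x]

mags = sorted((abs(v) for v in x), reverse=True)
# Sorted coefficients decay at least as fast as i^{-1/p}.
assert all(mags[i - 1] <= i ** (-1 / p) + 1e-12 for i in range(1, n + 1))

# Tail of the best k-term approximation (used just below):
k = 25
tail = math.sqrt(sum(v ** 2 for v in mags[k:]))
assert tail <= k ** (0.5 - 1 / p)
```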
Therefore, if we just keep the $k$ largest coefficients of this signal and set the rest to zero, the resulting $k$-sparse vector ${\mathbf{\tilde{x}}}_o$ satisfies $\|{{\bf x}}_o- {\mathbf{\tilde{x}}}_o\|_2 \leq k^{-\frac{1}{p}+\frac{1}{2}} $. In Section \[sec:sparsity\], we derived an upper bound for the KID of $\tilde{{{\bf x}}}_o$. Proposition \[prop:6\] follows from this bound and Corollary \[cor:approxsparse\]. \[prop:6\] Let ${{\bf x}}_o \in \Bc_p^{n}$, ${\yv }_o=A{{\bf x}}_o$, where $A$ is a $d\times n$ random matrix with i.i.d. $\Nc(0,1)$ entries. Set $d = \lceil 3n^{p/2}\log n \rceil$. Let ${{\bf \hat{x}}}_o$ denote the minimizer of with $m = \lceil \log n \rceil$ and $\kappa_{m,n} = 3n^{p/2}$. Then, $$\P\Big(\|{{\bf x}}_{o}-{{\bf \hat{x}}}_{o}\|_2> {7\over \sqrt{\log n}}\Big) \leq 2 {\rm e}^{- 0.5d },$$ for sufficiently large $n$. [*Proof:*]{} Let ${\mathbf{\tilde{x}}}_o$ denote the $k$-sparse approximation of ${{\bf x}}_o$ derived by keeping the $k = n^{p/2}$ largest coefficients of ${{\bf x}}_o$, and setting the rest to zero. Then, $\|{{\bf x}}_o-{\mathbf{\tilde{x}}}_o\|_2\leq \e_n= n^{-\frac{1}{2}+\frac{p}{4}} $. According to Proposition \[prop:sparse\], for $n$ large enough, the KID of ${\mathbf{\tilde{x}}}_o$ at resolution $m = \lceil \log n \rceil$ is upper bounded by $$k+ \frac{nh({k\over n})+0.5\log n+c}{\log n} <2k(1 + \d),$$ where $\d>0$ can be made arbitrarily small for $n$ large enough. Setting $\delta = 0.5$, we obtain $\kappa_{m,n}({\mathbf{\tilde{x}}}_o) \leq 3 n^{\frac{p}{2}}$. Also, for $t= 0.965$, $$\begin{aligned} {1\over \sqrt{1-t}}(\sqrt{n\over d}+2)(2^{-m}\sqrt{n}+2\e_n) +2^{-m}\sqrt{n} <{7\over \sqrt{ \log n}},\end{aligned}$$ for $d<n$ large enough. Therefore, Theorem \[thm:4\] yields the desired result. $\hfill \Box$ It is interesting to note that, as the power $p$ decreases, the number of measurements required for successful recovery decreases.
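This monotonicity in $p$ is immediate from $d = \lceil 3n^{p/2}\log n\rceil$; a quick sketch (taking $\log$ base $2$, matching $m = \lceil \log n \rceil$, is an assumption of this snippet):

```python
import math

def measurements(n, p):
    """d = ceil(3 n^{p/2} log n) from Proposition [prop:6] (log base 2 assumed)."""
    return math.ceil(3 * n ** (p / 2) * math.log2(n))

n = 10**6
ds = [measurements(n, p) for p in (0.9, 0.7, 0.5, 0.3)]
assert ds == sorted(ds, reverse=True)  # fewer measurements as p decreases
assert all(d < n for d in ds)          # and d is sublinear in n
```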
Piecewise polynomial functions {#sec:polynomial} ------------------------------ Let ${\rm Poly}_N^Q$ denote the class of piecewise polynomial functions $f:[0,1] \rightarrow [0,1]$ with at most $Q$ singularities[^8] and maximum degree $N$. For $f\in{\rm Poly}_N^Q$, let $(x_{o,1}, x_{o,2}, \ldots,x_{o,n})$ be the samples of $f$ at $$0, {1 \over n}, \ldots,{n-1\over n}.$$ Let $\{a_i^{\ell}\}_{i=0}^{N_{\ell}}$ denote the set of coefficients of the $\ell^{\rm th}$ polynomial of $f$, where $N_{\ell}\leq N $ denotes its degree. For notational simplicity, we assume that the coefficients of each polynomial belong to the $[0,1]$ interval and that $\sum_{i=0}^{N_{\ell}} a^{\ell}_i <1$, for every $\ell$. Define $$\mathcal{P} \triangleq \left\{{{\bf x}}_o \in \mathds{R}^n \ | \ x_{o,i} = f(i/n), \ f \in {\rm Poly}_N^Q \right\}.$$ \[prop:polynomial\] For every signal ${{\bf x}}_o \in \mathcal{P}$, we have $$\begin{aligned} \kappa_{m,n}({{\bf x}}_o) &\leq& (Q+1)(N+ 1)+\frac{(Q+1)(N+ 1)\lceil\log_2(N+1)\rceil}{m}\nonumber \\ &&+~\frac{\log^* N + \log^* Q + Q\log^* n+ c}{m}.\end{aligned}$$ [*Proof:*]{} Consider the following program for describing the quantized version of $\xv_o$. The code first specifies the signal model as samples of a “piecewise polynomial” function with parameters $(n,Q,N)$. This requires $\log^*N + \log^* Q + c$ bits. Then, for each singularity point, the code first specifies the largest sampling point $i/n$ that is smaller than it. Since there are at most $Q$ singularity points, describing this information requires at most $Q \log^* n$ bits. The next step is to describe the coefficients of each polynomial. Using an $m'$-bit uniform quantizer for each coefficient, the induced error is bounded as $$\begin{aligned} \left|\sum_{i=0}^{N_{\ell}} a^{\ell}_i t^i-\sum_{i=0}^{N_{\ell}} [a^{\ell}_i ]_{m'} t^i \right| &\leq \sum_{i=0}^{N_{\ell}} |a^{\ell}_i- [a^{\ell}_i]_{m'}|\nonumber\\ & \leq (N_{\ell}+1) 2^{-m'}\leq (N+1) 2^{-m'}.
\end{aligned}$$ To ensure reconstructing the samples at resolution $m$, we require $(N+1) 2^{-m'}<2^{-m}$. Therefore, to describe the coefficients of the polynomials, at most, $(Q+1)(N+1) (m+\lceil\log_2(N+1)\rceil)$ extra bits are required. Hence, overall, it follows that $$\begin{aligned} \label{eq:kc_pp} {K^{[\cdot]_{m}}(x_{o,1},x_{o,2},\ldots,x_{o,n}) \over m}\leq& (Q+1)(N+ 1)+\frac{(Q+1)(N+ 1)\lceil\log_2(N+1)\rceil}{m}\nonumber \\ &+\frac{ \log^*N + \log^* Q + Q\log^*n+ c}{m}. \end{aligned}$$ $\hfill \Box$ It is straightforward to plug in Corollary \[cor:noiseless\_normerror\] and prove that, for large values of $n$, $O((Q+1)(N+ 2) )$ measurements are sufficient for the successful recovery of the piecewise polynomial functions. Smooth functions ---------------- Suppose that $x_1, x_2, \ldots, x_n$ are equispaced samples of a smooth function $f:[0,1] \rightarrow [0,1]$. Let $\mathcal{S}^{\beta}$ represent the class of $\beta+1$ times differentiable functions. For the notational simplicity we assume that $|f^{(m)}(t)| \leq m!$ for every $m \leq \beta+1$. This function is not necessarily a low-complexity signal, but it can be well-approximated by a piecewise polynomial function. To show this, consider partitioning the $[0,1]$ interval into subintervals of size $r_n$, and approximating the function $f$ with a polynomial of degree $\beta$ in each subinterval. Let $\hat{f}_{\beta}(x)$ denote the resulting piecewise polynomial function. It is straightforward to prove that $\|f-\hat{f}_{\beta}\|_{\infty} \leq r_n^{\beta+1}$. Hence, if ${{\bf x}}_o$ and $\hat{{{\bf x}}}_o$ denote the vectors consisting of the equispaced samples of the original signal and its piecewise polynomial approximation, respectively, it follows that $\|\hat{{{\bf x}}}_o-{{\bf x}}_o\|_2 \leq \sqrt{n} r_n^{\beta+1}$. We can summarize our discussion in the following proposition. ![The representation of a smooth function (solid black curve) and its piecewise polynomial approximation (dashed red). 
As the subinterval size $r_n$ becomes smaller, the approximation becomes more accurate.[]{data-label="fig:smoothfunction"}](FigureSmoothFunction.pdf) For $n\in\mathds{N}$, let $\xv_o\in\mathds{R}^n $ denote the vector of $n$ equispaced samples of $f \in \mathcal{S}^{\beta}$. Let $\yv=A\xv_o$, where $A$ is a $d\times n$ random matrix with i.i.d. $\Nc(0,1)$ entries. Also, let ${\mathbf{\hat{x}}}$ denote the solution of the low-complexity least squares algorithm in , with $m = \lceil \log n \rceil$ and $\kappa_{m,n} = 2 (2+ \beta) (n^{\frac{2}{2\beta+3}}+1)$. Then, for $n$ large enough and $d = \lceil \kappa_{m,n} \log n \rceil$, we have $$\P(\|\xv_o -{\mathbf{\hat{x}}}_o\|_2 > {c\over \sqrt{\log n}}) \leq 2{{\rm e}}^{-0.5d},$$ where $c$ is a constant independent of $n$. [*Proof:*]{} Partition the $[0,1]$ interval into subintervals of size $r_n = n^{-\frac{1}{\beta+3/2}}$, and approximate the function $f$ with a polynomial of degree $\beta$ in each subinterval. Let $\hat{f}_{\beta}$ denote the resulting piecewise polynomial function. According to Proposition \[prop:polynomial\], for $n$ sufficiently large, the KID of the samples of $\hat{f}_{\beta}$, ${\bf \xt}_o$, at resolution $m=\lceil\log n\rceil$, is less than $(n^{\frac{1}{\beta+3/2}}+1)(\beta+2)(1+ \delta)$, for any $\delta>0$. Set $\delta=1$ and assume that $n$ is large enough for this result to hold. By Theorem \[thm:4\], $$\begin{aligned} \P& \Big(\|{{\bf x}}_{o}-{\mathbf{\hat{x}}}_{o}\|_2> {1\over \sqrt{1-t}}(\sqrt{n\over d}+2)({1\over{\sqrt{n}}}+2\e_n) +{1\over{\sqrt{n}}}\Big) \nonumber \\ &\leq 2^{ \kappa_{m,n} m} {\rm e}^{\frac{d}{2} (t +\log(1-t) )} + {\rm e}^{- \frac{d}{2} }. \hspace{4cm} \end{aligned}$$ Furthermore, as described before, $\epsilon_n = \|{\mathbf{\tilde{x}}}-{{\bf x}}_o\|_2 \leq \sqrt{n} r_n^{\beta+1}= n^{-\frac{1}{2}+\frac{1}{2\beta+3}}$. Plugging in $t=0.965$, $d = \lceil \kappa_{m,n} \log n \rceil$, and $\epsilon_n = n^{-\frac{1}{2}+\frac{1}{2\beta+3}}$ completes the proof.
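As a sanity check on the bound $\|f-\hat{f}_{\beta}\|_{\infty} \leq r_n^{\beta+1}$ used above, one can build the piecewise Taylor approximation explicitly. A sketch with the illustrative choice $f(t)={\rm e}^{t-1}$ (an assumption of this snippet), which satisfies the derivative bounds $|f^{(m)}(t)| \leq m!$:

```python
import math

f = lambda t: math.exp(t - 1)   # smooth, maps [0,1] into [1/e, 1]
beta, r, n = 2, 1.0 / 32, 4096  # degree, subinterval size r_n, sample count

def f_hat(t):
    """Degree-beta Taylor polynomial of f about the left endpoint of the
    subinterval containing t (every derivative of e^{t-1} equals f itself)."""
    a = r * math.floor(t / r)
    return sum(f(a) * (t - a) ** j / math.factorial(j) for j in range(beta + 1))

errs = [abs(f(i / n) - f_hat(i / n)) for i in range(n)]
assert max(errs) <= r ** (beta + 1)          # sup-norm bound
l2 = math.sqrt(sum(e * e for e in errs))
assert l2 <= math.sqrt(n) * r ** (beta + 1)  # ell_2 version used in the proof
```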
$\hfill \Box$ Low-rank matrices {#sec:lowrank} ----------------- Let $\mathcal{C}_r(M,N)$ be the class of $M \times N$ real-valued rank-$r$ matrices $X$ with $\sigma_{\rm max}(X) \leq 1$. The following proposition upper bounds the KID of a matrix in this class at resolution $m$. \[prop:low-rank\] Let $X \in \mathcal{C}_r(M,N)$. Then $$\kappa_{m,n}(X) \leq r(M+N+1) + \frac{\log^* r + r(M+N+1)\log(3r) - r+c}{m}.$$ [*Proof:*]{} Having access to the values of $M$, $N$ and the resolution level $m$, consider the program that describes $X$ through its singular value decomposition as follows. Denote the singular value decomposition of the matrix $X$ as $X=U \Sigma V^T$, where $U \in \mathds{R}^{M \times r}$, $V \in \mathds{R}^{N \times r}$ and $\Sigma \in \mathds{R}^{r \times r}$ is a diagonal matrix. Note that $U^T U = I_r$ and $V^TV =I_r$. To describe $X$, first we use a constant number of bits to describe the structure of the data as a matrix of rank $r$, and also our coding strategy, which is describing the quantized versions of $U$, $\Sigma$, and $V$. To describe the rank $r$, the code uses $\log^* r$ bits. The next step is to describe the quantized versions of $U$, $\Sigma$ and $V$. Let $m_{u}$, $m_v$, and $m_{\sigma}$ denote the resolution levels used in the uniform quantization of the elements of $U$, $V$, and $\Sigma$, respectively. Hence, the quantized matrices can be described using $r M m_{u}+ r N m_v + r m_{\sigma}$ bits. Let $\Uh$, $\Vh$ and $\hat{\Sigma}$ denote the quantized versions of $U$, $V$ and $\Sigma$ at the specified resolutions, respectively. Let $\hat{X} \triangleq \hat{U} \hat{\Sigma} \hat{V}^T$.
By the triangle inequality, $$\begin{aligned} |X_{ij} - \hat{X}_{ij} | &= |\uv_i^T \Sigma \vv_j - \hat{\uv}_i^T \hat{\Sigma} \hat{\vv}_j| \nonumber\\ &\leq |\uv_i^T \Sigma \vv_j - \hat{\uv}_i^T {\Sigma} {\vv}_j| + |\hat{\uv}_i^T \Sigma \vv_j - \hat{\uv}_i^T \hat{\Sigma} {\vv}_j| + |\hat{\uv}_i^T \hat{\Sigma} \vv_j - \hat{\uv}_i^T \hat{\Sigma} \hat{\vv}_j|,\end{aligned}$$ where $\uv_i^T, \vv_i^T, \hat{\uv}_i^T$, and $\hat{\vv}_i^T$ denote the $i^{\rm th}$ rows of $U$, $V$, $\hat{U}$ and $\hat{V}$, respectively. Note that $|U_{ij}| \leq 1$, $|V_{ij}|\leq1$, for all $i,j$. Also by assumption, $\sigma_{\rm max}(\Sigma)\leq1$, and therefore $0 \leq \Sigma_{ii} <1$, for $i=1,\ldots,r$. Moreover, $|U_{ij}-\hat{U}_{ij}|<2^{-m_u+1}$, $|V_{ij}- \hat{V}_{ij}|<2^{-m_v+1}$, and finally $|\Sigma_{ii}- \hat{\Sigma}_{ii}|<2^{-m_{\sigma}}$. Therefore, $$\begin{aligned} |X_{i,j} - \hat{X}_{i,j} | &\leq |\uv_i^T \Sigma \vv_j - \hat{\uv}_i^T {\Sigma} {v}_j| + |\hat{\uv}_i^T \Sigma \vv_j - \hat{\uv}_i^T \hat{\Sigma} {v}_j| + |\hat{\uv}_i^T \hat{\Sigma} \vv_j - \hat{\uv}_i^T \hat{\Sigma} \hat{\vv}_j| \nonumber \\ &\leq \|\uv_i- \hat{\uv}_i\|_2 \| \Sigma \vv_j\|_2 + \|\uv_i\|_2 \| (\Sigma- \hat{\Sigma}) \vv_j\|_2 + \|\hat{\uv}_i\|_2 \| \hat{\Sigma} (\vv_j- \hat{\vv}_j)\|_2 \nonumber \\ & \leq \|\uv_i- \hat{\uv}_i\|_2 \sigma_{\rm max}(\Sigma) \|\vv_j\|_2 + \|\uv_i\|_2 \sigma_{\max}(\Sigma- \hat{\Sigma}) \|\vv_j\|_2 + \|\hat{\uv}_i\|_2 \sigma_{\max} (\hat{\Sigma})\| (\vv_j- \hat{\vv}_j)\|_2 \nonumber \\ & \leq \sqrt{r 2^{-2m_u+2}} \sqrt{r} + \sqrt{r} 2^{- m_{\sigma}} \sqrt{r}+ \sqrt{r} \sqrt{r 2^{-2m_v+2}}\nonumber\\ & \leq r 2^{-m_u+1} + r2 ^{-m_\sigma} + r2^{-m_v+1}.\end{aligned}$$ To ensure reconstructing the samples at resolution $m$, we have $$r 2^{-m_u+1} + r2 ^{-m_\sigma} + r2^{-m_v+1} \leq 2^{-m+1}.$$ Setting $m_u = m_v = m_\sigma+1$, we obtain $m_\sigma \geq m + \log (3r)-1$. 
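The chain of inequalities above admits a direct numerical check; a rank-one sketch (uniform quantizers with step $2^{1-m'}$ and the resolutions just derived; all parameter values are illustrative):

```python
import math
import random

random.seed(1)

def quantize(x, bits):
    """Uniform quantizer with step 2^(1-bits): |x - quantize(x, bits)| < 2^(1-bits)."""
    step = 2.0 ** (1 - bits)
    return math.floor(x / step) * step

M, N, r, m = 6, 5, 1, 8
m_sigma = m + math.ceil(math.log2(3 * r)) - 1   # m_sigma >= m + log(3r) - 1
m_u = m_v = m_sigma + 1

# A rank-one X = sigma * u v^T with unit-norm u, v and sigma <= 1.
u = [random.uniform(-1, 1) for _ in range(M)]
v = [random.uniform(-1, 1) for _ in range(N)]
u = [a / math.sqrt(sum(b * b for b in u)) for a in u]
v = [a / math.sqrt(sum(b * b for b in v)) for a in v]
sigma = 0.8

uh = [quantize(a, m_u) for a in u]
vh = [quantize(a, m_v) for a in v]
sh = quantize(sigma, m_sigma)

err = max(abs(sigma * u[i] * v[j] - sh * uh[i] * vh[j])
          for i in range(M) for j in range(N))
assert err <= 2.0 ** (-m + 1)   # entrywise reconstruction at resolution m
```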
Therefore, the KID at resolution $m$ of $X$ is upper bounded as follows: $$\begin{aligned} \kappa_{m,M,N} &\leq \frac{\log^* r + rM m_u + rN m_v + r m_\sigma +c}{m} \nonumber \\ &\leq \frac{\log^*r+rM(m+ \log(3r)) + rN(m+ \log(3r)) + r(m+ \log(3r)-1)+c}{m} \nonumber \\ &\leq r(M+N+1) + \frac{\log^* r + r(M+N+1)\log(3r) - r+c}{m}. \nonumber \end{aligned}$$ $\hfill \Box$ Consider $m = \lceil \log n \rceil$. If we assume that $M,N, r$ are all sufficiently large while $r \ll M,N$, then we can upper bound $\kappa_{m,M, N} \leq r(M+N+1)(1+ \delta)$, where $\delta$ is a small fixed number. It is straightforward to plug this upper bound into Corollary \[cor:noiseless\_normerror\] and prove that, for large values of $M,N$, $3r(M+N+1)(1+ \delta)$ measurements are sufficient for the “successful” recovery of low-rank matrices. Related work {#sec:related} ============ Kolmogorov complexity and applications -------------------------------------- This paper is inspired by [@DonohoKS2002] and [@DoKaMe06]. [@DonohoKS2002] considers the well-studied problem of estimating $\boldsymbol{\theta}\in\mathds{R}^n$ from its noisy observation $\sv = \boldsymbol{\theta} + \zv$, where $\zv$ represents the noise in the system. It suggests using the *minimum Kolmogorov complexity estimator* (MKCE) and proves that if $\{\theta_i\}_{i=1}^n\overset{i.i.d.}{\sim} \pi$, then under several scenarios for the signal and noise, the average marginal distribution of the estimate derived by MKCE tends to the actual posterior distribution. [@DoKaMe06] considers the problem of CS over real-valued sequences with finite Kolmogorov complexity and defines the Kolmogorov complexity of a real-valued sequence ${{\bf x}}=(x_1,\ldots,x_n)$ as the length of the program that prints the binary representation of ${{\bf x}}$ and halts.
Consider the set of all real-valued sequences with Kolmogorov complexity less than or equal to $k_0$, $$\Sc(k_0)\triangleq\{\xv: K(\xv)\leq k_0\}.$$ Let $A$ denote a $d\times n$ binary matrix, $\xv_o=(x_1,x_2,\ldots,x_n)^T$, $\yv_o=A\xv_o$. [@DoKaMe06] proposes the following algorithm for recovering $\xv_o$ from its linear measurements $\yv_o$: $$\begin{aligned} {\bf \xh}(\yv_o,A) &\triangleq \operatorname*{arg\,min}_{\yv_o=A\xv} K(\xv).\end{aligned}$$ It proves that $2k_0$ random linear measurements are sufficient for recovering sequences in $\Sc(k_0)$ with high probability. This result does not consider any non-ideality in the signal or the measurements. Furthermore, note that $\Sc(k_0)$ covers none of the classes of signals of interest in CS, such as sparse vectors or low-rank matrices. Almost all such signals have infinite Kolmogorov complexity, and therefore are not covered by the framework proposed in [@DoKaMe06]. Our paper settles both issues, and the required generalizations call for completely different proof techniques. In independent work, [@BaDu11] and [@BaDu12] have explored the performance of an algorithm like MCP for CS problems. Replacing the Kolmogorov complexity with the empirical entropy, they propose a Markov chain Monte Carlo approach similar to [@JalaliW:08; @JalaliW:12; @BaWe11] to solve the recovery problem. The empirical results provided in [@BaDu12] are very promising. Our theoretical results explain why such algorithms perform well in practice. Finally, we should mention that Kolmogorov complexity has proved to be useful in other applications, such as similarity detection [@CiVi07; @Vitanyi12], density estimation [@BaCo91], and compression and denoising [@VeVi10]. For more information on the progress in these areas, see [@book_vitanyi]. Stochastic models ----------------- In this paper, we considered deterministic signal models.
While deterministic signal models are the most popular models in CS, stochastic models have also been extensively explored; see [@HeCa09; @Schniter2010; @BaGuSh09; @RaFlGo10; @DoMaMoNSPT; @DoMaMo09; @DoTa09; @WuVe10; @MaAnYaBa11; @DoJoMaMoEllp; @MaDo09sp] and the references therein for more information. The most relevant work to ours is [@WuVe10]. It considers the problem of recovering a memoryless process from a set of linear measurements and proves a connection between the number of measurements required and the Rényi information dimension. The upper information dimension of a random vector $(X_1, X_2, \ldots, X_n)$ is defined as $$\bar{d}(X_1, \ldots, X_n) \triangleq \limsup_{m \rightarrow \infty} \frac{H([X_1]_m, \ldots, [X_n]_m)}{m}.$$ There is a connection between the KID of a sequence and its Rényi information dimension [@cover] (Theorem 14.3.1). In spite of such connections, there are several important differences between our work and the work of [@WuVe10]. First, the results in [@WuVe10] are asymptotic, and the amount of error and the probability of correct recovery for finite-dimensional signals have not been established there. Second, the stochastic approach proposed in [@WuVe10] considers a specific distribution that is assumed to be known in the recovery process, while we are considering universal schemes in this paper. Universal schemes and minimum entropy coder ------------------------------------------- Our work has some connections with the minimum entropy decoder proposed by Csiszár in [@Csiszar82]. He suggests a universal minimum entropy decoder for reconstructing an i.i.d. signal from its linear measurements at a rate determined by the entropy of the source. For more information, see [@CaShVe03; @CoMeEf05] and the references therein.
Finally, we should emphasize that universal algorithms (that perform “optimally” without knowing the distribution of the data) have been explored extensively in information theory and are popular in many applications, including compression [@ZiLe78; @JalaliW:12], denoising [@WeOrSeVeWe05], prediction [@FeMeGu92], and many more. However, to the best of our knowledge, our results provide the first universal approach for CS. Signal models ------------- As mentioned in the Introduction, in this paper we have addressed a central problem in the field of CS. Since the early days of CS, there have been many efforts to push the limits of the technique beyond sparsity. This line of work has resulted in a series of papers, each of which either generalizes the signal model or reduces the required number of measurements by introducing more structure on the signal; see, for example, [@RichModelbasedCS; @ChRePaWi10; @VeMaBl02; @ReFaPa10; @ShCh11; @HeBa12; @HeBa11]. As proved in Section \[sec:examp\], some of these models can be considered as subclasses of the general model we consider here. However, it is worth noting that, even though the MCP algorithm proposed here is universal, it is not immediately useful for practical purposes, since the Kolmogorov complexity is not computable. Proofs of the main results {#sec:proofs} ========================== Useful lemmas ------------- The following lemmas are frequently used in our proofs. \[lemma:chi\] Fix $\tau\in(0,1)$, and let $Z_i\sim\Nc(0,1)$, $i=1,2,\ldots,d$.
Then, $$\begin{aligned} \P\left( \sum_{i=1}^d Z_i^2 <d(1- \tau) \right) \leq {\rm e} ^{\frac{d}{2}(\tau + \log(1- \tau))}\end{aligned}$$ and $$\begin{aligned} \label{eq:chisq} \P\left( \sum_{i=1}^d Z_i^2 > d(1+\tau) \right) \leq {\rm e} ^{-\frac{d}{2}(\tau - \log(1+ \tau))}.\end{aligned}$$ [*Proof:*]{} Employing the Chernoff bound, for any $\lambda>0$, we have $$\begin{aligned} \P\left( \sum_{i=1}^d Z_i^2-d < - d \tau \right) &= \P\left(- \sum_i Z_i^2+d > d\tau \right)\nonumber\\ & \leq {\rm e}^{-\lambda d \tau} \E\left[ \rm{e}^{\lambda(d- \sum Z_i^2)} \right] \nonumber \\ &= {\rm e}^{-\lambda d \tau + \lambda d} \left( \E [{\rm e}^{-\lambda Z_1^2}] \right)^d \nonumber\\ &= {\rm e}^{-\lambda d \tau + \lambda d} \left(1+ 2\lambda \right)^{-d/2},\label{eq:chisquarupperbound}\end{aligned}$$ where the last equality follows from the moment generating function of a chi-square random variable [@hoggintroduction]. We optimize over $\lambda$ to obtain $$\lambda^* = \frac{\tau}{2(1- \tau)}. \label{eq:optlambda}$$ Plugging into , we obtain . $\hfill \Box$ \[lemma:gaussian-vectors\] Let $\Xv$ and $\Yv$ denote two independent Gaussian vectors of length $n$ with i.i.d. elements. Further, assume that for $i=1,\ldots,n$, $X_i\sim\Nc(0,1)$ and $Y_i\sim\Nc(0,1)$. Then the distribution of $\Xv^T\Yv=\sum_{i=1}^nX_iY_i$ is the same as the distribution of $\|\Xv\|_2G$, where $G\sim\Nc(0,1)$ is independent of $\|\Xv\|_2$. [*Proof:*]{} Note that $$\begin{aligned} {\Xv^T\Yv \over \|\Xv\|_2}&=\sum_{i=1}^n{X_i\over \|\Xv\|_2}Y_i.\end{aligned}$$ Given $\Xv/\|\Xv\|_2={\bf a}$, $$\sum_{i=1}^n{X_i\over \|\Xv\|_2}Y_i\sim \Nc(0,1),$$ because $\|\mathbf{a}\|_2^2=1$. Therefore, since the distribution of $\Xv^T\Yv/\|\Xv\|_2$ given $\Xv/\|\Xv\|_2={\bf a}$ is independent of the value of ${\bf a}$, the unconditional distribution of $\Xv^T\Yv/\|\Xv\|_2$ is also $\Nc(0,1)$. To prove independence, note that $\Xv/\|\Xv\|_2$ and $\Yv$ are both independent of $\|\Xv\|_2$.
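Both lemmas lend themselves to a quick Monte Carlo check; a sketch in which the sample sizes, dimensions, and tolerances are illustrative assumptions:

```python
import math
import random

random.seed(2)
trials = 10000

# Lemma [lemma:chi], lower tail, with d = 20 and tau = 0.5.
d, tau = 20, 0.5
hits = sum(sum(random.gauss(0, 1) ** 2 for _ in range(d)) < d * (1 - tau)
           for _ in range(trials))
bound = math.exp(0.5 * d * (tau + math.log(1 - tau)))
assert hits / trials <= bound   # empirical tail ~0.03, bound ~0.15

# Lemma [lemma:gaussian-vectors]: X^T Y / ||X||_2 should be standard normal.
n = 30
vals = []
for _ in range(trials):
    X = [random.gauss(0, 1) for _ in range(n)]
    Y = [random.gauss(0, 1) for _ in range(n)]
    vals.append(sum(a * b for a, b in zip(X, Y)) / math.sqrt(sum(a * a for a in X)))
mean = sum(vals) / trials
var = sum(w * w for w in vals) / trials - mean ** 2
assert abs(mean) < 0.1 and abs(var - 1.0) < 0.1
```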
$\hfill \Box$ The following lemma is adapted from [@Vershyninnotes] (Proposition 5.10). \[lem:sub-Gaussiansum\] Let $Z_1, Z_2, \ldots, Z_n$ be i.i.d. zero-mean ${\rm SG}(c_1,c_2)$ random variables. Let $\mathbf{a}= (a_1,a_2, \ldots, a_n) \in \mathds{R}^n$ be a vector satisfying $\|\mathbf{a}\|_2^2=1$. Then $$\P \left(\left|\sum_{i=1}^n a_i Z_i \right| > t \right) \leq c_1 {{\rm e}}^{-c_2 t^2}.$$ In other words $\sum_{i=1}^n a_i Z_i $ is also ${\rm SG}(c_1,c_2)$. A random variable $X$ is called [*subexponential*]{}, denoted by ${\rm SE}(c_1,c_2)$, if and only if $$\P(|X|>t) \leq c_1 {{\rm e}}^{-c_2t}.$$ Slightly modified versions of the proofs we provide in the rest of this section can be found in [@Vershyninnotes]. For the sake of clarity and uniformity we state these lemmas with their proofs here. \[lem:subexp\_moments\] Let $Z $ be a ${\rm SE}(c_1,c_2)$ random variable. Then, it follows that $$\begin{aligned} \E[|Z|^p] &\leq \frac{2c_1 p!}{c_2^p}.\end{aligned}$$ [*Proof:*]{} Here we prove this lemma for the case where $p$ is even. The other case follows the same approach. Let $F(z)$ denote the cumulative distribution function of the random variable $Z$ $$\begin{aligned} \E[|Z|^p] &= \int_0^{\infty} z^p dF(z) + \int_{-\infty}^{0}z^p dF(z) \overset{(a)}{=} \int_{0}^{\infty} p z^{p-1} \int_{z}^{\infty} dF(x) dz - \int_{-\infty}^{0} p z^{p-1} \int_{-\infty}^{z} dF(x)dz \nonumber \\ & \leq \int_{0}^{\infty} p z^{p-1} c_1 {{\rm e}}^{-c_2 z}dz - \int_{-\infty}^{0} p z^{p-1} c_1 {{\rm e}}^{c_2 z}dz = \frac{2c_1 (p !)}{c_2^p}.\end{aligned}$$ Equality (a) is the result of integration by parts. $\hfill \Box$ \[lem:exponentialexpo\] Let $Z$ be a zero-mean ${\rm SE}(c_1,c_2)$ random variable. 
Then we have $$\begin{aligned} \E\left[{{\rm e}}^{\lambda Z}\right] &\leq {{\rm e}}^{4c_1\lambda^2/c_2^2}, \ \ \ \ \forall \lambda < c_2/2.\end{aligned}$$ [*Proof:*]{} We prove this lemma by expanding the exponential function ${{\rm e}}^{\lambda Z}$ and bounding the moments using Lemma \[lem:subexp\_moments\] as follows: $$\begin{aligned} \E\left[{{\rm e}}^{\lambda Z}\right] &= \E \left[1+ \lambda Z + \sum_{k=2}^{\infty} \frac{\lambda ^k Z^k}{k!}\right] = 1+ \sum_{k=2}^{\infty} \frac{\lambda ^k \E[Z^k]}{k!} \nonumber \\ &\leq 1+ 2c_1\left( \left(\frac{\lambda}{c_2}\right)^2 + \left(\frac{\lambda}{c_2}\right)^3+ \ldots \right) \leq 1+ 2c_1 \left(\frac{\lambda}{c_2}\right)^2 \left(\frac{1}{1- \lambda/c_2}\right),\end{aligned}$$ where the first equality uses $\E[Z]=0$. Assuming that $\frac{\lambda}{c_2}<\frac{1}{2}$, we obtain $$\E\left[{{\rm e}}^{\lambda Z}\right] \leq 1+4 c_1 \left(\frac{\lambda}{c_2}\right)^2 \leq {{\rm e}}^{4c_1\lambda^2/c_2^2},$$ where the last inequality is due to the fact that $1+x \leq {{\rm e}}^x$ for all $x$. $\hfill \Box$ \[lem:subexponentialsquare\] Let $Z_1, Z_2, \ldots, Z_n$ be i.i.d. ${\rm SG}(c_1,c_2)$ random variables with mean zero and variance 1. Then we have $$\P\left( \left|\sum_{i=1}^n ( Z^2_i-1)\right| > nt\right) \leq 2 {{\rm e}}^{-nc_2^2 t^2/16c_3}, \ \ \ {\rm for} \ t \in (0, \frac{c_3}{c_2}),$$ where $c_3 \triangleq \max({{\rm e}}^{c_2}, c_1 {{\rm e}}^{-c_2})$. [*Proof:*]{} Define $X_i \triangleq Z_i^2-1$. It is straightforward to confirm that for all $t>1$, $$\label{eq:upperexpon} \P(|X_i| >t ) \leq c_1 {{\rm e}}^{-c_2(t+1)}.$$ Recall that $c_3 \triangleq \max({{\rm e}}^{c_2}, c_1 {{\rm e}}^{-c_2})$.
If we combine the fact that $\P(|X_i|>t) \leq 1$ for $0\leq t \leq 1$ with , we obtain $$\P(|X_i|>t) \leq c_3 {{\rm e}}^{-c_2t}.$$ We have $$\begin{aligned} \label{eq:markovupper} \P\Big(\sum_i X_i >nt \Big) = \P\Big({{\rm e}}^{\lambda \sum_i X_i} > {{\rm e}}^{\lambda nt} \Big) \leq {{\rm e}}^{-\lambda nt } \left( \E\left[{{\rm e}}^{\lambda X_1}\right]\right)^n \leq {{\rm e}}^{-\lambda n t +4n c_3 \lambda^2/c_2^2},\end{aligned}$$ where the last inequality is the result of Lemma \[lem:exponentialexpo\]. Assuming $t < \frac{c_3}{c_2}$ and setting $\lambda = tc_2^2 /(8c_3)$, we obtain $$\P\left(\sum_{i=1}^n X_i >nt \right) \leq {{\rm e}}^{\frac{-n(c_2t)^2}{16c_3}}.$$ Using the same argument, we find a similar upper bound for $\P(\sum_{i=1}^n X_i <-nt)$. $\hfill \Box$ \[lem:subgaussspectrum\] Let $A$ be a $d \times n$ matrix with i.i.d. ${\rm SG}(c_1,c_2)$ elements, and suppose that the elements satisfy $\E(A_{ij})=0$ and $\E(A_{ij}^2)=1$. Then there exist two constants $c'_1,c'_2$ depending only on $c_1$ and $c_2$ such that with probability at least $1- {{\rm e}}^{-c'_2 t^2}$, $$\sigma_{\rm max}(A) \leq \sqrt{d} + c'_1\sqrt{n}+t.$$ [*Proof:*]{} See Theorem 5.39 in [@Vershyninnotes] for more information on the proof and the constants that are involved. $\hfill \Box$ Proof of Theorem \[thm:1\] {#sec:proofthm1} -------------------------- Let ${\mathbf{\hat{x}}}_o$ denote the solution of MCP, and let ${\mathbf{\hat{q}}}_m \triangleq {\mathbf{\hat{x}}}_o-\phi_m({\mathbf{\hat{x}}}_o)$ denote the quantization error of the reconstructed signal at resolution $m$, where for $\xv\in[0,1]^n$, $\phi_m(\xv)$ is defined in Remark \[remark:1\]. 
Since both $A\xv_o =\yv_o$ and $A{\mathbf{\hat{x}}}_o=\yv_o$, it follows that $$\begin{aligned} A\xv_o&=A(\phi_m({\mathbf{\hat{x}}}_o)+{\mathbf{\hat{q}}}_m)\nonumber\end{aligned}$$ and $$\begin{aligned} A(\xv_o-\phi_m({\mathbf{\hat{x}}}_o))&=A{\mathbf{\hat{q}}}_m.\end{aligned}$$ On the other hand, by definition, $\|{\mathbf{\hat{q}}}_m\|_{\infty}\leq 2^{-m}$, and therefore $$\|{\mathbf{\hat{q}}}_m\|_2\leq 2^{-m}\sqrt{n}.$$ Hence, $$\begin{aligned} \|A(\xv_o- \phi_m({\mathbf{\hat{x}}}_o))\|_2&=\|A{\mathbf{\hat{q}}}_m\|_2 \nonumber\\ &\leq \sigma_{\max}(A) 2^{-m}\sqrt{n},\label{eq:upper_bd}\end{aligned}$$ where $\sigma_{\rm max}(A)$ is the maximum singular value of matrix $A$. By definition, $K^{[\cdot]_{m}}(\xv_o) \leq \kappa_{m,n}m$, and since ${\mathbf{\hat{x}}}_o$ is the solution of , we have $$\begin{aligned} K^{[\cdot]_{m}}({\mathbf{\hat{x}}}_{o}) \leq K^{[\cdot]_{m}}({{\bf x}}_{o})\label{eq:kol_xho} \leq \kappa_{m,n}m.\end{aligned}$$ Define set $\mathcal{S}$ as $$\Sc\triangleq\left\{\xv_o-\phi_m({\mathbf{\tilde{x}}}_o): \; {\mathbf{\tilde{x}}}_o \in[0,1]^n,\, K(\phi_m({\mathbf{\tilde{x}}}_o)) \leq \kappa_{m,n}m \right\}.$$ Define event $\Ec_1^{(n)}$ as $$\begin{aligned} \Ec_1^{(n)}\triangleq\{ \forall \ \mathbf{h} \in \Sc \, : \, \|A \mathbf{h}\|_2 > \sqrt{d(1-t)} \|\mathbf{h}\|_2 \},\label{eq:E1}\end{aligned}$$ and, event $\Ec_2^{(n)}$ as $$\begin{aligned} \Ec_2^{(n)}\triangleq \left\{\sigma_{max}(A) - \sqrt{d} - \sqrt{n} < \sqrt{d} \right\}.\label{eq:E2}\end{aligned}$$ Conditioned on $\Ec_1^{(n)}\cap \Ec_2^{(n)}$, we have $$\begin{aligned} \label{eq:pf_thm2_bound} \|\xv_o - {\mathbf{\hat{x}}}_o\|_2 &= \left\|\xv_o- \phi_m({\mathbf{\hat{x}}}_o)-{\mathbf{\hat{q}}}_m\right\|_2 \nonumber\\ & \leq\left\|\xv_o - \phi_m({\mathbf{\hat{x}}}_o)\|_2 + \| {\mathbf{\hat{q}}}_m\right\|_2 \nonumber\\ & \overset{(a)}{ \leq} {\| A(\xv_o - \phi_m({\mathbf{\hat{x}}}_o)) \|_2 \over \sqrt{d(1-t)}}+ 2^{-m}\sqrt{n} \nonumber\\ &\overset{(b)}{ \leq} {\sigma_{\max}(A)2^{-m}\sqrt{n} 
\over \sqrt{d(1-t)}}+2^{-m}\sqrt{n}\nonumber\\ &\overset{(c)}{ \leq} ({\sqrt{n}+2\sqrt{d}\over \sqrt{d(1-t)}}+1)2^{-m}\sqrt{n}\nonumber\\ &{\leq} \left((1-t)^{-0.5}\left(\sqrt{n/d}+2\right) +1\right)2^{-m}\sqrt{n}.\end{aligned}$$ Inequality (a) holds since, on the event $\Ec_1^{(n)}$, $ \| A(\xv_o - \phi_m({\mathbf{\hat{x}}}_o)) \|_2 \geq \sqrt{(1-t)d} \| (\xv_o - \phi_m({\mathbf{\hat{x}}}_o)) \|_2$. Inequality (b) is a result of , and inequality (c) is due to $\Ec_2^{(n)}$. Hence, $$\begin{aligned} \label{eq:pconditional} \P\left(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 >\e,\Ec_1^{(n)} \cap\Ec_2^{(n)}\right)=0,\end{aligned}$$ where $\e\triangleq ((1-t)^{-0.5}(\sqrt{nd^{-1}}+2) +1)2^{-m}\sqrt{n}$. Using these definitions and the union bound, we have $$\begin{aligned} \label{eq:proberror} \P\left(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 >\e\right) & = \P\left(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 >\e,\Ec_1^{(n)} \cap\Ec_2^{(n)}\right) + \P\left(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 >\e,\Ec_1^{(n), c} \cup\Ec_2^{(n), c}\right)\nonumber \\ & = \P\left(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 >\e \ | \ \Ec_1^{(n), c} \cup\Ec_2^{(n), c}\right) \P \left(\Ec_1^{(n), c} \cup\Ec_2^{(n), c}\right) \nonumber\\ & \leq \P\left(\Ec_1^{(n),c}\right)+\P\left(\Ec_2^{(n),c}\right).\end{aligned}$$ On the other hand, by Lemma \[lemma:chi\], for fixed ${{\bf x}}\in\mathds{R}^n$, $$\begin{aligned} \P\left(\|A{{\bf x}}\|_2 \leq \sqrt{(1-t)d} \|{{\bf x}}\|_2\right) &=\P\left(\left\|A\frac{{{\bf x}}}{\|{{\bf x}}\|_2}\right\|_2^2 \leq (1-t) d \right)\nonumber\\ &= \P\left(\sum_{i=1}^d Z_i^2 \leq (1-t) d \right)\nonumber\\ & \leq {{\rm e}}^{\frac{d}{2}(t+ \log (1-t)) },\end{aligned}$$ where, for $i=1,\ldots,d$, $Z_i\triangleq\|{{\bf x}}\|^{-1}_2\sum_{j=1}^{n} A_{i,j}x_{j}$. 
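As a standalone numeric sanity check (our addition, not part of the proof), the following snippet estimates the lower-tail probability $\P(\sum_{i=1}^d Z_i^2 \leq (1-t)d)$ by Monte Carlo and compares it with the Chernoff-type bound ${{\rm e}}^{\frac{d}{2}(t+\log(1-t))}$ of Lemma \[lemma:chi\]; the parameter values are arbitrary illustrative choices.

```python
import math
import random

random.seed(0)
d, t = 50, 0.5
trials = 20_000

# Monte Carlo estimate of P(sum_{i=1}^d Z_i^2 <= (1 - t) d), Z_i i.i.d. N(0, 1).
hits = sum(
    1
    for _ in range(trials)
    if sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d)) <= (1 - t) * d
)
empirical = hits / trials

# Chernoff-type bound from the lemma: exp((d / 2) * (t + log(1 - t))).
bound = math.exp((d / 2) * (t + math.log(1 - t)))

print(empirical, bound)  # the bound should dominate the empirical estimate
```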
Therefore, since $|\Sc|\leq 2^{\kappa_{m,n}m}$, by the union bound, it follows that $$\begin{aligned} \label{eq:event1_nless} &\P\left(\Ec_1^{(n),c}\right) \leq 2^{\kappa_{m,n}m} {{\rm e}}^{\frac{d}{2}(t+ \log (1-t)) }.\end{aligned}$$ Finally, using the results on the concentration of Lipschitz functions of a Gaussian random vector [@CaTa05], $$\begin{aligned} \label{eq:event2_nless} \P\left(\Ec_2^{(n),c}\right) &= \P\left(\sigma_{max}(A) - \sqrt{d} - \sqrt{n} > \sqrt{d} \right)\nonumber\\ &\leq {\rm e}^{-d/2}.\end{aligned}$$ Plugging , , and into completes the proof. $\hfill \Box$ Proof of Theorem \[thm:noisysetting\] {#sec:proofthmnoisy} ------------------------------------- Remember that ${\mathbf{\hat{x}}}_o=\xh_o^n$ denotes the solution of $$\begin{aligned} \min &\;\;\; \;\; \|A{{\bf x}}-\yv_o\|_2, \nonumber \\ {\rm s.t.} &\;\;\;\;\; K^{[\cdot]_{m_n}}({{\bf x}}) \leq \kappa_n m.\label{eq:alg_noisy}\end{aligned}$$ By the assumption of the theorem, $K^{[\cdot]_m}(\xv_o ) \leq \kappa_{n} m$. Therefore, $\xv_o $ is a feasible point in , and we have $$\begin{aligned} \|A{\mathbf{\hat{x}}}_o-\yv_o\|^2_2&\leq \|A \xv_o -\yv_o\|^2_2\nonumber\\ &=\|A \xv_o -A\xv_o -{\mathbf{w}}\|^2_2=\|{\mathbf{w}}\|^2_2.\label{eq:basic-ineq}\end{aligned}$$ Expanding $\|A{\mathbf{\hat{x}}}_o-\yv_o\|^2_2=\|A{\mathbf{\hat{x}}}_o-A\xv_o -{\mathbf{w}}\|^2_2$ in , it follows that $$\begin{aligned} &\|A({\mathbf{\hat{x}}}_o-\xv_o )\|^2_2 + \|{\mathbf{w}}\|^2_2 -2{\mathbf{w}}^TA({\mathbf{\hat{x}}}_o-\xv_o ) \leq \|{\mathbf{w}}\|^2_2.\label{eq:basic-ineq-expanded}\end{aligned}$$ Canceling $ \|{\mathbf{w}}\|^2_2$ from both sides of , we obtain $$\begin{aligned} \|A({\mathbf{\hat{x}}}_o-\xv_o )\|^2_2 &\leq 2{\mathbf{w}}^TA({\mathbf{\hat{x}}}_o-\xv_o )\leq 2\left|{\mathbf{w}}^TA({\mathbf{\hat{x}}}_o-\xv_o )\right|.\end{aligned}$$ Let ${\mathbf{\hat{q}}}_m\triangleq{\mathbf{\hat{x}}}_o-\phi_m({\mathbf{\hat{x}}}_o)$, where $\phi_m(\cdot)$ is defined in . 
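For intuition about the quantization error terms used throughout these proofs, here is a minimal sketch (our own illustration; it assumes that $\phi_m$ truncates each coordinate of a vector in $[0,1]^n$ to $m$ bits of its binary expansion, consistent with how $\phi_m$ is used here) verifying $\|{\mathbf{\hat{q}}}_m\|_\infty \leq 2^{-m}$ and $\|{\mathbf{\hat{q}}}_m\|_2 \leq 2^{-m}\sqrt{n}$.

```python
import math
import random

def phi_m(x, m):
    """Keep the first m bits of the binary expansion of each coordinate in [0, 1].
    (Sketch of the quantizer phi_m; the paper's formal definition is assumed.)"""
    scale = 2 ** m
    return [math.floor(xi * scale) / scale for xi in x]

random.seed(1)
n, m = 1000, 8
x = [random.random() for _ in range(n)]
q = [xi - pi for xi, pi in zip(x, phi_m(x, m))]  # quantization error vector

linf = max(abs(qi) for qi in q)           # should be at most 2^-m
l2 = math.sqrt(sum(qi * qi for qi in q))  # should be at most 2^-m * sqrt(n)
print(linf, l2)
```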
Using this definition and the Cauchy-Schwarz inequality, we derive a lower bound on $\|A({\mathbf{\hat{x}}}_o-\xv_o )\|^2_2$ as $$\begin{aligned} \|A&({\mathbf{\hat{x}}}_o-\xv_o )\|^2_2 \nonumber\\ &= \|A(\phi_m({\mathbf{\hat{x}}}_o)+{\mathbf{\hat{q}}}_m - \xv_o)\|^2_2 \nonumber\\ &= \|A(\phi_m({\mathbf{\hat{x}}}_o)- \xv_o )+A{\mathbf{\hat{q}}}_m\|^2_2 \nonumber\\ &\geq \|A(\phi_m({\mathbf{\hat{x}}}_o)- \xv_o )\|_2^2-2\left| {\mathbf{\hat{q}}}_m ^TA^TA\left(\phi_m({\mathbf{\hat{x}}}_o)- \xv_o \right)\right|\nonumber\\ &\geq \|A(\phi_m({\mathbf{\hat{x}}}_o)- \xv_o)\|_2^2 -2\left\|A{\mathbf{\hat{q}}}_m \right\|_2\left\|A\left(\phi_m({\mathbf{\hat{x}}}_o)- \xv_o\right)\right\|_2. \label{eq:lower}\end{aligned}$$ On the other hand, again using our definitions plus the Cauchy-Schwarz inequality, we find an upper bound on $|{\mathbf{w}}^TA({\mathbf{\hat{x}}}_o-\xv_o )|$ as $$\begin{aligned} \left|{\mathbf{w}}^TA({\mathbf{\hat{x}}}_o-\xv_o )\right|&=\left|(\phi_m({\mathbf{\hat{x}}}_o) - \xv_o +{\mathbf{\hat{q}}}_m)^TA^T{\mathbf{w}}\right|\nonumber\\ &\leq \left|(\phi_m({\mathbf{\hat{x}}}_o) - \xv_o )^TA^T{\mathbf{w}}\right|+\left|{\mathbf{\hat{q}}}_m^TA^T{\mathbf{w}}\right|\nonumber\\ &\leq \left|( \phi_m({\mathbf{\hat{x}}}_o) - \xv_o)^TA^T{\mathbf{w}}\right|+\|{\mathbf{\hat{q}}}_m\|_2\|A^T{\mathbf{w}}\|_2.\label{eq:upper}\end{aligned}$$ By definition, $\|{\mathbf{\hat{q}}}_m\|_{\infty}\leq 2^{-m}$. Therefore, $$\begin{aligned} \|{\mathbf{\hat{q}}}_m\|_2\leq2^{-m} \sqrt{n}.\label{eq:ell2-error}\end{aligned}$$ Define $\Delta\triangleq\| \phi_m({\mathbf{\hat{x}}}_o)- \xv_o\|_2$, and $$\uv\triangleq {A(\phi_m({\mathbf{\hat{x}}}_o)- \xv_o)\over \Delta}.$$ By this definition, combining and yields $$\begin{aligned} \|\uv\|_2^2\Delta^2 \leq 2(\left\|A{\mathbf{\hat{q}}}_m \right\|_2\|\uv\|_2 + \left|{\mathbf{w}}^T\uv\right|)\Delta+2\|{\mathbf{\hat{q}}}_m\|_2\|A^T{\mathbf{w}}\|_2. 
\end{aligned}$$ For $t_1,t_2,t_3,t_4,t_5>0$, define events $\Ec_1^{(n)},\ldots,\Ec_5^{(n)}$ as $$\Ec_1^{(n)}\triangleq \{\|\uv\|^2_2 \geq d(1-t_1) \},$$ $$\Ec_2^{(n)}\triangleq \{\|\uv\|^2_2 \leq d(1+t_2) \},$$ $$\Ec_3^{(n)}\triangleq \{|{\mathbf{w}}^T \uv|\leq \sigma\sqrt{(1+t_3)d} \},$$ $$\Ec_4^{(n)}\triangleq \left\{\sigma_{\max}(A) <\sqrt{d}+\sqrt{n}+t_4 \right\},$$ and $$\Ec_5^{(n)}\triangleq\{\|A^T\mathbf{w}\|_2^2 \leq nd(1+t_5)\sigma^2 \}.$$ First, we find an upper bound on $\P((\Ec_1^{(n)}\cap\ldots\cap\Ec_5^{(n)})^c)$. Define the set $\Sc$ as follows $$\Sc\triangleq\left\{\phi_m({\mathbf{\tilde{x}}}_o)- \xv_o: \; {\mathbf{\tilde{x}}}_o \in[0,1]^n,\, K(\phi_m({\mathbf{\tilde{x}}}_o)) \leq \kappa_{n}m \right\}.$$ Note that $|\Sc|\leq 2^{\kappa_{n}m }$. Given $\phi_m({\mathbf{\tilde{x}}}_o)- \xv_o\in\Sc$, $A(\phi_m({\mathbf{\tilde{x}}}_o)- \xv_o)/\|\phi_m({\mathbf{\tilde{x}}}_o)- \xv_o\|_2$ is a vector of length $d$ with i.i.d. entries distributed as $\Nc(0,1)$. Therefore, by Lemma \[lemma:chi\] and the union bound, we obtain $$\begin{aligned} \P(\Ec_1^{(n),c})\leq 2^{\kappa_{n}m }{\rm e}^{{d\over 2} (t_1+\log(1-t_1))},\label{eq:E1}\end{aligned}$$ and $$\begin{aligned} \P(\Ec_2^{(n),c})\leq 2^{\kappa_{n}m }{\rm e}^{-{d\over 2}(t_2-\log(1+t_2))}.\label{eq:E2}\end{aligned}$$ To bound $\P(\Ec_3^{(n),c})$, for $\phi_m({\mathbf{\tilde{x}}}_o)- \xv_o\in\Sc$, let $\tilde{\uv}\triangleq {A(\phi_m({\mathbf{\tilde{x}}}_o)- \xv_o)\over \|\phi_m({\mathbf{\tilde{x}}}_o)- \xv_o\|_2}$. By Lemma \[lemma:gaussian-vectors\], ${\mathbf{w}}^T\tilde{\uv}$ is distributed as $ \|{\mathbf{w}}\|_2G$, where $G\sim\Nc(0,1)$ and is independent of $\|{\mathbf{w}}\|_2$. 
Therefore, $$\begin{aligned} \P(|{\mathbf{w}}^T \tilde{\uv}|\geq \sigma\sqrt{(1+t_3)d} ) &=\P\left(|{\mathbf{w}}^T \tilde{\uv}|\geq \sigma\sqrt{(1+t_3)d}, \|{\mathbf{w}}\|_2\geq \sigma\sqrt{(1+\tau)d} \right)\nonumber\\ &\;\;\;\; +\P\left(|{\mathbf{w}}^T \tilde{\uv}|\geq \sigma\sqrt{(1+t_3)d}, \|{\mathbf{w}}\|_2< \sigma\sqrt{(1+\tau)d}\right)\nonumber\\ &\leq\P\left( \|{\mathbf{w}}\|_2\geq \sigma\sqrt{(1+\tau)d} \right)\nonumber\\ &\quad+\P\left(\left\|{\mathbf{w}}\right\|_2 G\geq \sigma\sqrt{(1+t_3)d}\left| \|{\mathbf{w}}\|_2< \sigma\sqrt{(1+\tau)d}\right. \right)\nonumber\\ &\leq \P\left(\|{\mathbf{w}}\|_2\geq \sigma\sqrt{(1+\tau)d} \right)+\P\left(G\geq \sqrt{1+t_3 \over 1+\tau}\right)\nonumber\\ &\leq {{\rm e}}^{-{d\over 2}(\tau-\log(1+\tau))}+{{\rm e}}^{-{1+t_3\over 2(1+\tau)}}.\label{eq:cond-w}\end{aligned}$$ Hence, by the union bound, and the fact that $|\Sc|\leq 2^{\kappa_{n}m}$, we obtain $$\begin{aligned} \P(\Ec_3^{(n),c}) \leq 2^{\kappa_{n}m}\left( {{\rm e}}^{-{d\over 2}(\tau-\log(1+\tau))}+{{\rm e}}^{-{1+t_3\over 2(1+\tau)}}\right).\label{eq:E3}\end{aligned}$$ For $\Ec_4$, it can be shown that [@CaTa05] $$\begin{aligned} \P\left(\Ec_4^{(n),c}\right) =\P\left(\sigma_{\max}(A) \geq \sqrt{d}+\sqrt{n}+t_4\right) \leq {\rm e}^{- t_4^2/2}. \label{eq:E4}\end{aligned}$$ Finally, to bound $\P(\Ec_5^{(n),c})$, note that given ${\mathbf{w}}$, $A^T{\mathbf{w}}$ is an $n$-dimensional i.i.d. zero-mean variance $\|{\mathbf{w}}\|_2^2$ normal vector. Therefore, similar to the derivation of , we have $$\begin{aligned} \P(\Ec_5^{(n),c})&=\P\left(\|A^T{\mathbf{w}}\|_2^2 \geq nd(1+t_5)\sigma^2 \right)\nonumber\\ &\leq \P\left(\|A^T{\mathbf{w}}\|_2^2 \geq nd(1+t_5)\sigma^2 \left| \|{\mathbf{w}}\|_2^2\leq d\sigma^2(1+\tau')\right. 
\right)\nonumber\\ &\;\;+\P\left( \|{\mathbf{w}}\|_2^2\geq d\sigma^2(1+\tau') \right)\nonumber\\ &\leq {\rm e} ^{-\frac{n}{2}(t_6 - \log(1+ t_6))} + {\rm e} ^{-\frac{d}{2}(\tau' - \log(1+ \tau'))},\label{eq:E5}\end{aligned}$$ where $t_6>0$ satisfies $1+t_6= (1+t_5)/(1+\tau')$. Choosing $t_1=0.5$, from and the fact that $d=8r\kappa_{n}m$, which yields $\kappa_nm\leq d/8$, we obtain $$\P(\Ec_1^{(n),c}) \leq {\rm e}^{d(\log 2/8+ 0.5 (0.5+\log 0.5))} \leq {{\rm e}}^{-0.01d}.$$ For $t_2=1.25$, since again $\kappa_nm\leq d/8$, $$\P(\Ec_2^{(n),c}) \leq {\rm e}^{d(\log 2/8-0.5 (1.25+\log 1.25))} < {{\rm e}}^{-0.1d}.$$ For $\tau=1$, and $t_3=4m\kappa_{n}-1$, from , we obtain $$\begin{aligned} \P(\Ec_3^{(n),c})& \leq 2^{\kappa_{n}m}\Big( {{\rm e}}^{-{d\over 2}(1-\log 2)}+{{\rm e}}^{-m\kappa_n}\Big)\\ &< {{\rm e}}^{-0.06d}+ {{\rm e}}^{-0.3m\kappa_n}.\end{aligned}$$ Choosing $t_4=\sqrt{d}$, from , $\P\left(\Ec_4^{(n),c}\right) < {{\rm e}}^{-0.5d}$. Finally, setting $\tau'=1$ and $t_5=3$, which yields $t_6=1$, we obtain $$\P(\Ec_5^{(n),c}) \leq {\rm e} ^{-\frac{n}{2}(1 - \log 2)} + {\rm e} ^{-\frac{d}{2}(1- \log2)} <{{\rm e}}^{-0.15n}+{{\rm e}}^{-0.15d}.$$ Therefore, combining all the bounds, it follows that $$\begin{aligned} &\P\left((\Ec^{(n)}_1\cap \Ec^{(n)}_2\cap\Ec^{(n)}_3\cap\Ec^{(n)}_4\cap\Ec^{(n)}_5)^c\right)\nonumber\\ &< 6{{\rm e}}^{-0.01d}+ {{\rm e}}^{-0.3m\kappa_n}.\end{aligned}$$ On the other hand, conditioned on $\Ec_1^{(n)}\cap\ldots\cap\Ec_5^{(n)}$, we have $$\begin{aligned} (1-t_1)d\Delta^2-2\Delta (2^{-m}\sqrt{n}(\sqrt{d}+\sqrt{n}+t_4)\sqrt{d(1+t_2)}+\sigma \sqrt{(1+t_3)d})-2\sigma2^{-m}n\sqrt{(1+t_5)d}\leq 0,\end{aligned}$$ or, inserting the values of $t_1,\ldots,t_5$ and noting that $d=8rm\kappa_n$, $$\begin{aligned} \Delta^2-2\Delta ({6 \over \sqrt{n}}+{3 \over \sqrt{d}}+ \sigma\sqrt{2\over r}\;)-{8\sigma\over \sqrt{d}}\leq 0,\label{eq:main-2nd-order-ineq}\end{aligned}$$ where we have also used the fact that for $m=\lceil\log n\rceil$, $n2^{-m}\leq 1$. 
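The numerical constants chosen above can be checked directly; the short script below (a sanity check we add for the reader, not part of the argument) evaluates the two exponent coefficients, and also spot-checks the elementary fact, used next, that a quadratic $\Delta^2 - 2b\Delta - c$ with $b,c>0$ is nonpositive exactly up to its positive root $b + \sqrt{b^2+c}$.

```python
import math

# Exponent coefficients behind P(E_1^c) <= e^{e1 * d} and P(E_2^c) <= e^{e2 * d}
# for t_1 = 0.5, t_2 = 1.25 and kappa_n * m <= d / 8:
e1 = math.log(2) / 8 + 0.5 * (0.5 + math.log(0.5))    # approximately -0.0099
e2 = math.log(2) / 8 - 0.5 * (1.25 + math.log(1.25))  # approximately -0.65

# Spot-check: x^2 - 2*b*x - c <= 0 (b, c > 0) holds up to the root b + sqrt(b^2 + c).
b, c = 0.3, 0.7  # arbitrary positive sample values
root = b + math.sqrt(b * b + c)
f = lambda x: x * x - 2 * b * x - c

print(e1, e2, root)
```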
Inequality involves a quadratic function in $\Delta$, which has a positive root and a negative root. Hence, for to hold, we need $\Delta$ to be smaller than its positive root, which yields $$\Delta \leq ({6 \over \sqrt{n}}+{3 \over \sqrt{d}}+ \sigma\sqrt{2\over r}\;) +\sqrt{({6 \over \sqrt{n}}+{3 \over \sqrt{d}}+ \sigma\sqrt{2\over r}\;)^2+{8\sigma\over \sqrt{d}}}.$$ Finally, $$\begin{aligned} \|\xv_o-{\mathbf{\hat{x}}}_o\|_2&\leq \|\xv_o-\phi_m({\mathbf{\hat{x}}}_o)\|_2+\|\phi_m({\mathbf{\hat{x}}}_o)- {\mathbf{\hat{x}}}_o\|_2\nonumber\\ &\leq \Delta+\sqrt{n} 2^{-m}.\end{aligned}$$ Therefore, for $n$ and $d$ large enough, $$\|\xv_o-{\mathbf{\hat{x}}}_o\|_2 \leq {3\sigma \over \sqrt{r}}.$$ This completes the proof of Theorem \[thm:noisysetting\]. $\hfill \Box$ Proof of Theorem \[thm:4\] {#sec:proof2} -------------------------- Since the proof of this theorem is similar to the proof of Theorem \[thm:1\], we skip most of the steps and only emphasize the main differences. Let ${\mathbf{\hat{x}}}_o$ denote the solution of . Define ${\mathbf{\hat{q}}}_m$ as the quantization error of ${\mathbf{\hat{x}}}_o$, i.e., ${\mathbf{\hat{q}}}_m \triangleq {\mathbf{\hat{x}}}_o- \phi_m({\mathbf{\hat{x}}}_o)$. Since $ \|A{\mathbf{\tilde{x}}}_o -{\yv }_o\|_2 =\| A({\mathbf{\tilde{x}}}_o-\xv_o ) \|_2 \leq \sigma_{max}(A) \epsilon_n$ and ${\mathbf{\hat{x}}}_o$ is the minimizer of , it follows that $\|A{\mathbf{\hat{x}}}_o-{\yv }_o \|_2 \leq \sigma_{max}(A) \epsilon_n$. 
Therefore, $$\begin{aligned} \|A{\mathbf{\tilde{x}}}_o - A{\mathbf{\hat{x}}}_o \|_2&=\|A{\mathbf{\tilde{x}}}_o-{\yv }_o - (A{\mathbf{\hat{x}}}_o-{\yv }_o) \|_2 \nonumber\\ &\leq 2\sigma_{\max}(A)\e_n.\label{eq:normeq1}\end{aligned}$$ Again, by the triangle inequality, $$\begin{aligned} \lefteqn{\|A{\mathbf{\tilde{x}}}_o - A{\mathbf{\hat{x}}}_o \|_2} \nonumber\\ &=&\|A {\mathbf{\tilde{x}}}_o- A( \phi_m({\mathbf{\hat{x}}}_o)+{\mathbf{\hat{q}}}_m) \|_2 \nonumber\\ & \geq& \|A({\mathbf{\tilde{x}}}_o- \phi_m({\mathbf{\hat{x}}}_o))\|_2 - \|A{\mathbf{\hat{q}}}_m\|_2\nonumber\\ & \geq& \|A({\mathbf{\tilde{x}}}_o- \phi_m({\mathbf{\hat{x}}}_o))\|_2 - \sigma_{max}(A) \|{\mathbf{\hat{q}}}_m\|_2\nonumber\\ & \geq& \|A({\mathbf{\tilde{x}}}_o- \phi_m({\mathbf{\hat{x}}}_o))\|_2 - \sigma_{max}(A) 2^{-m}\sqrt{n}.\label{eq:normeq2}\end{aligned}$$ Combining and , it follows that $$\begin{aligned} \|A({\mathbf{\tilde{x}}}_o- \phi_m({\mathbf{\hat{x}}}_o))\|_2 \leq \sigma_{max}(A) 2^{-m}\sqrt{n}+2\sigma_{\max}(A)\e_n.\end{aligned}$$ We also have: $K^{[\cdot]_m}({\mathbf{\hat{x}}}_o) \leq m \kappa_{m,n}$ and $ K^{[\cdot]_m}({\mathbf{\tilde{x}}}_o) \leq m \kappa_{m,n}$. Define the events $\Ec^{(n)}_1$ and $\Ec^{(n)}_2$ as done in and in the proof of Theorem \[thm:1\]. Then, applying the argument used there, it follows that $$\begin{aligned} \P\left(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 >\e\right) \leq &\P\left(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 >\e,\Ec_1^{(n)} \cap\Ec_2^{(n)}\right)\nonumber\\ &+\P\left(\Ec_1^{(n),c}\right)+\P\left(\Ec_2^{(n),c}\right).\end{aligned}$$ The rest of the proof is exactly the same as that for Theorem \[thm:1\]. 
$\hfill \Box$ Proof of Theorem \[thm:sub-Gaussian\] {#sec:proofsubgauss} ------------------------------------- Let ${\mathbf{\hat{x}}}_o$ be the solution of the MCP algorithm and ${\mathbf{q}}_m \triangleq \xv_o - \phi_m(\xv_o)$ and ${\mathbf{\hat{q}}}_m \triangleq {\mathbf{\hat{x}}}_o- \phi_m({\mathbf{\hat{x}}}_o)$ denote the quantization errors of the original and the reconstructed signals at resolution $m$, respectively. Following exactly the same steps as in the proof of Theorem \[thm:1\], we obtain $$\begin{aligned} K^{[\cdot]_m} ({\mathbf{\hat{x}}}_o) \leq K^{[\cdot]_m} (\xv_o ) \leq \kappa_{m,n} m\end{aligned}$$ and $$\begin{aligned} \|A(\phi_m(\xv_o)- \phi_m({\mathbf{\hat{x}}}_o))\|_2 \leq \sigma_{\rm max}(A) \sqrt{n2^{-2m+2}}.\end{aligned}$$ Since we are dealing with sub-Gaussian random matrices, we define slightly different events here. Define the set $\mathcal{S}_o$ as $$\Sc_o\triangleq\left\{\mathbf{h} \, : \, \mathbf{h}= \phi_m({\mathbf{\hat{x}}}_o)- \phi_m(\xv_o), \, {\mathbf{\hat{x}}}_o,\xv_o\in[0,1]^n,\, K(\phi_m({\mathbf{\hat{x}}}_o)) \leq \kappa_{m,n}m, \, K(\phi_m(\xv_o)) \leq \kappa_{m,n}m \right\},$$ and define $$\begin{aligned} \Ec_1^{(n)} &\triangleq&\{ \nexists \ \mathbf{h} \in \Sc_o \, : \, \|A \mathbf{h}\|_2 < \tau \sqrt{d} \|\mathbf{h}\|_2 \}, \ \ \ \ \label{eq:E1-sg} \\ \Ec_2^{(n)}&\triangleq& \left\{\sigma_{max}(A) < \sqrt{d} + (c'_2+1)\sqrt{n} \right\},\label{eq:E2-sg}\end{aligned}$$ where $c'_2$ is the constant introduced in Lemma \[lem:subgaussspectrum\]. 
$\P(\|\xv_o - {\mathbf{\hat{x}}}_o\|_2> \epsilon)$ can be upper bounded by $$\P(\|\xv_o - {\mathbf{\hat{x}}}_o\|_2> \epsilon) \leq \P(\| \xv_o - {\mathbf{\hat{x}}}_o\|_2> \epsilon, \Ec_1^{(n)} \cap \Ec_2^{(n)}) + \P(\Ec_1^{(n),c}) + \P(\Ec_2^{(n),c}).$$ If $A \in \Ec_1^{(n)} \cap \Ec_2^{(n)}$, then similar to we can prove $$\|\xv_o - {\mathbf{\hat{x}}}_o\|_2 \leq \left(\tau^{-1} \left((c'_2+1)\sqrt{nd^{-1}}+1\right)+1 \right)\sqrt{n2^{-2m+2}}.$$ Hence, $$\P(\| \xv_o -{\mathbf{\hat{x}}}_o\|_2 > \epsilon, \Ec_1^{(n)} \cap \Ec_2^{(n)}) =0.$$ Also, according to Lemma \[lem:subgaussspectrum\], $\P(\Ec_2^{(n),c}) \leq {{\rm e}}^{-c'_1 n}$. Therefore, the main difference is in the calculation of $\P(\Ec_1^{(n),c})$: $$\begin{aligned} \P\left(\|A {{\bf x}}\|_2 \leq \tau \sqrt{d} \| {{\bf x}}\|_2\right) &=\P\left(\left\|A\frac{{{\bf x}}}{\| {{\bf x}}\|_2}\right\|_2^2 \leq \tau^2 d \right)\nonumber\\ &= \P\left(\sum_{i=1}^d Z_i^2 \leq \tau^2 d \right),\end{aligned}$$ where for $i = 1, 2, \ldots, d$, $Z_i = \|{{\bf x}}\|_2^{-1}\sum_j A_{ij} x_j$. Therefore, by Lemma \[lem:sub-Gaussiansum\] we obtain $$\P(|Z_i|> t) \leq c_1 {{\rm e}}^{-c_2 t}.$$ According to Lemma \[lem:subexponentialsquare\] we have $$\P\left(\sum_{i=1}^d Z_i^2 \leq \tau^2 d \right)< {{\rm e}}^{-\frac{dc_2^2(\tau^2-1)^2}{16c_3} },$$ where $c_3 \triangleq \max (c_1 {{\rm e}}^{-c_2}, {{\rm e}}^{c_1})$ and $1- \tau^2< c_3/c_2$. Finally, the union bound proves that $$\P(\Ec_1^{(n),c}) \leq 2^{\kappa_{m,n}m} {{\rm e}}^{-\frac{dc_2^2(\tau^2-1)^2}{16c_3}},$$ which completes the proof. $\hfill \Box$ Conclusions {#sec:conclusion} =========== In this paper, we have considered the problem of recovering structured signals from underdetermined linear measurements. We have used the Kolmogorov complexity of the quantized signal as a universal measure of complexity to both unify many of the models explored in the CS literature, and also provide a framework to analyze future structured signal models. 
We have shown that, if we consider low-complexity signals, then the minimum complexity pursuit (MCP) scheme inspired by Occam’s razor recovers the simplest solution of a set of random linear measurements. In fact, we have proved that MCP successfully recovers a signal of “complexity” $\kappa_n$ at ambient dimension $n$ from only $3\kappa_n$ random linear measurements. We have also considered more practical scenarios where the signal is not exactly low-complexity but rather is “close” to a low-complexity signal. We have shown that, even in such cases, the MCP algorithm provides a good estimate of the signal from far fewer samples than the ambient dimension of the signal. As mentioned above, the Kolmogorov complexity of a sequence is not computable. However, currently we are working on deriving implementable schemes by replacing the Kolmogorov complexity with computable measures such as minimum description length [@Rissanen86]. Review of prefix Kolmogorov Complexity {#app:kol} -------------------------------------- In an effort to formalize the concept of computability of functions, Turing introduced the notion of a [*Turing machine*]{} [@Turing:36]. A Turing machine is a device that has a finite number of states, a memory that is in the form of a tape, and a head that at each time step points to one of the blocks on the tape. The tape consists of adjacent blocks, each of which can store one of the three symbols $\mathcal{I}= \{0,1, B \}$, where $B$ represents a blank. Initially the code $\sv\in\{0,1\}^*$ is written on the tape in adjacent blocks, and the rest of the tape is filled with blanks. The machine starts from the leftmost non-blank symbol on the tape, and it works in discrete time steps. At every time instant, it reads the symbol from the tape that the head is pointing to, and based on its current state and the acquired information from the tape, it performs the following actions: 1. update the state, 2. 
write one symbol from $\mathcal{I}$ onto the tape at the location the head is pointing to, 3. move the head one block to the right. The process continues until the machine enters the halting state. The output of a Turing machine ${\tt T}$ given $\sv$ is defined as follows. If the machine does not halt, then ${\tt T}(\sv)$ is not defined. If ${\tt T}$ halts, then the tape contains a binary string that is surrounded by blanks. ${\tt T}(\sv)$, the output of ${\tt T}$ given $\sv$, is set to this binary string. If the output string contains blanks between the binary symbols, then they are replaced by zeros to make the output a binary sequence. Note that by construction, if both ${\tt T}(\sv_1)$ and ${\tt T}(\sv_2)$ are defined, then neither $\sv_1$ nor $\sv_2$ can be a prefix of the other. There are alternative constructions of Turing machines that do not guarantee this property [@book_vitanyi], but in this paper we only consider those that have this property. One of the most fundamental results in algorithmic information theory is the existence of [*universal machines*]{} that are additively optimal (see Theorem 2.1.1 in [@book_vitanyi]). A [*universal machine*]{} ${\tt U}$ is a machine that is able to imitate the behavior of all Turing machines on any input string. A universal machine ${\tt U}$ is (additively) optimal if for every Turing machine ${\tt T}$, there exists a constant $c_{\tt T}$ that only depends on ${\tt T}$, such that $$\min\{\ell(\sv): {\tt U}(\sv)=\xv\} \leq \min\{\ell(\sv'): {\tt T}(\sv')=\xv\} +c_{{\tt T}}.$$ The existence of optimal universal Turing machines is a result of the fact that any Turing machine can be uniquely specified with a finite number of bits. (Refer to Chapter 1 of [@book_vitanyi; @CoGaGr89] for more information on universal Turing machines.) 
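Although Kolmogorov complexity itself is not computable, any lossless compressor yields a computable upper-bound proxy for description length. The toy comparison below (our illustration, using Python's zlib; it is not part of the formal development) shows the gap between a highly structured sequence and a high-entropy one, which is the same gap that minimum complexity pursuit exploits.

```python
import random
import zlib

# A highly structured ("low-complexity") byte string vs. a pseudorandom one.
structured = b"\x00" * 4096
rng = random.Random(0)
noisy = bytes(rng.getrandbits(8) for _ in range(4096))

# Compressed length acts as a computable upper bound on description length.
len_structured = len(zlib.compress(structured, 9))
len_noisy = len(zlib.compress(noisy, 9))

print(len_structured, len_noisy)  # the structured string compresses far better
```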
Given an optimal universal machine ${\tt U}$, the [*prefix Kolmogorov complexity*]{} of $\xv\in\{0,1\}^{*}$ with respect to ${\tt U}$ is defined as $${K}_{{\tt U}} ({{\bf x}}) \triangleq \min\{\ell(\sv): {\tt U}(\sv)=\xv\}.$$ Proof of Theorem \[thm:properties\] {#app:proof_thmprop} ----------------------------------- - The following program prints $\xv$: Print the following bit sequence $x_1, x_2, \ldots, x_{\ell({{\bf x}})}$. The first part, which explains the structure, has a constant length $c$, and then the bits themselves require $\ell({{\bf x}})$ bits. Therefore, the length of the program is less than $\ell({{\bf x}})+c$. - Let ${\mathbf{p}}_{{\bf x}}$ and ${\mathbf{p}}_{{\yv }}$ denote the shortest programs that print ${{\bf x}}$ and ${\yv }$, respectively. The following program prints $({{\bf x}},{\yv })$: Print a concatenation of two numbers and the programs for these two numbers are ${\mathbf{p}}_{{\bf x}}$ and ${\mathbf{p}}_{{\yv }}$. Note that since the programs are assumed to be prefix-free, after the explanation, “Print a concatenation of two numbers”, the machine continues until it goes into the halting state. At this point it has already printed $\xv$. But since it knows that we expect another number, it again starts to read the bits and therefore will print ${\yv }$ as well. - The proof of this part is also straightforward, since using a constant number of bits, the code can be required to ignore the extra information ${\yv }$, and then use the code that achieves $K(\xv)$. - We use the same program that we used in Part i. Notice that since the machine does not know $\ell({{\bf x}})$, we should spend $K(\ell({{\bf x}}))$ bits to describe this number as well. Hence, overall we require $K({{\bf x}}\, | \, \ell({{\bf x}})) + K(\ell({{\bf x}}))+c$ bits. - First note that the length of the binary representation of $n$, which is denoted by $\ell(n)$, is $\log n$. 
According to Part iv we have $$\begin{aligned} K(n) &\leq K(n \, | \, \ell(n)) + K(\ell(n))+c \leq \ell(n)+ 2 \max(\log(\log n) ,1) + c' \nonumber \\ &\leq \log n + 2 \max(\log\log n,1) + c'. \end{aligned}$$ - The proof is very similar to the proof of Part ii, and hence we skip it. [^1]: S. Jalali is with the Center for Mathematics of Information, California Institute of Technology, Pasadena, CA, [shirin@caltech.edu]{} [^2]: A. Maleki and R. G. Baraniuk are with the Digital Signal Processing group, Rice University, Houston, TX, [$\{$arian.maleki, richb$\}$@rice.edu]{} [^3]: This paper was presented in part at Allerton Conference on Communication, Control and Computing, 2011 and at IEEE International Symposium on Information Theory, Cambridge, MA, 2012. [^4]: In Appendix \[app:kol\], we review some basic definitions of prefix Kolmogorov complexity. See [@book_vitanyi] for more details on the subject and also on the difference between prefix Kolmogorov complexity and its non-prefix version. [^5]: Note that $K(\xv \ | \yv)$ is often defined as $K(\xv \ | \yv, {\mathbf{p}}_{\yv })$ where ${\mathbf{p}}_{\yv }$ is the shortest program that generates $\yv $. This formulation provides symmetry in the definition of algorithmic mutual information. But we will not use this definition in this paper. [^6]: As long as all the singular values are upper bounded by a constant the statement of this example holds. For the notational simplicity we choose $1$ as the upper bound for the singular values. [^7]: Note that in calculating the information dimension we assume that $n$ and $m$ are given to the universal computer. Otherwise we would need $\log^*n$ and $\log^*m$ bits to describe them to the machine. [^8]: A singularity is a point at which the function is not infinitely differentiable.
--- abstract: 'Due to rising mobility worldwide, a growing number of people utilizes cellular network services while on the move. Persistent urbanization trends raise the number of daily commuters, leading to a situation where telecommunication requirements are mainly dictated by two categories of users: 1) Static users inside buildings, demanding instantaneous and virtually bandwidth-unlimited access to the Internet and Cloud services; 2) moving users outside, expecting ubiquitous and seamless mobility even at high velocity. While most work on future mobile communications is motivated by the first category of users, we outline in this article a layered cellular network architecture that has the potential to efficiently support both user groups simultaneously. We deduce novel transceiver architectures and derive research questions that need to be tackled to effectively maintain wireless connectivity for the envisioned Society in Motion.' author: - | Stefan Schwarz$^\dagger$  and Markus Rupp$^\ddagger$ \ $^\dagger$ Christian Doppler Laboratory for Dependable Wireless Connectivity for the Society in Motion\ $^\dagger\,^\ddagger$ Institute of Telecommunications, Technische Universität (TU) Wien, Austria\ Email: {sschwarz,mrupp}@nt.tuwien.ac.at [^1] bibliography: - 'Network\_SiM.bib' title: Cellular Network Architectures for the Society in Motion --- Introduction {#sec:Introduction} ============ Two important global trends are currently challenging many metropolises worldwide: 1) Growing urbanization leads to increased mobility of people commuting to and within cities, regularly overloading public and private transportation systems [@UN_urbanization_2014]; 2) ever-increasing data traffic demands, caused by popular online applications and services, drive cellular networks to their limits [@Ericsson2015]. 
Since people increasingly utilize their mobile devices for online activities, such as shopping, entertainment and socializing, while commuting and traveling, these two trends will come together to cause a challenging future situation for wireless communications, where high data rate connectivity must be provided to a large number of potentially highly-mobile users in networks that are already crowded with quasi-static (nomadic) users. Machine-type communication will further aggravate this problem, as road and rail vehicles are expected to employ mobile networks to provide broadband services to their customers, to support applications such as remote sensing and maintenance, and to exchange messages to improve road traffic safety and efficiency. Even though basic support of a few users with speeds as high as 500km/h is foreseen in the , the network is not designed to efficiently serve large numbers of high-mobility users. Following the ongoing progress of 5G, it is observed that most research work is motivated by achieving the highest peak data rates and large network capacities for quasi-static (nomadic) users, by reducing network latency to enable novel “tactile Internet” applications, and by enhancing energy efficiency to decrease the global energy footprint of mobile communications [@5GPPP_2015]. Although enhancing mobility is commonly considered a (possibly secondary) 5G goal, it is actually not so much the high-mobility support that needs to be improved for the envisioned Society in Motion, but rather the network capacity for very large numbers of mobile users, which is to a large extent a network-level issue. 
Considering ultra-dense networks that employ small cells “on every lamp-post” to cover areas of mere tens to hundreds of square meters and carrier frequencies that constantly increase to alleviate bandwidth scarcity, even users with relatively low velocities (say 30km/h) have to be considered as highly mobile from a network perspective, further exacerbating the problem outlined above. *Contribution:* In this article, we outline and describe a feasible layered cellular network architecture that encompasses many established enabling 5G technologies to not only support commonly agreed 5G targets, but also to enhance network capacity for highly-mobile users. We explain the functions that the individual layers have to implement and how these layers have to cooperate to enable efficient network operation. We furthermore deduce transceiver architectures that play a key role in realizing the envisioned network structure. Network Architecture {#sec:Net} ==================== Macro base stations, when operating in the lower microwave frequency bands (e.g., 800MHz - 2GHz), are the method of choice for providing coverage over large geographical areas with a comparatively small number of base station sites. Hence, such macro base stations will also form the backbone of future mobile networks as illustrated in \[fig:Net\]. It has, however, already been recognized during the development of mobile networks that macro base stations alone can neither satisfy increasing network capacity demands, nor can they support energy efficiency requirements imposed on systems [@5GPPP_2015]. Thus, network densification was a central theme in the development of cellular networks and will continue being so in , in order to sustain ever-increasing network capacity demands [@Damnjanovic2011]. In , mostly autonomous small cells are employed to provide coverage and capacity in indoor and hot-spot locations. 
Small cells, however, are not a satisfactory solution for supporting mobile users, since coverage areas are small, implying frequent time-consuming and error-prone hand-overs between different autonomous cells. This has been recognized by the and countermeasures have been taken by including dual-connectivity within Release 12. Yet, this a-posteriori fix is not the most efficient and reliable solution, because the intrinsic macro-diversity provided by multiple neighboring small cells cannot be exploited, since small cells do not support advanced joint transmission techniques due to the lack of a powerful backhaul. Hence, will in part outstrip small cells in especially for professional indoor micro solutions [@Checko2015] and for outdoor network capacity enhancement, as they facilitate advanced schemes that improve network capacity [@Schwarz-TWC2014]. The major disadvantage of a conventional is that its components require dedicated (or equivalent) infrastructure to communicate with their controlling macro base station. To alleviate this drawback of , we propose a novel type of network access node, the , which is a featuring an and a wireless backhaul to the macro base station as detailed in \[sec:eRRH\]. Such can be placed opportunistically within the network, requiring power supply only. Macro base stations can take control over using wireless backhaul connections. This allows establishing on-demand by associating a number of with one or several macro base stations; such a is illustrated in the upper-right part of \[fig:Net\]. High capacity wireless backhauling can be achieved with double-sided massive systems, enabling highly directive transmission between macro base station and as further described in \[sec:double\_massive\]. Macro base stations coordinate these associations amongst each other, to meet certain capacity, diversity and/or mobility requirements of their users. 
To efficiently support requirements, the individual layers of the presented heterogeneous cellular network architecture have to collaboratively fulfill certain tasks, as we describe below in detail. In Table \[fig:Coord\] we summarize the three levels of cooperation required in the described layered cellular network architecture to enable realization of targets. Cooperation on the macro-layer will involve enhanced features that apply game-theoretic and other optimization methods to achieve large-scale network coordination over long time scales (minutes to hours). Cooperation in-between the macro- and micro-layers is performed on the meso-level, which mostly deals with user and network node assignment issues that apply over areas covered by several macro base stations. Micro-level cooperation, finally, utilizes advanced multi-antenna and multi-point transmission/reception techniques on a time scale, to achieve efficient and robust coordination of a few neighboring radio access nodes of the micro-layer.

Macro-Layer Functions {#sec:macro}
---------------------

In future cellular networks, the importance of macro base stations for serving users will decrease. Nevertheless, the macro-layer will play a central role in mobile communications, by undertaking network management functionality and providing wireless backhaul to network access nodes of the micro-layer, such as small cells and . We anticipate that future macro base stations will be employed to supervise and maintain data transmissions from the micro-layer, thereby enhancing transmission efficiency by utilizing the holistic network view available at the macro-layer. For that purpose we identify the following key functions of the macro-layer:

### Mobility management and user assignment

A major task of the macro-layer is assisting the assignment of users amongst network access nodes of the micro-layer in the long term and over large geographic areas.
This is especially important for highly-mobile users that quickly traverse the coverage regions of small cells. Depending on the sojourn time within coverage regions, it can be beneficial to avoid the attachment of highly-mobile users to small cells, in order to minimize signaling load. If macro base stations are able to determine locations and trajectories of users (e.g., through explicit user feedback or by employing multi-antenna localization techniques), they can also assist in reserving resources at network access nodes along the path of users to minimize hand-over interruptions. These methods are especially promising for users that move along well-defined paths such as (rail-)roads.

### Data provisioning and geocasting

Data provisioning can help reduce backbone load and latency by caching popular content, such as videos, as close to users as possible [@Bastug2015]. Here, macro base stations can support the identification of user hot-spot locations to initiate data caching at the respective small cells of the micro-layer. In the context of mobile users, data provisioning goes hand-in-hand with mobility management as described above to minimize latency after hand-overs. Furthermore, in applications such as , information is mostly relevant only for users in certain geographic areas; e.g., road hazard warnings are important for users moving on the respective stretch of road. Such schemes can be implemented efficiently if geocasting is supported by the macro-layer.

### Provisioning of signaling information

Even though payload data transmission will mostly be handled by the micro-layer in future mobile networks, the transmission of signaling information may still reside with the macro-layer. Such control-plane/user-plane splitting concepts promise robust and efficient wireless connectivity for mobile users in  [@Ishii12].
Especially with -based small cells, which are prone to signal outage [@Rangan2014], keeping the control-plane at the macro-layer can reduce loss of connection by enabling fast hand-over to alternative access nodes.

### Coverage backup

The macro-layer will retain its role as a reliable coverage solution in areas that do not economically justify the provisioning of high-performance micro-layers (e.g., sparsely populated rural areas and back country). Furthermore, the macro-layer will provide back-up connectivity whenever network access nodes of the micro-layer are in outage, thus enhancing the reliability of data transmission through extra macro-diversity. This is especially important when employing based radio access on the micro-layer, since the outage probability in the regime is high due to severe shadowing effects [@Rangan2014]. Moreover, the macro-layer will support users at highest mobility that traverse coverage regions of the micro-layer in very short time (fractions of a second) and cannot efficiently be co-scheduled with other users on the micro-layer, for the following reason: the high-performance micro-layer will apply aggressive spatial multi-user multiplexing to efficiently serve large numbers of users in parallel. Such schemes require accurate , which is commonly not available at very high mobility due to fast temporal channel variations. Hence, micro-layer efficiency can be enhanced by offloading highest-mobility users to the macro network. Further efficiency improvements for highest-mobility users that move along predetermined paths (e.g., rail-roads) are possible by providing macro base stations with dedicated distributed antennas to reduce the access distance between users and antenna arrays [@Mueller2015].

Meso-Layer Functions {#sec:meso}
--------------------

The meso-layer covers the interaction between macro base stations and other (semi-)autonomous radio access nodes of the micro-layer, such as small cells and .
For that purpose it has to support the following functions:

### Wake-up on-demand

An important goal is the reduction of the global energy footprint of mobile networks. This can most easily be achieved by deactivating network nodes of the micro-layer whenever they are not required to satisfy demand, as gauged by the macro-layer. The largest energy savings are possible when chains of deactivated access nodes can be completely powered down; yet, this implies a power-up delay when reactivating, which can be problematic for highly-mobile users. To avoid excessive delay, the macro-layer should hence apply predictive methods to determine early on when to reactivate radio access nodes.

### Dynamic access node association

The meso-layer is responsible for associating to macro base stations to form on-demand. This implies coordination amongst macro base stations to determine optimal associations, as well as forwarding the corresponding control information to the micro-layer. With the trailing cell concept can be effectively realized, which virtually moves the signal of a cell along with users by switching between .[^2]

### Management of coordination areas

The meso-layer has to identify small cells of the micro-layer that should employ techniques to improve performance (reduce interference, enhance macro-diversity, improve mobility). This implies assisting the dynamic formation of network coordination areas and forwarding of coordination information in-between small cells, as well as between the macro- and micro-layers in the case of macro-assisted coordination and control-plane/user-plane splitting.

### Wireless backhauling

Future networks will employ highly directive beamforming, as enabled by massive and technologies, to establish on-demand wireless backhaul connections between network access nodes of the same layer and across layers (macro to micro).
Since these network access nodes are basically static, accurate can be obtained at all involved nodes with minimal effort, facilitating sophisticated coordinated beamforming/precoding techniques. On-demand wireless backhauling extends the available options for opportunistic placement of network access nodes, since fixed infrastructure requirements are minimized. It also enables advanced dynamic coordination of network access nodes that are not equipped with a powerful fixed backhaul.

Micro-Layer Functions {#sec:micro}
---------------------

The micro-layer is the actual user access layer of the mobile network. It has to support the network capacity and other requirements imposed on systems. This will necessitate on-demand and dynamic level coordination of multiple neighboring radio access nodes to control interference, provide robustness with respect to signal outages and enhance mobility support. The corresponding cooperation schemes will involve comparatively simple time-frequency coordination of resource allocations, but also sophisticated spatial techniques, such as joint transmission from spatially distributed access nodes and interference alignment. Coordination amongst micro-layer access nodes can either be autonomous, employing distributed optimization algorithms such as reinforcement learning [@Simsek15], or macro-assisted, enabling joint optimization at a central entity. With such methods, the micro-layer can efficiently and dependably realize (amongst others) the following functions for mobile users:

### User association

In current cellular networks, user association to network access nodes is basically triggered from the user side based on channel quality measurements. To avoid overloading of crowded small cells and base stations, several strategies for autonomous and coordinated load balancing in have been proposed in the literature [@Ye2013], often applying SNR thresholding to artificially increase the coverage areas of lightly loaded cells.
Even though such methods can produce favorable load conditions in static scenarios, for highly-mobile users the association should additionally account for the sojourn times within coverage regions to avoid large signaling overhead. Highly-mobile users commonly move along specific paths (roads); this repetitive side information can be utilized by the access nodes to adaptively train optimal radiation beam-patterns and user association strategies over time, e.g., through reinforcement learning techniques.

### Provisioning of spatial macro-diversity

The micro-layer of future ultra-dense cellular networks will inherently possess a large degree of spatial macro-diversity. By means of dynamic coordination amongst radio access nodes, this macro-diversity can be made available to users to enhance robustness with respect to signal outages. In current mobile networks, however, backhaul connections between small cells are mostly not sufficiently powerful to enable fast autonomous coordination, as required for mobile users. This problem can be alleviated through wireless backhauling as described in \[sec:meso\].

### Vehicular communications

In the context of high-mobility users, have gained interest in recent years for applications, promising enhanced road traffic safety and efficiency. Lately, the interest in mobile communications for such use cases has been increasing, since suitable technology is available comparatively cheaply off-the-shelf and the required network infrastructure is practically ubiquitous. This has also been recognized by the and early LTE standardization efforts for vehicular (LTE-V) are ongoing in the development of Release 14, with the goal of supporting services and providing high-bandwidth infotainment applications for in-car users. We anticipate that networks will utilize a combination of and mobile communications to mutually enhance the broadband experience of participating users.
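As a toy illustration of the learning-based association mentioned above, the sketch below runs an epsilon-greedy bandit in which each arm is a candidate access node and the reward penalizes nodes with short sojourn times (amortized handover signaling cost). All node names, rates, sojourn times and the reward model are hypothetical choices for illustration only; this is not a scheme from the cited literature.

```python
import random

# Minimal epsilon-greedy bandit for user association (illustrative sketch).
# Arms = candidate access nodes; the reward favors throughput but penalizes
# nodes whose coverage a fast user traverses quickly (handover cost).

random.seed(1)

NODES = ["macro", "small_cell_A", "small_cell_B"]  # hypothetical nodes
# Hypothetical (mean rate [Mbit/s], sojourn time [s]) along one user path:
STATS = {"macro": (20.0, 60.0),
         "small_cell_A": (80.0, 1.5),
         "small_cell_B": (60.0, 8.0)}
HANDOVER_COST = 50.0  # signaling penalty, amortized over the sojourn time

def reward(node):
    rate, sojourn = STATS[node]
    return rate - HANDOVER_COST / sojourn

def epsilon_greedy(trials=2000, eps=0.1):
    est = {n: 0.0 for n in NODES}
    count = {n: 0 for n in NODES}
    for _ in range(trials):
        node = (random.choice(NODES) if random.random() < eps
                else max(NODES, key=est.get))
        r = reward(node) + random.gauss(0.0, 5.0)  # noisy observation
        count[node] += 1
        est[node] += (r - est[node]) / count[node]  # running mean
    return max(NODES, key=est.get)

# small_cell_A offers the highest raw rate, but its short sojourn time makes
# small_cell_B preferable once the handover cost is accounted for.
print("learned association:", epsilon_greedy())
```

The point of the sketch is that the learned policy differs from pure max-rate association as soon as sojourn time enters the reward, mirroring the argument in the text.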
Deduced Transceiver Architectures {#sec:TX}
=================================

To realize the network architecture illustrated in \[fig:Net\], several new or modified types of radio access networks have to be considered, whose specifics and features we discuss below.

Double-Sided Massive MIMO {#sec:double\_massive}
-------------------------

Wireless backhauling between macro base stations and small cells/, as described in \[sec:meso\], has to provide very high capacity in order to sustain high-rate data transmissions to several users of the micro-layer in parallel. Therefore the band lends itself to these wireless connections, due to the large amount of available bandwidth in the regime. Alternatively, since transmission is sensitive to weather conditions (rain, fog) and thus prone to outage, transmission in the low-GHz regime can be applied to reliably cover large distances. In both cases, large antenna arrays on both ends of the links, i.e., at macro base stations and small cells/, make sense to achieve large spatial multiplexing gains and guarantee minimal interference to unintended network nodes. This leads to so-called double-sided massive transmission, as illustrated in \[fig:Double\]. Such systems are to date to a large extent unexplored; only in [@Schwarz_CCNC16] do the authors investigate several downlink multi-user transceivers for double-sided massive antenna arrays, showing that the achieved rate is very sensitive to antenna correlation, especially at the transmitter side. Double-sided massive systems are also relevant in , due to the large beamforming gains achievable at transmitter and receiver that enable comparatively efficient remote energy supply. In \[fig:Double\], we consider an example of wireless backhauling between macro base stations and several small cells/ over a dedicated bandwidth.
Since all involved network nodes are static, the wireless channels in-between them will not vary much over time, allowing accurate estimates of to be obtained at the transmitters and receivers with minimal effort. The corresponding system model is equivalent to interfering multi-cell data transmissions in cellular networks, despite having massive antenna arrays on both ends of the links. Yet, transceiver designs cannot directly be adopted from such cellular systems, since, for reasons of complexity, in massive hybrid base band/ signal processing is applied in many cases to reduce the number of required chains [@Alkhateeb2014]. In case wireless backhauling employs the same bandwidth as data transmission between users and access nodes of the micro-layer, the system model of \[fig:Double\] has to be further extended to enable joint optimization of backhauling and user data transmission. This corresponds to a situation with mixed single-/double-sided massive transmission, since user equipment will in general not support massive antenna arrays for reasons of space and complexity. Hence, such systems open a new avenue of research and engineering tasks, ranging from practical transceiver designs to fundamental questions regarding the limits of achievable data rates of double-sided massive .

Massive Distributed Antenna Arrays
----------------------------------

DASs are well known to improve the coverage of cellular networks, to reduce signal outages through additional macro-diversity and to enhance network capacity [@Heath2011]. In contrast to autonomous small cells, have the advantage that they are centrally controlled by a macro base station, which enables the implementation of highly efficient joint transmission and coordinated beamforming techniques [@Schwarz-TWC2014]. On the other hand, if such distributed antennas are well isolated from each other, they can also be utilized to radiate independent signals, mimicking the behaviour of autonomous small cells.
An efficient way of implementing a is to couple several spatially distributed to a macro base station utilizing high-bandwidth and low-latency technology. Employing large antenna arrays at the additionally allows harvesting the gains promised by three-dimensional beamforming and solutions, techniques that are currently of large interest within the scientific community as well as in the standardization of Release 14 and beyond. A simplified block-diagram of such a massive is illustrated in \[fig:DAS\]; here, the macro base station is responsible for processing the users’ signals and for up- and down-converting them to and from . The enable adaptive beamforming through radio distribution networks, consisting of phase shifters and combiners, that are remotely controlled by the macro base station over dedicated side links. Massive can support the highest network capacity, by coordinating transmissions from several massive antenna arrays to achieve large spatial multiplexing gains with minimal interference. Compared to autonomous small cell solutions, they additionally enhance the dependability of the wireless connection, since users can, in case of outage, be immediately and seamlessly switched over from one to the next, without requiring any coordination amongst independent network access nodes. This feature is also advantageous for achieving reliable and robust support of high-mobility users [@Mueller2015], since frequent hand-overs between autonomous access nodes can be avoided.

Enhanced Remote Radio Heads {#sec:eRRH}
---------------------------

The classical , as already utilized in today’s cellular networks, has three main disadvantages: 1) the assignment of to base stations is static; 2) dedicated high-bandwidth low-latency links, such as , are required to attach the active antennas to the base station; 3) all signal processing has to be performed by the base station, since do not possess any processing capabilities.
The first two drawbacks can be avoided by extending the with chains to act as and by attaching these on-demand dynamically to macro base stations to form as required. To alleviate the third drawback, we consider enhancing the capabilities of to enable autonomous base band and/or beamforming. Equipped with such , the signal processing load at the base stations is reduced to multi-user scheduling and resource assignment, whereas base band precoding and beamforming are automatically optimized by the . In \[fig:RRH\], we show the block-diagram of such a employing : the pre-processed base band data of the users is forwarded to the , either over dedicated connections or sharing the bandwidth with other transmissions. To optimize the beamforming/precoding weights, the are provided with side information about intended and unintended users by the base station. Although similar to classical decode-and-forward relay nodes, thus obtain additional input information from the macro base station and can coordinate amongst each other to enable joint optimization of beamforming/precoding weights. Compared to traditional , precoding with occurs entirely transparently to base stations, allowing the required backhaul capacity to be reduced, since the actual user data is in general of much smaller dimension than the precoded signal, especially with massive . This is highly attractive for systems employing massive antenna arrays, since such systems share the wireless medium as backhaul connection. Furthermore, keeping beamforming/precoding adaptation as close to the wireless channel as possible reduces the latency, because the up- and downlink delay of -sharing between base station and is omitted. This makes such solutions suitable for high-mobility scenarios that require fast beamforming adaptation due to strong temporal channel variations.
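The backhaul-saving argument above can be sketched with a toy zero-forcing precoder computed locally at the remote unit: the backhaul then carries only the $K$ user streams instead of the $M$ precoded antenna signals. The dimensions, the i.i.d. Rayleigh channel and the choice of zero-forcing are our illustrative assumptions; the article does not prescribe a specific precoder.

```python
import numpy as np

rng = np.random.default_rng(0)

K, M = 4, 64  # K user streams on the backhaul, M antennas at the remote unit
# Hypothetical i.i.d. Rayleigh channel between M antennas and K single-antenna users:
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder computed locally from the channel estimate:
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # shape (M, K)

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # user data symbols
x = W @ s  # precoded antenna signal of dimension M >> K

# ZF removes inter-user interference: each user receives its own symbol.
y = H @ x
print("max residual interference:", np.max(np.abs(y - s)))
print(f"backhaul carries {K} streams instead of {M} antenna signals")
```

The precoded vector has dimension 64 while only 4 data streams cross the backhaul, which is the dimensionality gap the text appeals to when arguing for precoding at the remote unit.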
Research Challenges {#sec:chall}
===================

Utilization of the presented layered cellular network architecture to support the envisioned Society in Motion comes with several research challenges, associated with the interaction of network access nodes within and in-between layers. We summarize the most critical research issues below:

#### Network coordination

Dynamic coordination of network access nodes on all layers of the network is a central enabler for the features described in \[sec:Net\]. A static or semi-static setup of network parameters, possibly even by hand as is still common practice in current cellular networks, is neither feasible nor efficient in future ultra-dense mobile communications; networks should rather support autonomous self-optimization. For that purpose, efficient and reliable coordination methods for the different network layers need to be developed. On a small scale, involving several network access nodes, cooperative schemes that jointly optimize the state of the involved nodes enable the highest performance; yet, such schemes do not scale well with the number of involved nodes. Thus, on a large scale, competitive (game-theoretic) schemes are promising candidates for achieving efficient coordination. Especially the interaction of such cooperative and competitive schemes needs to be addressed by further research.

#### Wireless backhauling solutions

On-demand wireless backhauling is in many cases an enabler for sophisticated coordination schemes, since a powerful wired backhaul can often not be supplied without significant cost, specifically in outdoor locations. However, this requires very high-capacity solutions for wireless backhauling to support the data rate demands of many attached users. Such solutions may be available in and massive technologies, utilizing large antenna arrays on both ends of the transmission link.
Transceiver architectures and computationally efficient schemes for such double-sided massive systems, enabling efficient and reliable backhauling, need to be developed.

#### enhancements

When serving high-mobility users, the itself is often the limiting factor for the achievable rate. Several studies have identified severe weaknesses of when serving users at high velocity. These are mostly caused by inaccurate at transmitter and receiver, hindering the application of sophisticated transmission and reception schemes. Enhancements of pilot symbol structures and feedback algorithms have been proposed, which trade increased overhead for reduced inaccuracy. To get the most out of such schemes, they should operate adaptively with respect to the delay-Doppler dispersion characteristics of the wireless channel. With , novel multi-carrier waveforms are likely to be employed; this gives the opportunity to design waveform parameters to optimally match the dispersion characteristics of the channel. To efficiently and robustly support indoor/outdoor static/mobile users with diverse channel characteristics, waveforms should be able to adjust the time-frequency spacing, the applied prototype pulse as well as the length, potentially for each user individually.

Conclusion
==========

In this article, we discussed challenges for mobile communications imposed by a likely future development, where a large number of highly-mobile users must be served efficiently and dependably in parallel to even more quasi-static users. We outlined a layered cellular network architecture and discussed functions that must be realized by the individual layers of the network to support such demands. We highlighted the importance of dynamic and autonomous self-coordination for achieving efficient operation of the network, and we identified several research challenges that need to be addressed to realize a future connected Society in Motion.
[^1]: The financial support by the Austrian Federal Ministry of Science, Research and Economy, by the National Foundation for Research, Technology and Development and by TU Wien is gratefully acknowledged.

[^2]: Note that in the literature this is also known as a moving cell; we avoid this term, since it is also used for small cells that are mounted on vehicles.
---
abstract: 'The electronic states near a surface or a domain wall in the $p_x$$\pm$i$p_y$-wave superconductor are studied. This state has been recently suggested as the superconducting state of Sr$_2$RuO$_4$. The $p_x$$\pm$i$p_y$-wave pairing state breaks the time reversal symmetry and induces a magnetic field. The obtained temperature dependence of the magnetic field is consistent with the observed $\mu$SR data.'
address:
- 'Department of Physics, Faculty of Science, Shizuoka University, 836 Oya, Shizuoka 422-8529, Japan'
- 'Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan'
author:
- Masashige Matsumoto
- Manfred Sigrist
title: 'Quasiparticle States near the Surface and the Domain Wall in a $p_x$$\pm$i$p_y$-Wave Superconductor'
---

, $p$-wave superconductor; quasi-classical theory; Sr$_2$RuO$_4$

Studying unconventional superconductors has become one of the most attractive problems in recent condensed matter research. Among them is the recently discovered Sr$_2$RuO$_4$ [@Maeno]. A triplet pairing state with d-vector ${\mbox{\boldmath$d$}}({\mbox{\boldmath$k$}})$=$(k_x$$\pm$${{\rm i}}k_y){\hat z}$ has been suggested [@Sigrist]. The tunneling conductance for such a pairing state has been examined, finding that the conductance peak features related to the bound states are very sensitive to the angle of incidence of the electron [@Honerkamp; @Yamashiro]. Recently we have studied quasiparticle properties at the surface or domain wall and reported that the local density of states at the surface is constant and does not show any peak-like or gap-like structure within the superconducting energy gap at low temperatures, while at the domain wall it is v-shaped and contains a small gap-like feature [@Matsumoto]. The intrinsic magnetism observed in the superconducting phase by $\mu$SR experiments indicates a pairing state with broken time reversal symmetry [@Luke].
The magnetic field in the superconducting phase can be induced by a surface, a domain wall or an impurity [@Sigrist2]. In this paper we examine the temperature dependence of the magnetic field induced near the surface and the domain wall and compare it with the $\mu$SR experiment. For this purpose we use the same formulation as in our previous paper [@Matsumoto], which is based on the quasi-classical formulation developed by Schopohl $et$ $al$. [@Schopohl]. The spatial variation of the order parameter and vector potential can be determined self-consistently. For simplicity we assume a two-dimensional $p_x$+i$p_y$ state. In Fig. \[fig:1\] we show the magnetic field near a surface and a domain wall, which is formed between the $p_x-$i$p_y$ and $p_x$+i$p_y$ states.

![Induced magnetic field. $B_c$=$\Phi_0/2\sqrt{2}\pi\xi_0\lambda_{\rm L}$, $\lambda_{\rm L}$ is the London penetration depth and $\Phi_0$=$h/2e$. (a) Spatial dependence at several temperatures. $x$ is the distance from the surface or domain wall scaled by $\xi_0$=${v_{\rm F}}/\pi\Delta(0)$, where $\Delta(0)$ is the magnitude of the bulk order parameter at $T$=0. We chose a cutoff energy $\omega_c$=20$T_c$ and $\kappa$=$\lambda_{\rm L}/\xi_0$=2.5. Temperatures scaled by $T_c$ are depicted. $B_z$ is antisymmetric under $x$$\leftrightarrow$$-x$ for the domain wall. (b) Temperature dependence of the maximum $\mid B_z/B_c\mid$.[]{data-label="fig:1"}](figure1a.eps "fig:"){width="0.56\linewidth"} ![](figure1b.eps "fig:"){width="0.43\linewidth"}

Near $T_c$ the field maximum increases linearly with decreasing temperature and saturates at low temperatures. This temperature dependence is qualitatively consistent with the $\mu$SR experiment. In the surface case the energy level of the bound state is estimated as ${\Delta_y}({\mbox{\boldmath$k_{\rm F}$}})$, where ${\Delta_y}({\mbox{\boldmath$k_{\rm F}$}})$ is the $p_y$-component of the order parameter with momentum ${\mbox{\boldmath$k_{\rm F}$}}$. Therefore, bound states in the region ${{k_{\rm F}}_y}$$<0$ are occupied, yielding a spontaneous magnetic field, as long as they satisfy the condition $T$$<$$\mid{\Delta_y}({\mbox{\boldmath$k_{\rm F}$}})\mid$. An interesting magnetic property appears in the case of a $p_x$ state. It has been pointed out that the midgap state gives rise to a paramagnetic response [@Higashitani; @Fogelstrom; @Walter]. Let us demonstrate this in the $p_x$ state, which is suggested to be realized in the presence of a strong magnetic field in the $x$ direction [@Agterberg]. Figure \[fig:2\] shows the spatial dependence of the paramagnetic field in the $z$ direction.

![Spatial dependence of the magnetic field near the (1,0,0) surface for the $p_x$ state at several temperatures. A small external field $B_{\rm ext}$=0.01$B_c$ is applied in the $z$ direction. Temperatures scaled by $T_c$ are depicted.[]{data-label="fig:2"}](figure2.eps){width="0.5\linewidth"}

As studied in the $d$-wave case, the energy level of the midgap state shifts to $e{{v_{\rm F}}_y}A_y$, which splits the zero bias conductance peak [@Fogelstrom]. Here ${{v_{\rm F}}_y}$ and $A_y$ are the $y$-components of the Fermi velocity and vector potential, respectively. Note that $A_y$ has the opposite sign to $B_{\rm ext}$.
Therefore, bound states in the ${{k_{\rm F}}_y}$$>$0 region are occupied for a positive $B_{\rm ext}$, and this generates a magnetic field parallel to $B_{\rm ext}$. Bound states satisfying $T$$<$$\mid e{{v_{\rm F}}_y}A_y\mid$ contribute to the effect, and the paramagnetic field decreases rapidly with increasing temperature. In the real case, a small $p_y$ part can be induced by $B_{\rm ext}$. The realized phase of the $p_y$ component is such that it generates a surface current which induces a field parallel to $B_{\rm ext}$. This also results in a paramagnetic response. Without the strong field in the $x$ direction, it is difficult to see this paramagnetic response, since the occupied bound states are already asymmetric under ${{k_{\rm F}}_y}$$\rightarrow$$-{{k_{\rm F}}_y}$ and the state is difficult to modify with a small external field in the $z$ direction. This work was supported by a Grant-in-Aid for Encouragement of Young Scientists from the Japan Society for the Promotion of Science. [9]{} Y. Maeno $et$ $al$., Nature [**372**]{} (1994) 532. M. Sigrist $et$ $al$., in: Physics and Chemistry of Transition-Metal Oxides, (Springer, 1999). C. Honerkamp and M. Sigrist, J. Low Temp. Phys. [**111**]{} (1998) 895. M. Yamashiro, Y. Tanaka, Y. Tanuma and S. Kashiwaya, J. Phys. Soc. Jpn. [**67**]{} (1998) 3224. M. Matsumoto and M. Sigrist, J. Phys. Soc. Jpn. [**68**]{} (1999) 994; erratum, $ibid$. 3120. G. M. Luke $et$ $al$., Nature [**394**]{} (1998) 558. M. Sigrist and K. Ueda, Rev. Mod. Phys. [**63**]{} (1991) 239. N. Schopohl and K. Maki, Phys. Rev. B [**52**]{} (1995) 490. S. Higashitani, J. Phys. Soc. Jpn. [**66**]{} (1997) 2556. M. Fogelström, D. Rainer and J. A. Sauls, Phys. Rev. Lett. [**79**]{} (1997) 281. H. Walter $et$ $al$., Phys. Rev. Lett. [**80**]{} (1998) 3598. D. F. Agterberg, Phys. Rev. Lett. [**80**]{} (1998) 5184.
---
author:
- 'M. Musakhanov[^1]'
title: Heavy and light quarks in the instanton vacuum
---

Introduction. {#intro}
=============

The physics of heavy mesons and baryons with open and hidden heavy quarks is a very rich and active topic. Understanding heavy-meson physics is important for the evaluation of the components of the $CKM$-matrix, for the verification of the Standard Model and probing the physics beyond it, as well as for the production of different exotic meson states. Currently, experiments with $B$- and $D$-mesons are intensively pursued by the Belle [@Belle], BaBar [@BABAR] and CDF collaborations, where unprecedented integrated luminosities have been achieved, as well as neutrino-production of open and hidden charm in neutrino-hadron processes studied by the K2K [@K2K], MiniBoone [@MiniBooNE], NuTeV [@NuTeV] and Minerva [@Minerva] collaborations. Theoretically, in the pre-QCD era some success was achieved by quantum-mechanical models which use effective potentials to describe heavy hadrons and their excitations (see e.g. [@Eichten:1979ms] and references therein). However, such a description inevitably introduces undefined phenomenological constants. The relation of these constants to QCD parameters is quite obscure: due to the interaction with gluons and virtual light-quark pairs, all the constants contain nonperturbative dynamics. The numerical values of these constants are determined from fits to experimental data, which limits the predictive power of such models. An advanced version of the potential model is NRQCD [@Bodwin:1994jh]; however, in this model light quarks and their interactions with heavy quarks via gluons are treated in a phenomenological way. For this reason it is limited to the description of systems with two heavy quarks. Alternatively, heavy mesons are described in the Heavy Quark Effective Theory (HQET) proposed in [@Isgur:1989], which treats heavy mesons using pQCD methods but does not take into account nonperturbative effects.
We propose to study heavy quark physics in the framework of the instanton vacuum model. This model was developed in [@Diakonov] and has provided a consistent description of light meson physics [@Musakhanov]. One of the most prominent achievements of the instanton vacuum model is the correct description of the spontaneous breaking of chiral symmetry ($S\chi$SB), which is responsible for the properties of most hadrons and nuclei  [@Leutwyler:2001hn]. The $S\chi$SB is due to specific properties of the QCD vacuum, which is known to be one of the most complicated objects owing to perturbative as well as non-perturbative fluctuations, and is a very important object of investigation by the methods of Nonperturbative Quantum Chromo Dynamics (NQCD). In the instanton picture, $S\chi$SB is due to the delocalization of single-instanton quark zero modes in the instanton medium. One of the advantages of the instanton vacuum is that it is characterized by only two parameters: the average instanton size $\rho\sim0.3\,{\rm fm}$ and the average inter-instanton distance $R\sim1\,{\rm fm}$. These essential numbers were suggested in [@Shuryak:1981ff] and were derived from $\Lambda_{\overline{{\rm MS}}}$ in  [@Diakonov]. These values were recently confirmed by lattice measurements [@lattice]. The instanton vacuum description of heavy quarks was discussed in [@Diakonov:1989un; @Chernyshev:1995gj]. Even the charmed quark mass $m_{c}\sim1.5$ GeV is larger than the typical scales of the instanton medium, the inverse instanton size $\rho^{-1}\approx600$ MeV and the inverse inter-instanton distance $R^{-1}\approx200$ MeV, and thus the quark mass determines the dynamics of the heavy quarks. Light quark determinant with the quark source terms. 
{#Light quark} ==================================================== The instanton vacuum field is assumed to be a superposition of $N_{+}$ instantons and $N_{-}$ antiinstantons: $$\begin{aligned} A_{\mu}(x)=\sum_{I}^{N_{+}}A_{\mu}^{I}(\xi_{I},x)+\sum_{A}^{N_{-}}A_{\mu}^{A}(\xi_{A},x).\label{A}\end{aligned}$$ Here $\xi=(\rho,z,U)$ are the (anti)instanton collective coordinates: size, position and color orientation (see the reviews  [@Diakonov; @Schafer:1996wv]). The main parameters of the model are the average inter-instanton distance $R$ and the average instanton size $\rho$. The estimates of these quantities are $$\begin{aligned} & & \rho\simeq0.33\, fm,\, R\simeq1\,{\rm fm},\mbox{(phenomenological)}~~\mbox{\cite{Diakonov,Schafer:1996wv}},\nonumber \\ & & \rho\simeq0.35\, fm,\, R\simeq0.95\,{\rm fm},\mbox{(variational)}~~\mbox{\cite{Diakonov}},\nonumber \\ & & \rho\simeq0.36\, fm,\, R\simeq0.89\,{\rm fm},~\mbox{(lattice)}~\mbox{\cite{lattice}}\label{classicalParameters}\end{aligned}$$ and have $\sim10-15\%$ uncertainty. Our main approximation is the interpolation formula for the light quark propagator in a single instanton field: $$\begin{aligned} \label{Si} &&S_{i}=S_{0}+S_{0}\hat{p}\frac{|\Phi_{0i}><\Phi_{0i}|}{c_{i}}\hat{p}S_{0},\\ \nonumber &&S_{0}=\frac{1}{\hat{p}+im},\,\,\, c_{i}=im<\Phi_{0i}|\hat{p}S_{0}|\Phi_{0i}>\,.\end{aligned}$$ The advantage of this interpolation is seen in the projection of $S_{i}$ onto the zero modes: $$\begin{aligned} S_{i}|\Phi_{0i}>=\frac{1}{im}|\Phi_{0i}>,\,\,\,<\Phi_{0i}|S_{i}=<\Phi_{0i}|\frac{1}{im}\end{aligned}$$ as it must be, while the analogous projection of the $S_{i}$ given in  [@Diakonov] has a spurious component, negligible only in the $m\rightarrow0$ limit. 
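The diluteness of the instanton medium implied by these parameter estimates can be checked with one line of arithmetic. The following Python snippet (illustrative only; the function name is ours) evaluates the packing fraction $\pi^{2}(\rho/R)^{4}$, the small dimensionless parameter that controls the independent averaging over instantons used below:

```python
import math

def packing_fraction(rho_fm, R_fm):
    """Instanton packing fraction pi^2 * (rho/R)^4: a dimensionless
    measure of how dilute the instanton medium is."""
    return math.pi ** 2 * (rho_fm / R_fm) ** 4

# Parameter sets quoted above: (label, rho [fm], R [fm])
for label, rho, R in [("phenomenological", 0.33, 1.00),
                      ("variational", 0.35, 0.95),
                      ("lattice", 0.36, 0.89)]:
    print(f"{label:16s}: pi^2 (rho/R)^4 = {packing_fraction(rho, R):.3f}")
```

For the phenomenological values this gives roughly $0.12$, consistent with the estimate $\pi^{2}(\rho/R)^{4}\sim0.1$ invoked in the averaging argument of the next section; all three parameter sets give a packing fraction well below one.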
Summation of the re-scattering series leads to the light quark propagator in the instanton vacuum: $$S =S_{0} -S_{0}\sum_{i,j}\hat p |\Phi_{0i}> <\Phi_{0i}|\frac{1}{B}|\Phi_{0j}> <\Phi_{0j}|\hat p S_{0}, \label{propagator1}$$ where $B=\hat pS_0\hat p$ and $\tilde {\rm Tr}$ denotes the trace over flavor and over the zero-mode ($|\Phi_{0j}> $) space only. The explicit form of the matrix $ B(m)$ on this space is: $$B^{fg}_{ij}=\delta_{fg} <\Phi_{0i}|\hat p \, S_{0,f} \hat p |\Phi_{0j}> .$$ Then the low-frequency part of the light quark determinant [@Musakhanov] is $${{\rm Det}}_{\rm low} [m] = {\rm det} B(m). \label{detB}$$ After a few further steps [@Musakhanov] we obtain the fermionized representation of the low-frequency light quark determinant in the presence of quark sources, which is relevant for our problems, in the form: $$\begin{aligned} &&{\rm Det}_{\rm low}\exp(-\xi^{+}S\xi)=\int\prod_{f}D\psi_{f}D\psi_{f}^{\dagger} \prod_{\pm,f}^{N_{\pm}}V_{\pm,f}[\psi^{\dagger},\psi] \nonumber\\\label{part-func} &&\times\exp\int\sum_{f}\left(\psi_{f}^{\dagger}(\hat{p}\,+\, im_{f})\psi_{f}+\psi_{f}^{\dagger}\xi_{f}+\xi_{f}^{+}\psi_{f}\right),\end{aligned}$$ where $$\begin{aligned} \label{V} V_{\pm,f}[\psi^{\dagger},\psi]=&&i\int d^{4}x\left(\psi_{f}^{\dagger}(x)\,\hat{p}\Phi_{\pm,0}(x;\zeta_{\pm})\right)\\\nonumber &&\times\int d^{4}y\left(\Phi_{\pm,0}^{\dagger}(y;\zeta_{\pm})(\hat{p}\,\psi_{f}(y)\right).\end{aligned}$$ The light quark partition function $Z[\xi_f,\xi_f^+]$ is given by averaging ${\rm Det}_{\rm low}\exp(-\xi^{+}S\xi)$ over the collective coordinates of the instantons $\zeta_\pm$: $$\begin{aligned} \nonumber Z[\xi_f,\xi_f^+]=\int D\zeta &{\rm Det}_{\rm low}\exp(-\xi^{+}S\xi),\,\,\, D\zeta=\prod_\pm d\zeta_\pm.\end{aligned}$$ The averaging over the collective coordinates $\zeta_{\pm}$ is a rather simple procedure, owing to the factorized form of Eq. 
(\[part-func\]) and the low density of the instantons ($\pi^{2}\left(\frac{\rho}{R}\right)^{4}\sim0.1$), which allow us to average over the positions and orientations of the instantons independently. The light quark partition function at $N_f=1$ and $N_\pm=N/2$ is exactly given by $$\begin{aligned} &&Z[\xi,\xi^+]=\exp\left[{-\xi^+\left(\hat p \,+\, i(m+M(p))\right)^{-1}\xi}\right]\label{Z}\\\nonumber &&\times\exp\left[{\rm Tr}\ln\frac{\hat p+im+iM(p)}{\hat p+im }+N\ln\frac{N/2}{\lambda}-N\right] \\ &&N={\rm tr}\frac{iM(p)}{\hat p \,+\, i(m+M(p))},\, M(p)=\frac{\lambda}{N_c}(2\pi\rho F(p))^2. \label{M}\end{aligned}$$ Here the form factor $F(p)$ is given by the Fourier transform of the zero mode. The coupling $\lambda$ and the dynamical quark mass $M(p)$ are defined by Eq. (\[M\]). At $N_f=2$, $N_\pm=N/2$, in the saddle-point approximation (no meson-loop contributions), $$\begin{aligned} &&Z[\xi_f,\xi_f^+]=\exp\left[-\sum_f\xi_f^+\left(\hat p+im_f+iM_f(p)\right)^{-1}\xi_f\right] \label{ZNf=2}\\\nonumber &&\times\exp\left[N\ln\frac{N/2}{\lambda}-N - \frac{V\sigma^2}{2}+ \sum_f{\rm Tr}\ln\frac{\hat p+im_f+iM_f(p)}{\hat p+im_f }\right].\end{aligned}$$ Here $\lambda,\sigma$ and the dynamical quark mass $$M(p)=\frac{\lambda^{0.5}}{2g}( 2\pi\rho)^2F^2(p)\sigma,\,\,\,g^2=\frac{(N^2_c-1)2N_c}{2N_c-1}$$ are defined from the equations $$\begin{aligned} N=\frac{1}{2}{\rm Tr}\frac{iM_f(p)}{\hat p+im_f+iM_f(p)}=\frac{1}{2}\sigma^2.\end{aligned}$$ In general, for $N_f >2$ in the saddle-point approximation (no meson-loop contributions), $Z[\xi_f,\xi_f^+]$ has a form similar to Eqs. (\[Z\], \[ZNf=2\]). Light quark propagator {#Light quark propagator} ======================= The propagator is defined as $$\begin{aligned} S=\int DA \,\,{\rm Det}\left(\hat P+im\right)\frac{1}{\hat P+im},\,\,\, \hat P=\hat p+\hat A.\end{aligned}$$ In the instanton vacuum model $A\approx\sum_i A_i$, where the $A_i$ are instantons and $DA\approx D\zeta .$ Then, according to Eq. 
(\[M\]), the light quark propagator is: $$\begin{aligned} S=\frac{1}{\hat p \,+\, i(m+M(p))}.\end{aligned}$$ Pobylitsa [@Pobylitsa:1989uq] neglected the quark determinant: $$\begin{aligned} S_{Pob}=\int D\zeta \frac{1}{\hat P+im} \end{aligned}$$ and derived the equation: $$\begin{aligned} S^{-1}_{Pob}= S_0^{-1} +\int D\zeta \sum_i(S_{Pob}-\hat A^{-1}_i)^{-1}, \label{PobEq}\end{aligned}$$ where large-$N_c$ arguments were applied. Writing $S^{-1}_{Pob}- S_0^{-1} = \Sigma$, one finds the equation: $$\begin{aligned} &&\Sigma =\frac{N}{2VN_c}{\rm tr_c} \sum_\pm \int dz_\pm \frac{ \hat p|\Phi_{0,\pm}> <\Phi_{0,\pm}| \hat p}{\Sigma_0}+O\left[\left(\frac{N}{VN_c}\right)^2\right], \nonumber\\ &&\Sigma_0=<\Phi_{0,\pm}|\Sigma|\Phi_{0,\pm}>\end{aligned}$$ and finally (in the $m=0$ case) the solution for the dynamical quark mass: $$\begin{aligned} \label{MPob} M^2_{Pob}(k)= \frac{N}{4VN_c}\frac{(2\pi\rho)^4 F^4(k)}{ \int\frac{d^4q}{(2\pi)^4}\frac{(2\pi\rho)^4 F^4(q)}{q^2}}\end{aligned}$$ corresponding to the equation $$\begin{aligned} \label{MPob1} 4N_c \int\frac{d^4q}{(2\pi)^4}\frac{M^2_{Pob}(q)}{q^2}=\frac{N}{V}\end{aligned}$$ Eq. (\[M\]) for $\lambda$ (in the $m=0$ case) has the explicit form $$\begin{aligned} \label{Mdet} 4N_c \int\frac{d^4q}{(2\pi)^4}\frac{M^2(q)}{q^2+M^2(q)}=\frac{N}{V}.\end{aligned}$$ As we see, the difference between Eqs. (\[MPob1\]) and (\[Mdet\]) lies only in the denominators. This difference is due to the inclusion of the quark determinant in the derivation of Eq. (\[Mdet\]) (and Eq. (\[M\])). In the following we will use Eq. (\[M\]) for the dynamical quark mass $M(k)$. The general $N_f$ case in the saddle-point approximation shows no essential difference from the present $N_f=1$ case. Heavy quark propagator. {#Heavy quark} ======================= In Ref. [@Diakonov:1989un], an equation for the heavy quark propagator was considered along lines similar to Eq. (\[PobEq\]). 
Our aim here is to extend the approach of [@Diakonov:1989un], taking into account the light quark contribution in the $N_f=1$ case. So, we define the heavy quark propagator as: $$\begin{aligned} &&S_H=\frac{1}{Z}\int D\psi D\psi^{\dagger} \prod_{\pm}^{N_{\pm}}\bar V_{\pm}[\psi^{\dagger} ,\psi ]\,e^{\int\psi^{\dagger}(\hat p+im )\psi} w[\psi,\psi^\dagger], \nonumber\\ \nonumber &&w[\psi,\psi^\dagger] =\int \frac{D\zeta}{\prod_{\pm}^{N_{\pm}}\bar V_{\pm}[\psi^{\dagger} ,\psi ]} \prod_{\pm}^{N_{\pm}}V_{\pm}[\psi^{\dagger} ,\psi ]\frac{1}{\theta^{-1}-\sum_i a_i}, \nonumber\\ && w_\pm=\frac{1}{\theta^{-1}-a_\pm},\,\, <t|\theta|t'>=\theta(t-t'), \\\nonumber && <t|\theta^{-1}|t'>=-\frac{d}{dt}\delta(t-t'), a_i(t)=iA_{i,\mu}(x(t))\frac{d}{dt}x_\mu(t).\end{aligned}$$ In $w[\psi,\psi^\dagger]$ the integration measure has the factorized form $ \prod_{\pm}^{N_{\pm}}\frac{d\zeta_\pm}{\bar V_{\pm}[\psi^{\dagger} ,\psi ]}$, as in Eq. (\[PobEq\]). This provides a way to extend that equation. The extended equation, taking the light quarks into account, has the form: $$\begin{aligned} \label{Eqw} &&w^{-1}[\psi,\psi^\dagger]=\\\nonumber &&= \theta^{-1} +\int \prod_{\pm}^{N_{\pm}}\frac{d\zeta_\pm}{\bar V_{\pm}[\psi^{\dagger} ,\psi ]} \sum_i\left(w[\psi,\psi^\dagger]-\hat A^{-1}_i\right)^{-1}.\end{aligned}$$ Again, we obtain the approximate solution of this equation 
as: $$\begin{aligned} &&w^{-1}[\psi,\psi^\dagger]-\theta^{-1}= \\ &&= \frac{N}{2}\sum_\pm \int\frac{ d\zeta_\pm}{\bar V_{\pm}[\psi^{\dagger} ,\psi ]} V_{\pm}[\psi^{\dagger} ,\psi ]\left( \theta-a_\pm^{-1}\right)^{-1}+ O(N^2/V^2) \nonumber \\ &&=-\frac{N}{2}\sum_{\pm}\int\frac{ d\zeta_\pm}{\bar V_{\pm}[\psi^{\dagger} ,\psi ]}V_{\pm}[\psi^{\dagger} ,\psi ]\frac{1}{\theta} (w_\pm-\theta)\frac{1}{\theta}+ O(N^2/V^2) \nonumber \\ \nonumber &&\equiv - \frac{N}{2}\sum_\pm \frac{1}{\bar V_{\pm}[\psi^{\dagger} ,\psi ]}\Delta_{H,\pm}[\psi^{\dagger},\psi ] + O(N^2/V^2)\end{aligned}$$ and finally we get $$\begin{aligned} \label{SH1} S_H=\left[\frac{1}{\theta^{-1} - \lambda\sum_\pm\Delta_{H,\pm}[\frac{\delta}{\delta\xi} ,\frac{\delta}{\delta\xi^+}] } e^{-\xi^+\left(\hat p + i(m+M(p))\right)^{-1}\xi}\right]_{|_{\xi=\xi^+=0}}.\end{aligned}$$ If we neglect overlapping quark loops, then $$\begin{aligned} &&S_H^{-1}\approx\left[\left(\theta^{-1} - \lambda\sum_\pm\Delta_{H,\pm}[\frac{\delta}{\delta\xi} ,\frac{\delta}{\delta\xi^+}] \right) e^{\xi^+\left(\hat p + i(m+M(p))\right)^{-1}\xi}\right]_{|_{\xi=\xi^+=0}} \nonumber\\ &&=\theta^{-1} - \frac{N}{2VN_c}\sum_\pm\int d^4z_\pm {\rm tr}_c\left(\theta^{-1}(w_\pm-\theta)\theta^{-1}\right). \label{SH3}\end{aligned}$$ Eq. (\[SH3\]) coincides exactly with the corresponding equation of [@Diakonov:1989un]. Now we rewrite Eq. 
(\[SH1\]) by introducing heavy quark fields $Q,Q^\dagger$: $$\begin{aligned} && S_H=e^{\left[-{\rm tr}\ln\left(\hat p \,+\, i(m+M(p))\right)\right]}\int D\psi D\psi^{\dagger} D Q D Q^\dagger \,\,Q \, Q^\dagger\\\nonumber &&\times\exp\left[\psi^{\dagger}(\hat p +i(m+M(p)))\psi+ Q^\dagger\left(\theta^{-1} - \lambda\sum_\pm\Delta_{H,\pm}[\psi^{\dagger},\psi ] \right)Q\right] \\\nonumber &&\times\exp\left[-{\rm tr}\ln\left(\theta^{-1} - \lambda\sum_\pm\Delta_{H,\pm}[\psi^{\dagger},\psi ]\right)\right],\end{aligned}$$ where the last exponential represents the (negligible) contribution of the heavy quark loops, while the second contains the heavy and light quark interaction action, explicitly given by $$\begin{aligned} && - \lambda\sum_\pm Q^\dagger\Delta_{H,\pm}[\psi^{\dagger},\psi ]Q= \nonumber\\ &&= - i\lambda\sum_\pm\int d^4z_\pm \frac{d^4 k_1}{(2\pi)^4} \frac{d^4 k_2}{(2\pi)^4} e^{(i(k_2-k_1)z_\pm)} (2\pi\rho )^2 F(k_1 )F(k_2 ) \nonumber \\ \nonumber &&\times\left[ \frac{1}{N_c^2}\psi^+(k_1)\frac{1\pm\gamma_5}{2}\psi(k_2)Q^+ {\rm tr}_c\left(\theta^{-1}(w_\pm-\theta)\theta^{-1}\right)Q\right. \\\nonumber &&\left.+\frac{1}{32(N_c^2-1)}\psi^+(k_1) (\gamma_\mu\gamma_\nu \frac{1\pm\gamma_5}{2})\lambda^i \psi(k_2){\rm tr}(\tau^{\mp}_{\mu}\tau^{\pm}_{\nu}\lambda^j)\right. \\ &&\times\left. Q^+ {\rm tr}_c\left(\theta^{-1}(w_\pm-\theta)\theta^{-1}\lambda^j \right)\lambda^i Q\right].\end{aligned}$$ We see that the heavy-light quark interaction terms take the form of a product of colorless heavy- and light-quark currents, together with a similar product of colorful currents. The structure of these currents is determined by the integration over the instanton color orientations, while the integration over the instanton positions provides energy-momentum conservation in the interaction vertex. In the $N_f>1$ case we have an interaction vertex with $N_f$ pairs of light-quark legs and one pair of heavy-quark legs. 
The specific structure of the interaction is again determined by the instanton color orientations and will be much richer than in the $N_f=1$ case. We expect that the action generated by the instantons will have rich symmetry properties related to both the light and heavy quark sectors. Namely, light-heavy quark interaction terms appear which lead to specific traces of the light quark chiral symmetry in light-heavy quark systems. Heavy quark anti-quark system. {#Heavy quark antiquark} ============================== We now consider the correlator for this system, again taking into account the light quark contribution (in the $N_f=1$ case): $$\begin{aligned} &&<T|C(L_1,L_2)|0>=\frac{1}{Z}\int D\psi D\psi^{\dagger} \left\{\prod_{\pm}^{N_{\pm}}\bar V_{\pm}[\psi^{\dagger} ,\psi ]\right\}\\ \nonumber &&\times\exp\int\left(\psi^{\dagger}(\hat p+im )\psi\right)<T|W[\psi,\psi^\dagger]|0>, \\ \nonumber &&<T|W[\psi,\psi^\dagger]|0> =\int\frac{ D\zeta}{\left\{\prod_{\pm}^{N_{\pm}}\bar V_{\pm}[\psi^{\dagger} ,\psi ]\right\}} \left\{\prod_{\pm}^{N_{\pm}}V_{\pm}[\psi^{\dagger} ,\psi ]\right\} \\ \nonumber &&{\rm Tr}<T|\left(\theta^{-1}-\sum_i a^{(1)}_i\right)^{-1}|0> <0|\left(\theta^{-1}-\sum_i a^{(2)}_i\right)^{-1}|T>.\end{aligned}$$ Here the correlator is a Wilson loop along the rectangular contour $L\times r$, where the sides $L_1,L_2$ are parallel to the $x_4$ axis and separated by the distance $r$. The $a^{(1)},a^{(2)}$ are the projections of the instantons onto the lines $L_1,L_2.$ In Ref. [@Diakonov:1989un] this correlator was considered within an approach similar to Eq. (\[PobEq\]) of Ref. [@Pobylitsa:1989uq], but without light quarks. The arguments that led to Eq. (\[Eqw\]) apply to the present case as well and lead to a similar equation: 
$$\begin{aligned} &&W^{-1}[\psi,\psi^\dagger]= \\\nonumber &&= w_1^{-1}[\psi,\psi^\dagger]\otimes w_2^{-1,T}[\psi,\psi^\dagger] -\frac{N}{2}\sum_\pm\int\frac{ d\zeta_\pm}{ \bar V_{\pm}[\psi^{\dagger} ,\psi ]} \\ \nonumber &&\times V_{\pm}[\psi^{\dagger} ,\psi ] \left(w_1[\psi,\psi^\dagger]-a^{(1)-1}_\pm\right)^{-1}\otimes\left(w_2[\psi,\psi^\dagger]-a^{(2)-1}_\pm\right)^{-1,T} , \end{aligned}$$ where the superscript $T$ denotes transposition and $\otimes$ the tensor product. This equation has the approximate solution: $$\begin{aligned} &&W^{-1}[\psi,\psi^\dagger]= w_1^{-1}[\psi,\psi^\dagger]\otimes w_2^{-1,T}[\psi,\psi^\dagger] \\ \nonumber && -\frac{N}{2}\sum_\pm\int\frac{ d\zeta_\pm}{ \bar V_{\pm}[\psi^{\dagger} ,\psi ]} V_{\pm}[\psi^{\dagger} ,\psi ] \\ \nonumber &&\times\left(\theta^{-1}\left(w^{(1)}_\pm-\theta\right)\theta^{-1}\right) \otimes\left(\theta^{-1}\left(w^{(2)}_\pm-\theta\right)\theta^{-1}\right)^{T}+ O(N^2/V^2).\end{aligned}$$ and $$\begin{aligned} &&w_1^{-1}[\psi,\psi^\dagger]=\theta^{-1}- \\\nonumber &&-\frac{N}{2}\sum_{\pm}\int\frac{ d\zeta_\pm}{ \bar V_{\pm}[\psi^{\dagger} ,\psi ]} V_{\pm}[\psi^{\dagger} ,\psi ]\theta^{-1}(w^{(1)}_\pm-\theta)\theta^{-1}+ O(N^2/V^2) \\\nonumber &&=\theta^{-1} - \frac{N}{2}\sum_\pm \frac{1}{\bar V_{\pm}[\psi^{\dagger} ,\psi ]}\Delta^{(1)}_{H,\pm}[\psi^{\dagger},\psi ] + O(N^2/V^2)\end{aligned}$$ and similarly for $w_2^{-1}[\psi,\psi^\dagger].$ From the previous calculations we see that the lowest orders in $\frac{N}{N_cV}$ in $C(L_1,L_2)$ are given by the integration of $W^{-1}[\psi,\psi^\dagger]$ over $\psi,\psi^\dagger$, where overlapping quark loops were again neglected. Then we obtain a new interaction term between heavy quarks located on the lines $L_1$ and $L_2$, due to the exchange of light quarks between them. 
Explicitly, the integration of the first term in $W^{-1}[\psi,\psi^\dagger]$ over $\psi,\psi^\dagger$ leads to: $$\begin{aligned} &&\frac{1}{Z}\int D\psi D\psi^{\dagger} \left\{\prod_{\pm}^{N_{\pm}}\bar V_{\pm}[\psi^{\dagger} ,\psi ]\right\}\exp\int\psi^{\dagger}(\hat p+im )\psi \\\nonumber &&\times\, w_1^{-1}[\psi,\psi^\dagger]\otimes w_2^{-1,T}[\psi,\psi^\dagger] =\left(\theta^{-1}-\lambda\sum_\pm\Delta^{(1)}_{H,\pm}[\frac{\delta}{\delta\xi} ,\frac{\delta}{\delta\xi^+}]\right) \\\nonumber && \otimes \left(\theta^{-1}-\lambda\sum_\pm\Delta^{(2)}_{H,\pm}[\frac{\delta}{\delta\xi} ,\frac{\delta}{\delta\xi^+}]\right)^{T} {e^{-\xi^+\left(\hat p \,+\, i(m+M(p))\right)^{-1}\xi}}_{|_{\xi=\xi^+=0}}.\end{aligned}$$ The light-quark-generated potential is given by $$\begin{aligned} &&V_{lq}=\left(\lambda\sum_\pm\Delta^{(1)}_{H,\pm}[\frac{\delta}{\delta\xi_1} ,\frac{\delta}{\delta\xi_1^+}]\right) \otimes \left(\lambda\sum_\pm\Delta^{(2)}_{H,\pm}[\frac{\delta}{\delta\xi_2} ,\frac{\delta}{\delta\xi_2^+}]\right)^{T} \nonumber\\ &&\times {e^{\left[-\xi_2^+\left(\hat p \,+\, i(m+M(p))\right)^{-1}\xi_1-\xi_1^+\left(\hat p \,+\, i(m+M(p))\right)^{-1}\xi_2\right]}}|_{\xi=\xi^+=0}.\end{aligned}$$ The range of this potential is controlled by the dynamical light quark mass $M\sim 350$ MeV, and it might be important for the properties of heavy quarkonium states. Conclusion. {# Conclusion} ============ Approximating the gluon field by instanton configurations, we derived the low-frequency part of the light quark determinant in the presence of quark sources. We calculated the instanton-generated light-heavy quark interaction terms and the heavy quark propagator, taking into account the light quark determinant together with the QCD instanton vacuum properties in the $N_f=1$ case. With this knowledge, we calculated the light quark contribution to the interaction between heavy quarks. 
The extension of this approach to the $N_f>1$ case is straightforward and opens the possibility of a detailed investigation of the role of the light quark chiral symmetry and its spontaneous breaking in heavy and heavy-light quark systems. Estimates of the light quark contributions to their properties are under way. [69]{} \[Belle Collaboration\]: S. K. Choi *et al.*, Phys. Rev. Lett. **91** (2003) 262001;\ K. Abe *et al.*, Phys. Rev. Lett. **87** (2001) 091802;\ K. Abe *et al.*, Phys. Rev. D **66** (2002) 071102. \[BABAR Collaboration\]: B. Aubert *et al.*, Phys. Rev. Lett. **87** (2001) 091801;\ B. Aubert *et al.*, Nucl. Instrum. Meth. A **479** (2002) 1;\ B. Aubert *et al.*, Phys. Rev. Lett. **90** (2003) 242001. \[K2K Collaboration\]: M. H. Ahn *et al.*, Phys. Rev. Lett. **90** (2003) 041801;\ M. Hasegawa *et al.*, Phys. Rev. Lett. **95** (2005) 252301;\ R. Gran *et al.*, Phys. Rev. D **74** (2006) 052002. \[MiniBooNE Collaboration\]: A. A. Aguilar-Arevalo *et al.*, Phys. Rev. Lett. **100** (2008) 032301;\ A. A. Aguilar-Arevalo *et al.*, Phys. Lett. B **664** (2008). \[NuTeV Collaboration\]: G. P. Zeller *et al.*, Phys. Rev. Lett. **88** (2002) 091802 \[Erratum-ibid. **90** (2003) 239902\];\ M. Goncharov *et al.*, Phys. Rev. D **64** (2001) 112006;\ A. Romosan *et al.*, Phys. Rev. Lett. **78** (1997) 2912. \[Minerva Collaboration\]: D. Drakoulakos *et al.*, arXiv:hep-ex/0405002;\ K. S. McFarland, Nucl. Phys. Proc. Suppl. **159** (2006) 107. E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane and T. M. Yan, Phys. Rev. D **21** (1980) 203. G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D **51** (1995) 1125 \[Erratum-ibid. D **55** (1997) 5853\]. N. Isgur and M. B. Wise, Phys. Lett. B **232** (1989) 113;\ N. Isgur and M. B. Wise, Phys. Lett. B **237** (1990) 527. D. Diakonov and V. Y. Petrov, Nucl. Phys. B **245** (1984) 259;\ D. Diakonov, V. Polyakov and C. Weiss, Nucl. Phys. 
B **461** (1996) 539;\ D. Diakonov, Prog. Part. Nucl. Phys. **51**, 173 (2003). P. V. Pobylitsa, Phys. Lett. B **226** (1989) 387. D. Diakonov, V. Y. Petrov and P. V. Pobylitsa, Phys. Lett. B **226** (1989) 372. S. Chernyshev, M. A. Nowak and I. Zahed, Phys. Rev. D **53** (1996) 5176. M. M. Musakhanov and F. C. Khanna, Phys. Lett. B **395** (1997) 298;\ E. D. Salvo and M. M. Musakhanov, Eur. Phys. J. C **5**(1998)501;\ M. Musakhanov, Eur. Phys. J. C **9** (1999) 235 ;\ M. Musakhanov, Nucl. Phys. A **699**(2002) 340;\ M. M. Musakhanov and H. C. Kim, Phys. Lett. B **572** (2003) 181;\ H. C. Kim, M. Musakhanov and M. Siddikov, Phys. Lett. B **608** (2005) 95;\ H. C. Kim, M. M. Musakhanov and M. Siddikov, Phys. Lett. B **633** (2006) 701;\ K. Goeke, M. M. Musakhanov and M. Siddikov, Phys. Rev. D **76** (2007) 076007;\ K. Goeke, H. C. Kim, M. M. Musakhanov, M. Siddikov, Phys. Rev. D **76** (2007) 116007;\ K. Goeke, M. Musakhanov, M. Siddikov, Phys. Rev. D **81** (2010) 054029. H. Leutwyler, Czech. J. Phys. **52** (2002) B9. E. V. Shuryak, Nucl. Phys. B **203** (1982) 93. M. C. Chu, J. M. Grandy, S. Huang and J. W. Negele, Phys. Rev. D **49** (1994) 6039;\ J. W. Negele, Nucl. Phys. Proc. Suppl. **73** (1999) 92;\ T. DeGrand, Phys. Rev. D **64** (2001) 094508;\ P. Faccioli and T. A. DeGrand, Phys. Rev. Lett. **91** (2003) 182001;\ P. O. Bowman, U. M. Heller, D. B. Leinweber, A. G. Williams and J. b. Zhang, Nucl. Phys. Proc. Suppl. **128** (2004) 23. T. Schafer and E. V. Shuryak, Rev. Mod. Phys. **70** (1998) 323. [^1]:
--- author: - 'Allyson Souris [^1]\' - 'Anirban Bhattacharya [^2]\' - 'Debdeep Pati [^3]\' - bibliography: - 'soft\_tMVN.bib' title: The Soft Multivariate Truncated Normal Distribution with Applications to Bayesian Constrained Estimation --- [**Abstract.**]{} We propose a new distribution, called the soft tMVN distribution, which provides a smooth approximation to the truncated multivariate normal (tMVN) distribution with linear constraints. An efficient blocked Gibbs sampler is developed to sample from the soft tMVN distribution in high dimensions. We provide theoretical support for the approximation capability of the soft tMVN and provide further empirical evidence thereof. The soft tMVN distribution can be used to approximate simulations from a multivariate truncated normal distribution with linear constraints, or can itself serve as a prior in shape-constrained problems.\ <span style="font-variant:small-caps;">Keywords:</span> [*Approximate; Blocking; Gibbs sampling; Markov chain Monte Carlo; Sigmoidal*]{} Introduction ============ The truncated multivariate normal (tMVN) distribution is routinely used as a prior distribution on model parameters in Bayesian shape-constrained regression. Structural constraints, such as monotonicity and/or convexity, are commonly enforced by expanding the function in an appropriate basis in which the constraints reduce to [*linear constraints*]{} on the coefficients; some examples of such a basis include piecewise linear functions [@dunson], splines [@cai], Bernstein polynomials [@wang], and compactly supported basis functions [@maatouk; @PhysRevC.99.055202]. Under a Gaussian or scale-mixture of Gaussian error distribution, the conditional posterior of the basis coefficients once again turns out to be truncated normal with linear constraints, necessitating sampling from a tMVN distribution for posterior inference. 
The problem of sampling from a tMVN distribution with linear constraints is also frequently encountered as a component of a larger Markov chain Monte Carlo (MCMC) algorithm to sample from the full conditional distribution of a constrained parameter vector. As a running example revisited on multiple occasions in this article, consider binary variables $y_i = {\mathbbm{1}}(z_i > 0)$, with $z = (z_1, \ldots, z_n)^\T$ a vector of latent Gaussian thresholds [@albert] and $w \in \mb R^q$ a vector of parameters/latent variables so that the joint distribution of $\theta = (z, w)$ follows a $\m N(\mu, \Sigma)$ distribution. It then immediately follows that the (conditional) posterior of $\theta \mid y, \mu, \Sigma$ follows a $\m N(\mu, \Sigma)$ distribution truncated to $\otimes_{i=1}^n \m C_i \, \otimes \mb R^q$, with $\m C_i = (0, \infty)$ or $(-\infty, 0)$ depending on whether $y_i = 1$ or $0$. Such latent Gaussian threshold models are ubiquitous in the analysis of binary and nominal data; examples include probit regression and its multivariate extensions [@albert; @holmes; @chib; @obrien], multinomial probit models [@mcculloch; @zhang; @johndrow], tobit models [@tobin; @polasek], and binary Gaussian process (GP) classification models [@girolami] among others. In this article, we propose a new family of distributions called the soft tMVN distribution which replaces the hard constraints in a tMVN distribution with a smoothed or “soft” version using a logistic sigmoid function. The soft tMVN distribution admits a smooth log-concave density on the $d$-dimensional Euclidean space. Although the soft tMVN distribution is supported on the entire $d$-dimensional space, it can be made to increasingly concentrate most of its mass on a polyhedron determined by multiple linear inequality constraints, by tweaking a parameter. In fact, we show that the soft tMVN distribution approximates the corresponding tMVN distribution in total variation distance. 
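As a concrete illustration of the running latent-threshold example, the sketch below draws $z_i \mid y_i$ from a univariate normal truncated to $(0, \infty)$ or $(-\infty, 0)$ by inverse-CDF sampling. It is a deliberately simplified toy: coordinates are treated as independent (a general covariance $\Sigma$, as in the text, would require joint tMVN machinery), and the function names are ours, not from the paper.

```python
import random
from statistics import NormalDist

random.seed(7)

def sample_truncated_normal(mu, sigma, lower=None, upper=None):
    """Inverse-CDF draw from N(mu, sigma^2) truncated to (lower, upper)."""
    nd = NormalDist(mu, sigma)
    lo = nd.cdf(lower) if lower is not None else 0.0
    hi = nd.cdf(upper) if upper is not None else 1.0
    u = lo + random.random() * (hi - lo)  # uniform on (lo, hi)
    return nd.inv_cdf(u)

def sample_latent_thresholds(y, mu):
    """Draw z_i | y_i from N(mu_i, 1) restricted to (0, inf) if y_i = 1
    and to (-inf, 0) if y_i = 0 (independent-coordinate toy case)."""
    return [sample_truncated_normal(m, 1.0, lower=0.0) if yi == 1
            else sample_truncated_normal(m, 1.0, upper=0.0)
            for yi, m in zip(y, mu)]

y = [1, 0, 1, 1, 0]
mu = [0.5, -0.2, 1.0, -1.0, 0.3]
z = sample_latent_thresholds(y, mu)
# Each latent draw lands in the half-line selected by its binary outcome.
assert all((zi > 0) == (yi == 1) for zi, yi in zip(z, y))
```

This is exactly the structure of the conditional posterior $\theta \mid y$ described above, restricted to the diagonal-$\Sigma$ case.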
Recognizing the soft tMVN distribution as the posterior distribution in a pseudo-logistic regression model, we develop an efficient blocked Gibbs sampler combining the Polya–Gamma data augmentation of [@polson2013] with a structured multivariate normal sampler from [@bhattacharya]. In contrast, existing Gibbs samplers for a tMVN distribution sample the coordinates one-at-a-time from their respective full conditional univariate truncated normal distributions [@geweke; @kotecha; @damien; @rodriguez]. The algorithm of Geweke is implemented in the `R` package `tmvtnorm` [@tmvtnorm]. While the Gibbs sampling procedure is entirely automated, it is well-recognized in a broader context that such one-at-a-time updates can lead to slow mixing, especially if the variables are highly correlated. We have additionally observed numerical instabilities in the `R` implementation for unconstrained dimensions exceeding 400. While exact Hamiltonian Monte Carlo (HMC) algorithms to sample from the tMVN [@pakman2014exact] are also popular, such algorithms are not suitable for sampling from the soft tMVN, and leapfrog steps with careful tuning are necessary to obtain good mixing. There also exist accept-reject algorithms for the tMVN distribution that produce exact samples [@botev]. The algorithm of Botev is implemented in the `R` package `TruncatedNormal` [@truncatednormal]. While exact samples are possible, when the acceptance probability becomes small, either the algorithm slows tremendously or approximate samples are produced. We typically saw small acceptance probabilities in the `R` implementation when the constrained dimension exceeded 200. With such motivation, we propose to replace a tMVN distribution with its softened version inside a larger MCMC algorithm and use our sampling strategy for the soft tMVN distribution. 
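For contrast with the blocked sampler proposed here, the following toy Python sketch implements the classical one-at-a-time Gibbs scheme for a bivariate normal truncated to the positive orthant, whose full conditionals are univariate truncated normals. This is our own minimal illustration, not the `tmvtnorm` implementation; with $\rho$ near one the chain moves in tiny steps, which is the slow-mixing behavior described above.

```python
import random
from statistics import NormalDist

def trunc_norm_pos(mu, sigma, rng):
    """Inverse-CDF draw from N(mu, sigma^2) truncated to (0, inf)."""
    nd = NormalDist(mu, sigma)
    lo = nd.cdf(0.0)
    return nd.inv_cdf(lo + (1.0 - lo) * rng.random())

def gibbs_tmvn_orthant(rho, n_iter, seed=0):
    """One-at-a-time Gibbs for (t1, t2) ~ N(0, [[1, rho], [rho, 1]])
    restricted to t1 > 0, t2 > 0."""
    rng = random.Random(seed)
    s = (1.0 - rho ** 2) ** 0.5  # conditional standard deviation
    t1 = t2 = 1.0
    draws = []
    for _ in range(n_iter):
        t1 = trunc_norm_pos(rho * t2, s, rng)  # t1 | t2
        t2 = trunc_norm_pos(rho * t1, s, rng)  # t2 | t1
        draws.append((t1, t2))
    return draws

draws = gibbs_tmvn_orthant(rho=0.99, n_iter=500)
assert all(a > 0 and b > 0 for a, b in draws)
```

Every draw respects the orthant constraint by construction; the practical issue is autocorrelation, not validity, which is what motivates the blocked alternative.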
In recent years, there have been several instances of such approximate MCMC (aMCMC) [@johndrow2015approximations] algorithms, where the exact transition kernel of a Markov chain is replaced by an approximation thereof for computational ease. The soft tMVN distribution can also be used as a prior distribution in Bayesian shape-constrained regression problems as an alternative to the usual tMVN prior. Like the tMVN distribution, the soft tMVN distribution is conditionally conjugate for the mean in a Gaussian likelihood. The soft tMVN can be viewed as a shrinkage prior which encourages shrinkage towards a linearly constrained region rather than being supported on the region. There is an interesting parallel between the soft tMVN distribution and global-local shrinkage priors used in sparse regression problems. The global-local priors replace the point mass (at zero) of the more traditional discrete mixture priors and instead encourage shrinkage towards the origin, with the motivation that a subset of the regression coefficients may have a small but non-negligible effect. Similarly, the soft tMVN prior favors the shape constraints while allowing for small departures. The rest of the article is organized as follows. In Section 2, we introduce the soft tMVN distribution as an approximation to the tMVN distribution and discuss its properties. In Section 3, we discuss various strategies to sample from a soft tMVN distribution, including a scalable Gibbs sampler suitable for high-dimensional situations. Section 4 contains a number of simulation examples to illustrate the efficacy of the proposed sampler as well as the approximation capability of the soft tMVN distribution. Section 5 contains an example of a Gibbs sampler for a shape-constrained model where the soft tMVN distribution is preferred as a prior over the tMVN distribution. We conclude with a discussion in Section 6. 
The soft tMVN distribution ========================== Consider a tMVN distribution $$\begin{aligned} \label{eq:trunG} \gamma(\theta) \propto e^{-\frac{1}{2} \, (\theta - \mu)^{\T} \Sigma^{-1} (\theta - \mu) } \, {\mathbbm{1}}_{\m C}(\theta),\end{aligned}$$ where $\mu \in \mb R^d$, $\Sigma$ is a $d \times d$ positive definite matrix, and $\m C$ is described by $r \le d$ linear constraints, $$\begin{aligned} \m C = \bigg\{ \theta \in \mb R^d \!:\! s_i \,(a_i^{\T} \theta) \ge 0, \ i = 1, \ldots, r \bigg\},\end{aligned}$$ where $s_i \in \{1, -1\}$ denotes the sign of the $i$th inequality, and $a_i \in \mb R^d$. Without loss of generality, we assume the first $r$ coordinates to be constrained; this is mainly for notational convenience and can always be achieved by reordering the variables, if necessary. We also assume throughout that $\m C$ has positive $\mb R^d$-Lebesgue measure, so that the density $\gamma$ in is non-singular on $\mb R^d$. In the special case where $a_i = e_i$, the $i$th unit vector in $\mb R^d$ (with 1 at the $i$th coordinate and 0 elsewhere), the constraint set $\m C$ reduces to the form $\otimes_{i=1}^r \m C_i \otimes \mb R^q$ mentioned in the introduction. While this is an important motivating example, our approach works more generally for the type of constraints in the above display. Write, using the convention $0^0 = 1$, $$\begin{aligned} {\mathbbm{1}}(\theta \in \m C) &= \prod_{i \in [r] \,:\, s_i = 1} {\mathbbm{1}}(a_i^{\T} \theta \ge 0) \, \prod_{i \in [r] \,:\, s_i = -1} {\mathbbm{1}}(a_i^{\T} \theta < 0) \\ &= \prod_{i=1}^r \{{\mathbbm{1}}(a_i^{\T} \theta \ge 0) \}^{{\mathbbm{1}}(s_i = 1)} \, \{{\mathbbm{1}}(a_i^{\T} \theta < 0) \}^{{\mathbbm{1}}(s_i = -1)}. \end{aligned}$$ Our main idea is to replace the indicator functions above with a smoothed or “soft” approximation. 
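The factorization of ${\mathbbm{1}}(\theta \in \m C)$ into per-constraint indicators is easy to mirror in code; the short Python sketch below (our own illustration, with hypothetical names) checks membership in $\m C$ under the convention $s_i = 1 \Rightarrow a_i^{\T}\theta \ge 0$ and $s_i = -1 \Rightarrow a_i^{\T}\theta < 0$:

```python
def indicator_C(theta, A, s):
    """Product of per-constraint indicators: returns 1 iff theta lies in C.
    A is a list of constraint vectors a_i; s is a list of signs in {+1, -1}."""
    ind = 1
    for a, si in zip(A, s):
        v = sum(ai * ti for ai, ti in zip(a, theta))  # a_i^T theta
        ind *= int(v >= 0) if si == 1 else int(v < 0)
    return ind

# Two constraints in R^3 with a_i = e_i: theta_1 >= 0 and theta_2 < 0;
# theta_3 is unconstrained, matching the C_1 x C_2 x R form in the text.
A = [[1, 0, 0], [0, 1, 0]]
s = [1, -1]
assert indicator_C([0.5, -0.3, 7.0], A, s) == 1
assert indicator_C([-0.1, -0.3, 0.0], A, s) == 0
```

Each factor in the product is exactly one of the indicators in the display above, so the returned value is the indicator of the intersection of the $r$ half-spaces.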
A rich class of approximations to the indicator function ${\mathbbm{1}}_{(0, \infty)}(\cdot)$ is provided by sigmoid functions, which are non-negative, monotone increasing, differentiable, and satisfy $\lim_{x \to \infty} \sigma(x) = 1$ and $\lim_{x \to -\infty} \sigma(x) = 0$. The cumulative distribution function of any absolutely continuous distribution on $\mb R$ which is symmetric about zero can potentially be used as a sigmoid function. Here, for reasons that will become apparent shortly, we choose to use the logistic sigmoid function $\sigma(x) = 1/(1+e^{-x})$, which is the cdf of the logistic distribution. Specifically, define, for $\eta > 0$, $$\begin{aligned} \label{eq:err} \sigma_{\eta}(x) = \frac{1}{1+e^{-\eta x}} = \frac{e^{\eta x}}{1 + e^{\eta x}}, \quad x \in \mb R, \end{aligned}$$ to be a scaled version of $\sigma(\cdot)$. The parameter $\eta$ controls the quality of the approximation, with larger values of $\eta$ providing increasingly better approximations to ${\mathbbm{1}}_{(0, \infty)}(\cdot)$. In fact, it is straightforward to see that $$\begin{aligned} \label{eq:approx_basic} | \sigma_{\eta}(x) - {\mathbbm{1}}_{(0, \infty)}(x) | \le \frac{1}{1 + e^{\eta |x| }}, \quad x \in \mb R. \end{aligned}$$ It is also immediate that $(1 - \sigma_\eta(\cdot))$ is an approximation to ${\mathbbm{1}}_{(-\infty, 0)}(\cdot)$ with the same approximation error. We are now ready to describe our approximation scheme. Fixing some large $\eta$ and replacing the indicators by their respective sigmoidal approximations in , we obtain the approximation $\gamma_\eta$ to $\gamma$ as $$\begin{aligned} \label{eq:trunG_approx} \gamma_\eta(\theta) \propto e^{-\frac{1}{2} \, (\theta - \mu)^{\T} \Sigma^{-1} (\theta - \mu) } \ \prod_{i=1}^r \bigg(\frac{e^{\eta \,a_i^{\T}\theta}}{1 + e^{\eta \, a_i^{\T}\theta}} \bigg)^{{\mathbbm{1}}(s_i = 1)} \, \bigg(\frac{1}{1 + e^{\eta \, a_i^{\T}\theta}} \bigg)^{{\mathbbm{1}}(s_i = -1)},\end{aligned}$$ for $\theta \in \mb R^d$. 
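As a quick numerical check, the scaled sigmoid and the pointwise error bound above can be verified directly. The sketch below is only illustrative; the grid and the values of $\eta$ are arbitrary:

```python
import numpy as np

def sigma_eta(x, eta):
    """Scaled logistic sigmoid: sigma_eta(x) = 1 / (1 + exp(-eta * x))."""
    return 1.0 / (1.0 + np.exp(-eta * np.asarray(x, dtype=float)))

# Verify |sigma_eta(x) - 1(x > 0)| <= 1 / (1 + exp(eta * |x|)) on a grid,
# and that the approximation sharpens as eta grows.
x = np.linspace(-2, 2, 401)
indicator = (x > 0).astype(float)
for eta in (1.0, 10.0, 100.0):
    err = np.abs(sigma_eta(x, eta) - indicator)
    bound = 1.0 / (1.0 + np.exp(eta * np.abs(x)))
    assert np.all(err <= bound + 1e-12)

# With eta = 100, the sigmoid is within 1% of the indicator already at |x| = 0.05.
assert sigma_eta(0.05, 100.0) > 0.99 and sigma_eta(-0.05, 100.0) < 0.01
```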
We refer to $\gamma_\eta$ as a soft tMVN distribution and generically denote it by $\m N_{\m C}^s(\mu, \Sigma)$. In the one-dimensional case, $\gamma_\eta(\theta) \propto \phi(\theta \mid \mu, \sigma^2)F(\theta)$ where $\phi(x \mid \mu, \sigma^2)$ is the normal density with mean $\mu$ and variance $\sigma^2$ and $F(x)$ is the logistic distribution function. This is similar to a skew normal density, except in the skew normal density, $F(x)$ is the normal distribution function instead of the logistic distribution function [@Arellano]. It is immediate to note that $\gamma_\eta$ is a smooth (infinitely differentiable) density supported on $\mb R^d$. Further, a simple calculation shows that $$\begin{aligned} \nabla^2 \big( - \log \gamma_\eta(\theta) \big) = \Sigma^{-1} + \sum_{i=1}^r \frac{ \eta^2 \, e^{\eta \, a_i^{\T}\theta}}{(1 + e^{\eta \, a_i^{\T}\theta})^2 } \, a_i a_i^{\T} \succ 0, \end{aligned}$$ i.e., the Hessian matrix of the negative log density is positive definite. This implies that $\gamma_\eta$ is a log-concave density, which, in particular, means $\gamma_\eta$ is unimodal. We collect these various observations about $\gamma_\eta$ in Proposition \[prop:approx\]. \[prop:approx\] Let $\gamma$ and $\gamma_\eta$ be respectively defined as in and . Then, $\gamma_\eta$ is an infinitely differentiable, unimodal, log-concave density on $\mb R^d$. Further, $$\lim_{\eta \to \infty} \int_{\mb R^d} | \gamma_\eta(\theta) - \gamma(\theta) | \,d \theta = 0.$$ A proof is provided in the supplementary material. The last part of Proposition \[prop:approx\] formalizes the intuition that $\gamma_\eta$ approximates $\gamma$ for large $\eta$ by showing that the $L_1$ distance between $\gamma_\eta$ and $\gamma$ converges to 0 as $\eta \to \infty$. An inspection of the proof of the $L_1$ approximation will reveal that we have not used any particular feature of the logistic function, and the argument can be extended to other sigmoid functions. 
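The Hessian calculation above can be cross-checked against finite differences. This is only a sanity sketch; the dimensions, the (deliberately small) value of $\eta$, and the test point are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, eta = 3, 2, 5.0
mu = rng.standard_normal(d)
Sigma_inv = np.eye(d)                      # Sigma = I_d for simplicity
A = rng.standard_normal((r, d))            # rows are a_i^T
s = np.array([1, -1])                      # constraint signs s_i

def neg_log_gamma_eta(theta):
    # -log gamma_eta(theta), up to an additive constant
    quad = 0.5 * (theta - mu) @ Sigma_inv @ (theta - mu)
    u = eta * (A @ theta)
    # log sigma_eta for s_i = +1, log(1 - sigma_eta) for s_i = -1 (stable forms)
    log_factors = np.where(s == 1, -np.logaddexp(0.0, -u), -np.logaddexp(0.0, u))
    return quad - log_factors.sum()

def analytic_hessian(theta):
    p = 1.0 / (1.0 + np.exp(-eta * (A @ theta)))     # sigma_eta(a_i^T theta)
    w = eta**2 * p * (1.0 - p)                       # eta^2 e^u / (1 + e^u)^2
    return Sigma_inv + (A.T * w) @ A                 # Sigma^{-1} + sum_i w_i a_i a_i^T

theta0, h = rng.standard_normal(d), 1e-3
H_num = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        ei, ej = h * np.eye(d)[i], h * np.eye(d)[j]
        H_num[i, j] = (neg_log_gamma_eta(theta0 + ei + ej)
                       - neg_log_gamma_eta(theta0 + ei - ej)
                       - neg_log_gamma_eta(theta0 - ei + ej)
                       + neg_log_gamma_eta(theta0 - ei - ej)) / (4 * h * h)

assert np.allclose(H_num, analytic_hessian(theta0), atol=1e-3)
assert np.all(np.linalg.eigvalsh(analytic_hessian(theta0)) > 0)  # positive definite
```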
The $L_1$ approximation result implies that although $\gamma_\eta$ has a non-zero density at all points in $\mb R^d$, the effective support is the region $\m C$ for large values of $\eta$, and a random draw from $\gamma_\eta$ will fall inside $\m C$ with overwhelmingly large probability. This is because [$$\begin{aligned} \gamma_\eta(\theta \not\in \m C) &= 1 - \gamma_\eta(\theta \in \m C) \\ &= \gamma(\theta \in \m C) - \gamma_\eta(\theta \in \m C) \\ &\leq \int_{\mb R^d} | \gamma_\eta(\theta) - \gamma(\theta) | \,d \theta,\end{aligned}$$ ]{} where the second equality uses the fact that $\gamma$ assigns probability one to $\m C$. Hence, using Proposition \[prop:approx\], the probability of $\theta$ falling outside of the region $\m C$ approaches zero as $\eta$ approaches infinity. To obtain a more quantitative feel for how the approximation gets better with increasing $\eta$, we set $\gamma$ to be a standard bivariate normal distribution truncated to the first orthant, $$\begin{aligned} \label{eq:biv_ex} \gamma(\theta) \propto e^{-\frac{1}{2} \, \theta^{\T} \Sigma^{-1} \theta} \, {\mathbbm{1}}_{(0, \infty)}(\theta_1) \, {\mathbbm{1}}_{(0, \infty)}(\theta_2), \quad \Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.\end{aligned}$$ Figure \[fig:compTN\] shows contour plots of $\gamma$ (last column) along with those for $\gamma_\eta$ for various values of $\eta$, with $\eta$ increasing from left to right. Each row corresponds to a different value of $\rho$. It is evident that the approximation quickly improves as $\eta$ increases, and stabilizes around $\eta = 100$. We later show in simulations involving substantially higher dimensions that $\gamma_\eta$ with $\eta = 100$ continues to provide a reasonable approximation to the corresponding tMVN distribution $\gamma$. The accurate approximation of the soft tMVN has two important consequences in our opinion. 
First, for any of the examples discussed in the introduction which require a sample from a tMVN within an MCMC algorithm, a sample from a tMVN can be replaced with a sample from the corresponding soft tMVN distribution; we discuss efficient strategies to sample from the soft tMVN distribution in the next section. Second, the soft tMVN distribution can itself be used as a prior distribution for constrained parameters. As a prior, the soft tMVN replaces the hard constraints imposed by the tMVN with soft constraints, encouraging shrinkage towards the constrained region $\m C$. Indeed, the soft tMVN distribution can be considered a global shrinkage prior [@polson2010] which shrinks vectors towards a pre-specified constrained region. The tMVN prior is conditionally conjugate for a Gaussian likelihood and the soft tMVN prior naturally inherits this conditional conjugacy. Suppose $Y \mid \theta, \sigma^2 \sim \m N(\Phi \theta, \sigma^2 I_n)$ and $\theta \sim \m N_{\m C}^s(\mu, \Sigma)$ is assigned a soft tMVN prior. Then, $$\theta \mid Y, \sigma^2, \mu, \Sigma \sim \m N_{\m C}^s\big( (\Phi^{\T} \Phi/\sigma^2 + \Sigma^{-1})^{-1} (\Phi^{\T} Y/\sigma^2 + \Sigma^{-1} \mu), \, (\Phi^{\T} \Phi/\sigma^2 + \Sigma^{-1})^{-1} \big).$$ The conditional conjugacy allows one to fit a conditionally Gaussian model with a soft tMVN prior using standard Gibbs sampling algorithms, provided one can efficiently sample from a soft tMVN distribution. We provide a detailed exposition in Section \[sec:cons\], with a specific application of the soft tMVN distribution as a prior in Bayesian monotone single-index models. Sampling from the soft tMVN distribution ======================================== Gibbs sampler in high-dimensions -------------------------------- In this subsection, we propose a scalable data-augmentation blocked-Gibbs sampler to sample from a soft tMVN distribution. The proposed Gibbs sampler updates the entire $\theta$ vector in a block, unlike one-at-a-time updates for Gibbs samplers for tMVNs. 
Apart from log-concavity, the other nice feature behind our choice of the logistic sigmoid function is that $\gamma_\eta$ can be recognized as the posterior distribution of a vector of regression parameters in a logistic regression model. To see this, consider the setup of a logistic regression model with binary response $t_i \in \{0, 1\}$ and vector of predictors $W_i \in \mb R^d$ for $i = 1, \ldots, r$, $$\mbox{Pr}(t_i = 1 \mid \theta, W_i) = \frac{e^{W_i^{\T} \theta}}{1+e^{W_i^{\T}\theta}}.$$ Assuming a $\m N(\mu, \Sigma)$ prior on the vector of regression coefficients $\theta$, the posterior distribution of $\theta \mid t, W, \mu, \Sigma$ is given by $$e^{-\frac{1}{2} \, (\theta - \mu)^{\T} \Sigma^{-1} (\theta - \mu) } \ \prod_{i=1}^r \bigg(\frac{e^{W_i^{\T}\theta}}{1 + e^{W_i^{\T}\theta}} \bigg)^{t_i} \, \bigg(\frac{1}{1 + e^{W_i^{\T}\theta}} \bigg)^{(1-t_i)}.$$ If we now set $t_i = {\mathbbm{1}}(s_i = 1)$ and $W_i = \eta \, a_i$, then the above density is identical to $\gamma_\eta$. The number of constraints $r$ plays the role of the sample size, and the ambient dimension $d \ge r$ indicates the number of regression parameters in this pseudo-logistic model. Thus, sampling from $\gamma_\eta$ is equivalent to sampling from the conditional posterior of regression parameters in a high-dimensional logistic regression model, which can be conveniently carried out using the Polya–Gamma data augmentation scheme of [@polson2013]. 
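The correspondence can be verified pointwise: with pseudo-responses $t_i = {\mathbbm{1}}(s_i = 1)$ and pseudo-predictors $W_i = \eta a_i$, the unnormalized log density of $\gamma_\eta$ coincides with that of the Gaussian-prior logistic posterior. A minimal sketch (the problem sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, eta = 4, 3, 100.0
mu = rng.standard_normal(d)
Sigma_inv = np.eye(d)                      # Sigma = I_d for simplicity
A = rng.standard_normal((r, d))            # constraint directions a_i (rows)
s = np.array([1, -1, 1])                   # constraint signs s_i

t = (s == 1).astype(float)                 # pseudo-responses t_i = 1(s_i = 1)
W = eta * A                                # pseudo-predictors W_i = eta * a_i

def log_gamma_eta(theta):
    # log soft tMVN density (up to a constant), with stable log-sigmoids
    quad = -0.5 * (theta - mu) @ Sigma_inv @ (theta - mu)
    u = eta * (A @ theta)
    return quad + np.where(s == 1, -np.logaddexp(0.0, -u),
                           -np.logaddexp(0.0, u)).sum()

def log_logistic_posterior(theta):
    # log [Gaussian prior x logistic likelihood] (up to the same constant)
    quad = -0.5 * (theta - mu) @ Sigma_inv @ (theta - mu)
    u = W @ theta
    return quad + np.sum(t * u - np.logaddexp(0.0, u))

for _ in range(5):
    theta0 = rng.standard_normal(d)
    assert np.isclose(log_gamma_eta(theta0), log_logistic_posterior(theta0))
```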
The Polya–Gamma scheme introduces $r$ auxiliary variables $\omega_1, \ldots, \omega_r$ and performs Gibbs sampling by alternately sampling from $\omega \mid \theta, t$ and $\theta \mid \omega, t$ as follows:\ (i) Sample $\omega_i \mid \theta, t \sim \mbox{PG}(1,W_i^{\T}\theta)$ independently for $i = 1, \ldots, r$,\ (ii) Sample $\theta \mid \omega, t \sim \m N_d(\mu_\omega, \Sigma_\omega)$, with $$\begin{aligned} \label{eq:structured_G} \Sigma_\omega = (W^{\T} \Omega W + \Sigma^{-1})^{-1}, \quad \mu_{\omega} = \Sigma_{\omega} (W^{\T} \kappa + \Sigma^{-1} \mu),\end{aligned}$$ where $W \in \mb R^{r \times d}$ with $i$th row $W_i^{\T}$, $t = (t_1, \ldots, t_r)^{\T}$, $\kappa = (t-1/2)$, and $\Omega = \mbox{diag}(\omega_1, \ldots, \omega_r)$.\ In (i), PG denotes a Polya–Gamma distribution which can be sampled using the `BayesLogit` package in `R` [@polson2013]. Note that the entire $\theta$ vector is sampled in a block in step (ii). The worst-case complexity of sampling from the multivariate Gaussian distribution in is $O(d^3)$. However, exploiting the structure of $\mu_\omega$ and $\Sigma_\omega$, a sample from $\m N(\mu_\omega, \Sigma_\omega)$ can be obtained with significantly less cost using a recent algorithm in [@bhattacharya] provided $d \gg r$ and a $\m N(0, \Sigma)$ variate can be cheaply sampled. Define $\Phi = \Omega^{1/2} W$ and $\alpha = \Omega^{-1/2} \kappa$. Then, a sample from (ii) is obtained by first sampling $$\begin{aligned} \label{eq:bar_theta} \bar{\theta} \sim \m N( (\Phi^{\T}\Phi + \Sigma^{-1})^{-1} \Phi^{\T} \alpha, \, (\Phi^{\T}\Phi + \Sigma^{-1})^{-1}),\end{aligned}$$ and setting $$\begin{aligned} \label{eq:bar_mu} \theta = \bar{\mu} + \bar{\theta}, \quad \bar{\mu} = (\Phi^{\T}\Phi + \Sigma^{-1})^{-1} \Sigma^{-1} \mu. 
\end{aligned}$$ First, by the Sherman–Morrison–Woodbury formula, $$(\Phi^{\T} \Phi + \Sigma^{-1})^{-1} = \Sigma - \Sigma \Phi^{\T}(\Phi \Sigma \Phi^{\T} + I_r)^{-1}\Phi \Sigma.$$ Thus, $$\begin{aligned} \label{eq:bar_mu1} \bar{\mu} = \mu - \Sigma \Phi^{\T}(\Phi \Sigma \Phi^{\T} + I_r)^{-1}\Phi \mu,\end{aligned}$$ which only requires solving an $r \times r$ system. Sampling $\bar{\theta}$ in can be efficiently carried out by adapting the algorithm of [@bhattacharya] to the present setting. The steps are:\ (a) Sample $u \sim \m N(0, \Sigma)$ and $\delta \sim \m N(0, \mr I_r)$.\ (b) Set $v = \Phi u + \delta$.\ (c) Solve $(\Phi \Sigma \Phi^{\T} + \mr I_r) w = (\alpha - v)$.\ (d) Set $\bar{\theta} = u + \Sigma \Phi^{\T} w$. [\ ]{} It follows from [@bhattacharya] that $\bar{\theta}$ obtained in step (d) has the desired Gaussian distribution. Barring the sampling of $u$ in step (a), the remaining steps have a combined complexity of $O(r^2 d)$, which can be significantly smaller than $d^3$ when $d \gg r$. If $\Sigma$ is a diagonal matrix, $u$ can be trivially sampled with $O(d)$ cost. Even for non-diagonal $\Sigma$, it is often possible to exploit its structure to cheaply sample from $\m N(0, \Sigma)$. For example, in the probit and multivariate probit regression context, $\Sigma$ assumes the form (see Section \[sect:probit\]), $$\Sigma = \begin{pmatrix} \mathrm{I}_N + H L H^{\T} & HL \\ L H^{\T} & L \end{pmatrix},$$ where $L$ is a $q\times q$ diagonal matrix and $H$ is an $N \times q$ (possibly dense) matrix. A sample $u$ from $\m N(0, \Sigma)$ is then obtained by\ (i) Sample $z \sim \m N(0, \mathrm{I}_N)$ and $u_2 \sim \m N(0, L)$ independently.\ (ii) Set $u_1 = H u_2 + z$ and $u = (u_1^{\T}, u_2^{\T})^{\T}$. Since $u$ is a linear transformation of $(z, u_2)$, which is jointly Gaussian, $u$ also has a joint Gaussian distribution. Calculating the covariance matrix of $u$ then immediately shows that $u \sim \m N(0, \Sigma)$. 
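Because steps (a)–(d) apply a linear map to Gaussian inputs, the implied mean and covariance of $\bar{\theta}$ have closed forms and can be checked against the target Gaussian via the Woodbury identity. A small illustrative sketch (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 10, 3
Phi = rng.standard_normal((r, d))
Sigma = np.eye(d)
alpha = rng.standard_normal(r)
M = Phi @ Sigma @ Phi.T + np.eye(r)            # the r x r system matrix

def sample_bar_theta():
    # Steps (a)-(d): a draw from N((Phi'Phi + Sigma^{-1})^{-1} Phi' alpha,
    #                              (Phi'Phi + Sigma^{-1})^{-1}) at O(r^2 d) cost
    u = rng.multivariate_normal(np.zeros(d), Sigma)    # (a)
    delta = rng.standard_normal(r)                     # (a)
    v = Phi @ u + delta                                # (b)
    w = np.linalg.solve(M, alpha - v)                  # (c)
    return u + Sigma @ Phi.T @ w                       # (d)

# theta_bar = (I - B Phi) u - B delta + B alpha with B = Sigma Phi' M^{-1},
# so its exact mean and covariance follow from linear-Gaussian algebra.
B = Sigma @ Phi.T @ np.linalg.inv(M)
T = np.eye(d) - B @ Phi
implied_mean = B @ alpha
implied_cov = T @ Sigma @ T.T + B @ B.T

target_cov = np.linalg.inv(Phi.T @ Phi + np.linalg.inv(Sigma))
target_mean = target_cov @ Phi.T @ alpha
assert np.allclose(implied_mean, target_mean)
assert np.allclose(implied_cov, target_cov)
assert sample_bar_theta().shape == (d,)
```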
Since $L$ is diagonal, $u_2$ can be sampled in $O(q)$ steps, and the matrix multiplication costs $O(N q^2)$, so that the overall cost is $O(N q^2)$. Other strategies ---------------- In moderate dimensions, it is possible to use a Metropolis (Gaussian) random walk and its various extensions to sample from a soft tMVN distribution. In particular, given that the soft tMVN distribution can be recognized as the posterior distribution in a model with a Gaussian prior, elliptical slice sampling [@murray2010elliptical] is a viable option. There is substantial literature on sampling from log-concave distributions using variants of the Metropolis algorithm with strong theoretical guarantees [@frieze1994; @frieze1999; @lovasz2006fast; @lovasz2006simulated; @belloni]. More recently, [@dalalyan] and [@durmus] provided non-asymptotic bounds on the rate of convergence of unadjusted Langevin Monte Carlo (LMC) algorithms for log-concave target densities. Assuming the target density is proportional to $e^{-f(\theta)}$ for some convex function $f$, the successive iterates of a first-order LMC algorithm take the form $$\begin{aligned} \theta_{k+1} = \theta_k - h \nabla f(\theta_k) + \sqrt{2h} \, \xi_{k+1}, \quad k = 0, 1, \ldots, \end{aligned}$$ where the $\{\xi_k\}$s are independent $\m N(0, \mathrm{I})$ variates and $h > 0$ is a step-size parameter. Clearly, $\{\theta_k\}_{k=0,1, \ldots}$ forms a discrete-time Markov chain and the results in [@dalalyan] and [@durmus] characterize the rate at which the distribution of $\theta_k$ converges to the target density in total variation distance. Aside from the non-asymptotic bounds, another key message from their results is that the typical Metropolis adjustment as in Metropolis adjusted Langevin (MALA) [@roberts] is not required for log-concave targets. [@dalalyan] also provides a second-order version of the LMC algorithm called LMCO which can incorporate the Hessian $\nabla^2 f$. 
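As an illustration, the first-order LMC recursion above can be applied to a two-dimensional soft tMVN, whose gradient $\nabla f$ with $f = -\log \gamma_\eta$ is available in closed form. The sketch below uses a deliberately moderate $\eta$ and a small step size so that the discretization stays stable; all parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
d, eta, h = 2, 20.0, 2e-3
mu = np.zeros(d)
Sigma_inv = np.eye(d)
A = np.eye(d)                    # constraints theta_1 >= 0 and theta_2 >= 0
t = np.ones(d)                   # both signs s_i = +1

def grad_f(theta):
    # gradient of f = -log gamma_eta
    p = 1.0 / (1.0 + np.exp(-eta * (A @ theta)))   # sigma_eta(a_i' theta)
    return Sigma_inv @ (theta - mu) - eta * (A.T @ (t - p))

theta = np.ones(d)
draws = []
for k in range(30000):
    theta = theta - h * grad_f(theta) + np.sqrt(2 * h) * rng.standard_normal(d)
    if k >= 5000:                # discard burn-in
        draws.append(theta)
draws = np.array(draws)

# Most draws land inside the constrained region C (the positive orthant).
frac_inside = np.mean(np.all(draws > 0, axis=1))
assert frac_inside > 0.8
```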
Since both $\nabla (- \log \gamma_\eta)$ and $\nabla^2 ( - \log \gamma_\eta)$ are analytically tractable, it is possible to use both the LMC and LMCO algorithms to sample from $\gamma_\eta$. Other than MCMC, another possible strategy to sample from $\gamma_\eta$ is to use a multivariate generalization of adaptive rejection sampling (ARS) [@gilks]. Simulations =========== In this section, we conduct a number of simulations to empirically illustrate that the soft tMVN distribution continues to provide an accurate approximation to the tMVN distribution in high-dimensional situations. These simulations also demonstrate the scalability of the proposed Gibbs sampler. To begin with, we first justify our continued use of $\eta = 100$ in higher dimensions. In Figure \[fig:compTN\], we provided the contour plots of a bivariate tMVN distribution and its soft tMVN approximation with $\eta = 100$. As an extension, we now consider the bivariate marginal of $(\theta_1, \theta_2)$, where $\theta \in {\mathbb{R}}^{50}$ is drawn from a multivariate normal distribution with mean $\mu = 0$ and with a compound symmetry covariance structure, $\Sigma = (1-\rho) I_{50} + \rho 1_{50}1_{50}^{\T}$, truncated to the positive orthant. We consider two choices of $\rho$, namely $\rho = 0.25$ and $0.75$, and provide the contour plots for $\m N_{\m C}(\mu, \Sigma)$ and $\m N_{\m C}^s(\mu, \Sigma)$ in the top and bottom panels of Figure \[fig:con\_bi\] respectively. The contour plots were drawn by collecting $150,000$ samples from the $\m N_{\m C}(\mu, \Sigma)$ and $\m N_{\m C}^s(\mu, \Sigma)$ distributions, and then retaining the first two coordinates in each case to obtain samples from the bivariate marginal. Specifically, we used the rejection sampler of [@botev] implemented in the `R` package `TruncatedNormal` [@truncatednormal] to draw samples from a tMVN distribution and used our data augmentation Gibbs sampler to sample from the soft tMVN distribution. 
The figure shows that $\eta = 100$ remains a reasonable choice in higher dimensions, and we henceforth fix $\eta = 100$ throughout. The figure also shows that the contours of the two distributions are comparable, with the soft tMVN having a slightly larger peak. Next, we provide some numerical summaries in two different settings. Due to the inherent difficulty of comparing two high-dimensional distributions, we will compare the marginal densities. Specifically, given densities $f$ and $g$ on $\mb R^d$ with finite mean, we consider two different measures to compare them. The first one uses the first Wasserstein ($\mbox{W}_1$) distance between two distributions, $W_1(f,g)$ [@OptimalTransport]. The $\mbox{W}_1$ distance is defined as $$W_1(f, g) = \inf_{(U, V) \in \mathcal{C}_{f, g}} \, E\|U-V\|$$ where $\mathcal{C}_{f, g}$ is the collection of all couplings between $f$ and $g$, i.e., pairs of random variables $(U, V)$ with $U \sim f$ and $V \sim g$. Our first comparison metric is an average $\mbox{W}_1$ distance between the marginals, $$\begin{aligned} \label{eq:D} D :\,= \frac{1}{d} \sum_{i=1}^d \mbox{W}_1(f_i, g_i),\end{aligned}$$ where $f_i$ denotes the $i$th marginal density of $f$. We used the R package `transport` to compute the average $W_1$ distance between $\gamma$ and $\gamma_\eta$, which, conveniently, only requires samples from the two densities in question. We note here that an analytic calculation is out of the question since the marginal densities of both $\gamma$ and $\gamma_\eta$ lack closed-form expressions. Our second measure is an average squared $L_2$ distance between the mean vectors for the two densities, $$\begin{aligned} \label{eq:xi} \xi :\, = \frac{\|\mu_f - \mu_g\|^2}{d}, \end{aligned}$$ with $\mu_f = \int_{\mb R^d} x f(x) dx$. We compute $D$ and $\xi$ between $\gamma$ and $\gamma_\eta$ for two different covariance structures in $\Sigma$. 
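The average marginal distance $D$ only requires samples from the two distributions. A sketch of the computation, using `scipy.stats.wasserstein_distance` in place of the `transport` package; the synthetic Gaussian samples below are only to calibrate the metric:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def avg_w1(samples_f, samples_g):
    """D = (1/d) sum_i W1(f_i, g_i), estimated from two (n x d) sample arrays."""
    d = samples_f.shape[1]
    return np.mean([wasserstein_distance(samples_f[:, i], samples_g[:, i])
                    for i in range(d)])

rng = np.random.default_rng(4)
X = rng.standard_normal((20000, 3))
Y = rng.standard_normal((20000, 3))

assert avg_w1(X, X) < 1e-12                   # identical samples: D = 0
assert avg_w1(X, Y) < 0.05                    # same distribution: D near 0
assert abs(avg_w1(X, Y + 0.5) - 0.5) < 0.1    # mean shift of 0.5: D near 0.5
```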
Due to the lack of analytic expressions for the marginals for non-diagonal $\Sigma$, we resort to simulations to approximate $D$ and $\xi$. The highest dimension $d$ used in our simulations is $d = 600$; while our sampler can be scaled beyond this, the rejection sampler starts producing warning messages due to incurring small acceptance probabilities. The code for sampling from the soft tMVN distribution with both covariance structures is located at <https://github.com/aesouris/softTMVN>. Probit-Gaussian Process Example ------------------------------- For our first example, we consider $\theta \sim \m N_{n}(0, \Sigma){\mathbbm{1}}_{\m{C}}(\theta)$ where the covariance matrix $\Sigma$ is formed from the Matérn kernel [@rasmussen2004gaussian] and $\m{C} = \m{C}_1 \otimes \m{C}_2 \otimes \cdots \otimes \m{C}_n$ where $\m{C}_i$ is either $(-\infty, 0)$ or $(0, \infty)$ for $i = 1, \ldots, n$. This structure is motivated by a binary Gaussian process (GP) classification model. Suppose $Y_i \in \{0,1\}$ is a binary response at locations $s_i$ modeled as $Y_i = {\mathbbm{1}}\{Z(s_i) > 0\}$ for $i = 1, \ldots, n$, where $Z$ is a continuous latent threshold function. In GP classification, $Z$ is assigned a mean-zero Gaussian process prior $Z \sim GP(0, K_n)$, with $[K_n]_{ij} = K(s_i, s_j)$ and $K$ a positive definite kernel. Here, we take $K$ to be a Matérn kernel. Letting $Z = [Z(s_1), \ldots, Z(s_n)]^{\T}$, the conditional distribution of $Z \mid Y$ follows the above $\m N_{n}(0, K_n) {\mathbbm{1}}_{\m{C}}(Z)$ where $\m{C}_i = (-\infty, 0)$ if $Y_i = 0$ and $\m{C}_i = (0, \infty)$ if $Y_i = 1$. For the simulation, set $n \in \{100, 200\}$. Let $s_i = i$ for $i = 1, \ldots, n$. We randomly sample $\ell_1$ from $\{10,\ldots, n/2\}$ and $\ell_2$ from $\{n/2+1, \ldots, n-10\}$ and let $Y_1, \ldots, Y_{\ell_1} = 1$, $Y_{\ell_1+1}, \ldots, Y_{\ell_2} = 0$, and $Y_{\ell_2 + 1}, \ldots, Y_n = 1$. 
This is simply to mimic the situation when the true latent function $Z$ takes positive values on $[0, a]$, negative values on $[a, b]$ and positive values again on $[b, \infty)$ for some $0 < a < b$. We set the smoothness parameter for the Matérn kernel at 3/5 and the scale parameter at 1. We then proceed to draw 5000 samples from the tMVN, $\m N_{n}(0, \Sigma){\mathbbm{1}}_{\m{C}}(\theta)$, using Botev’s rejection sampler and 5000 samples from the soft tMVN, $\m N_{\m C}^s(0, \Sigma)$, using our Gibbs sampler. The 5000 samples were collected for our method after discarding 1000 initial samples as burn-in and collecting every 100th sample to thin the chain. There is high autocorrelation in the chain, so the large thinning parameter is necessary; since each iteration of the sampler is cheap, the extra sampling is not a concern. Figures \[fig:DP3\] and \[fig:DP4\] show the marginal density plots of 8 coordinates of $\theta$ based on the 5000 samples for the two values of $n$ respectively. The tMVN distribution is shown in blue while the soft tMVN is in pink. It is evident that for both values of $n$, the marginal densities are visually indistinguishable. To obtain an overall summary measure, Figure \[fig:ksi2\] shows the histograms of $\xi$, defined in , (left panel) and $D$, defined in , over $50$ independent simulations. Both the histograms are tightly centered near the origin, which again suggests the closeness of the tMVN and soft tMVN distributions. As a quick comparison, the value of $D$ between $\Gauss(0, \Sigma)$ and $\Gauss(0.005, \Sigma)$ for the current $\Sigma$ is about $0.03$ for both values of $n$. 
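The ingredients of this example can be assembled in a few lines: a Matérn covariance on the grid of locations, and constraint signs read off from the binary responses. The sketch below is illustrative; it mirrors the stated smoothness 3/5 and scale 1, and the block pattern for $Y$ is arbitrary:

```python
import numpy as np
from scipy.special import kv, gamma as gamma_fn

def matern_cov(s, nu=0.6, length=1.0):
    """Matern covariance matrix [K_n]_ij = K(s_i, s_j) on 1-d locations s."""
    r = np.abs(s[:, None] - s[None, :])
    scaled = np.sqrt(2 * nu) * np.maximum(r, 1e-10) / length
    K = 2 ** (1 - nu) / gamma_fn(nu) * scaled ** nu * kv(nu, scaled)
    np.fill_diagonal(K, 1.0)                  # K(s, s) = 1
    return K

n = 100
s = np.arange(1, n + 1, dtype=float)          # locations s_i = i
K = matern_cov(s)

# Binary responses with the 1...1 0...0 1...1 block pattern used in the text
Y = np.concatenate([np.ones(30), np.zeros(40), np.ones(30)])
# Constraint set: C_i = (0, inf) if Y_i = 1 and (-inf, 0) if Y_i = 0
signs = np.where(Y == 1, 1, -1)

assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K + 1e-8 * np.eye(n)) > 0)   # positive definite
```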
Probit-Gaussian Example {#sect:probit} ----------------------- Our second example assumes $\theta \sim \m N_{N+P}(0, \Sigma) {\mathbbm{1}}_{\m{C}}(\theta)$ where $$\Sigma = \begin{bmatrix} I_N + X \Lambda X^{\T} & X \Lambda \\ \Lambda X^{\T} & \Lambda \end{bmatrix},$$ $\m{C} = \m{C}_1 \otimes \m{C}_2 \otimes \cdots \otimes \m{C}_N \otimes {\mathbb{R}}^P$, $\m{C}_i$ is either $(-\infty, 0)$ or $(0, \infty)$ for $i$ in $1, \ldots, N$, $X$ is an $N \times P$ matrix, and $\Lambda$ is a $P \times P$ diagonal matrix. This covariance structure is motivated by a univariate/multivariate probit model. The usual univariate probit model has binary response variables $Y_i \in \{0,1\}$ with predictors $x_i \in {\mathbb{R}}^p$ for $i = 1, \ldots, n$. Using the latent variable representation of [@albert], $Y_i = {\mathbbm{1}}(z_i > 0)$ where $z_i$ follows a $\m N(x_i^{\T}\beta, 1)$ distribution and $\beta \in {\mathbb{R}}^p$. Setting a Gaussian prior on $\beta$, $\beta_j \sim \m N(0, \lambda_j)$, the joint distribution of $\theta = [z, \beta]$ follows a Gaussian distribution. Then the conditional posterior of $\theta \mid Y, x, \lambda$ follows the above $\m N_{N+P}(0, \Sigma){\mathbbm{1}}_{\m{C}}(\theta)$ distribution where $X = [x_1, \ldots, x_n]^{\T}$, $\Lambda = \operatorname{\text{diag}}\{\lambda_1, \ldots, \lambda_p\}$, $N = n$, $P = p$, and $\m{C}_i = (-\infty, 0)$ if $Y_i = 0$ and $\m{C}_i = (0, \infty)$ if $Y_i = 1$. The multivariate probit model has data $(y_i, x_i)$ where $y_i = [y_{i1}, \ldots, y_{iq}] \in \{0,1\}^q$ is a binary response with predictors $x_i \in {\mathbb{R}}^p$ for $i = 1, \ldots, n$. Using data augmentation, $y_{ik} = {\mathbbm{1}}(z_{ik} > 0)$ where $z_{ik}$ follows a $\m N(x_i^{\T}\beta_k,1)$ distribution and $\beta_k \in {\mathbb{R}}^p$. Assume that $\beta_{jk}$ follows a $\m N(0, \lambda_{jk})$ prior. 
Letting $\tilde{y}_k = [y_{1k}, \ldots, y_{nk}]$, $\tilde{z}_k = [z_{1k}, \ldots, z_{nk}]$, and $\lambda_k = [\lambda_{1k}, \ldots, \lambda_{pk}]$, we can rewrite the model in terms of vectors instead of matrices. Let $Y = [\tilde{y}_1, \ldots, \tilde{y}_q]$, $Z = [\tilde{z}_1, \ldots, \tilde{z}_q]$, $\lambda = [\lambda_1, \ldots, \lambda_q]$, and $\beta = [\beta_1, \ldots, \beta_q]$. Then $\theta = [Z, \beta]$ follows a Gaussian distribution and the conditional distribution of $\theta$ follows the above $\m N_{N+P}(0, \Sigma){\mathbbm{1}}_{\m{C}}(\theta)$ where $\tilde{X} = [x_1, \ldots, x_n]^{\T}$, $X = \operatorname{\text{diag}}(\tilde{X})_{k = 1, \ldots, q}$, $\Lambda = \operatorname{\text{diag}}(\lambda)$, $N = nq$, $P = pq$, and $\m{C}_{ik} = (-\infty, 0)$ if $y_{ik} = 0$ and $\m{C}_{ik} = (0, \infty)$ if $y_{ik} = 1$. For this simulation, we sample $x_i \stackrel{iid}{\sim} \m N(0,I_P)$ and $\lambda_j \sim U[1/15, 1/5]$, and then set $\Sigma$ to the above form. Draw $\beta \sim \m N(0, \Lambda)$ and $Z \sim \m N(X\beta, I_n)$. Then if $Z_i \geq 0$, set $Y_i = 1$ and if $Z_i < 0$, set $Y_i = 0$. For both $(N, P) \in \{(100, 400), (200, 400)\}$, we then proceed to draw 5000 samples from the tMVN, $\m N_{N+P}(0, \Sigma){\mathbbm{1}}_{\m{C}}(\theta)$, using Botev’s rejection sampler and 5000 samples from the soft tMVN, $\m N_{\m C}^s(0, \Sigma)$, using our Gibbs sampler. The 5000 samples were collected for our method after discarding 1000 initial samples as burn-in and collecting every 100th sample to thin the chain. Figures \[fig:DP1\] and \[fig:DP2\] show the marginal density plots of 8 coordinates of $\theta$ based on the 5000 samples for the two combinations respectively; as before, the tMVN distribution is shown in blue while the soft tMVN is in pink. We once again see that for both combinations, the marginal densities overlap well. 
To obtain an overall summary measure, Figure \[fig:ksi1\] shows the histogram of $\xi$, defined in , (left panel) and $D$, defined in , over $50$ independent simulations. We see that the histograms of $\xi$ and $D$ are shifted further to the right for $n = 200$ than for $n = 100$. This shift is expected as the size of the matrix $X$ grows, and thus, the size of $\Sigma$ grows. As a point of comparison, in Figure \[fig:wass\_cal\], we plot the histogram of $D$ between $\m N(0, \Sigma)$ and $\m N(0.005, \Sigma)$ for the present choice of $\Sigma$ and see a similar shift. We believe that the shift occurs for the probit-Gaussian motivated soft tMVN but not the probit-Gaussian process motivated soft tMVN due to the structure of $\Sigma$. In the probit-Gaussian process motivated soft tMVN, $\Sigma$ is fixed across trials and highly structured, while in the probit-Gaussian motivated soft tMVN, $\Sigma$ changes with each trial and has a much less regular, random structure. Usage as prior in Bayesian constrained regression {#sec:cons} ================================================= In this section, we provide a concrete example of using the soft tMVN distribution as a prior distribution in a constrained Gaussian regression problem. As noted in the introduction, a general approach to Bayesian constrained regression is to expand the unknown function onto a suitable basis which allows formulation of the functional constraints in terms of linear constraints on the basis coefficients. Since the soft tMVN distribution is also conditionally conjugate to a Gaussian likelihood, one may use it as a prior distribution on the basis coefficients instead of a tMVN distribution. For illustration purposes, we consider a monotone single-index model, given its usefulness in practical applications, noting that the methodology can be extended to more standard constrained regression applications such as estimation of bounded, monotone, or convex/concave functions. 
We pick the monotone single-index model example due to limited previous treatment from a Bayesian perspective. Moreover, this example nicely brings out the computational advantages of using a soft tMVN prior. Given response-covariate pairs $\{(y_i, x_i)\}_{i=1}^n \in {\mathbb{R}}\times {\mathbb{R}}^p$, a Gaussian single index model [@Antoniadis; @Chen; @Gramacy; @Wang2009; @Yu2002] assumes the form $$\begin{aligned} \label{eq:sing_mod} y_i = f(x_i^{\T}\alpha) + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2), \quad i = 1, \ldots, n,\end{aligned}$$ where $f : {\mathbb{R}}\to {\mathbb{R}}$ is an unknown link function and $\alpha \in {\mathbb{R}}^p$ an unknown coefficient vector. Throughout, we assume the covariates to be standardized. The single-index model provides a bridge between linear and non-linear modeling by first linearly projecting the high-dimensional vector of predictors to the real line and then modeling the response as a non-linear function of the projection. The model is clearly non-identifiable without further restrictions; we follow a standard prescription to impose a unit norm restriction, $\|\alpha\| = 1$, on $\alpha$. We consider a monotone single-index model [@Cavanagh; @Ahn; @Balabdaoui; @Foster; @Luo] where the link function $f$ is monotone non-decreasing. Monotone single-index models have widespread applications in biomedical science, e.g., to find gene–gene interactions [@Luss] and to study the relationship between risk factors and survival in leukemia [@Schell]. To model $f$, we use a Bernstein polynomial basis, noting that other basis functions mentioned in the introduction can also be used. Using the Bernstein polynomial basis, there are established sufficient conditions which ensure that $f$ is monotone. 
Define, for $j = 0, \ldots, M$, $$B_{M,j}(u) = \binom{M}{j}u^j(1-u)^{M-j}, \quad u \in [0, 1],$$ so that the Bernstein polynomial of degree $M$ is $$B_M(u) = \sum_{j=0}^M \theta_j B_{M,j}(u).$$ If $$\label{eq:non-decreasing} \theta_0 \leq \theta_1 \leq \cdots \leq \theta_M,$$ then $B_M(u)$ is non-decreasing [@Chak]. To apply the Bernstein polynomial basis to our setting, we need some preprocessing as described below. Since $|x_i^{\T}\alpha| \leq \|x_i\| \, \|\alpha\| = \|x_i\|$ by the Cauchy-Schwarz inequality and the identifiability restriction respectively, if we let $c = \max_{i} \|x_i\|$ and transform $\tilde{x_i} = x_i/c$, we have $|\tilde{x_i}^{\T}\alpha| \leq 1$. Hence, we need to perform a change of variable to transform the support of the Bernstein polynomial to $[-1,1]$. To that end, we write $B_{M,j}(u) = p_j(u)/(M+1)$ for $u \in [0, 1]$, where $p_j(u)$ is the density of a Beta$(j+1, M-j+1)$ distribution. Letting $T = 2U-1$ for $U \sim \mbox{Beta}(j+1, M-j+1)$, the density of $T$ is $q_j(t) = \frac{1}{2}p_j\{(t+1)/2\}$ for $t \in [-1,1]$. Let $\tilde{B}_{M,j}(t) = q_j(t)/(M+1)$ for $j = 0, \ldots, M$ represent the transformed Bernstein polynomial basis and define our monotone single-index model as $$\begin{aligned} \label{eq:mon_sing_mod} y_i = \tilde{B}_M(\tilde{x}_i^{\T}\alpha) + \epsilon_i, \quad \tilde{B}_M(t) = \sum_{j=0}^M \theta_j \tilde{B}_{M,j}(t), \quad t \in [-1, 1].\end{aligned}$$ Under the order-restriction on the basis coefficients in , $\tilde{B}_M(\cdot)$ remains non-decreasing. Set $\psi_0 = \theta_0$, $\psi_1 = \theta_1-\theta_0, \ldots, \psi_M = \theta_M - \theta_{M-1}$, so that is equivalent to $\psi_k \geq 0$ for $k = 1,\ldots,M$. Thus the non-decreasing constraint can be written in terms of $\psi= [\psi_0, \ldots, \psi_M]^{\T}$. Let $A$ be an $(M+1) \times (M+1)$ lower triangular matrix where all the lower triangle elements and diagonal elements are 1. 
Then $A\psi = \theta$ where $\theta = [\theta_0, \ldots, \theta_M]^{\T}$. To place the monotone single-index model in vectorized notation, let $\tilde{B}_M^i = [\tilde{B}_{M,0}(\tilde{x}_i^{\T}\alpha), \ldots, \tilde{B}_{M,M}(\tilde{x}_i^{\T}\alpha)]^{\T}$ and $\mathbb{B}_{\alpha} = [\tilde{B}_M^1, \ldots \tilde{B}_M^n]^{\T}$ so that $\mathbb{B}_\alpha$ is an $n \times (M+1)$ matrix, with the subscript serving as a reminder that $\mathbb{B}_{\alpha}$ depends on $\alpha$. Then letting $Y = [y_1, \ldots, y_n]^{\T}$, can be equivalently represented as $$Y = \mathbb{B}_\alpha \theta + \epsilon = \mathbb{B}_\alpha A \psi + \epsilon.$$ Our prior specification on the model parameters $(\psi, \alpha, \sigma^2)$ assumes the form $\pi(\psi, \alpha, \sigma^2) = \pi(\psi) \, \pi(\alpha) \, \pi(\sigma^2)$. We consider two different priors on $\psi$: (i) a tMVN prior $\m N_{\m C}(0, 25 I_{M+1})$, and (ii) a soft tMVN prior $\m N_{\m C}^s(0, 25 I_{M+1})$, where in both cases $\m C = {\mathbb{R}}\otimes [0, \infty)^M$. Next, we set $\alpha = \beta/{\left\Vert\beta\right\Vert}$ and assign a standard Gaussian prior on $\beta$. Finally, we consider an inverse-Gamma prior on $\sigma^2$ with mean 1 and variance 10. For the sake of future reference, we refer to the joint prior on $(\psi, \alpha, \sigma^2)$ corresponding to cases (i) and (ii) by $\pi^h$ and $\pi^s$ respectively, with the superscripts indicative of a usual (hard) or soft tMVN prior on the constrained parameter. We employ a Metropolis-within-Gibbs algorithm to sample from the posterior distribution with either prior. 
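The pieces above fit together in a few lines: the basis $B_{M,j}$ computed via Beta densities, the lower-triangular map $A$, and the monotonicity of the fitted polynomial when $\psi_k \ge 0$ for $k \ge 1$. A numerical sketch on the original $[0,1]$ scale, with arbitrary coefficients:

```python
import numpy as np
from scipy.stats import beta as beta_dist

M = 20
rng = np.random.default_rng(5)

def bernstein_design(u):
    """Rows of B_{M,j}(u), j = 0..M; here B_{M,j}(u) = p_j(u)/(M+1) with
    p_j the Beta(j+1, M-j+1) density."""
    j = np.arange(M + 1)
    return beta_dist.pdf(u[:, None], j + 1, M - j + 1) / (M + 1)

# psi_0 is free; psi_k >= 0 for k >= 1 makes theta = A psi non-decreasing
psi = np.concatenate([[-1.0], np.abs(rng.standard_normal(M))])
A = np.tril(np.ones((M + 1, M + 1)))        # lower-triangular matrix of ones
theta = A @ psi                              # theta_k = psi_0 + ... + psi_k

u = np.linspace(0.0, 1.0, 200)
design = bernstein_design(u)
B = design @ theta                           # Bernstein polynomial B_M(u)

assert np.allclose(design.sum(axis=1), 1.0)  # the basis sums to one
assert np.all(np.diff(theta) >= 0)           # ordered coefficients
assert np.all(np.diff(B) >= -1e-9)           # monotone non-decreasing fit
```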
For $\pi^h$, the conditional posterior of $\psi \mid \sigma^2, \alpha$ is $\m N_{\m C}(\mu_\psi, \Sigma_\psi)$, while the same for $\pi^s$ is $\m N_{\m C}^s(\mu_\psi, \Sigma_\psi)$, where $$\Sigma_\psi = \left(\frac{1}{\sigma^2}D_{\alpha}^{\T}D_\alpha + \frac{1}{25}I_{M+1}\right)^{-1}, \quad \mu_{\psi} = \frac{1}{\sigma^2}\Sigma_\psi D_\alpha^{\T} Y, \quad D_\alpha = \mathbb{B}_\alpha A.$$ The conditional distribution of $\sigma^2\mid\psi,\alpha$ is inverse-Gamma in both cases. To sample from $\alpha\mid\sigma^2,\psi$, we use a Metropolis step with the proposal density on $\beta$ given by $J(\beta^t\mid\beta^{t-1}) \sim \m N(\beta^{t-1},0.01^2I)$. The proposal standard deviation of $0.01$ was chosen to give an acceptance probability of around 0.35 for $\beta$. The following simulation compares the Metropolis-within-Gibbs algorithms for the priors $\pi^h$ and $\pi^s$. We generate data from the model with $n = 800$, $p = 5$, $M = 20$, and a set of true parameter values $\psi_0, \alpha_0, \sigma_0^2$. We set $\sigma_0 = 0.1$ and $\alpha_0 = \beta_0/\|\beta_0\|$ with $\beta_0$ drawn from a standard Gaussian distribution. Finally, we set $\theta_0 \in \mathbb{R}^{21}$ equal to the vector whose first six entries are $-1$, followed by one entry of $-0.5$, seven entries of $0$, one entry of $0.5$, and six final entries of $1$. We consider $30$ independent replicates for model fitting and perform out-of-sample prediction on a single separate dataset of size $200$. We set $\eta = 500$ for the soft tMVN prior $\pi^s$. We observed sensitivity to smaller values of $\eta$ in this context, something we did not encounter earlier, possibly due to the more difficult sampling problem involved here[^4]. For each of the 30 replicates, we run the Gibbs samplers for $\pi^h$ and $\pi^s$ outlined above to collect 1000 posterior samples each, after a burn-in period of 1000 iterations and with the chain thinned by a factor of 100.
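The Gaussian moments entering the $\psi$-update above can be written out directly (a sketch with our own naming; the truncation, or soft-truncation, step that follows the moment computation is omitted):

```python
import numpy as np

def psi_conditional_moments(D, Y, sigma2, prior_var=25.0):
    """Mean and covariance of the Gaussian underlying the psi full conditional:
    Sigma_psi = (D^T D / sigma2 + I / prior_var)^{-1},
    mu_psi    = Sigma_psi D^T Y / sigma2,   with D = B_alpha A."""
    k = D.shape[1]
    precision = D.T @ D / sigma2 + np.eye(k) / prior_var
    Sigma = np.linalg.inv(precision)
    mu = Sigma @ (D.T @ Y) / sigma2
    return mu, Sigma
```

A hard-tMVN update would then draw from $\m N_{\m C}(\mu_\psi, \Sigma_\psi)$, e.g., by rejection sampling, while the soft version targets the corresponding soft tMVN density via data augmentation.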
The 1000 samples are used to calculate the posterior means of $\alpha$ and $\theta$, denoted $\hat{\alpha}$ and $\hat{\theta}$. For $\pi^h$, we use the rejection sampler of [@botev] implemented in the `R` package `TruncatedNormal` [@truncatednormal] to draw samples from the tMVN distribution, while for $\pi^s$, we use our data augmentation Gibbs sampler to sample from the soft tMVN distribution. The code to run both Gibbs samplers can be found at <https://github.com/aesouris/softTMVN>. In terms of statistical performance, the two samplers were comparable. The average out-of-sample prediction error for the soft tMVN prior across the 30 replicates was $0.005$ with a standard deviation of $0.0106$, while the corresponding numbers for the tMVN prior were $0.002$ and $0.0066$ respectively.

                     $\alpha$-ESS   $\psi$-ESS   run-time (in hours)
  ----------------- -------------- ------------ ---------------------
  soft tMVN prior    $253.6625$     $686.0742$   $3.78_{0.0026}$
  tMVN prior         $168.4741$     $796.6799$   $15.45_{3.8204}$

  : [*The first two columns report the average effective sample sizes (out of 1000 MCMC samples) for $\alpha$ and $\psi$ for the two Gibbs samplers. The average is over both the parameter entries as well as the 30 replicates. The final column reports the run-time (in hours) for the respective Gibbs samplers to collect 1000 posterior samples, with the subscript denoting the standard deviation across replicates.*]{}[]{data-label="tab:eff"}

Table \[tab:eff\] reports the effective sample sizes for $\alpha$ and $\psi$ as well as the run-times for the two Gibbs samplers. The two samplers are similar in terms of effective sample sizes; however, the Gibbs sampler for the tMVN prior has roughly four times the run-time of the soft tMVN sampler. The mixing is slow for both samplers, which is indicative of a general issue for problems with constrained parameter spaces; recall that the 1000 posterior samples were collected with a thinning factor of 100.
Although a formal proof is beyond the scope of the paper, empirical evidence suggests that the constrained parameters inside the Gibbs sampler may get stuck in regions of low probability, and it can take a long time to escape these regions. Specifically, we see that Botev’s state-of-the-art rejection sampler can sometimes take exceedingly long to make a single move; note the variability in the run-time across the 30 trials in Table \[tab:eff\]. While our chain also suffers from similarly slow mixing, it has a substantially better per-iteration cost, which makes it possible to run it for a large path-length to collect a substantial number of effective samples. The computational advantage becomes even more pronounced in higher dimensions; we do not report a simulation with a higher dimension $M$ since the tMVN sampler takes exceedingly long to run.

Discussion
==========

In this paper, we have presented the soft tMVN distribution, which provides a smooth approximation to the tMVN distribution with linear constraints. Our theoretical and empirical results suggest that the soft tMVN distribution offers a good approximation to the tMVN distribution in high-dimensional situations. We envision the soft tMVN distribution being applicable in Bayesian constrained problems as a more computationally viable alternative to the usual tMVN prior, especially in complex problems where an MCMC algorithm may get stuck in regions of very low probability under a tMVN prior, making it difficult to move. The monotone single-index model example illustrates this phenomenon, and we expect it to be more widely prevalent.

[^1]: aesouris@stat.tamu.edu

[^2]: anirbanb@stat.tamu.edu

[^3]: debdeep@stat.tamu.edu

[^4]: See the supplemental document for an example with a smaller value of $\eta$.
--- abstract: 'The semiclassical long-time limit of the free evolution of quantum wave packets on the torus is under consideration. Despite the simplicity of this system, there are still open questions concerning the detailed description of the evolution on time scales beyond the Ehrenfest time. One of the approaches is based on the limiting Wigner or Husimi distributions of time-evolved wave packets as the Planck constant tends to zero and time tends to infinity. We derive explicit expressions for semiclassical measures corresponding to all time scales and the corresponding stages of evolution: classical-like motion, spreading of the wave packet, and its revivals. We also discuss limitations of the approach based on semiclassical measures and suggest its generalization.' address: 'Steklov Mathematical Institute of Russian Academy of Sciences, 119991 Moscow, Russia' author: - A S Trushechkin title: Semiclassical evolution of quantum wave packets on the torus beyond the Ehrenfest time in terms of Husimi distributions ---

Introduction ============

The dynamics of a localized quantum wave packet in a finite region or on a compact manifold is well known, on short time scales, to be described by the classical motion of its center and a gradual spreading. The characteristic time scale on which this description breaks down is called the Ehrenfest time. The Ehrenfest time is estimated as $O(\ln\hbar^{-1})$ (where $\hbar$ is the Planck constant), although it may be larger for integrable systems (see rigorous results in [@CR; @Bambusi; @Hage1; @Hage2; @Bouzounia; @Schubert]). The description of the semiclassical evolution of quantum wave packets at the Ehrenfest time and beyond it attracts much attention [@BerrySpin; @Schubert; @SemiclWaveRev; @WangHeller; @Schubert-spread; @MaciaRiemann; @MaciaTorus; @AnaMaciaTorus; @AnaMaciaView; @Ana14]. Mathematically, it can be formulated as the simultaneous limit in which the Planck constant goes to zero and time goes to infinity.
We will refer to this type of limit as a semiclassical long-time limit [@BerrySpin]. One direction of research is related to the so-called semiclassical measures, i.e., semiclassical limits of Wigner measures [@Gerard; @Mark; @Carles; @Numeric]. In general, the description of semiclassical dynamics in the Wigner–Weyl representation is quite popular [@Almedia2013; @Almedia2016; @Gosson]. In [@MaciaRiemann; @MaciaTorus; @AnaMaciaTorus; @AnaMaciaView; @Ana14], a number of properties of semiclassical measures related to times beyond the Ehrenfest time have been obtained. However, the explicit calculation of semiclassical measures even for the simplest cases presents certain difficulties. In particular, in [@AnaMaciaView] this problem is characterised as ’notoriously difficult’. The result of the present work is the explicit calculation of semiclassical measures related to the free dynamics of quantum wave packets on the flat torus $\mathbb T^d=\mathbb R^d/(2\pi\mathbb Z^d)$. We generalize the results of [@VolTrush], where only Gaussian wave packets are considered. Our results also provide further insight into the limitations of the approach to long-time semiclassical dynamics based on semiclassical measures (reported in [@Carles]) and suggest its generalization. The usual way to deal with quantum dynamics in the semiclassical approximation is to reduce it to corresponding problems in classical dynamics. Here we adopt an alternative approach based on the direct summation of eigenvector series for time-evolved wave packets. An application of this approach to the Jaynes–Cummings model is given in [@Kara]. The text is organised as follows. Preliminary facts about Wigner and Husimi measures, semiclassical measures, and coherent states are given in Section \[SecPrelim\]. We also prove some intermediate results there. The main results (Theorems \[ThMain\]–\[ThTime\]) are stated and proved in Section \[SecMain\].
Theorem \[ThMain\] is the main one, while Theorems \[ThMu\] and \[ThTime\] are corollaries of Theorem \[ThMain\] and of intermediate formulas obtained in its proof. In Sec. \[SecDiscus\] we discuss the results.

Preliminaries {#SecPrelim} =============

Schrödinger equation on the flat torus --------------------------------------

Consider the Schrödinger equation on the flat torus $\mathbb T^d=\mathbb R^d/(2\pi\mathbb Z^d)$: $$\label{EqSchr} i\hbar\frac{\partial \psi_t}{\partial t}=-\hbar^2\Delta\psi_t,$$ where $\psi_t=\psi_t(x)$, $t\in\mathbb R$, $x\in\mathbb T^d$, $\Delta$ is the Laplace operator over the spatial variables $x$, and $\hbar>0$ is the Planck constant. The solution of the Cauchy problem with an initial function $$\psi_0(x)=\frac1{(2\pi)^{\frac d2}}\sum_{k\in\mathbb Z^d}c^{(0)}_k\exp(ikx)\in L^2(\mathbb T^d)$$ can be formally represented as the action of a unitary operator in $L^2(\mathbb T^d)$: $$\label{EqEvol} \psi_t(x)=\exp(i\hbar t\Delta)\psi_0(x)=\frac1{(2\pi)^{\frac d2}}\sum_{k\in\mathbb Z^d}c^{(0)}_k\exp(ikx-i\hbar t k^2).$$ Formula (\[EqEvol\]) directly implies that every solution of (\[EqSchr\]) is periodic with the period $$\label{EqT} T_\hbar=\frac{2\pi}\hbar,$$ i.e. $\psi_{t+T_\hbar}=\psi_t$. The time $T_\hbar$ is called the revival time. This periodicity is caused by interference and has a purely wave nature. As $\hbar\to0$, the revival time tends to infinity.

Semiclassical measures ----------------------

We will identify functions on $\mathbb T^d$ with $(2\pi\mathbb Z^d)$-periodic functions on $\mathbb R^d$.
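Before proceeding, the free evolution (\[EqEvol\]) and the revival period (\[EqT\]) are easy to verify numerically on the Fourier side (a small Python sketch for $d=1$; the notation is ours):

```python
import numpy as np

def evolve_coefficients(c0, k, hbar, t):
    """Free evolution on the torus in the Fourier basis:
    c_k(t) = c_k(0) * exp(-i * hbar * t * k^2), cf. (EqEvol) for d = 1."""
    return c0 * np.exp(-1j * hbar * t * k**2)

hbar = 0.05
T_revival = 2 * np.pi / hbar          # T_hbar = 2*pi/hbar, cf. (EqT)
k = np.arange(-20, 21)                # a finite set of modes
rng = np.random.default_rng(1)
c0 = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
c0 /= np.linalg.norm(c0)              # normalize the state

cT = evolve_coefficients(c0, k, hbar, T_revival)
# since hbar * T_revival * k^2 = 2*pi*k^2 with k^2 an integer, the state revives
assert np.allclose(cT, c0)
```

The revival is exact for every initial state, since all phases $\hbar T_\hbar k^2$ are integer multiples of $2\pi$.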
The Wigner distribution on the phase space $\Omega=\mathbb T^d\times\mathbb R^d$ for an arbitrary function $\psi\in L^2(\mathbb T^d)$ is defined as [@QMPS; @Hillery] $$\begin{aligned} W_\psi(q,p)&=\frac1{(\pi\hbar)^d}\int_{\mathbb R^d}\overline{\psi(q+x)}\psi(q-x)\exp\left(\frac{2ipx}\hbar\right)dx\nonumber\\&=\frac1{(2\pi)^d}\sum_{j,k\in\mathbb Z^d}\overline{c_j}c_k\exp[i(k-j)q]\,\delta\left(p-\frac\hbar2(k+j)\right),\label{EqWigner}\end{aligned}$$ where $c_k$ are the Fourier coefficients of $\psi(x)=(2\pi)^{-d/2}\sum_kc_k\exp(ikx)$, $\delta(\cdot)$ is the Dirac delta function, and $(q,p)\in\mathbb T^d\times\mathbb R^d$. An important property of the Wigner distribution is that its marginal distributions over $q$ and $p$ coincide with the corresponding quantum-mechanical distributions: $$\begin{aligned} \label{EqWignerProp} \int_{\mathbb R^d} W_\psi(q,p)\,dp&=|\psi(q)|^2,\\ \int_{\mathbb T^d} W_\psi(q,p)\,dq&= \sum_{k\in\mathbb Z^d}|c_k|^2\delta(p-\hbar k). \label{EqWignerProp2}\end{aligned}$$ However, the Wigner distribution is generally non-positive. For this reason, it is sometimes called a quasiprobability distribution. Consider a family of functions $\{\psi_\hbar\}$ depending on $\hbar$ and the corresponding Wigner distributions $W_{\psi_\hbar}$. For brevity, we will write $W_\hbar$ if $\psi$ is fixed. If there exists a measure $\mu$ on $\Omega$ such that the limit $$\label{EqSemiM} \lim_{\hbar\to0}\int_\Omega W_{\hbar}(q,p)a(q,p)\,dqdp=\int_\Omega a(q,p)\mu(dqdp)$$ exists for all functions $a\in C^\infty_0(\Omega)$ (infinitely differentiable functions with compact supports), then the measure $\mu$ is called the semiclassical measure [@Zworski; @MaciaRiemann; @MaciaTorus]. It is always possible to choose a suitable sequence $\{\psi_{\hbar_n}\}$ such that limit (\[EqSemiM\]) exists for this sequence. The Planck constant $\hbar$ is a fundamental physical constant with dimensions of action. So, rigorously speaking, it cannot tend to zero.
This formal mathematical limit means that the Planck constant is much smaller than another quantity with dimensions of action arising in the concrete problem. In Section \[SecPhysSmall\] we will describe conditions of this kind for our case. We will adopt another, equivalent, approach to the semiclassical measures, which is based not on the Wigner distribution, but on the Husimi distribution. For this purpose, we need to define coherent states on the torus.

Coherent states ---------------

Consider a smooth rapidly decreasing function $\varphi(x)$, $x\in \mathbb R^d$, with unit $L^2(\mathbb R^d)$-norm and a family of functions from $L^2(\mathbb R^d)$ of the form $$\label{EqCoherRd} \eta^{(\hbar)}_{qp}(x)=\frac1{\sqrt{\alpha_\hbar^d}}\varphi\left(\frac{x-q}{\alpha_\hbar}\right) \exp\left\lbrace\frac{ip(x-q)}\hbar\right\rbrace,$$ where $(q,p)\in\mathbb R^{2d}$ and $\alpha_\hbar>0$ is a constant depending on $\hbar$ such that $\alpha_\hbar\to0$ and $\hbar/\alpha_\hbar\to0$ as $\hbar\to0$ (e.g., $\alpha_\hbar=\sqrt\hbar$). These functions satisfy the general definition of coherent states on $L^2(\mathbb R^d)$ given in [@Klauder]: this family of functions depends continuously on its parameters $(q,p)$ and constitutes a continuous resolution of identity: $$\label{EqUnityRd} \frac1{(2\pi\hbar)^d}\int_{\mathbb R^{2d}} P[\eta^{(\hbar)}_{qp}]\,dqdp=1.$$ Here $P[\psi]$ is an operator acting on an arbitrary vector $\chi$ as $P[\psi]\chi=(\psi,\chi)\psi$ ($P[\psi]=|\psi\rangle\langle\psi|$ in the Dirac notations; it is a projector whenever $\psi$ is a unit vector); $(\cdot,\cdot)$ is a scalar product (with linearity in the second argument). Equality (\[EqUnityRd\]) is understood in the weak sense: for all $\psi,\chi\in L^2(\mathbb R^d)$ we have $$\frac1{(2\pi\hbar)^d}\int_{\mathbb R^{2d}} (\psi,\eta^{(\hbar)}_{qp})(\eta^{(\hbar)}_{qp},\chi)\,dqdp=(\psi,\chi).$$ Usually coherent states are required to correspond to classical particles in some way.
Let us prove the following known property (we will use it subsequently).

\[PropDeltaLine\] The semiclassical measure of the family of functions $\eta^{(\hbar)}_{q_0p_0}$ is the Dirac measure at $(q_0,p_0)$.

Denote by $W_{q_0,p_0}^{(\hbar)}$ the Wigner distribution corresponding to the wave function $\eta^{(\hbar)}_{q_0p_0}$. By (\[EqWigner\]), $$\begin{aligned} \fl W_{q_0,p_0}^{(\hbar)}(q,p)=\frac1{(\pi\hbar)^d}\int_{\mathbb R^d} \overline{\eta^{(\hbar)}_{q_0p_0}(q+x)}\eta^{(\hbar)}_{q_0p_0}(q-x)\exp\left(\frac{2ipx}\hbar\right)dx\nonumber\\= \frac1{(\pi\hbar\alpha_\hbar)^d}\int_{\mathbb R^d} \overline{\varphi\left(\frac{q-q_0+x}{\alpha_\hbar}\right)} \varphi\left(\frac{q-q_0-x}{\alpha_\hbar}\right)\nonumber\\\quad\times \exp\left\lbrace\frac{2ipx+ip_0(q-q_0-x)-ip_0(q-q_0+x)}\hbar\right\rbrace dx\nonumber\\= \frac1{(\pi\hbar)^d}\int_{\mathbb R^d} \overline{\varphi\left(\frac{q-q_0}{\alpha_\hbar}+x\right)} \varphi\left(\frac{q-q_0}{\alpha_\hbar}-x\right)\nonumber\\\quad\times \exp\left\lbrace\frac{2i(p-p_0)\alpha_\hbar x}\hbar\right\rbrace dx\nonumber\\= \left(\frac{\alpha_\hbar}\hbar\right)^d\left(\frac1{\alpha_\hbar}\right)^d f\left(\frac{q-q_0}{\alpha_\hbar},\frac{\alpha_\hbar}\hbar(p-p_0)\right),\label{EqWdelta}\end{aligned}$$ where $$f(q,p)=\frac1{\pi^d}\int_{\mathbb R^d}\overline{\varphi(q+x)}\varphi(q-x)\exp(2ipx)dx.$$ Since $\int_{\mathbb R^{2d}}f(q,p)\,dqdp=1$, expression (\[EqWdelta\]) implies $$\lim_{\hbar\to0}W_{q_0,p_0}^{(\hbar)}(q,p)=\delta(q-q_0)\delta(p-p_0).$$

The Dirac measure at $(q_0,p_0)$ corresponds to a classical particle at this phase point (for the configuration space $\mathbb R^d$; we will return to the case of the torus a bit later). The time evolution of this semiclassical measure over short times can also be shown to correspond to the classical phase trajectory.
A particular case is that of the Gaussian coherent states, which correspond to the following choice of the function $\varphi$: $$\label{EqGauss} \varphi(x)=\frac{1}{(2\pi)^{\frac d4}}\exp\left(-\frac{x^2}4\right).$$ In this case, $\alpha_\hbar$ and $\hbar/(2\alpha_\hbar)$ are the standard deviations of the position and the momentum respectively. Their product equals $\hbar/2$, so the Gaussian coherent states minimize the uncertainty relations. Functions of form (\[EqCoherRd\]) are also referred to as quantum wave packets because they are superpositions of monochromatic waves $\exp(ipx)$ and are localized in both the position and momentum spaces. On the basis of coherent states (\[EqCoherRd\]) on $\mathbb R^d$, coherent states on the torus $\mathbb T^d$ can be constructed as follows [@DG; @KR96; @Gonzalez; @KR07; @KR08]: $$\label{EqCoher} \upsilon^{(\hbar)}_{qp}(x)=\sum_{n\in\mathbb Z^d}\eta^{(\hbar)}_{qp}(x-2\pi n),$$ where $(q,p)\in\Omega$. They also constitute a continuous resolution of identity: $$\label{EqUnity} \frac1{{(2\pi\hbar)^d}}\int_\Omega P[\upsilon^{(\hbar)}_{qp}]\,dqdp=1.$$ Let us note that the functions $\upsilon^{(\hbar)}_{qp}$ as elements of $L^2(\mathbb T^d)$ are not normalized to unity. However, their norms tend to unity as $\hbar\to0$. Indeed, $$\begin{aligned} \|\upsilon^{(\hbar)}_{qp}\|^2&=\int_{\mathbb T^d}\overline{\upsilon^{(\hbar)}_{qp}(x)}\upsilon^{(\hbar)}_{qp}(x)\,dx= \sum_{n\in\mathbb Z^d}\int_{\mathbb T^d} \overline{\eta^{(\hbar)}_{qp}(x-2\pi n)}\upsilon^{(\hbar)}_{qp}(x)\,dx\\&= \int_{\mathbb R^d} \overline{\eta^{(\hbar)}_{qp}(x)}\upsilon^{(\hbar)}_{qp}(x)\,dx= \sum_{m\in\mathbb Z^d}\int_{\mathbb R^d} \overline{\eta^{(\hbar)}_{qp}(x)}\eta^{(\hbar)}_{qp}(x-2\pi m)\,dx\\&= 1+\sum_{m\in\mathbb Z^d\backslash\{0\}}\int_{\mathbb R^d} \overline{\eta^{(\hbar)}_{qp}(x)}\eta^{(\hbar)}_{qp}(x-2\pi m)\,dx.\end{aligned}$$ We have used that the functions $\eta^{(\hbar)}_{qp}(x)\in L^2(\mathbb R^d)$ have unit norm.
Since the function $\varphi$ rapidly decreases, the last expression tends to unity. From now on, $W_{q_0,p_0}^{(\hbar)}$ will denote the Wigner distribution corresponding to the wave function $\upsilon^{(\hbar)}_{q_0p_0}$. We will use the following property of the distribution $W_{q_0,p_0}^{(\hbar)}$. \[PropUni\] $$\begin{aligned} \label{EqWquni} W_{q_0,p_0}^{(\hbar)}(q,p)&=W_{q_0+\Delta q,p_0}^{(\hbar)}(q+\Delta q,p),\\ W_{q_0,p_0}^{(\hbar)}(q,p)&=W_{q_0,p_0+\Delta p}^{(\hbar)}(q,p+\Delta p)+o(1),\quad \hbar\to0.\label{EqWpuni}\end{aligned}$$ The first equality is obvious from the definitions of the Wigner distribution and the functions $\eta$ and $\upsilon$. For the proof of the second equality, we first note that $$\eta_{q_0,p_0+\Delta p}^{(\hbar)}(x)=\eta_{q_0,p_0}^{(\hbar)}(x) \exp\left(\frac{i\Delta p(x-q_0)}\hbar\right),$$ $$\label{EqUspEta} \upsilon_{q_0,p_0}^{(\hbar)}(x)=\eta_{q_0,p_0}^{(\hbar)}(x-2\pi n_{x-q_0})+ o(1),\quad\hbar\to0,$$ where $n_y\in\mathbb Z^d$ denotes the integer vector with the property $y-2\pi n_y\in[-\pi,\pi)^d$ for an arbitrary $y\in\mathbb R^d$. Hence, $$\upsilon_{q_0,p_0+\Delta p}^{(\hbar)}(x)=\upsilon_{q_0,p_0}^{(\hbar)}(x) \exp\left(\frac{i\Delta p(x-q_0-2\pi n_{x-q_0})}\hbar\right)+ o(1),\quad\hbar\to0.$$ Then, $$\begin{aligned} \fl W_{q_0,p_0+\Delta p}^{(\hbar)}(q,p+\Delta p)\!= \frac1{(\pi\hbar)^d}\int_{\mathbb R^d} \overline{\upsilon^{(\hbar)}_{q_0,p_0+\Delta p}(q+x)} \upsilon^{(\hbar)}_{q_0,p_0+\Delta p}(q-x) \exp\left(\frac{2i(p+\Delta p)x}\hbar\right)\!dx\\\fl =\frac1{(\pi\hbar)^d}\int_{\mathbb R^d} \overline{\upsilon^{(\hbar)}_{q_0,p_0}(q+x)} \upsilon^{(\hbar)}_{q_0,p_0}(q-x) \exp\left(\frac{2ipx}\hbar\right) \exp\left(\frac{2i\pi \Delta p}\hbar(n_{q-q_0+x}-n_{q-q_0-x})\right)dx\\\fl +o(1).\end{aligned}$$ Due to the highly oscillating term $\exp(2ipx/\hbar)$ (recall that the Wigner distribution is integrated against a test function of $p$), the integration over $x$ is actually performed in an infinitesimal neighbourhood of zero.
Hence, $n_{q-q_0+x}=n_{q-q_0-x}$ unless some component of $q-q_0$ is an odd multiple of $\pi$, and (\[EqWpuni\]) is proved. In the latter case, both $W_{q_0,p_0}^{(\hbar)}(q,p)$ and $W_{q_0,p_0+\Delta p}^{(\hbar)}(q,p+\Delta p)$ are infinitesimal, and (\[EqWpuni\]) is obviously true.

\[PropDelta\] The semiclassical measure of the family of functions $\upsilon^{(\hbar)}_{q_0p_0}$ is the sum of the Dirac measures at the points $(q_0+2\pi n,p_0)$, $n\in\mathbb Z^d$.

Using (\[EqUspEta\]), $$\begin{aligned} \fl W_{q_0,p_0}^{(\hbar)}(q,p)=\frac1{(\pi\hbar)^d}\int_{\mathbb R^d} \overline{\upsilon^{(\hbar)}_{q_0p_0}(q+x)}\upsilon^{(\hbar)}_{q_0p_0}(q-x)\exp\left(\frac{2ipx}\hbar\right)dx\\\fl= \frac1{(\pi\hbar)^d}\int_{\mathbb R^d} \overline{\eta^{(\hbar)}_{q_0p_0}(q+x-2\pi n_{q-q_0+x})} \eta^{(\hbar)}_{q_0p_0}(q-x-2\pi n_{q-q_0-x})\exp\left(\frac{2ipx}\hbar\right)dx+o(1).\end{aligned}$$ Using the same reasoning as in the proof of Proposition \[PropUni\], we can put $$n_{q-q_0+x}=n_{q-q_0-x}=n_{q-q_0}.$$ Then, due to Proposition \[PropDeltaLine\], $$\lim_{\hbar\to0}W_{q_0,p_0}^{(\hbar)}(q,p)= \delta(q-q_0-2\pi n_{q-q_0})\delta(p-p_0),$$ or, equivalently, $$\lim_{\hbar\to0}W_{q_0,p_0}^{(\hbar)}(q,p)= \sum_{n\in\mathbb Z^d}\delta(q-q_0-2\pi n)\delta(p-p_0).$$

We mentioned that the Gaussian coherent states on $\mathbb{R}^d$ minimize the uncertainty relations. The uncertainty relations require modifications for compact manifolds (e.g., the torus) and bounded domains (e.g., the infinite square well). There are several analogues of the uncertainty relations for these cases. The Gaussian coherent states on the torus minimize a variant of the uncertainty relations for the torus [@KR96; @KR07; @KRUncert]. Some estimates of the standard deviations of position and momentum have also been obtained in [@VolTrush-Trudy].
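To illustrate, the near-unit norm of the wrapped coherent states can be confirmed numerically in one dimension (a sketch assuming the Gaussian $\varphi$ of (\[EqGauss\]) and $\alpha_\hbar=\sqrt{\hbar}$; the function name is ours):

```python
import numpy as np

def torus_coherent_state(x, q, p, hbar, n_images=3):
    """Wrapped Gaussian coherent state on the 1-d torus, cf. (EqCoher):
    upsilon(x) = sum_n eta(x - 2*pi*n), with eta as in (EqCoherRd)."""
    alpha = np.sqrt(hbar)
    v = np.zeros_like(x, dtype=complex)
    for n in range(-n_images, n_images + 1):
        xs = x - 2 * np.pi * n
        phi = (2 * np.pi) ** (-0.25) * np.exp(-((xs - q) / alpha) ** 2 / 4)
        v += phi / np.sqrt(alpha) * np.exp(1j * p * (xs - q) / hbar)
    return v

# Riemann-sum L2 norm on the torus: tends to unity as hbar -> 0
x = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
v = torus_coherent_state(x, q=3.0, p=1.0, hbar=0.01)
norm_sq = np.sum(np.abs(v) ** 2) * (x[1] - x[0])
assert abs(norm_sq - 1.0) < 1e-3
```

The deviation from unity comes from the overlaps of neighbouring Gaussian images, which are exponentially small in $1/\alpha_\hbar^2$.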
Husimi distribution -------------------

For an arbitrary function $\psi\in L^2(\mathbb T^d)$ with unit norm, let us define the following probability distribution on the phase space $\Omega$: $$H_\psi(q,p)=\frac1{(2\pi\hbar)^d}|(\upsilon_{qp},\psi)|^2.$$ It is called the Husimi distribution (or the Husimi function) associated to $\psi$ [@QMPS; @Hillery]. In contrast to the Wigner distribution, the Husimi distribution is positive by construction. The normalization condition $$\int_\Omega H_\psi(q,p)\,dqdp=1$$ is satisfied due to resolution of identity (\[EqUnity\]) by the coherent states. However, the marginal position and momentum distributions do not coincide with the original quantum-mechanical distributions (in contrast to (\[EqWignerProp\]), (\[EqWignerProp2\])). Nevertheless, the Husimi distribution has a direct physical meaning – see Remark \[RemHusimi\] below. The tomographic representation of quantum mechanics was proposed [@Manko; @MankoTMF] to overcome the drawbacks of the various phase space distributions corresponding to quantum states [@QMPS]. In the tomographic representation, not a single distribution, but a family of probability distributions is assigned to a quantum state. Relations between the tomographic representation and the Husimi distribution are considered in [@MankoHusimi]. The notion of semiclassical measures can be equivalently reformulated in terms of the Husimi distribution. Again, consider a family of functions $\{\psi_\hbar\}$ and denote their Husimi distributions by $H_\hbar$. Consider the limit $$\label{EqSemiMH} \lim_{\hbar\to0}\int_\Omega H_{\hbar}(q,p)a(q,p)\,dqdp.$$ It turns out that this limit coincides with (\[EqSemiM\]). The definition of the semiclassical measure given in [@Martinez] is based exactly on the limit (\[EqSemiMH\]) for the Husimi distributions, but only the case of $\mathbb R^d$ with Gaussian coherent states is considered there.
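The positivity and normalization of the Husimi distribution noted above can be checked numerically for a plane wave $\psi(x)=e^{ik_0x}/\sqrt{2\pi}$ in one dimension (a self-contained sketch with the Gaussian $\varphi$ and $\alpha_\hbar=\sqrt{\hbar}$; names are ours):

```python
import numpy as np

hbar, k0 = 0.05, 3
alpha = np.sqrt(hbar)
x = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(1j * k0 * x) / np.sqrt(2 * np.pi)   # Fourier mode k0 on the torus

def coherent(q, p):
    # wrapped Gaussian coherent state (EqCoher) sampled on the grid
    v = np.zeros_like(x, dtype=complex)
    for n in (-1, 0, 1):
        xs = x - 2 * np.pi * n
        phi = (2 * np.pi) ** (-0.25) * np.exp(-((xs - q) / alpha) ** 2 / 4)
        v += phi / np.sqrt(alpha) * np.exp(1j * p * (xs - q) / hbar)
    return v

def husimi(q, p):
    # H_psi(q, p) = |(upsilon_qp, psi)|^2 / (2*pi*hbar)
    overlap = np.sum(np.conj(coherent(q, p)) * psi) * dx
    return abs(overlap) ** 2 / (2 * np.pi * hbar)

# H is non-negative, flat in q, concentrated near p = hbar*k0,
# and integrates to approximately 1 over the phase space
qs = np.linspace(0, 2 * np.pi, 20, endpoint=False)
ps = np.linspace(hbar * k0 - 1, hbar * k0 + 1, 201)
total = sum(husimi(q, p) for q in qs for p in ps) \
        * (qs[1] - qs[0]) * (ps[1] - ps[0])
assert abs(total - 1.0) < 0.05
```

In line with the smeared momentum marginal, the mass concentrates around $p=\hbar k_0$ on a scale of order $\hbar/\alpha_\hbar$ rather than sitting on the exact delta of (\[EqWignerProp2\]).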
Let us prove the equivalence of definitions (\[EqSemiM\]) and (\[EqSemiMH\]) for the case of the torus and for arbitrary coherent states of form (\[EqCoher\]), (\[EqCoherRd\]). We need an additional property.

\[PropSmear\] The Husimi distribution of an arbitrary function $\psi\in L^2(\mathbb T^d)$ can be expressed as $$\label{EqHusimiWigner} H_\psi(q,p)=\int_{\mathbb R^{2d}}W^{(\hbar)}_{q,p}(q',p')W_\psi(q',p')\,dq'dp',$$ where $W^{(\hbar)}_{q,p}$ and $W_\psi$ denote the Wigner distributions of the coherent state $\upsilon^{(\hbar)}_{q,p}$ and of the function $\psi$, respectively.

This is a known relation between the Wigner and Husimi functions in $\mathbb R^d$ [@Hillery; @McKenna]; let us prove it for $\mathbb T^d$. First of all, let us note that, in view of (\[EqWdelta\]), the Wigner distribution $W^{(\hbar)}_{q,p}$ of the coherent state is a smooth function (a regular distribution), so expression (\[EqHusimiWigner\]) is well-defined (as the action of the generalized function $W_\psi$ on the test function $W^{(\hbar)}_{q,p}$).
We have $$\begin{aligned} \fl H_\psi(q,p)=\frac1{(2\pi\hbar\alpha_\hbar)^d} \sum_{n,m\in\mathbb Z^d} \int_{\mathbb T^{2d}}dxdy\,\overline{\psi(x)}\psi(y)\varphi\left(\frac{x-2\pi n-q}{\alpha_\hbar}\right) \overline{\varphi\left(\frac{y-2\pi m-q}{\alpha_\hbar}\right)}\\ \qquad\qquad\qquad\qquad\:\:\,\times\exp\left\lbrace\frac{ip[(x-2\pi n)-(y-2\pi m)]}\hbar\right\rbrace \\= \frac1{(2\pi\hbar\alpha_\hbar)^d} \int_{\mathbb R^{2d}}dxdy\, \overline{\psi(x)}\psi(y)\varphi\left(\frac{x-q}{\alpha_\hbar}\right) \overline{\varphi\left(\frac{y-q}{\alpha_\hbar}\right)}\exp\left[\frac{ip(x-y)}\hbar\right]\\= \frac1{(\pi\hbar\alpha_\hbar)^d} \int_{\mathbb R^{2d}}dq'dx\, \overline{\psi(q'+x)}\psi(q'-x)\varphi\left(\frac{q'-q+x}{\alpha_\hbar}\right) \overline{\varphi\left(\frac{q'-q-x}{\alpha_\hbar}\right)}\\\qquad\qquad\quad\:\times\exp\left[\frac{2ipx}\hbar\right]\\= \int_{\mathbb R^{2d}}dq'dp' \int_{\mathbb R^d}\frac{dx}{(\pi\hbar)^d} \overline{\psi(q'+x)}\psi(q'-x)\exp\left[\frac{2ip'x}\hbar\right] \\\qquad\quad\times \int_{\mathbb R^d}\frac{dy}{(\pi\hbar\alpha_\hbar)^d} \varphi\left(\frac{q'-q-y}{\alpha_\hbar}\right) \overline{\varphi\left(\frac{q'-q+y}{\alpha_\hbar}\right)}\exp\left[\frac{2i(p'-p)y}\hbar\right] \\=\int_{\mathbb R^{2d}}W_\psi(q',p')W^{(\hbar)}_{q,p}(q',p')\,dq'dp',\end{aligned}$$ Q.E.D. Due to (\[EqWquni\])–(\[EqWpuni\]), (\[EqHusimiWigner\]) can be rewritten as $$%\label{EqHusimiWignerConv} H_\psi(q,p)=\int_{\mathbb R^{2d}}W^{(\hbar)}_{0,0}(q'-q,p'-p)W_\psi(q',p')\,dq'dp'+o(1),\quad \hbar\to0.$$ As a corollary of this formula and Proposition \[PropDelta\], we have the following. The existence of limit (\[EqSemiM\]) is equivalent to the existence of limit (\[EqSemiMH\]), and both limits coincide. \[RemHusimi\] The Husimi distribution has a direct physical meaning. 
Consider the probability operator-valued measure $M$ defined as $$M(B)=\frac1{(2\pi\hbar)^d}\int_B P[\upsilon^{(\hbar)}_{qp}]\,dqdp,$$ where $B\subset\Omega$ is an arbitrary Borel set on the phase space. According to the formalism of quantum mechanics [@Holevo], it can be interpreted as an observable corresponding to simultaneous approximate measurements of position and momentum. Observables of this type were introduced by von Neumann [@Neumann]. His motivation was as follows. In classical mechanics, simultaneous measurements of position and momentum are possible. Hence, the correspondence principle requires this to be possible approximately also in quantum mechanics (with the errors of the measurements tending to zero as $\hbar\to0$). If we choose the Gaussian function $\varphi(x)$ (\[EqGauss\]), then the product of the errors of the measurements of position and momentum is $\hbar/2$, which minimizes the uncertainty relations. We see that the Husimi distribution is nothing else but the distribution of the outcomes of such measurements.

Main results {#SecMain} ============

Let us denote by $$\upsilon^{(\hbar)}_{q_0p_0,t}=\exp(i\hbar t\Delta)\upsilon^{(\hbar)}_{q_0p_0}$$ the wave packet evolved over the time $t\in\mathbb R$, and by $$H_{q_0,p_0,t}(q,p)=\frac1{(2\pi\hbar)^d}|(\upsilon_{qp},\upsilon_{q_0p_0,t})|^2$$ the corresponding Husimi distribution. Also recall that $T_\hbar$ denotes the revival time (\[EqT\]).

\[ThMain\] Consider a real-valued function $t_\hbar$ of $\hbar$ such that $\hbar (t_\hbar-AT_\hbar)\to0$ and $\hbar (t_\hbar-AT_\hbar)/\alpha_\hbar\to B$, where $A\in\mathbb R$, $B\in[0,\infty]$. Another possibility is $\hbar t_\hbar\to+\infty$.
Then:

1\) If $B=\infty$, or $A$ is irrational, or $\hbar t_\hbar\to+\infty$, then $$\label{EqLimFlat} \lim_{\hbar\to0}H_{q_0,p_0,t_\hbar}(q,p)= \frac1{(2\pi)^d}\,\delta(p-p_0);$$

2\) If $B<\infty$ and $A=\frac MN$ is rational (expressed as an irreducible fraction), then $$\begin{aligned} \fl \lim_{\hbar\to0} \lbrace H_{q_0,p_0,t_\hbar}(q,p)\nonumber\\- \frac1{N'}\sum_{k\in[N']^d}\delta_B\left(q-q_0-2p_0(t_\hbar-AT_\hbar)-\Delta q_0+\frac{2\pi k}{N'}\right)\delta(p-p_0)\label{EqLim} \rbrace=0.\end{aligned}$$ Here $$\delta_B(q)=\frac1{(2\pi)^d}\sum_{j\in\mathbb Z^d}\sigma_{Bj}\exp(ijq),$$ $$\sigma_{R}=\int_{\mathbb R^d}\varphi(x)\overline{\varphi(x-2R)}\,dx,$$ $[N']=\{0,1,2,\ldots,N'-1\}$, $$N'=\cases{ N,&odd $N$,\\ \frac N2,&even $N$,} \qquad\qquad \Delta q_0=\cases{\frac{2\pi}{N} I,&$N\equiv2\pmod 4$,\\ 0,&otherwise,}$$ where $I=(1,1,\ldots,1)\in\mathbb Z^d$.

\[ThMu\] The semiclassical measures $\mu$ corresponding to the time-evolved wave packets $\upsilon^{(\hbar)}_{qp,t_\hbar}$ are as follows:

1\) Let $t_\hbar$ be as in Theorem \[ThMain\] with $B=\infty$, or $A$ irrational, or $\hbar t_\hbar\to+\infty$. Then $$\label{EqMuFlat} \mu(dqdp)=\frac1{(2\pi)^d}\,\delta(p-p_0)\,dqdp;$$

2\) Let $t_\hbar$ be as in Theorem \[ThMain\] with $B<\infty$, $A=\frac MN$, and $p_0=0$. Then $$%\label{EqMu} \mu(dqdp)= \frac1{N'}\sum_{k\in[N']^d} \delta_B\left(q-q_0-\Delta q_0+\frac{2\pi k}{N'}\right)\delta(p)\,dqdp;$$

3\) Let $t_\hbar$ be a real-valued function of $\hbar$ such that $t_\hbar-\frac MNT_\hbar\to\tau\in\mathbb R$. Then $$%\label{EqMu} \mu(dqdp)= \frac1{N'}\sum_{k\in[N']^d} \delta\left(q-q_0-2p_0\tau-\Delta q_0+\frac{2\pi k}{N'}\right)\delta(p-p_0)\,dqdp.$$

\[ThTime\] Consider the function $t_\hbar=\lambda_\hbar t$, where $\lambda_\hbar\to\infty$ as $\hbar\to0$.
1\) If $\hbar\lambda_\hbar/\alpha_\hbar\to\infty$ as $\hbar\to0$, then, for all functions $a\in C^\infty_0(\Omega)$ and $b\in L^1(\mathbb R)$, there exists the limit $$\label{EqLimt} \lim_{\hbar\to0}\int_{\Omega\times\mathbb R}a(q,p)b(t)H_{q_0,p_0,\lambda_\hbar t}(q,p)\,dqdpdt=\langle a\rangle(p_0)\int_{\mathbb R}b(t)\,dt,$$ where $$\langle a\rangle(p_0)=\frac1{(2\pi)^d}\int_{\mathbb T^d}a(q,p_0)\,dq.$$ 2\) If $\hbar\lambda_\hbar/\alpha_\hbar\to B\in[0,+\infty)$, then, for all functions $a\in C^\infty_0(\Omega)$ and $b\in L^1(\mathbb R)$, there exists the limit $$\label{EqLimt2}\fl \lim_{\hbar\to0}\int_{\Omega\times\mathbb R}a(q,p)b(t)H_{q_0,p_0,\lambda_\hbar t}(q,p)\,dqdpdt=\lim_{T\to\infty}\frac1T\int_0^T a^{(b,B)}(q+p_0t,p_0)\,dt,$$ where $$\fl a^{(b,B)}(q,p)=\frac1{(2\pi)^{\frac d2}}\sum_{j\in\mathbb Z^d}a_j(p)\left[\int_{\mathbb R^d}dx\int_{-\infty}^{+\infty}b(t)\varphi(x)\overline{\varphi(x-2Btj)}\,dt\right]\exp(ijq).$$ In particular, if $B=0$, then $$\label{EqLimt3}\fl \lim_{\hbar\to0}\int_{\Omega\times\mathbb R}a(q,p)b(t)H_{q_0,p_0,\lambda_\hbar t}(q,p)\,dqdpdt=\lim_{T\to\infty}\frac1T\int_0^T a(q+p_0t,p_0)dt \int_{-\infty}^{+\infty}b(t)dt.$$ If $p_0$ does not belong to the “resonant” set $$\label{EqReson} R=\{p\in\mathbb R^d\,|\,jp=0 \textrm{ for some } j\in\mathbb Z^d\backslash\{0\}\},$$ then limits (\[EqLimt2\]) and (\[EqLimt3\]) are reduced to (\[EqLimt\]). 
Let us find the Fourier coefficients of coherent states $\upsilon^{(\hbar)}_{q_0p_0}(x)$: $$\upsilon^{(\hbar)}_{q_0p_0}(x)= \frac1{{(2\pi)}^{\frac d2}}\sum_{k\in\mathbb Z^d}c^{(\hbar)}_{k,qp}\exp(ikx).$$ We have $$\begin{aligned} c^{(\hbar)}_{k,qp}&=\frac1{(2\pi)^{\frac d2}}\int_{\mathbb T^d}\upsilon^{(\hbar)}_{qp}(x)\exp(-ikx)\,dx\\&= \frac1{(2\pi)^{\frac d2}}\sum_{n\in\mathbb Z^d}\int_{\mathbb T^d}\eta^{(\hbar)}_{qp}(x-2\pi n)\exp[-ik(x-2\pi n)]\,dx\\&= \frac1{(2\pi)^{\frac d2}}\int_{\mathbb R^d}\eta^{(\hbar)}_{qp}(x)\exp(-ikx)\,dx\\&= \left(\frac{\alpha_\hbar}{2\pi}\right)^{\frac d2}\int_{\mathbb R^d}\varphi(x) \exp\left\lbrace i(\alpha_\hbar x+q)\left(\frac p\hbar-k\right)-\frac{ipq}\hbar\right\rbrace dx.\end{aligned}$$ Using this formula, we can calculate the scalar product $$(\upsilon_{qp},\upsilon_{q_0p_0,t})= \sum_{k\in\mathbb Z^d} \overline{c^{(\hbar)}_{k,qp}}c^{(\hbar)}_{k,q_0p_0}\exp(-i\hbar tk^2).$$ Let a function $a(q,p)\in C^\infty_0(\Omega)$ be expanded into the Fourier series and the Fourier integral as follows: $$\fl a(q,p)=\frac1{(2\pi)^{\frac d2}} \sum_{j\in\mathbb Z^d}a_j(p)\exp(ijq)= \frac1{(2\pi)^d}\sum_{j\in\mathbb Z^d}\int_{\mathbb R^d} \tilde a_j(\xi)\exp(ijq+i\xi p)d\xi.$$ Then $$\begin{aligned} \label{EqAFourier}\fl \int_\Omega H_{q_0,p_0,t_\hbar}^{(\hbar)}(q,p)a(q,p)\,dqdp\\= \frac1{(2\pi\sqrt\hbar)^{2d}}\sum_{j\in\mathbb Z^d}\int_{\mathbb R^d}d\xi \tilde a_j(\xi) \int_\Omega |(\upsilon_{qp},\upsilon_{q_0p_0,t_\hbar})|^2\exp(ijq+i\xi p)\,dqdp.\end{aligned}$$ Let us calculate $$\begin{aligned} \frac1{(2\pi\hbar)^d}\int_\Omega |(\upsilon_{qp},\upsilon_{q_0p_0,t_\hbar})|^2\exp(ijq+i\xi p)dqdp=\\= \frac1{(2\pi\hbar)^d}\left(\frac{\alpha_\hbar}{2\pi}\right)^{2d} \sum_{k,n\in\mathbb Z^d} \int_{\mathbb R^{4d}}dxdx'dydy'\int_\Omega dqdp\, \overline{\varphi(x)}\varphi(y)\varphi(x')\overline{\varphi(y')} \quad\\\times \exp\left\lbrace -i(\alpha_\hbar x+q)\left(\frac p\hbar-k\right) +i(\alpha_\hbar y+q_0)\left(\frac {p_0}\hbar-k\right)-i\hbar t_\hbar
k^2 +\right.\\\left. \quad+i(\alpha_\hbar x'+q)\left(\frac p\hbar-n\right) -i(\alpha_\hbar y'+q_0)\left(\frac {p_0}\hbar-n\right)+i\hbar t_\hbar n^2 +ijq+i\xi p \right\rbrace.\end{aligned}$$ The integration over $p$ yields the factor $$(2\pi)^d\delta\left(\xi-\frac{\alpha_\hbar}\hbar(x-x')\right)= \left(\frac{2\pi\hbar}{\alpha_\hbar}\right)^d \delta\left(x'-x+\frac{\hbar\xi}{\alpha_\hbar}\right).$$ The integration over $q$ yields the factor $(2\pi)^d\delta_{j+k-n}$, where $\delta_x$ is the Kronecker symbol ($\delta_x=1$ if $x=0$ and $\delta_x=0$ otherwise). Thus, the integration over $x'$ and the summation over $n$ can be eliminated with the substitutions $x'=x-\frac{\hbar\xi}{\alpha_\hbar}$ and $n=k+j$. We have $$\begin{aligned} \fl \frac1{(2\pi\hbar)^d}\int_\Omega |(\upsilon_{qp},\upsilon_{q_0p_0,t_\hbar})|^2\exp(ijq+i\xi p)dqdp\\= \left(\frac{\alpha_\hbar}{2\pi}\right)^{d} \sum_{k\in\mathbb Z^d} \int_{\mathbb R^{3d}}dxdydy'\, \overline{\varphi(x)}\varphi(y)\varphi\left(x-\frac{\hbar\xi}{\alpha_\hbar}\right)\overline{\varphi(y')} \\\times \exp\Big\lbrace \frac{i\alpha_\hbar}{\hbar}p_0(y-y')+i k[\alpha_\hbar(y'-y)+2\hbar t_\hbar j+\hbar\xi] - i\alpha_\hbar(x-y')j\\ \qquad\:\:+ij(q_0+\hbar t_\hbar j)+i\hbar j\xi \Big\rbrace.\end{aligned}$$ Here we can drop the infinitesimal terms $i\hbar j\xi$ and $i\alpha_\hbar(x-y')j$ in the exponent.
Also note that $$\lim_{\hbar\to0}\int_{\mathbb R^d}\overline{\varphi(x)}\varphi\left(x-\frac{\hbar\xi}{\alpha_\hbar}\right)dx=\int_{\mathbb R^d}|\varphi(x)|^2dx=1.$$ Further, the summation over $k$ yields the factor $$\begin{aligned} &(2\pi)^d\sum_{k\in\mathbb Z^d} \delta(\alpha_\hbar(y'-y)+2\hbar t_\hbar j+2\pi k+\hbar\xi)\\= &\left(\frac{2\pi}{\alpha_\hbar}\right)^d\sum_{k\in\mathbb Z^d}\delta\left(y'-y+\frac{2\hbar t_\hbar j+2\pi k+\hbar\xi}{\alpha_\hbar}\right).\end{aligned}$$ Eliminating the integration over $y'$ with the substitution $y'=y-(2\hbar t_\hbar j+2\pi k+\hbar\xi)/\alpha_\hbar$, we obtain $$\begin{aligned} \fl \lim_{\hbar\to0}\Big[\frac1{(2\pi\hbar)^d}\int_\Omega |(\upsilon_{qp},\upsilon_{q_0p_0,t_\hbar})|^2\exp(ijq+i\xi p)dqdp-\\ \sum_{k\in\mathbb Z^d} \int_{\mathbb R^d}dy\,\varphi(y)\overline{\varphi\left(y-\frac{2\hbar t_\hbar j+2\pi k+\hbar\xi}{\alpha_\hbar}\right)}\\ \qquad\quad\:\:\times \exp\left\lbrace ij(q_0+2p_0t_\hbar)+i\xi p_0+2\pi ik\frac{p_0}\hbar+i\hbar t_\hbar j^2 \right\rbrace\Big]=0.\end{aligned}$$ If we drop the infinitesimal term $\hbar\xi/\alpha_\hbar$ in the argument of the function $\overline\varphi$ and substitute the result into formula (\[EqAFourier\]), we will obtain $$\begin{aligned} \fl \lim_{\hbar\to0}\Big[ \int_\Omega H^{(\hbar)}_{q_0,p_0,t_\hbar}(q,p)a(q,p)\,dqdp\nonumber\\ -\frac1{(2\pi)^{\frac d2}}\sum_{j,k\in\mathbb Z^d}a_j(p_0)\int_{\mathbb R^d}dx\,\varphi(x)\overline{\varphi\left(x-\frac{2\hbar t_\hbar j+2\pi k}{\alpha_\hbar}\right)}\nonumber\\\qquad\qquad\qquad\times \exp\left\lbrace ij(q_0+2p_0t_\hbar)+2\pi ik\frac{p_0}\hbar+i\hbar t_\hbar j^2 \right\rbrace\Big]=0.\label{EqFinal}\end{aligned}$$ Now consider all limiting cases. First, as we see, we have the Dirac measure for the momentum, which was expected due to momentum conservation.
If $A$ is a whole number and $B<\infty$, then, in the summation over $k$ in (\[EqFinal\]), only the term with $k=-2Aj$ survives (otherwise the integral over $x$ tends to zero due to the rapid decrease of $\varphi$). We can see that we obtain formula (\[EqLim\]) for the corresponding case. If $B=\infty$, or $A$ is irrational, or $\hbar t_\hbar\to\infty$, then, in the double sum in (\[EqFinal\]), only the term with $j=k=0$ remains non-zero in the limit. This corresponds to the uniform spatial distribution. So, we obtain formula (\[EqLimFlat\]). Let now $A=\frac MN$ (a rational number expressed as an irreducible fraction) and $B<\infty$. The integral in (\[EqFinal\]) does not tend to zero if and only if $\frac{2Mj}N+k=0$. Accordingly, in the summation over $j$, only the terms with $j=N'\ell$, $\ell\in\mathbb Z^d$, remain non-zero in the limit, where $N'=N$ for odd $N$ and $N'=\frac N2$ for even $N$. In the summation over $k$, only the term with $k=-\frac{2Mj}N$ remains non-zero in the limit. Consider the term $i\hbar t_\hbar j^2$ in the exponent in the right-hand side of (\[EqFinal\]). If $N$ is odd then $$\hbar t_\hbar j^2\sim2\pi\frac MN(N\ell)^2=2\pi MN\ell^2\in2\pi\mathbb Z$$ (we write $f\sim g$ whenever $\lim\frac fg=1$) and this term may be dropped. If $N$ is even, then $$\hbar t_\hbar j^2\sim2\pi \frac MN\left(\frac{N\ell}2\right)^2=\pi\frac{MN\ell^2}2.$$ If $N$ is divisible by four, then this number again belongs to $2\pi\mathbb Z$ and may be dropped.
If $N$ is even, but not divisible by four, then $M$ is odd and $$\exp\left\lbrace i\hbar t_\hbar j^2 \right\rbrace \sim\exp\left\lbrace i\pi\frac{MN\ell^2}2\right\rbrace=(-1)^{N'\ell I}= \exp\left\lbrace i\pi N'\ell I\right\rbrace.$$ Thus, the second term in the limiting expression in (\[EqFinal\]) can be rewritten as $$\begin{aligned} \label{EqSums} \frac1{(2\pi)^{\frac d2}}&\sum_{\ell\in\mathbb Z^d}\sigma_{BN'\ell}a_{N'\ell}(p_0) \exp\left\lbrace iN'\ell[q_0+2p_0(t_\hbar-AT_\hbar)]+\gamma i\pi N'\ell I \right\rbrace\nonumber\\= \frac1{N'}&\sum_{k\in[N']^d}a^{(B)}\left(q_0+2p_0(t_\hbar-AT_\hbar)+\gamma\pi I+\frac{2\pi k}{N'},p_0\right)\nonumber\\= \frac1{N'}&\sum_{k\in[N']^d}a^{(B)}\left(q_0+2p_0(t_\hbar-AT_\hbar)+\Delta q_0+\frac{2\pi k}{N'},p_0\right),\end{aligned}$$ where $\gamma=1$ if $N\equiv2\pmod 4$ and $\gamma=0$ otherwise; $$\label{Eqab} a^{(B)}(q,p)=\frac1{(2\pi)^{\frac d2}}\sum_{j\in\mathbb Z^d}\sigma_{Bj}a_j(p)\exp(ijq).$$ To verify the first equality in (\[EqSums\]), we can use formula (\[Eqab\]) and see that all terms except those with $j=N'\ell$ cancel. The replacement of $\pi$ by $\frac{2\pi}N$ in the second equality in (\[EqSums\]) (recall that $\Delta q_0=\gamma\frac{2\pi}NI$) is valid since, with $k=(N'+1)/2$, $$\pi+\frac{2\pi k}{N'}=\pi+\frac{2\pi}{N'}\frac{N'+1}2=2\pi+\frac{2\pi}N,$$ and $a^{(B)}(q,p)$ is $(2\pi\mathbb Z^d)$-periodic with respect to $q$. Finally, we obtain $$\begin{aligned} \fl \lim_{\hbar\to0}\bigg[ \int_\Omega H^{(\hbar)}_{q_0,p_0,t_\hbar}(q,p)a(q,p)\,dqdp\\-\frac1{N'}\sum_{k\in[N']^d}a^{(B)}\left(q_0+2p_0(t_\hbar-AT_\hbar)+\Delta q_0+\frac{2\pi k}{N'},p_0\right)\bigg]=0,\end{aligned}$$ i.e., formula (\[EqLim\]). Thus, the theorem has been completely proved. Theorem \[ThMu\] is a direct corollary of Theorem \[ThMain\]. Consider the first case.
According to Theorem \[ThMain\], $$%\label{EqThTime1} \lim_{\hbar\to0}\int_\Omega H_{q_0,p_0,\lambda_\hbar t}^{(\hbar)}(q,p)a(q,p)dqdp= \langle a\rangle(p_0)$$ for all $t$ (if $\hbar \lambda_\hbar\to0$ or $\hbar \lambda_\hbar\to\infty$) or for irrational $t$ (if $\hbar \lambda_\hbar\to c\in(0,\infty)$). Since the rational numbers have measure zero on the real line, in either case $$\label{EqThTime2} \lim_{\hbar\to0}\int_{-\infty}^{+\infty}dt\,b(t)\int_\Omega H_{q_0,p_0,\lambda_\hbar t}^{(\hbar)}(q,p)a(q,p)dqdp= \langle a\rangle(p_0)\int_{-\infty}^{+\infty}b(t)\,dt.$$ Consider the second case. Let us rewrite formula (\[EqFinal\]) for this case (recall that the terms with $k\neq0$ vanish in this limiting case): $$\begin{aligned} \lim_{\hbar\to0}\Big[ \int_{\Omega\times\mathbb R}H^{(\hbar)}_{q_0,p_0,t_\hbar}(q,p)a(q,p)b(t)\,dqdpdt\nonumber\\\qquad-\frac1{(2\pi)^{\frac d2}}\sum_{j\in\mathbb Z^d}a_j(p_0)\int_{\mathbb R^d}dx\int_{-\infty}^{+\infty}dt\,b(t)\varphi(x)\overline{\varphi\left(x-\frac{2\hbar \lambda_\hbar tj}{\alpha_\hbar}\right)}\nonumber\\\qquad\qquad\qquad\times \exp\left\lbrace ij(q_0+2p_0\lambda_\hbar t)\right\rbrace\Big]=0.\label{EqThTime3}\end{aligned}$$ By the Riemann–Lebesgue theorem, the integral over $t$ tends to zero whenever $jp_0\neq0$, due to the term $2ijp_0\lambda_\hbar t$ in the exponent. Hence, $$\fl \lim_{\hbar\to0}\Big[ \int_{\Omega\times\mathbb R}H^{(\hbar)}_{q_0,p_0,t_\hbar}(q,p)a(q,p)b(t)\,dqdpdt-\frac1{(2\pi)^{\frac d2}}\sum_{j:\,jp_0=0}a^{(b,B)}_j(p_0) \exp(ijq_0)\Big]=0,$$ which can be rewritten as (\[EqLimt2\]). If $B=0$, then (\[EqLimt2\]) can obviously be rewritten as (\[EqLimt3\]). If $p_0$ does not belong to the resonant set, then, in (\[EqThTime3\]), only the term with $j=0$ remains non-zero in the limit. Since $$a_0(p)=\frac1{(2\pi)^{\frac d2}}\int_{\mathbb R^d}a(q,p)\,dq,$$ (\[EqLimt2\]) and (\[EqLimt3\]) can be rewritten as (\[EqLimt\]).
Discussion {#SecDiscus} ========== Three time scales ----------------- From Theorem \[ThMain\], three time scales can be deduced: 1. “Classical” time scale. If $t_\hbar=t=const$, or $t_\hbar\to\infty$ but $\hbar t_\hbar/\alpha_\hbar\to0$, then the wave packet moves along the classical trajectory: the second term in the limit (\[EqLim\]) has the form $$\delta(q-q_0-2p_0t_\hbar)\delta(p-p_0).$$ Conventionally, the classical “period” of motion $T_{cl}=\pi/{\overline p}$, where $\overline p=\frac1d\sum_{j=1}^d p_j$ is the mean momentum for $p=(p_1,\ldots,p_d)\in\mathbb R^d$, can be chosen as a characteristic duration of this time scale. 2. $T_{coll}=\alpha_\hbar/\hbar$ is a characteristic time of the collapse of the wave packet. The rate of wave packet spreading is known to be proportional to the initial standard deviation of the momentum. The standard deviation of the Gaussian wave packet is equal to $\hbar/(2\alpha_\hbar)$; 3. $T_\hbar=2\pi/\hbar$ is the full revival time. The instants $\frac MNT_\hbar$ correspond to fractional revivals, i.e., the appearance of small copies of the wave packet at several points on the torus. The structure of fractional revivals for the general case of systems with discrete spectrum was elaborated in [@Averbuh; @Averbuh2]. A more detailed analysis for the infinite square well is given in [@AronStroud; @AronStroud00] (the motion in the infinite square well is equivalent to the free motion on the torus [@VolTrush]). For a further development of the general theory of fractional revivals see [@Schleich-prl; @Schleich-pra; @Robinett00; @Robinett; @AronStroud05]. The Ehrenfest time is $O(T_{coll})=O(\alpha_\hbar/\hbar)$. By a proper choice of $\alpha_\hbar$, we can make the Ehrenfest time arbitrarily close from below to $O(\hbar^{-1})$, or can make it arbitrarily small (but still increasing indefinitely as $\hbar\to0$), i.e., even smaller than $O(\ln\hbar^{-1})$.
Rational and irrational times ----------------------------- Theorems \[ThMain\] and \[ThMu\] distinguish rational and irrational $A$. However, every irrational $A$ can be approximated by rationals $M/N$, where $M\to\infty$ and $N\to\infty$ such that $M/N\to A$. Hence, rational (with large denominators $N$) and irrational $A$ should be physically indistinguishable. This is true in our case as well: if $N\to\infty$, then, according to (\[EqLim\]), the number of small copies of the wave packet tends to infinity and their centres are uniformly distributed on the torus. Thus, the spatial distribution produced by the sum of many tiny wave packets tends to the uniform distribution. So, the cases of rational $A=M/N$ with large $N$ and irrational $A$ are indeed physically indistinguishable if a measurement instrument has a finite precision. The distinction between rational and irrational times (in the units of $T_\hbar$) in the semiclassical limit reveals the relation of quantum mechanics to number-theoretic issues discovered in some other models [@Kara; @NumFactoring; @Morse]. Generalizations of semiclassical measures ----------------------------------------- To formulate the results in terms of semiclassical measures in Theorem \[ThMu\], we had to narrow the class of functions $t_\hbar$ (in comparison to Theorem \[ThMain\]). This is due to the term $2p_0(t_\hbar-AT_\hbar)$ in the argument of $\delta_B$ in (\[EqLim\]). Generally, this term itself has no limit. The cases considered in Theorem \[ThMu\] are related to different cases when this divergence is eliminated. This is possible either in the case of the uniform spatial distribution, when $\delta_B$ does not depend on the spatial arguments at all, or whenever $p_0=0$, or whenever $t_\hbar-AT_\hbar$ converges to a constant. Another way of obtaining convergent expressions for semiclassical measures is time-averaging. This approach was used in [@MaciaRiemann; @MaciaTorus; @AnaMaciaTorus; @AnaMaciaView; @Ana14].
We consider it in Theorem \[ThTime\]. Let us reformulate this theorem from a more general viewpoint developed in the aforementioned works. Let $\{\psi_\hbar\}$ be a family of functions; $t_\hbar=\lambda_\hbar t$, where $t\in\mathbb R$, and $\lambda_\hbar\to\infty$ as $\hbar\to0$. Denote by $W_\hbar(q,p,t)$ the Wigner distribution of the function $\exp(-i\lambda_\hbar t\Delta)\psi_\hbar$. Then, if for all functions $a\in C^\infty_0(\Omega)$ and $b\in L^1(\mathbb R)$ there exists the limit $$\label{EqSemiMt} \lim_{\hbar\to0}\int_{\Omega\times\mathbb R}a(q,p)b(t)W_\hbar(q,p,t)\,dqdpdt= \int_{\Omega\times\mathbb R}a(q,p)b(t)\mu_t(dqdp)dt,$$ where the time-dependent measure $\mu_t(\Omega)$ is finite and bounded as a function of $t$, then $\mu_t$ is also called the (time-dependent) semiclassical measure. According to Theorem \[ThTime\], for coherent states, if $\hbar\lambda_\hbar/\alpha_\hbar\to\infty$ or $p_0$ does not belong to the set of resonant frequencies, we have $$\mu_t(dqdp)=\frac1{(2\pi)^d}\delta(p-p_0)\,dqdp$$ for all $t$, i.e., the uniform spatial distribution. However, as we see, this approach does not distinguish all three time scales. If $\hbar\lambda_\hbar/\alpha_\hbar\to0$, then we have the classical time scale; if $\hbar\lambda_\hbar/\alpha_\hbar\to B\in(0,\infty)$, or $\hbar\lambda_\hbar/\alpha_\hbar\to \infty$ but $\hbar\lambda_\hbar\to0$, then we have the collapse time scale; if $\hbar\lambda_\hbar\to A>0$, then we have the revival time scale. All three cases give the uniform spatial distribution in the case of time-averaging, but the reasons are different. If $\hbar\lambda_\hbar/\alpha_\hbar\to B\in[0,\infty)$, then the cause of the uniformity of the spatial distribution is the averaging over the classical trajectory (provided the mean momentum does not belong to the resonant set; this is a necessary condition in this case).
If $\hbar\lambda_\hbar/\alpha_\hbar\to \infty$, then the cause of the uniformity is not the averaging over the classical trajectory, but the actual collapse of the wave packet (and the uniformity takes place irrespective of whether the mean momentum belongs to the resonant set). Moreover, such an important and interesting wave phenomenon as wave packet revivals is completely missed by the approach based on time-averaging. This demonstrates the limitations of the approach to long-time quantum dynamics based on semiclassical measures. Other limitations were reviewed in [@Carles]. As an alternative, one can consider the approach of Theorem \[ThMain\], where, instead of limits of the Husimi distributions themselves, distributions equivalent to the Husimi distributions in the corresponding long-time semiclassical limits are considered. Also, we can try to modify the definition of the semiclassical measure by introducing a correction for the classical phase flow. Let us denote by $g^t(q,p)$ the displacement of the point $(q,p)$ by time $t$ along the classical phase trajectory. In the case of free motion on the torus, $g^t(q,p)=(q+2pt,p)$. If $\hbar(t_\hbar-T_\hbar)\to0$, then define $$\label{EqSemiMf} \lim_{\hbar\to0}\int_\Omega W_{\hbar}(g^{t_\hbar+AT_\hbar}(q,p))a(q,p)\,dqdp=\int_\Omega a(q,p)\omega(dqdp).$$ Then formula (\[EqLim\]) of Theorem \[ThMain\] takes the form $$%\label{EqMu} \omega(dqdp)= \frac1{N'}\sum_{k\in[N']^d} \delta_B\left(q-q_0-\Delta q_0+\frac{2\pi k}{N'}\right)\delta(p-p_0)\,dqdp.$$ An interesting question is the possibility of generalizing this approach. One of the difficulties is that the exact revival after some time $T_\hbar$ is a property only of quadratic Hamiltonians. In general, the dynamics of systems with discrete spectrum is not periodic, but almost periodic. Gaussian coherent states ------------------------ Let $\varphi$ be Gaussian (\[EqGauss\]).
Then $$\delta_B(q)=\frac1{(2\pi)^d}\sum_{j\in\mathbb Z^d}\exp\left\lbrace-\frac{(Bj)^2}2+ijq\right\rbrace= \theta\left(\frac q{2\pi},\frac{B^2}{2\pi}\right),$$ where $$%\label{EqTheta} \theta(x,\tau)=\sum_{k\in\mathbb Z^d}\exp\{-\pi\tau k^2+2\pi ikx\}$$ is the theta function of several variables, $x\in\mathbb R^d$, $\tau\in\mathbb C$, $\mathrm{Re}\,\tau>0$. Using the functional equation for the theta function [@Mum] $$%\label{EqThetaModular} \theta\left(\frac{x}{i\tau},\frac{1}{\tau}\right)=\tau^{\frac d2} \exp\left(\frac{\pi x^2}\tau\right)\theta(x,\tau),$$ we arrive at $$\delta_B(q)=\frac1{(2\pi B^2)^\frac d2}\sum_{n\in\mathbb Z^d} \exp\left\lbrace-\frac{(q-2\pi n)^2}{2B^2}\right\rbrace.$$ So, in this case, $B$ is the spatial standard deviation of the wave packet. We have thus reproduced the corresponding results of [@VolTrush]. Physically small parameters {#SecPhysSmall} --------------------------- We mentioned in Sec. \[SecPrelim\] that, physically, the Planck constant cannot tend to zero and one should speak about the smallness of certain dimensionless quantities.
In our case the condition $\hbar\to0$, $\alpha_\hbar\to0$, $\hbar/\alpha_\hbar\to0$ is equivalent to the condition that every time scale is much greater than the previous one, i.e., they are “well distinguishable”: $$T_{rev}\gg T_{coll}\gg T_{cl}.$$ In other words, we can say: - $\alpha_\hbar\ll 2\pi $ means that the spatial extension of the wave packet is much smaller than the size of the torus (this corresponds to $T_{coll} \ll T_{rev}$); - $\overline p\pi \gg \hbar$ means that the physical action related to a single revolution of the particle around the torus is much greater than the quantum of action (this corresponds to $T_{rev}\gg T_{cl}$); - $\overline p\alpha_\hbar \gg \hbar$ means that the action related to the motion of the center of the wave packet along its spatial extension is much larger than the quantum of action (this is a strengthening of the previous condition; it corresponds to $T_{coll} \gg T_{cl}$). Conclusions =========== We have obtained explicit expressions for semiclassical measures corresponding to all stages of evolution of quantum wave packets on the flat torus: classical-like motion, spreading and revivals of the wave packet. The second time scale is the Ehrenfest time scale and the third one is beyond it. These explicit expressions allow us to understand the limitations of the notion of a semiclassical measure and to propose some generalizations. The results can be applied to the particle in the infinite square well because its dynamics reduces to the dynamics on the flat torus [@VolTrush]. An interesting problem would be the calculation of semiclassical measures for more general potentials, for example, the Morse potential, as well as various multi-dimensional bounded domains and compact manifolds. Coherent states for the Morse potential were constructed in [@Angelova], the structure of revivals was studied in [@WangHeller; @Morse].
One can also consider quantum optimal control problems (with both coherent and incoherent controls [@PechIlyn; @PechTrush]) in the semiclassical long-time limit. Our method of research was the direct summation of series of eigenvectors for time-evolved wave packets, instead of the reduction of the quantum dynamics to the classical dynamics usually applied in semiclassical analysis. Though this method has already shown its effectiveness in the Jaynes–Cummings model [@Kara], its possibilities for the analysis of quantum-mechanical models are still underexplored. Acknowledgements {#acknowledgements .unnumbered} ================ The author is very grateful to M.V. Berry, S.Yu. Dobrokhotov, A.S. Holevo, J.R. Klauder, I.V. Volovich, and E.I. Zelenov for helpful suggestions, comments, and interest in this work. The work was supported by the Russian Science Foundation under grant 14-50-00005. References {#references .unnumbered} ========== [99]{} Anantharaman N, Fermanian-Kammerer C, Macia F Semiclassical completely integrable systems: long-time dynamics and observability via two-microlocal Wigner measures 2015 *Am. J. Math.* **137** 577–638 Anantharaman N and Macia F Semiclassical measures for the Schrödinger equation on the torus 2014 *J. Eur. Math. Soc.* **16** 1253–88 Anantharaman N and Macia F The dynamics of the Schrödinger flow from the point of view of semiclassical measures 2012 *Spectral Geometry, Proceedings of Symposia in Pure Mathematics* **84** (Providence: Amer. Math. Soc.) 93–116 Andreev V A, Davidovich D M, Davidovich L D, Davidovich M D, Man’ko V I and Man’ko M A 2011 A transformational property of the Husimi function and its relation to the Wigner function and symplectic tomograms *Theor. Math. Phys.* **166** 356–368. Angelova M and Hussin V 2008 Generalized and gaussian coherent states for the Morse potential *J. Phys. A: Math. 
Theor.* **41** 304016 Arkhipov A S, Lozovik Yu E, Man’ko V I and Sharapov V A 2005 Center-of-mass tomography and probability representation of quantum states for tunneling *Theor. Math. Phys.* **142** 311–323 Aronstein D L and Stroud C R 1997 Fractional wave-function revivals in the infinite square well *Phys. Rev. A* **55** 4526–37 Aronstein D L and Stroud C R 2000 Analytical investigation of revival phenomena in the finite square-well potential *Phys. Rev. A* **62** 022102 Aronstein D L and Stroud C R 2005 Phase-difference equations: A calculus for quantum revivals *Laser Physics* **15** 1496–507 Averbukh I Sh and Perelman N F 1989 Fractional revivals: Universality in the long-term evolution of quantum wave packets beyond the correspondence principle dynamics *Phys. Rev. Lett.* **139** 449–53. Averbukh I Sh and Perelman N F 1991 The dynamics of wave packets of highly-excited states of atoms and molecules *Soviet Phys. Uspekhi* **34** 572–591. Bambusi D, Graffi S, Paul T. 1999 Long time semiclassical approximation of quantum flows: a proof of the Ehrenfest time *Asymptot. Anal.* **21** 149–60 Berry M V Random renormalization in the semiclassical long-time limit of a precessing spin 1998 *Physica D* **33** 26–33 Bouzouina A, Robert D. Uniform semiclassical estimates for the propagation of quantum observables 2002 *Duke Math. J.* **111** 223–52 Carles R, Fermanian-Kammerer C, Mauser N J and Stimming H P On the time evolution of Wigner measures for Schrödinger equations 2009 *Commun. Pure Appl. Anal.* **8** 559–85 Combescure M and Robert D. 1997 Semiclassical spreading of quantum wave packets and applications near unstable fixed points of the classical flow *Asymptot. 
Anal.* **14** 377–404 De Bièvre S and González J A 1993 Semiclassical behaviour of coherent states on the circle, in: Ali S T, Ladanov I M and Odzijewicz A, editors, Quantization and Coherent States Methods in Mathematical Physics, Proceedings of 11th Workshop on Geometrical Methods in Mathematical Physics, Bialystok 1992 (Singapore: World Scientific) Gérard P Mesures semi-classiques et ondes de Bloch 1991 *Séminaire Équations aux dérivées partielles (Polytechnique) (1990-1991). École Polytech.* Exp. N. 16. P. 1–19. Gilowski M, Wendrich T, Müller T, Jentsch C, Ertmer W, Rasel E M, Schleich W P Gauss sum factorization with cold atoms *Phys. Rev. Lett.* **100** 030201 González J A and del Olmo M A 1998 Coherent states on the circle *J. Phys. A* **31** 8841–57. de Gosson M A 2008 *J. Phys. A: Math. Theor.* **41** 095202 Hagedorn G A, Joye A Semiclassical dynamics with exponentially small error estimates 1999 *Comm. Math. Phys.* **207** 439–65 Hagedorn G A, Joye A Exponentially accurate semiclassical dynamics: propagation, localization, Ehrenfest times, scattering, and more general states 2000 *Ann. Henri Poincaré* **1** 837–83 Hillery M, O’Connell R F, Scully M O and Wigner E P 1984 Distribution functions in physics: fundamentals *Phys. Rep.* **106** 121–167 Holevo A S 2001 Statistical Structure of Quantum Theory (Berlin:Springer-Verlag) Jin S, Markowich P and Sparber C Mathematical and computational methods for semiclassical Schrödinger equations 2011 *Acta Numerica* **20** 121–209 Karatsuba A A and Karatsuba E A 2009 A resummation formula for collapse and revival in the Jaynes–Cummings model *J. Phys. A: Math. Theor.* **42** 195304 Klauder J R and Skagerstam B-S 1985 Coherent states. Applications in physics and mathematical physics (Singapore: World Scientific) Kowalski K, Rembielińsky J and Papaloucas L C Coherent states for a quantum particle on a circle 1996 *J. Phys. A: Math. 
Gen.* **29** 4149–67 Kowalski K and Rembielińsky J 2007 Coherent states for the quantum mechanics on a torus *Phys. Rev. A* **75** 052102 Kowalski K and Rembielińsky J 2008 Coherent states for the quantum mechanics on a compact manifold *J. Phys. A: Math. Theor.* **41** 304021 Kowalski K and Rembielińsky J 2002 On the uncertainty relations and squeezed states for the quantum mechanics on a circle *J. Phys. A: Math. Gen.* **35** 1405–14 Markowich P A, Mauser N J and Poupaud F A Wigner-function approach to (semi)classical limits: electrons in a periodic potential 1994 *J. Math. Phys.* **35** 1066–94 Leichtle C, Averbukh I Sh and Schleich W P 1996 Generic structure of multilevel quantum beats *Phys. Rev. Lett.* **77** 3999–4002 Leichtle C, Averbukh I Sh and Schleich W P 1996 Multilevel quantum beats: An analytical approach *Phys. Rev. A* **54** 5299–312 Li A Z and Hartert W G 2015 Quantum revivals of Morse oscillators and Farey-Ford geometry *Chem. Phys. Lett.* **633** 208–13 Macia F 2009 Semiclassical measures and the Schrödinger flow on Riemannian manifolds *Nonlinearity* **22** 1003–20 Macia F 2010 High-frequency propagation for the Schrödinger equation on the torus *J. Func. Anal.* **258** 933–55 Mancini S, Man’ko V I and Tombesia P 1996 Symplectic tomography as classical approach to quantum systems *Phys. Lett. A* **213** 1–6 Martinez A 2002 An introduction to semiclassical and microlocal analysis (New York: Springer) McKenna J and Frisch H L 1996 Quantum-mechanical, microscopic Brownian motion *Phys. Rev.* **145** 93–110 Mumford D 1983 Tata lectures on theta (Boston: Birkhaüser) von Neumann J 1932 Mathematische Grundlagen der Quantenmechanik (Berlin: Julius Springer) Ozorio de Almeida A M, Vallejos R O and Zambrano E Initial or final values for semiclassical evolutions in the Weyl–Wigner representation 2013 *J. Phys. A: Math. Theor.* **46** 135304 Ozorio de Almeida A M and Brodier O Semiclassical evolution of correlations between observables 2016 *J. Phys. 
A: Math. Theor.* **49** 185302 Pechen A and Ilyn N 2015 On critical points of the objective functional for maximization of qubit observables *Russian Math. Surveys* **70** 782–84 Pechen A and Trushechkin A 2015 Measurement-assisted Landau-Zener transitions *Phys. Rev. A* **91** 052316 Robinett R W 2000 Visualizing the collapse and revival of wave packets in the infinite square well using expectation values *Amer. J. Phys.* **68** 410–20 Robinett R W 2004 Quantum wave packet revivals *Phys. Rep.* **392** 1-119 Schubert R Semiclassical behaviour of expectation values in time evolved Lagrangian states for large times 2005 **256** 239–54 Schubert R, Vallejos R O and Toscano F How do wave packets spread? Time evolution on Ehrenfest time scales 2012 *J. Phys. A: Math. Theor.* **45** 215307 Toscano F, Vallejos R. Semiclassical description of wavepacket revival 2009 *Phys. Rev. E* **80** 046218 Volovich I V and Trushechkin A S 2009 Squeezed quantum states on an interval and uncertainty relations for nanoscale systems *Proc. Steklov Inst. Math.* *265* 276–306; arXiv: 1304.6277 \[quant-ph\] Volovich I V and Trushechkin A S 2012 Asymptotic properties of quantum dynamics in bounded domains at various time scales *Izv. Math.* **76** 39–78; arXiv: 1304.2332 \[quant-ph\] Wang Z Heller E J Semiclassical investigation of revival phenomena in one dimensional system 2009 *J. Phys. A: Math. Theor.* **42** 285304 Zachos C, Fairlie D and Curtright T. 2005 Quantum mechanics in phase space (Singapore: World Scientific) Zworski M 2012 Semiclassical analysis (Providence: American Mathematical Society)
--- abstract: 'Local differential privacy (LDP) can provide each user with strong privacy guarantees under untrusted data curators while ensuring accurate statistics derived from privatized data. Due to its power, LDP has been widely adopted to protect privacy in various tasks (e.g., heavy hitters discovery, probability estimation) and systems (e.g., Google Chrome, Apple iOS). Although $\epsilon$-LDP has been proposed for many years, the more general notion of $(\epsilon, \delta)$-LDP has only been studied in very few papers, which mainly consider mean estimation for numeric data. Besides, prior solutions achieve $(\epsilon, \delta)$-LDP by leveraging the Gaussian mechanism, which leads to low accuracy of the aggregated results. In this paper, we propose novel mechanisms that achieve $(\epsilon, \delta)$-LDP with high utility in data analytics and machine learning. Specifically, we first design algorithms for collecting multi-dimensional numeric data, which can ensure higher accuracy than the optimal Gaussian mechanism while guaranteeing strong privacy for each user. Then, we investigate different local protocols for categorical attributes under $(\epsilon, \delta)$-LDP. Furthermore, we conduct theoretical analysis on the error bound and variance of the proposed algorithms. Experimental results on real and synthetic datasets demonstrate the high data utility of our proposed algorithms on both simple data statistics and complex machine learning models.' 
address: - 'Xi’an Jiaotong University, Shaanxi, China' - 'Nanyang Technological University, Singapore' author: - Teng Wang - Jun Zhao - Xinyu Yang - Xuebin Ren bibliography: - 'mybibfile.bib' title: Locally Differentially Private Data Collection and Analysis --- Multi-dimensional data, $(\epsilon, \delta)$-local differential privacy, data collection and analysis, untrusted data curator, data utility Introduction {#sec-introduction} ============ With the rapid development of sensing technology [@guo2015mobile], smart devices, such as mobile phones, smart vehicles, wearable devices, and sensor networks, have increasingly developed into data sources of the era of big data and continuously generate gigantic amounts of data [@han2015mobile; @merlino2016mobile]. Various and massive user data are collected and analyzed to provide invaluable knowledge for different organizations or service providers, which significantly benefits people’s daily lives. However, privacy concerns related to users’ personal information have posed serious challenges when collecting and analyzing users’ sensing data under untrusted data curators (such as in untrusted crowdsourcing systems) [@yang2015security; @jin2018incentive; @feng2018survey; @tang2019privacy]. As a formal privacy protection technique, differential privacy (DP) [@dwork06Calibrating; @dwork2014algorithmic], which provides rigorous guarantees for the privacy of each user by adding randomized noise, has been extensively studied in the literature. Specifically, a mechanism $\mathcal{M}$ achieves $\epsilon$-DP if for any pair of neighboring datasets $D$ and $D'$ (which differ in one record), it holds that ${{\mathbb{P}}\left[{\mathcal{M}(D)\in \mathcal{Y}}\right]}\leq e^\epsilon{{\mathbb{P}}\left[{\mathcal{M}(D')\in\mathcal{Y}}\right]}$, where $\mathbb{P}$ denotes the probability and $\mathcal{Y}$ is any possible subset of outputs. 
As a relaxed version of $\epsilon$-DP (also referred to as *pure* DP), $(\epsilon,\delta)$-DP [@dwork2006our] (also referred to as *approximate* DP) has the following meaning (loosely speaking): given a typically small probability $\delta$, a mechanism $\mathcal{M}$ achieves $\epsilon$-DP with probability at least $1-\delta$. Formally speaking, a mechanism $\mathcal{M}$ achieves $(\epsilon,\delta)$-DP if ${{\mathbb{P}}\left[{\mathcal{M}(D)\in \mathcal{Y}}\right]}\leq e^\epsilon{{\mathbb{P}}\left[{\mathcal{M}(D')\in\mathcal{Y}}\right]} + \delta$ holds for any pair of neighboring datasets $D$ and $D'$. $(\epsilon,\delta)$-DP can also be understood as being more general than $\epsilon$-DP since the former in the special case of $\delta=0$ becomes the latter. Since the introduction of differential privacy (DP), a large number of mechanisms [@han2019differentially; @gong2018protecting; @yang2017survey] have been proposed and applied to numerous scenarios, such as data statistics [@xu2013differentially; @zhu2015correlated; @chen2015differentially], learning models [@abadi2016deep; @phan2016differential; @zhang2017dynamic; @mohassel2017secureml], and systems [@hu2015differential; @bittau2017prochlo]. Nonetheless, the traditional differential privacy paradigm under the centralized setting [@dwork06Calibrating] requires a trustworthy data curator and cannot guarantee the privacy of each participant locally when collecting data, thus limiting its applications when facing untrusted data curators. Given the above discussions, local differential privacy (LDP) [@kasiviswanathan2011can; @duchi2013local] has been proposed to provide stronger privacy guarantees locally for each user, which no longer relies on a trustworthy data curator. Formally, for any neighboring input tuples $x$ and $x'$ of one user, the mechanism $\mathcal{M}$ satisfies $\epsilon$-LDP if $\mathbb{P}[\mathcal{M}(x) \in \mathcal{Y}] \leq e^\epsilon \cdot \mathbb{P}[\mathcal{M}(x') \in \mathcal{Y}]$, for any possible subset of outputs $\mathcal{Y}$.
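For concreteness, the classical randomized response technique (in the spirit of [@warner1965randomized]) is perhaps the simplest mechanism satisfying pure $\epsilon$-LDP for a single binary value. The sketch below is illustrative only (the function name is ours, not from this paper):

```python
import math
import random

def randomized_response(bit, epsilon):
    """Warner-style randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise report its flip."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

# For any pair of inputs, the ratio of the output probabilities is
# exactly e^epsilon, so the epsilon-LDP inequality holds with delta = 0.
epsilon = 1.0
p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
assert abs(p / (1.0 - p) - math.exp(epsilon)) < 1e-9
```

This makes the role of the $e^\epsilon$ factor in the definition concrete: the two possible inputs lead to output distributions whose likelihood ratio is bounded by $e^\epsilon$.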
That is, each user utilizes an LDP-achieving mechanism to perturb her/his data and then sends the noisy information to the *aggregator*. Then, the aggregator combines the perturbed data of all users to estimate the desired statistics. Thus, LDP achieves stronger privacy guarantees than centralized DP for protecting users’ data and also protects the aggregator from data breaches since the aggregator does not hold users’ true data. Besides, the LDP model also ensures that the data of each participating user is invisible to all other users. LDP has attracted much attention in both academia and industry. A large number of studies have designed mechanisms under $\epsilon$-LDP for various tasks including heavy hitters discovery, probability distribution estimation, and empirical risk minimization [@qin2016heavy; @yang2017copula; @cormode2018marginal; @Wang19Local; @bassily2015local; @wang2017locally; @wang2019collecting]. Google’s system called RAPPOR [@erlingsson2014rappor] under $\epsilon$-LDP has been used in Chrome to collect information about users’ preferred homepages. Apple [@apple2017local; @thakurta2017emoji] has implemented LDP in recent iOS and MacOS versions. Microsoft [@ding2017collecting] has deployed an LDP-enabled data collection mechanism in the Windows Insiders program to collect application usage statistics. Although LDP has drawn much attention from the research community in recent years, almost all existing mechanisms are proposed under $\epsilon$-LDP. The fundamental research on $(\epsilon,\delta)$-LDP (the relaxed version of $\epsilon$-LDP) has not been addressed sufficiently. Moreover, existing solutions [@gaboardi2018locally; @joseph2018locally; @bun2018heavy; @bassily2018linear] mainly leverage the basic Gaussian mechanism [@dwork2006our] to achieve $(\epsilon,\delta)$-LDP, which yields a low data utility. We will also demonstrate later that the data utility still remains low even when the optimal Gaussian mechanism [@balle2018improving] is used.
Besides, existing local protocols under $(\epsilon,\delta)$-LDP [@gaboardi2018locally; @joseph2018locally; @bassily2018linear] mainly focus on the task of mean estimation for numeric attributes, without considering the frequency estimation of categorical attributes. The purpose of this paper is to propose mechanisms that can achieve $(\epsilon,\delta)$-LDP with high accuracy on various estimation tasks. In particular, we focus on applying $(\epsilon,\delta)$-LDP to complex multi-dimensional data collection and analysis for numeric attributes and categorical attributes. Our main contributions are summarized as follows. - First, we propose novel mechanisms for collecting and analyzing multi-dimensional numeric data under $(\epsilon,\delta)$-LDP, which ensure much higher accuracy than the Gaussian mechanism. Besides, we also give a theoretical analysis of the error bound of our proposed mechanisms. - Second, as for categorical attributes, we investigate several different randomized response protocols which achieve $(\epsilon,\delta)$-LDP and also compare the variance of the different protocols. Furthermore, we introduce an optimized local hash mechanism under $(\epsilon,\delta)$-LDP which achieves lower communication overhead and higher accuracy than other mechanisms. - Third, we conduct extensive experiments on both real-world datasets and synthetic datasets to evaluate the performance of our proposed mechanisms. All the experimental results demonstrate the high accuracy of our proposed mechanisms on both mean/frequency estimations and machine learning models. This paper is organized as follows. Section \[sec-related\] reviews the related work. Section \[sec-preliminaries\] formalizes the research problem and introduces local differential privacy as preliminaries. In Sections \[sec-numeric\] and \[sec-categorical\], we elaborate our proposed algorithms for numeric attributes and categorical attributes, respectively. Section \[sec-experiments\] presents our extensive experimental results. Finally, Section \[sec-conclution\] concludes the paper.
Related Work {#sec-related} ============ Differential privacy (DP) [@dwork06Calibrating; @dwork2014algorithmic], a classical privacy protection technique with rigorous mathematical proofs, has been studied in the literature for more than a decade. It provides formal privacy guarantees for each record in the dataset [@xu2013differentially; @zhu2015correlated; @chen2015differentially; @yang2017survey]. One of the many topics in DP research is differentially private empirical risk minimization for machine learning [@dwork2009differential; @chaudhuri2011differentially], especially for deep neural networks [@abadi2016deep; @phan2016differential; @zhang2017dynamic; @acs2018differentially; @xu2019ganobfuscator]. Also, novel privacy notions related to $(\epsilon,\delta)$-differential privacy, such as concentrated differential privacy, have been studied recently [@bun2016concentrated; @mironov2017renyi; @bun2018composable]. However, traditional DP in the centralized setting requires a trusted data curator, thereby limiting the application scenarios. Therefore, local differential privacy (LDP) [@kasiviswanathan2011can; @duchi2013local] has received considerable attention recently since it no longer assumes a trusted data curator. Specifically, each user applies LDP to protect her/his local information and reports only the noisy data to an aggregator. This is in the same spirit as the classical randomized response technique [@warner1965randomized]. LDP not only provides strong privacy guarantees for each user, but also protects the aggregator from data breaches since the aggregator does not collect users’ true data in the first place. Kasiviswanathan *et al.* [@kasiviswanathan2011can] have precisely characterized the power of local private learning algorithms. Google has developed RAPPOR [@erlingsson2014rappor] to collect user statistics for Chrome under $\epsilon$-LDP with strong privacy protections and high analysis accuracy on the collected data.
Afterward, Fanti *et al.* [@fanti2016building] extended RAPPOR to conduct complex joint distribution estimations. Current research focuses on many related problems under the LDP model, such as mean/frequency estimation [@duchi2013local; @nguyen2016collecting; @wangtt2017locally], probability distribution estimation [@fanti2016building; @yang2017copula; @Wang19Local], heavy hitter identification [@bassily2015local; @qin2016heavy; @bun2018heavy], itemset mining [@wang2018locally], marginal distribution release [@cormode2018marginal; @zhang2018calm], and empirical risk minimization [@wang2019collecting; @wang2018empirical]. Besides, Ye *et al.* [@yeprivkv] proposed PrivKV, which investigates frequency and mean estimation on key-value data. They also proposed PrivKVM, which can further improve the estimation accuracy through multiple iterations. By deploying LDP in recommender systems, Shin *et al.* [@shin2018privacy] proposed an enhanced matrix factorization mechanism which leverages a random projection-based dimension reduction technique to improve the recommendation accuracy while guaranteeing per-user privacy. In the setting of $(\epsilon,\delta)$-LDP, Gaboardi *et al.* [@gaboardi2018locally] have investigated the upper and lower error bounds of mean estimation when protecting privacy by adding Gaussian noise. Afterward, Joseph *et al.* [@joseph2018locally] further tightened the lower bound of mean estimation given by Gaboardi *et al.* [@gaboardi2018locally]. As for the heavy hitter discovery problem, Bun *et al.* [@bun2018heavy] have focused on the transformation of an approximate local private protocol ($(\epsilon,\delta)$-LDP) into a pure local private protocol ($\epsilon$-LDP). Moreover, under the constraint of $(\epsilon,\delta)$-LDP, Bassily [@bassily2018linear] proposed algorithms for estimating a set of linear queries in both the offline setting and the adaptive setting and analyzed the accuracy bound of the proposed algorithms.
So far, the above mechanisms under $(\epsilon,\delta)$-LDP are all achieved by the classical Gaussian mechanism [@dwork2006our], which yields low accuracy in the estimation results. Thus, the goal of this paper is to investigate mechanisms which can achieve $(\epsilon,\delta)$-LDP with higher accuracy in the estimation results. Preliminaries {#sec-preliminaries} ============= This paper considers the local setting in which a server, acting as an untrusted data curator, collects data from a large number of users. Then, the collected data will be used to compute statistical models or conduct machine learning. Our goal is to design mechanisms which can not only achieve $(\epsilon, \delta)$-local differential privacy (LDP) [@kasiviswanathan2011can; @duchi2013local], but also maximize the accuracy of both statistical models and machine learning models. Formally, let $x=\{x(1),x(2),\cdots,x(N)\}$ be the data of all users, where $N$ is the user population. Each tuple $x(i)= \langle x_1(i),x_2(i),\cdots,x_d(i) \rangle$ $(i\in[1,N])$[^1] denotes the data of the $i$-th user, which consists of $d$ attributes $A_1,A_2,\cdots,A_d$. Each $x_j(i)$ $(j\in[1,d])$ denotes the value of the $j$-th attribute of the $i$-th user. These attributes are either numeric or categorical. Without loss of generality, we assume that each numeric attribute has the domain $[-1,1]$, and each categorical attribute has $k$ distinct values, i.e., the discrete domain $\{1,2,\cdots,k\}$. While collecting users’ multi-dimensional data under an untrusted data curator, each user $u_i$ adopts a randomized perturbation mechanism $\mathcal{M}$ to perturb her tuple $x(i)$. Then, the perturbed data $\mathcal{M}(x(i))$ instead of the raw data will be sent to the aggregator in order to protect private information locally against an untrusted aggregator. This paper follows the local differential privacy model and focuses on two types of analytic tasks under $(\epsilon, \delta)$-LDP: 1. Basic statistics: mean estimation and frequency estimation.
For numeric attributes, we focus on estimating the mean value of each attribute $A_j(j\in[1,d])$ over all $N$ users, that is, $\frac{1}{N}\sum_{i=1}^{N}x_j(i)$. As for categorical attributes, the frequency $f_j(v)$ of each possible value $v$ $(v\in[1,k])$ in attribute $A_j$ will be computed. 2. Advanced statistics: machine learning model analysis under empirical risk minimization. Next, we briefly review some concepts related to $(\epsilon, \delta)$-local differential privacy and machine learning. In the following, we simplify $x(i)$ as $x$ to denote the data tuple of one user by omitting the notation $i$. Local Differential Privacy -------------------------- Local differential privacy [@kasiviswanathan2011can; @duchi2013local] has been used to provide strong privacy protection for each user locally, which is defined as follows. \[defn-eps-ldp\] A randomized mechanism $\mathcal{M}$ satisfies $\epsilon$-local differential privacy if and only if for any pair of adjacent input tuples $x$ and $x'$ in the domain of $\mathcal{M}$, and for any possible subset of outputs $\mathcal{Y}$, it always holds that $$\begin{aligned} \label{eqn-eps-ldp}\mathbb{P}[\mathcal{M}(x) \in \mathcal{Y}] \leq e^\epsilon \cdot \mathbb{P}[\mathcal{M}(x') \in \mathcal{Y}],\end{aligned}$$ where the notation $\mathbb{P}[\cdot]$ denotes probability. Similar to the case that $(\epsilon, \delta)$-differential privacy [@dwork2006our] is a relaxation of $\epsilon$-differential privacy [@dwork06Calibrating], $(\epsilon, \delta)$-local differential privacy (also called *approximate* LDP) is a relaxation of $\epsilon$-local differential privacy (also called *pure* LDP).
\[defn-eps-delta-ldp\] A randomized mechanism $\mathcal{M}$ satisfies $(\epsilon, \delta)$-local differential privacy if and only if for any pair of adjacent input tuples $x$ and $x'$ in the domain of $\mathcal{M}$, and for any possible subset of outputs $\mathcal{Y}$, it always holds that $$\begin{aligned} \label{eqn-eps-delta-ldp} \mathbb{P}[\mathcal{M}(x) \in \mathcal{Y}] \leq e^\epsilon \cdot \mathbb{P}[\mathcal{M}(x') \in \mathcal{Y}] + \delta,\end{aligned}$$ where $\delta$ is typically small. Loosely speaking, $(\epsilon, \delta)$-LDP means that a mechanism $\mathcal{M}$ achieves $\epsilon$-LDP with probability at least $1-\delta$. By relaxing $\epsilon$-LDP, $(\epsilon, \delta)$-LDP is more general since the latter in the special case of $\delta=0$ becomes the former. Machine Learning based on Empirical Risk Minimization ----------------------------------------------------- Machine learning models, which can essentially be expressed as empirical risk minimization, have been applied to many fields in recent years. For a machine learning task with $N$ training samples $x=\{x(1),x(2),\cdots,x(N)\}$, the loss function $\mathcal{L}(\theta)$ is used to capture how “bad” the predictor is when predicting the label of the $i$-th data point; it is parameterized by a $d$-dimensional parameter vector $\theta$ and computed as the average loss over all samples. That is, $\mathcal{L}(\theta)=\frac{1}{N}\sum _i\mathcal{L}(\theta,x(i))$, where $\mathcal{L}(\theta,x(i))$ is the loss of sample $x(i)$. Generally, the training target is to find a $\theta$ that obtains an acceptably small loss. In practice, the stochastic gradient descent (SGD) algorithm is often used to compute the target $\theta$ with the minimum (or hopefully small) loss.
At each iteration $t+1$, the parameter vector is computed as $\theta_{t+1} = \theta_{t}-\eta \cdot \nabla\mathcal{L}(\theta_{t})$, where $\eta$ is the learning rate and $\nabla\mathcal{L}(\theta_{t})$ is the gradient of the loss function $\mathcal{L}(\theta_{t})$ at $\theta_{t}$. In private settings, each user will submit a noisy gradient $\nabla\mathcal{L}^*(i)$ to the aggregator. In this paper, we assume that each iteration involves a batch $G$ of users. Then, the parameter will be updated as $$\begin{aligned} \label{eqn-sgd} \theta_{t+1} = \theta_{t}-\eta \cdot \frac{1}{|G|}\sum\nolimits_{i \in G} \nabla\mathcal{L}^*(i),\end{aligned}$$ where $|G|$ is the batch size. Existing Solutions to Achieve $(\epsilon, \delta)$-LDP ------------------------------ The Gaussian mechanism is a classical solution for achieving $(\epsilon, \delta)$-differential privacy [@dwork2006our], which can also be applied to achieve $(\epsilon, \delta)$-local differential privacy. Most existing studies on $(\epsilon, \delta)$-LDP are based on the Gaussian mechanism [@gaboardi2018locally; @joseph2018locally; @bun2018heavy; @bassily2018linear]. Balle and Wang [@balle2018improving] have shown that the two classical Gaussian mechanisms of Dwork and Roth [@dwork2014algorithmic] and of Dwork *et al.* [@dwork2006our] for $(\epsilon,\delta)$-differential privacy are not optimal. Moreover, they also developed the optimal Gaussian mechanism. Hence, we will discuss the optimal Gaussian mechanism in this paper and its application to $(\epsilon,\delta)$-LDP.
\[thm-DP-OPT\] The optimal Gaussian mechanism for $(\epsilon,\delta)$-differential privacy adds Gaussian noise with standard deviation $\sigma$ to each dimension of a query with $\ell_2$-sensitivity $\Delta$, for $\sigma$ given by $$\begin{aligned} \label{eqn-DP-OPT} \sigma = \frac{\left(\xi+\sqrt{\xi^2+\epsilon}\right) \cdot \Delta }{\epsilon\sqrt{2}},\end{aligned}$$ where the $\ell_2$-sensitivity of a query is the maximal $\ell_2$-norm difference of the true query results on neighboring datasets which differ in just one record, $\xi$ is the solution of $\operatorname*{erfc}\left(\xi \right)- e^{\epsilon} \operatorname*{erfc}\left( \sqrt{\xi^2 + \epsilon} \right) = 2 \delta$, and $\operatorname*{erfc}(\cdot)$ is the complementary error function. Then, each user’s data will be perturbed by adding randomized Gaussian noise, that is, $x^*(i)=x(i) + \langle\mathcal{N}(0, \sigma^2)\rangle^d$, where $\mathcal{N}(0, \sigma^2)$ denotes a random variable following a Gaussian distribution with mean $0$ and variance $\sigma^2$. Since we assume each user’s data lies in the range $[-1,1]$, the $\ell_2$-sensitivity is $\Delta=2$. Clearly, the estimation for $x^*(i)$ is unbiased since the injected Gaussian noise has zero mean. Besides, the worst-case variance is $\sigma^2$. As shown in Fig. \[comp-var\], we plot the worst-case noise variances of the optimal Gaussian mechanism and our solution (to be introduced later) for one-dimensional numeric data versus different privacy parameters. It can be observed that our solution has much smaller variances than the optimal Gaussian mechanism, especially when $\epsilon$ is small (i.e., the degree of privacy protection is high). This demonstrates that our solution can ensure high accuracy in reality while providing strong privacy guarantees.
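The calibration of $\sigma$ in Theorem \[thm-DP-OPT\] has no closed form, but it can be carried out numerically. The sketch below (function name ours) finds $\xi$ by bisection using the standard-library `erfc`, under the assumption that the residual function is decreasing in $\xi$ and that the bracket $[0, 20]$ contains the root, which holds for the parameter ranges considered here:

```python
import math

def optimal_gaussian_sigma(epsilon, delta, sensitivity):
    """Calibrate sigma for the optimal Gaussian mechanism:
    solve erfc(xi) - e^eps * erfc(sqrt(xi^2 + eps)) = 2*delta for xi
    by bisection, then return
    sigma = (xi + sqrt(xi^2 + eps)) * Delta / (eps * sqrt(2))."""
    def g(xi):
        return math.erfc(xi) - math.exp(epsilon) * math.erfc(math.sqrt(xi * xi + epsilon))

    lo, hi = 0.0, 20.0        # assumed bracket: g(lo) > 2*delta > g(hi)
    for _ in range(200):      # bisection to high precision
        mid = 0.5 * (lo + hi)
        if g(mid) > 2.0 * delta:
            lo = mid
        else:
            hi = mid
    xi = 0.5 * (lo + hi)
    return (xi + math.sqrt(xi * xi + epsilon)) * sensitivity / (epsilon * math.sqrt(2.0))

# Example: epsilon = 1, delta = 1e-4, Delta = 2 (data in [-1, 1]).
sigma = optimal_gaussian_sigma(1.0, 1e-4, 2.0)
```

As expected, the resulting $\sigma$ decreases as $\epsilon$ grows (weaker privacy requires less noise).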
![The worst-case noise variances for one-dimensional numeric data: (a) variance vs. $\epsilon$ ($\delta=10^{-4}$); (b) variance vs. $\delta$ ($\epsilon=1$).[]{data-label="comp-var"}](figures/Var_Nu.eps){height="3.3cm"} ![](figures/Var_Nu_de.eps){height="3.3cm"} Mean Estimation for Numeric Attributes under $(\epsilon, \delta)$-LDP {#sec-numeric} ============================================= This section introduces the solutions to achieve $(\epsilon, \delta)$-local differential privacy on multi-dimensional numeric attributes for mean estimation. Our First Solution for Multiple Numeric Attributes under $(\epsilon, \delta)$-LDP {#sec-numeric-first} --------------------------------------------------------- Under $\epsilon$-LDP, Duchi *et al.* [@duchi2018minimax] have proposed a classical randomized mechanism for numeric data which has been extended to many scenarios. However, Nguy[ê]{}n *et al.* [@nguyen2016collecting] have pointed out that Duchi *et al.*’s solution does not achieve $\epsilon$-LDP when $d$ is even, but they did not give a specific proof. We show Duchi *et al.*’s solution in \[appen-Duchi-solution\] (i.e., Algorithm \[algorithm-duchi\]) and give the proof. Moreover, this paper also fixes this problem by re-defining the probability of sampling a Bernoulli variable $u$; the proofs are shown in Appendix A.2 of the online full version [@fullversion] due to space limitations.
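For intuition about Duchi *et al.*-style perturbation, the one-dimensional special case can be sketched as follows (the function name is ours; this simplified illustration is not the multi-dimensional algorithm studied in this paper). The input $x\in[-1,1]$ is mapped to $\pm B$ with $B=(e^\epsilon+1)/(e^\epsilon-1)$, and the output is an unbiased estimate of $x$:

```python
import math
import random

def duchi_1d(x, epsilon):
    """One-dimensional Duchi-style perturbation for x in [-1, 1]:
    output +B with probability 1/2 + (x/2) * (e^eps - 1)/(e^eps + 1),
    and -B otherwise, where B = (e^eps + 1)/(e^eps - 1).
    Then E[output] = B * (2p - 1) = x (unbiased), and the worst-case
    probability ratio over input pairs is e^eps (pure eps-LDP)."""
    B = (math.exp(epsilon) + 1.0) / (math.exp(epsilon) - 1.0)
    p = 0.5 + 0.5 * x * (math.exp(epsilon) - 1.0) / (math.exp(epsilon) + 1.0)
    return B if random.random() < p else -B

# Unbiasedness check for one input value.
epsilon, x = 1.0, 0.3
B = (math.exp(epsilon) + 1.0) / (math.exp(epsilon) - 1.0)
p = 0.5 + 0.5 * x * (math.exp(epsilon) - 1.0) / (math.exp(epsilon) + 1.0)
assert abs(B * (2.0 * p - 1.0) - x) < 1e-9
```

The multi-dimensional mechanism below generalizes this idea: it likewise discretizes the input and outputs a point in $\{-B, B\}^d$ with carefully chosen probabilities.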
In what follows, inspired by Duchi *et al.*’s work, we propose a randomized mechanism on multiple numeric attributes for achieving $(\epsilon, \delta)$-LDP. First, we present and prove Lemma \[condition-approx-ldp\], which will be used later to ensure $(\epsilon, \delta)$-LDP. \[condition-approx-ldp\] For a randomized mechanism $\mathcal{M}$ whose outputs are discrete, $\mathcal{M}$ satisfies $(\epsilon, \delta)$-local differential privacy if and only if for any pair of adjacent input tuples $x$ and $x'$ in the domain of $\mathcal{M}$, and for any possible output $x^*$, it always holds that $$\begin{aligned} \label{eqn-eps-delta-ldp-single} \mathbb{P}[\mathcal{M}(x) = x^*] \leq e^\epsilon \cdot \mathbb{P}[\mathcal{M}(x')=x^*] + \delta.\end{aligned}$$ That is, Eq. (\[eqn-eps-delta-ldp\]) and Eq. (\[eqn-eps-delta-ldp-single\]) are equivalent to each other. **Proof.** For Eq. (\[eqn-eps-delta-ldp\]) $\Rightarrow$ Eq. (\[eqn-eps-delta-ldp-single\]), this can be easily achieved by letting $\mathcal{Y}=\{x^*\}$. For Eq. (\[eqn-eps-delta-ldp-single\]) $\Rightarrow$ Eq. (\[eqn-eps-delta-ldp\]), we have $$\begin{aligned} {{\mathbb{P}}\left[{\mathcal{M}(x)\in\mathcal{Y}}\right]} &=\sum_{x^*\in\mathcal{Y}}{{\mathbb{P}}\left[{\mathcal{M}(x)=x^*}\right]}\nonumber\\ &\leq \sum_{x^*\in\mathcal{Y}}\bigg(e^\epsilon{{\mathbb{P}}\left[{\mathcal{M}(x')=x^*}\right]}+\delta\bigg)\nonumber\\ &=\bigg( \sum_{x^*\in\mathcal{Y}}e^\epsilon{{\mathbb{P}}\left[{\mathcal{M}(x')=x^*}\right]} \bigg)+\left | \mathcal{Y} \right |\cdot\delta \nonumber\\ &\leq e^\epsilon{{\mathbb{P}}\left[{\mathcal{M}(x')\in\mathcal{Y}}\right]} + \delta.\end{aligned}$$ Thus, Eq. (\[eqn-eps-delta-ldp\]) $\Leftrightarrow$ Eq. (\[eqn-eps-delta-ldp-single\]) is proved. [$\blacksquare$]{} Following the definitions above, each user’s $d$-dimensional data is denoted as $x=(x_1,x_2,\cdots,x_d)$ (we will omit the notation $i$ in the analysis for simplicity since we focus on one arbitrary user $i$ here).
Each $x_j\in[-1,1]$ is the value of the $j$-th attribute $A_j$, where $j\in[1,d]$. Under $(\epsilon, \delta)$-LDP, each user’s data $x\in[-1,1]^d$ will be perturbed into $x^* \in \{-B, B\}^d$, where $B$ is a constant determined by $d$, $\epsilon$ and $\delta$. Before choosing $B$, we first compute $C_d$ as $$\begin{aligned} C_d= \begin{cases} 2^{d-1},&\text{~if~}d\text{~is odd},\\ 2^{d-1}-\frac{1}{2}\binom{d}{d/2},&\text{~otherwise}. \end{cases}\end{aligned}$$ Then, $B$ is calculated by $$\begin{aligned} \label{B-value} B= \begin{cases} \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{(d-1)/2}\cdot(e^\epsilon +2^d\cdot \delta-1)},&\text{~if~}d\text{~is odd},\\[3pt] \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{d/2}\cdot(e^\epsilon +2^d\cdot \delta -1)},&\text{~otherwise}. \end{cases}\end{aligned}$$ \[ldp-algo-our-multi\] Generate a random vector $V : = [V_1, V_2, \ldots, V_d] \in \{-1,1\}^d$ by sampling each $V_j$ independently from the following distribution: $$\begin{aligned} \mathbb{P}[V_j=v_j]=\begin{cases} \frac{1}{2}+\frac{1}{2}x_j,~~\text{if}~~v_j=1\\ \frac{1}{2}-\frac{1}{2}x_j,~~\text{if}~~v_j=-1 \end{cases}\nonumber \end{aligned}$$\ [When $V$ is sampled as $v$, let $T^+(v)$ (resp. $T^-(v)$) be the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v > 0$ (resp. $x^*\cdot v\leq 0$)]{} [Sample a Bernoulli variable $u=1$ with probability $\alpha$, for $\alpha$ given by Eq. (\[alpha-val-our\]), i.e., $\alpha:= \begin{cases} \frac{e^\epsilon+C_d\cdot\delta}{e^\epsilon+1},&\text{~if~}d\text{~is odd,} \\ \frac{e^\epsilon \cdot C_d + \delta \cdot C_d(2^d-C_d)}{(e^\epsilon-1)C_d+2^d},&\text{~if~}d\text{~is even.} \end{cases}$ for $C_d:= \begin{cases} 2^{d-1},&\text{~if~}d\text{~is odd},\\ 2^{d-1}-\frac{1}{2}\binom{d}{d/2},&\text{~otherwise}. \end{cases}$]{} Algorithm \[ldp-algo-our-multi\] shows the pseudo-code of our mechanism. It first discretizes the $d$-dimensional data into $V\in\{-1,1\}^d$, which will be used to sample $T^+(v)$ and $T^-(v)$.
Then, a noisy tuple will be returned based on the value of a Bernoulli variable $u$, where the probability of $u=1$ is $\alpha$. In what follows, we will show the computation of $\alpha$ while achieving $(\epsilon, \delta)$-LDP. First, we analyze the sizes of $T^+(v)$ and $T^-(v)$. Recall that $T^+(v)$ (resp. $T^-(v)$) is the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v > 0$ (resp. $x^*\cdot v\leq 0$). The analysis includes two cases, i.e., $d$ is odd and $d$ is even. **Case 1: $d$ is odd**. Since $v\in\{-1,1\}^d$ and $x^*\in\{-B,B\}^d$, suppose there are $k$ positions where the vectors $x^*$ and $v$ have the same sign (i.e., $d-k$ positions have different signs). Therefore, once $v$ is sampled based on the input $x$, it is easy to see that $x^*\cdot v > 0$ will be guaranteed if and only if $k>d-k$ (i.e., $k \geq (d+1)/2$ since $d$ is odd), and $x^*\cdot v \leq 0$ will be guaranteed if and only if $k\leq d-k$ (i.e., $k \leq (d-1)/2$). Therefore, when $d$ is odd, it holds that $$\begin{aligned} \label{eqn-T-odd} \left | T^+(v) \right |=\sum_{j\geq\frac{d+1}{2}}\binom{d}{j},~\left | T^-(v) \right |=\sum_{j\leq\frac{d-1}{2}}\binom{d}{j}. \end{aligned}$$ From Eq. (\[eqn-T-odd\]), it can be observed that $\left | T^+(v) \right |=\left | T^-(v) \right |$ since $d$ is odd. Recall that $\left | T^+(v) \right |+\left | T^-(v) \right |=2^d$; thus we can obtain $$\begin{aligned} \label{eqn-T-odd-val} \left | T^+(v) \right |=\left | T^-(v) \right |=2^{d-1},\text{~~if~}d\text{~is odd}.\end{aligned}$$ As can be seen, the sizes of both $\left | T^+(v) \right |$ and $\left | T^-(v) \right |$ are independent of $v$. Thus, when given input $x'$ and sampled $v'$, it will hold that $\left | T^+(v') \right |=\left | T^+(v) \right |$ and $\left | T^-(v') \right |=\left | T^-(v) \right |$. **Case 2: $d$ is even**. As in **Case 1**, assume there are $k$ positions where the vectors $x^*$ and $v$ have the same sign (i.e., $d-k$ positions have different signs).
Therefore, it is easy to see that $x^*\cdot v > 0$ will be guaranteed if and only if $k>d-k$ (i.e., $k \geq (d+2)/2$ since $d$ is even), and $x^*\cdot v \leq 0$ will be guaranteed if and only if $k\leq d-k$ (i.e., $k \leq d/2$). Hence, when $d$ is even, it holds that $$\begin{aligned} \label{eqn-T-even} \left | T^+(v) \right |=\sum_{j\geq\frac{d+2}{2}}\binom{d}{j},~ \left | T^-(v) \right |=\sum_{j\leq\frac{d}{2}}\binom{d}{j}. \end{aligned}$$ Based on Eq. (\[eqn-T-even\]), it holds that $\left | T^+(v) \right | + \left | T^-(v) \right |=2^d$ and $\left | T^-(v) \right | - \left | T^+(v) \right |=\binom{d}{d/2}$. Thus, we can get $$\begin{aligned} \label{eqn-T-even-val} \begin{cases} \left | T^+(v) \right |=2^{d-1}-\frac{1}{2}\binom{d}{d/2},\\[3pt] \left | T^-(v) \right |=2^{d-1}+\frac{1}{2}\binom{d}{d/2}. \end{cases}\end{aligned}$$ Assume that we sample a Bernoulli variable $u=1$ with probability $\alpha$ (note that $\alpha>1/2$) in our mechanism. Thus, given a perturbed output $x^*$ of input $x$, it holds that $$\begin{aligned} \label{eqn-prob-left} & \mathbb{P}[\mathcal{M}(x)=x^*]= \alpha \mathbb{P}[\mathcal{M}(x)=x^*~|~u=1] + (1-\alpha) \mathbb{P}[\mathcal{M}(x)=x^*~|~u=0] \nonumber\\ & = \Bigg\{ \sum_{v \in \{-1,1\}^d} \bigg[ \alpha \mathbb{P}[ x^*\in T^+(v)] + (1-\alpha) \mathbb{P}[ x^*\in T^-(v)] \bigg] \times \mathbb{P}[v~|~x] \Bigg\} \nonumber\\ & = \Bigg\{ \sum_{v \in \{-1,1\}^d} \bigg[ \alpha \mathbb{P}[ x^*\in T^+(v)] + (1-\alpha) \mathbb{P}[ x^*\in T^-(v)] \bigg] \times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \Bigg\} \nonumber\\ & = \Bigg\{ \sum_{v \in \{-1,1\}^d} \bigg[ \frac{\alpha}{|T^+(v)|} \times \boldsymbol{1}[ x^*\in T^+(v)] + \frac{1-\alpha}{|T^-(v)|}\times \boldsymbol{1}[ x^*\in T^-(v)] \bigg] \times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \Bigg\} \nonumber\\ & = \Bigg\{ \sum_{v \in \{-1,1\}^d} \bigg[ \frac{\alpha}{|T^+(v)|} \times \boldsymbol{1}[ x^*\cdot v>0] + \frac{1-\alpha}{|T^-(v)|}\times \boldsymbol{1}[ x^*\cdot
v\leq 0] \bigg] \times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \Bigg\} \nonumber\\ & = \Bigg\{ \sum_{ _{x^*\cdot v>0}^{v \in \{-1,1\}^d:}} \bigg[ \frac{\alpha}{|T^+(v)|} \times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \bigg]\Bigg\} + \Bigg\{ \sum_{ _{x^*\cdot v \leq 0}^{v \in \{-1,1\}^d:}} \bigg[ \frac{1-\alpha}{|T^-(v)|}\times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \bigg] \Bigg\} .\end{aligned}$$ In the same way, given a perturbed output $x^*$ of input $x'$, we can also obtain $$\begin{aligned} \label{eqn-prob-right} & \mathbb{P}[\mathcal{M}(x')=x^*] = \alpha \mathbb{P}[\mathcal{M}(x')=x^*~|~u=1] + (1-\alpha) \mathbb{P}[\mathcal{M}(x')=x^*~|~u=0] \nonumber\\ & = \Bigg\{ \sum_{v' \in \{-1,1\}^d} \bigg[ \alpha \mathbb{P}[ x^*\in T^+(v')] + (1-\alpha) \mathbb{P}[ x^*\in T^-(v')] \bigg] \times \mathbb{P}[v'~|~x'] \Bigg\} \nonumber\\ & = \Bigg\{ \sum_{ _{x^*\cdot v'>0}^{v' \in \{-1,1\}^d:}} \bigg[ \frac{\alpha}{|T^+(v')|} \times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x'_j \cdot v'_j \right) \bigg]\Bigg\} + \Bigg\{ \sum_{ _{x^*\cdot v' \leq 0}^{v' \in \{-1,1\}^d:}} \bigg[ \frac{1-\alpha}{|T^-(v')|}\times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x'_j \cdot v'_j \right) \bigg] \Bigg\} .\end{aligned}$$ In order to satisfy $(\epsilon, \delta)$-local differential privacy, we need to ensure that for any $x\in [-1, 1]^d$ and $x'\in [-1, 1]^d$, Eq. (\[eqn-eps-delta-ldp-single\]) is always satisfied for any output $x^*\in\mathcal{Y}$. Thus, as long as Eq. (\[eqn-eps-delta-ldp-single\]) is satisfied when $\mathbb{P}[\mathcal{M}(x)=x^*]$ attains its maximum and $\mathbb{P}[\mathcal{M}(x')=x^*]$ attains its minimum, the mechanism $\mathcal{M}(\cdot)$ satisfies $(\epsilon, \delta)$-local differential privacy. Here and in the following, we may omit $v$ in $|T^+(v)|$ and $|T^-(v)|$ for simplicity since their sizes are independent of $v$.
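The sizes of $T^+(v)$ and $T^-(v)$ derived in Cases 1 and 2 can be verified by brute-force enumeration for small $d$ (an illustrative check, not part of the mechanism; since only signs matter, we enumerate $\{-1,+1\}^d$ in place of $\{-B,B\}^d$):

```python
from itertools import product
from math import comb

def t_sizes(d, v):
    """Count sign patterns x* in {-1,+1}^d with x* . v > 0 (giving |T+|)
    and x* . v <= 0 (giving |T-|)."""
    plus = sum(1 for xs in product((-1, 1), repeat=d)
               if sum(a * b for a, b in zip(xs, v)) > 0)
    return plus, 2 ** d - plus

for d in range(1, 8):
    v = tuple(1 if j % 2 == 0 else -1 for j in range(d))  # any v gives the same sizes
    plus, minus = t_sizes(d, v)
    if d % 2 == 1:                        # Case 1: d odd
        assert plus == minus == 2 ** (d - 1)
    else:                                 # Case 2: d even
        assert plus == 2 ** (d - 1) - comb(d, d // 2) // 2
        assert minus == 2 ** (d - 1) + comb(d, d // 2) // 2
```

This also confirms that the counts do not depend on the particular $v$ chosen, as claimed above.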
\[max-min-value\] Eq. (\[eqn-prob-left\]) attains its maximum when $$\begin{aligned} x\in \{ v:v\in\{-1,1\}^d, x^*\cdot v>0 \},\end{aligned}$$ and the maximum value is $$\begin{aligned} \max{\mathbb{P}[\mathcal{M}(x)=x^*]}=\frac{\alpha}{|T^+(v)|}.\end{aligned}$$ Moreover, Eq. (\[eqn-prob-right\]) attains its minimum when $$\begin{aligned} x'\in \{ v':v'\in\{-1,1\}^d, x^*\cdot v'\leq 0 \},\end{aligned}$$ and the minimum value is $$\begin{aligned} \min{\mathbb{P}[\mathcal{M}(x')=x^*]}=\frac{1-\alpha}{|T^-(v')|}.\end{aligned}$$ **Proof.** Eq. (\[eqn-prob-left\]) can be rewritten as $$\begin{aligned} \label{eqn-prob-left-1} & \mathbb{P}[\mathcal{M}(x)=x^*]= \nonumber\\ & \Bigg\{ \sum_{ _{x^*\cdot v>0}^{v \in \{-1,1\}^d:}} \bigg[ \frac{\alpha}{|T^+(v)|} \times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \bigg]\Bigg\} \nonumber\\ & \quad + \Bigg\{ \sum_{ _{x^*\cdot v \leq 0}^{v \in \{-1,1\}^d:}} \bigg[ \frac{1-\alpha}{|T^-(v)|}\times \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \bigg] \Bigg\} \nonumber\\ &= \Bigg\{ \frac{\alpha}{|T^+|} \sum_{ _{x^*\cdot v>0}^{v \in \{-1,1\}^d:}} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \Bigg\} \nonumber\\ & \quad+\Bigg\{ \frac{1-\alpha}{|T^-|} \sum_{ _{x^*\cdot v \leq 0}^{v \in \{-1,1\}^d:}} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \Bigg\}.\end{aligned}$$ It can be seen that $$\begin{aligned} \label{eqn-prob-left-2} & \sum_{ _{x^*\cdot v>0}^{v \in \{-1,1\}^d:}} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) + \sum_{ _{x^*\cdot v \leq 0}^{v \in \{-1,1\}^d:}} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \sum_{v \in \{-1,1\}^d} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \prod_{j=1}^d \left[ \left( \frac{1}{2}+\frac{1}{2} x_j \right) + \left( \frac{1}{2} - \frac{1}{2} x_j \right) \right] = \prod_{j=1}^d 1 = 1 .\end{aligned}$$ We
define $A$ as $ \sum_{ _{x^*\cdot v>0}^{v \in \{-1,1\}^d:}} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) $. Then $\sum_{ _{x^*\cdot v \leq 0}^{v \in \{-1,1\}^d:}} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) $ equals $1-A$. Thus, Eq. (\[eqn-prob-left\]) reduces to $$\begin{aligned} \label{eqn-prob-left-3} \mathbb{P}[\mathcal{M}(x)=x^*]= \frac{\alpha}{|T^+|} \cdot A + \frac{1-\alpha}{|T^-|} \cdot (1-A).\end{aligned}$$ Given $\alpha > 1/2$ and $|T^+| \leq |T^-|$, it follows that $\frac{\alpha}{|T^+|} > \frac{1-\alpha}{|T^+|} \geq \frac{1-\alpha}{|T^-|}$. Since $\left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \geq 0$ for any $x_j \in [-1, 1]$, $v_j \in \{-1,1\}$ and $j \in [1,d]$, both $A$ and $1-A$ are non-negative, so $0 \leq A \leq 1$. Hence, Eq. (\[eqn-prob-left-3\]) attains its maximum value when $A=1$ and its minimum value when $A=0$. Since $A=\sum_{ _{x^*\cdot v>0}^{v \in \{-1,1\}^d:}} \prod_{j=1}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right)$ and $x\in[-1,1]^d$, we can easily see that $A=1$ if $x\in \{ v:v\in\{-1,1\}^d, x^*\cdot v>0 \}$, and $A=0$ if $x\in \{ v:v\in\{-1,1\}^d, x^*\cdot v\leq 0 \}$. Hence, the maximum value of Eq. (\[eqn-prob-left-3\]) is $\frac{\alpha}{|T^+|}$ and its minimum value is $\frac{1-\alpha}{|T^-|}$. The same reasoning applies to the input $x'$; we omit the details for brevity. [$\blacksquare$]{} Therefore, based on Eq. (\[eqn-eps-delta-ldp-single\]) and Lemma \[max-min-value\], to achieve $(\epsilon, \delta)$-local differential privacy, we only need to guarantee $$\begin{aligned} \label{max-min-proof-ldp} \frac{\alpha}{|T^+|}\leq \frac{1-\alpha}{|T^-|}\cdot e^\epsilon + \delta.\end{aligned}$$ By combining Eqs.
(\[eqn-T-odd\]), (\[eqn-T-even\]) and (\[max-min-proof-ldp\]), we obtain $$\begin{aligned} \label{alpha-our} \alpha= \begin{cases} \frac{e^\epsilon+|T^+|\cdot\delta}{e^\epsilon+1},&\text{~if~}d\text{~is odd,} \\ \frac{|T^+|\cdot e^\epsilon+|T^+|\cdot|T^-|\cdot\delta}{|T^+|\cdot e^\epsilon+|T^-|},&\text{~if~}d\text{~is even.} \end{cases}\end{aligned}$$ Substituting Eqs. (\[eqn-T-odd-val\]) and (\[eqn-T-even-val\]) into Eq. (\[alpha-our\]) gives $$\begin{aligned} \label{alpha-val-our} \alpha= \begin{cases} \frac{e^\epsilon+C_d\cdot\delta}{e^\epsilon+1},&\text{~if~}d\text{~is odd,} \\ \frac{e^\epsilon \cdot C_d + \delta \cdot C_d(2^d-C_d)}{(e^\epsilon-1)C_d+2^d},&\text{~if~}d\text{~is even.} \end{cases}\end{aligned}$$ Additionally, note that Eq. (\[alpha-val-our\]) requires $C_d\cdot \delta<1$ in order to ensure $\alpha<1$. \[algo-multi-unbiased\] Algorithm \[ldp-algo-our-multi\] is an unbiased estimator of the input $x$ when $B$ is calculated by Eq. (\[B-value\]). **Proof.** We present the proof in Appendix A.3 of the online full version [@fullversion] due to space limitations. [$\blacksquare$]{} \[err-algo-multi\] For any $j\in[1,d]$, let $Z_j=\frac{1}{N}\sum_{i=1}^N x_j^*(i)$ and $X_j=\frac{1}{N}\sum_{i=1}^N x_j(i)$.
Then Algorithm \[ldp-algo-our-multi\] ensures that with at least $1-\beta$ probability, $$\begin{aligned} \underset{j\in[1,d]}{\max}|Z_j-X_j|=O\left( \frac{\sqrt{d\log(d/\beta)}}{(\epsilon+2^d\cdot \delta)\sqrt{N}} \right).\end{aligned}$$ **Proof.** For any dimension $j\in[1,d]$ and user $i\in[1,N]$, using the unbiasedness guaranteed by Lemma \[algo-multi-unbiased\], we have $$\begin{aligned} Var[x_j^*(i)-x_j(i)]&=Var[x_j^*(i)] =\mathbb{E}[(x_j^*(i))^2]-(\mathbb{E}[x_j^*(i)])^2 \nonumber\\ &=\sum_{x_j^*(i)}(x_j^*(i))^2{{\mathbb{P}}\left[{x_j^*(i)}\right]}-(x_j(i))^2 \nonumber\\ &=\sum_{x_j^*(i)}B^2{{\mathbb{P}}\left[{x_j^*(i)}\right]}-(x_j(i))^2 \nonumber\\ &=B^2-(x_j(i))^2 \leq B^2.\end{aligned}$$ Moreover, since $x_j^*(i)\in\{-B,B\}$ and $x_j(i)\in[-1,1]$, it holds $|x_j^*(i)-x_j(i)|\leq B+1$. Then, by the Bernstein inequality (see Definition 4.1 of [@cormode2018marginal]), we have $$\begin{aligned} &{{\mathbb{P}}\left[{|Z_j-X_j|\geq \lambda}\right]} ={{\mathbb{P}}\left[{\bigg|\frac{1}{N}\sum_{i=1}^{N}\{x_j^*(i)-x_j(i)\}\bigg|\geq \lambda}\right]} \nonumber\\ &\leq 2\cdot \exp\bigg( -\frac{N\lambda^2}{\frac{2}{N}\sum_{i=1}^N Var[x_j^*(i)-x_j(i)]+\frac{2}{3}\lambda(B+1)} \bigg) \nonumber\\ &\leq 2\cdot \exp\bigg( -\frac{N\lambda^2}{2B^2+\frac{2}{3}\lambda(B+1)} \bigg).\end{aligned}$$ Based on the union bound, it holds that $$\begin{aligned} {{\mathbb{P}}\left[{\underset{j\in[1,d]}{\max}|Z_j-X_j|\geq \lambda}\right]} &={{\mathbb{P}}\left[{\{|Z_1-X_1|\geq \lambda\} \cup \cdots \cup \{|Z_d-X_d|\geq \lambda\}}\right]} \nonumber\\ &\leq \sum_{j=1}^{d}{{\mathbb{P}}\left[{|Z_j-X_j|\geq \lambda}\right]} \nonumber\\ &\leq 2d\cdot \exp\bigg( -\frac{N\lambda^2}{2B^2+\frac{2}{3}\lambda(B+1)} \bigg) .\nonumber\end{aligned}$$ Then, to ensure that $\underset{j\in[1,d]}{\max}|Z_j-X_j|<\lambda$ holds with at least $1-\beta$ probability, it suffices to enforce $$\begin{aligned} 2d\cdot \exp\bigg( -\frac{N\lambda^2}{2B^2+\frac{2}{3}\lambda(B+1)} \bigg) = \beta.
\label{eqn-less-beta-1}\end{aligned}$$ By solving Eq. (\[eqn-less-beta-1\]), we get $\lambda = O\left( B\cdot \sqrt{\log(d/\beta)}/\sqrt{N} \right)$. We now analyze $B$ in Eq. (\[B-value\]); i.e., $B:= \begin{cases} \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{(d-1)/2}\cdot(e^\epsilon +2^d\cdot \delta-1)},&\text{~if~}d\text{~is odd},\\ \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{d/2}\cdot(e^\epsilon +2^d\cdot \delta -1)},&\text{~if~}d\text{~is even}. \end{cases}$ First, $C_d := 2^{d-1}$ for odd $d$, and $C_d:= 2^{d-1}-\frac{1}{2}\binom{d}{d/2} = 2^{d-1}-o(2^{d-1})$ for large even $d$, where $o(2^{d-1})$ represents a quantity $f(d)$ which satisfies $\frac{f(d)}{2^{d-1}} \to 0$ as $d\to \infty$. Hence, we obtain $2^d+C_d\cdot(e^\epsilon-1) = O\left( 2^{d-1}(e^\epsilon+1) \right) = O\left( 2^{d} \right)$ for large $d$ and small $\epsilon$. We define the relation “$\sim$” such that two positive sequences $f_1(d)$ and $f_2(d)$ satisfy $f_1(d) \sim f_2(d)$ if and only if $ \frac{ f_1(d)}{f_2(d)} \to 1 $ as $d\to \infty$. Then for large odd $d$, we obtain from Stirling’s approximation [@marsaglia1990new] that $(d-1)! \sim \sqrt{2\pi\cdot(d-1)}\cdot \left( \frac{d-1}{e} \right)^{d-1} $ and $(\frac{d-1}{2})! \sim \sqrt{2\pi\cdot \frac{d-1}{2}} \cdot\left( \frac{d-1}{2e} \right)^{\frac{d-1}{2}} $, leading to $\binom{d-1}{(d-1)/2} =\frac{(d-1)!}{[(\frac{d-1}{2})!]^2} \sim \frac{2^{d-1}}{\sqrt{\pi(d-1)/2}} = \Theta\left( \frac{2^{d}}{\sqrt{d}} \right) $. In a similar way, for large even $d$, we obtain $\binom{d-1}{d/2} = \Theta\left( \frac{2^{d}}{\sqrt{d}} \right) $. For small $\epsilon$, we have $e^\epsilon-1 = \epsilon + o(\epsilon)$, where $o(\epsilon)$ represents a quantity $g(\epsilon)$ which satisfies $\frac{g(\epsilon)}{\epsilon} \to 0$ as $\epsilon\to 0$. Combining the above results, we finally derive $B = O\Big( \frac{2^{d}}{ \frac{2^{d}}{\sqrt{d}} \cdot (\epsilon+2^d\cdot \delta) } \Big) =O\left(\frac{\sqrt{d}}{\epsilon+2^d\cdot \delta} \right)$.
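The central-binomial estimate used above is easy to check numerically. The snippet below (our own sanity check, not part of the paper's algorithms) verifies that the ratio of $\binom{d-1}{(d-1)/2}$ to the Stirling approximation $2^{d-1}/\sqrt{\pi(d-1)/2}$ approaches 1 for odd $d$, which yields the $\Theta(2^d/\sqrt{d})$ rate used in the bound on $B$:

```python
import math

# Verify C(d-1, (d-1)/2) ~ 2^(d-1) / sqrt(pi*(d-1)/2) for odd d.
for d in [11, 101, 1001]:
    exact = math.comb(d - 1, (d - 1) // 2)
    stirling = 2 ** (d - 1) / math.sqrt(math.pi * (d - 1) / 2)
    assert abs(exact / stirling - 1) < 0.05  # ratio tends to 1 as d grows
```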
Hence, there exists $\lambda=O\left( \frac{\sqrt{d\log (d/\beta)}}{(\epsilon+2^d\cdot \delta)\sqrt{N}} \right)$ such that $\underset{j\in[1,d]}{\max}|Z_j-X_j|<\lambda$ holds with at least $1-\beta$ probability. [$\blacksquare$]{} In Algorithm \[ldp-algo-our-multi\], we select $T^+(v)$ (resp. $T^-(v)$) to be the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v > 0$ (resp. $x^*\cdot v\leq 0$). It should be noted that we can also select $T^+(v)$ (resp. $T^-(v)$) to be the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v \geq 0$ (resp. $x^*\cdot v < 0$). In this case, when $d$ is odd, the result is the same as Eq. (\[alpha-val-our\]), since $x^*\cdot v\neq 0$ for odd $d$ and hence $T^+(v)$ and $T^-(v)$ are unchanged. But when $d$ is even, we have $$\begin{aligned} \label{eqn-T-even-2} \left | T^+(v) \right |=\sum_{j\geq\frac{d}{2}}\binom{d}{j},~ \left | T^-(v) \right |=\sum_{j\leq\frac{d}{2}-1}\binom{d}{j}. \end{aligned}$$ Thus, we get $$\begin{aligned} \label{eqn-T-even-val-2} \begin{cases} \left | T^+(v) \right |=2^{d-1}+\frac{1}{2}\binom{d}{d/2},\\ \left | T^-(v) \right |=2^{d-1}-\frac{1}{2}\binom{d}{d/2}. \end{cases}\end{aligned}$$ Substituting Eqs. (\[eqn-T-odd-val\]) and (\[eqn-T-even-val-2\]) into Eq. (\[alpha-our\]) gives $$\begin{aligned} \label{alpha-val-our-2} \alpha= \begin{cases} \frac{e^\epsilon+C_d\cdot\delta}{e^\epsilon+1},&\text{~if~}d\text{~is odd,} \\ \frac{e^\epsilon \cdot (2^d-C_d) + \delta \cdot C_d(2^d-C_d)}{e^\epsilon \cdot (2^d-C_d)+C_d},&\text{~if~}d\text{~is even.} \end{cases}\end{aligned}$$ Our Second Solution for Multiple Numeric Attributes under $(\epsilon, \delta)$-LDP ---------------------------------------------------------- Before introducing our second mechanism for multiple numeric attributes, we first present the algorithm that perturbs a single numeric attribute under $(\epsilon, \delta)$-LDP. Based on Algorithm \[ldp-algo-our-multi\] in Section \[sec-numeric-first\], we can easily deduce the mechanism for one-dimensional numeric data.
Algorithm \[ldp-algo-our-one\] presents the pseudo-code of the solution for a one-dimensional numeric attribute under $(\epsilon, \delta)$-LDP. Given a value $x \in [-1,1]$, the algorithm returns a perturbed value $x^*$ that equals either $\frac{e^\epsilon + 1}{e^\epsilon+2\delta-1}$ or $-\frac{e^\epsilon + 1}{e^\epsilon+2\delta-1}$, with the following probabilities: $$\begin{aligned} \label{ldp-prob} \mathbb{P}[x^* \mid x] = \begin{cases} \frac{e^\epsilon + 2\delta -1}{2(e^\epsilon+1)}\cdot x +\frac{1}{2}, &\text{~if~}x^*=\frac{e^\epsilon +1}{e^\epsilon+2\delta-1},\\ -\frac{e^\epsilon + 2\delta -1}{2(e^\epsilon+1)}\cdot x +\frac{1}{2}, &\text{~if~}x^*=-\frac{e^\epsilon +1}{e^\epsilon+2\delta-1}. \end{cases}\end{aligned}$$ \[ldp-algo-our-one\] Sample a Bernoulli variable $u$ such that $\mathbb{P}[u=1]=\frac{e^\epsilon + 2\delta -1}{2(e^\epsilon+1)}\cdot x +\frac{1}{2}$ **return** $x^*$ \[thm-algo-1-ldp\] Algorithm \[ldp-algo-our-one\] satisfies $(\epsilon, \delta)$-local differential privacy. We omit the proof of Theorem \[thm-algo-1-ldp\] since Algorithm \[ldp-algo-our-one\] is the special case of Algorithm \[ldp-algo-our-multi\] with $d=1$. \[thm-algo-one-unbiased\] Algorithm \[ldp-algo-our-one\] is an unbiased estimator of the input value $x$.
Moreover, the worst-case variance of the perturbed value $x^*$ is $$\begin{aligned} Var[x^*]=\left(\frac{e^\epsilon +1}{e^\epsilon+2\delta-1}\right)^2.\end{aligned}$$ **Proof.** Since $x^* \in \{-\frac{e^\epsilon+1}{e^\epsilon+2\delta-1}, \frac{e^\epsilon+1}{e^\epsilon+2\delta-1} \}$, the expectation of $x^*$ is computed as $$\begin{aligned} \mathbb{E}[x^*]&=\frac{e^\epsilon +1}{e^\epsilon+2\delta-1}\cdot \frac{x \cdot(e^\epsilon+2\delta-1)+e^\epsilon+1}{2(e^\epsilon+1)} \nonumber \\ &\quad\quad + \Big(-\frac{e^\epsilon +1}{e^\epsilon+2\delta-1}\Big)\cdot\frac{-x \cdot(e^\epsilon + 2\delta -1)+e^\epsilon+1}{2(e^\epsilon+1)} \nonumber\\ &=\frac{e^\epsilon+1}{e^\epsilon+2\delta-1}\cdot\Big(\frac{2x(e^\epsilon+2\delta-1)}{2(e^\epsilon+1)}\Big) =x.\nonumber\end{aligned}$$ Then, the variance is computed as $$\begin{aligned} \label{ldp-variance} Var[x^*]&=\mathbb{E}[(x^*)^2]-(\mathbb{E}[x^*])^2 \nonumber\\ &=\Big(\frac{e^\epsilon +1}{e^\epsilon+2\delta-1}\Big)^2\cdot\frac{x\cdot(e^\epsilon+2\delta-1)+e^\epsilon+1}{2(e^\epsilon+1)}\nonumber \\ &\quad\quad +\Big(\frac{-(e^\epsilon+1)}{e^\epsilon+2\delta-1}\Big)^2\cdot\frac{-x\cdot(e^\epsilon+2\delta-1)+e^\epsilon+1}{2(e^\epsilon+1)} - x^2 \nonumber\\ &=\Big(\frac{e^\epsilon +1}{e^\epsilon+2\delta-1}\Big)^2-x^2.\end{aligned}$$ Therefore, the worst-case variance of $x^*$ equals $\Big(\frac{e^\epsilon +1}{e^\epsilon+2\delta-1}\Big)^2$, attained at $x=0$. [$\blacksquare$]{} \[ldp-algo-our-one-d\] Initialize $x^*=\left \langle 0,0,\cdots,0 \right \rangle^d$ Let $k=\max\{1,\min\{d,\left \lfloor \frac{\epsilon}{\tau} \right \rfloor\}\}$ Sample $k$ values uniformly without replacement from $\{1,2,\cdots,d\}$ **return** $x^*$ \[err-algo-one\] Let $Z=\frac{1}{N}\sum_{i=1}^N x^*(i)$ and $X=\frac{1}{N}\sum_{i=1}^N x(i)$.
Algorithm \[ldp-algo-our-one\] ensures that with at least $1-\beta$ probability, $$\begin{aligned} |Z-X|=O\left( \frac{\sqrt{\log(1/\beta)}}{(\epsilon+2\delta)\sqrt{N}} \right).\end{aligned}$$ We omit the proof of Theorem \[err-algo-one\] since it is the special case of Theorem \[err-algo-multi\] with $d=1$ in Section \[sec-numeric-first\]. When collecting multiple numeric attributes privately, a straightforward method is to use a one-dimensional numeric data perturbation algorithm (e.g., Algorithm \[ldp-algo-our-one\]) with the privacy parameters of each attribute set to $\epsilon/d$ and $\delta/d$. By the composition theorem [@mcsherry2009privacy; @kasiviswanathan2011can], this method satisfies $(\epsilon, \delta)$-LDP. However, based on Theorem \[err-algo-one\], the noise bound of each attribute will be $O\left( \frac{d\sqrt{\log d}}{(\epsilon+2\delta)\sqrt{N}} \right)$, which is super-linear in $d$. Hence, this solution leads to inferior data utility, especially when $d$ is large. To address this problem, we follow the spirit of [@wang2019collecting] and perturb only $k$ attributes instead of all $d$ attributes, which increases the privacy budget of each attribute from $\epsilon/d$ to $\epsilon/k$ and thus reduces the noise variance in turn. Algorithm \[ldp-algo-our-one-d\] shows the pseudo-code of our extension of Algorithm \[ldp-algo-our-one\] to multi-dimensional numeric data. Given any $d$-dimensional tuple $x\in[-1,1]^d$, Algorithm \[ldp-algo-our-one-d\] returns a perturbed tuple $x^*$ with $k$ non-zero values. Specifically, it uniformly at random selects $k$ attributes from the $d$ attributes and perturbs only those $k$ values, where $k$ is chosen by Lemma \[choose-k\]. Then, for each sampled dimension $j\in[1,k]$, Algorithm \[ldp-algo-our-one-d\] feeds $x_j$, $\epsilon/k$ and $\delta/k$ into Algorithm \[ldp-algo-our-one\] and obtains a noisy value $\Bar{x}_j$. Thus, the final reported value is $x_j^* = \frac{d}{k}\Bar{x}_j$.
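The sampling-and-rescaling procedure just described can be sketched in a few lines of Python. This is an illustrative rendering under our own naming (not the authors' implementation); `a_star` is the constant $2.17$ from Lemma \[choose-k\], and `perturb_1d` implements the one-dimensional mechanism of Eq. (\[ldp-prob\]):

```python
import math
import random

def perturb_1d(x, eps, delta):
    # Algorithm [ldp-algo-our-one]: report +/-B with the probabilities of
    # Eq. (ldp-prob), where B = (e^eps + 1) / (e^eps + 2*delta - 1).
    B = (math.exp(eps) + 1) / (math.exp(eps) + 2 * delta - 1)
    p = (math.exp(eps) + 2 * delta - 1) / (2 * (math.exp(eps) + 1)) * x + 0.5
    return B if random.random() < p else -B

def perturb_multidim(x, eps, delta, a_star=2.17):
    # Algorithm [ldp-algo-our-one-d]: perturb only k uniformly sampled
    # attributes with budget (eps/k, delta/k) each, rescaled by d/k;
    # unsampled dimensions report 0.
    d = len(x)
    k = max(1, min(d, int(eps // a_star)))
    out = [0.0] * d
    for j in random.sample(range(d), k):
        out[j] = (d / k) * perturb_1d(x[j], eps / k, delta / k)
    return out
```

Averaging many users' reports coordinate-wise yields an unbiased mean estimate per dimension, as formalized in Lemma \[algo-our-one-d-unbiased\] below.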
\[choose-k\] The optimal $k$ of Algorithm \[ldp-algo-our-one-d\] is chosen as $$\begin{aligned} \label{k-value} k=\max\{1,\min\{d,\left \lfloor \frac{\epsilon}{2.17} \right \rfloor\}\}.\end{aligned}$$ **Proof.** For each dimension $j\in[1,d]$, we can compute $$\begin{aligned} \mathbb{E}[\Bar{x}_j^2] &=Var[\Bar{x}_j]+(\mathbb{E}[\Bar{x}_j])^2 =\Big(\frac{e^{\frac{\epsilon}{k}}+1}{e^{\frac{\epsilon}{k}}+\frac{2\delta}{k}-1}\Big)^2-x_j^2+x_j^2 =\Big( \frac{e^{\frac{\epsilon}{k}}+1}{e^{\frac{\epsilon}{k}}+\frac{2\delta}{k}-1} \Big)^2. \nonumber\end{aligned}$$ Then, the variance is computed as $$\begin{aligned} \label{var-compute-k} &Var[x_j^*]=\mathbb{E}[(x_j^*)^2]-(\mathbb{E}[x_j^*])^2 =\frac{k}{d}\mathbb{E}[(\frac{d}{k}\Bar{x}_j)^2]-x_j^2 =\frac{d}{k}\mathbb{E}[\Bar{x}_j^2]-x_j^2 \nonumber\\ &=\frac{d}{k}\Big( \frac{e^{\frac{\epsilon}{k}}+1}{e^{\frac{\epsilon}{k}}+\frac{2\delta}{k}-1} \Big)^2 -x_j^2.\end{aligned}$$ Hence, the worst-case variance is $\frac{d}{k}\Big( \frac{e^{\frac{\epsilon}{k}}+1}{e^{\frac{\epsilon}{k}}+\frac{2\delta}{k}-1} \Big)^2$. In order to compute the optimal $k$ that minimizes the worst-case variance, we define $a:=\epsilon/k$ and $b:=2\delta/\epsilon$ so that the worst-case variance equals $\frac{d}{\epsilon} \cdot f(a) $, for $$\begin{aligned} \label{var-reformulated} f(a) : =a\left( \frac{e^a+1}{e^a+ab-1} \right)^2.\end{aligned}$$ Thus, minimizing the worst-case variance is equivalent to computing the minimum value of Eq. (\[var-reformulated\]). We first show that Eq. (\[var-reformulated\]) attains a minimum value, since it is monotonically decreasing first and then monotonically increasing on the interval $(0,+\infty)$. The derivative of Eq. (\[var-reformulated\]) with respect to $a$ is computed as $$\begin{aligned} \label{derivative-a} f'(a)=\frac{(e^a+1)[(e^a+1+2ae^a)(e^a+ab-1)-2a(e^a+1)(e^a+b)]}{(e^a+ab-1)^3}.\end{aligned}$$ Setting $f'(a)=0$ is equivalent to requiring $(e^a+1+2ae^a)(e^a+ab-1)-2a(e^a+1)(e^a+b)=0$.
Simplifying, we get $-a(e^a+1-2ae^a)\cdot b + e^{2a}-4ae^a-1=0$. Defining $g_1(a):=a(e^a+1-2ae^a)$, $g_2(a):=e^{2a}-4ae^a-1$ and $g(a):=\frac{g_2(a)}{g_1(a)}$, we have $-g_1(a)\cdot b + g_2(a)=0$, i.e., $b=g(a)$. One can easily compute that $a=0,0.7388$ are the solutions of $g_1(a)=0$, and $a=0,2.177$ are the solutions of $g_2(a)=0$. Since $a>0$, we discuss the sign of $f'(a)$ on the interval $(0,+\infty)$ in two cases. (*i*) When $a<0.7388$, it holds $g_1(a)>0$ and $g_2(a)<0$, so we have $f'(a)<0$. (*ii*) When $a>0.7388$, we can observe by plotting $g(a)$ that $g(a)=\frac{e^{2a}-4ae^a-1}{a(e^a+1-2ae^a)}$ is monotonically decreasing from $+\infty$ to $-\infty$. Thus, there exists one and only one $a^*>0.7388$ that satisfies $b=\frac{g_2(a^*)}{g_1(a^*)}$. For $0.7388<a<a^*$, it holds $\frac{g_2(a)}{g_1(a)}>b$ and $g_1(a)<0$, so we have $f'(a)<0$. For $a>a^*$, it holds $\frac{g_2(a)}{g_1(a)}<b$ and $g_1(a)<0$, thus we have $f'(a)>0$. Combining the above analyses, we derive that $f'(a)<0$ if $a\in(0,a^*)$ and $f'(a)>0$ if $a\in(a^*,+\infty)$. Therefore, $f(a)$ attains its minimum at $a=a^*$. Based on the above analysis, the minimizer $a^*$ of Eq. (\[var-reformulated\]) can be computed by setting the derivative of Eq. (\[var-reformulated\]) with respect to $a$ to 0. Solving this, we obtain $2.176<a^*<2.177$ when $0<b<10^{-3}$. Note that $b=2\delta/\epsilon$ is an extremely small value in practice (e.g., $b = 2 \times 10^{-6} $ for $\epsilon=1$ and $\delta=10^{-6}$). Hence, we can take 2.17 as an approximate value of $a^*$. Therefore, the variance (i.e., Eq. (\[var-compute-k\])) is smallest when $k=\frac{\epsilon}{a^*}\approx\frac{\epsilon}{2.17}$. Thus, the optimal $k$ is determined by $\epsilon/2.17$.
Specifically, we have (*i*) if $\frac{\epsilon}{2.17}\leq 1$, then $k=1$; (*ii*) if $\frac{\epsilon}{2.17}\geq d$, then $k=d$; (*iii*) if $1<\frac{\epsilon}{2.17}<d$, then $k=\lfloor \frac{\epsilon}{2.17} \rfloor$ (we choose $\lfloor \frac{\epsilon}{2.17} \rfloor$ because it outperforms $\lceil \frac{\epsilon}{2.17} \rceil$ in our experiments). This completes the proof. [$\blacksquare$]{} \[algo-our-one-d-unbiased\] Algorithm \[ldp-algo-our-one-d\] satisfies $(\epsilon, \delta)$-local differential privacy. In addition, for any $d$-dimensional input $x\in[-1,1]^d$, the perturbed output $x^*$ satisfies $\mathbb{E}[x_j^*]=x_j$ for all dimensions $j\in[1,d]$. **Proof.** Algorithm \[ldp-algo-our-one-d\] composes $k$ perturbation algorithms, each satisfying $(\frac{\epsilon}{k}, \frac{\delta}{k})$-LDP; thus, based on the composition theorem [@mcsherry2009privacy; @kasiviswanathan2011can], Algorithm \[ldp-algo-our-one-d\] satisfies $(\epsilon, \delta)$-LDP. As can be seen from Algorithm \[ldp-algo-our-one-d\], each perturbed output $x_j^*$ equals $\frac{d}{k}\Bar{x}_j$ with probability $k/d$ and equals 0 with probability $1-k/d$. Thus, based on Lemma \[thm-algo-one-unbiased\], it holds $\mathbb{E}[x_j^*]=\frac{k}{d}\cdot\mathbb{E}[\frac{d}{k}\Bar{x}_j]=\mathbb{E}[\Bar{x}_j]=x_j.$ [$\blacksquare$]{} \[optimal-bound\] For any $j\in[1,d]$, let $Z_j=\frac{1}{N}\sum_{i=1}^N x_j^*(i)$ and $X_j=\frac{1}{N}\sum_{i=1}^N x_j(i)$. Algorithm \[ldp-algo-our-one-d\] ensures that with at least $1-\beta$ probability, $$\begin{aligned} \underset{j\in[1,d]}{\max}|Z_j-X_j|=O\left(\frac{\sqrt{d\log(d/\beta)}}{(\epsilon+2\delta)\sqrt{N}}\right).\end{aligned}$$ **Proof.** For each dimension $j\in[1,d]$, we get $|x_j^*-x_j|\leq \frac{d}{k}\frac{e^{\epsilon/k}+1}{e^{\epsilon/k}+2\delta/k-1}=O(\frac{k}{\epsilon+2\delta})\cdot \frac{d}{k}=O(\frac{d}{\epsilon+2\delta})$ based on Lemma \[algo-our-one-d-unbiased\]. Besides, from Eq.
(\[var-compute-k\]), we have $Var[x_j^*]=\frac{d}{k}\Big( \frac{e^{\frac{\epsilon}{k}}+1}{e^{\frac{\epsilon}{k}}+\frac{2\delta}{k}-1} \Big)^2 -x_j^2=O\left(\frac{dk}{(\epsilon+2\delta)^2}\right)$. Then using the Bernstein inequality (see Definition 4.1 of [@cormode2018marginal]), we have $$\begin{aligned} {{\mathbb{P}}\left[{|Z_j-X_j|\geq \lambda}\right]} &={{\mathbb{P}}\left[{\bigg|\sum_{i=1}^{N}\{x_j^*(i)-x_j(i)\}\bigg|\geq N\lambda}\right]} \nonumber\\ &\leq 2\cdot \exp\left(\frac{-N\lambda^2}{\frac{2}{N}\sum_{i=1}^{N}Var[x_j^*(i)-x_j(i)]+\frac{2}{3}\lambda\frac{d}{k}\frac{e^{\frac{\epsilon}{k}}+1}{e^{\frac{\epsilon}{k}}+\frac{2\delta}{k}-1}}\right) \nonumber\\ &=2\cdot \exp\left(\frac{-N\lambda^2}{O\left(\frac{dk}{(\epsilon+2\delta)^2}\right)+\lambda O\left(\frac{d}{\epsilon+2\delta}\right)}\right).\end{aligned}$$ Based on the union bound, we have $$\begin{aligned} {{\mathbb{P}}\left[{\underset{j\in[1,d]}{\max}|Z_j-X_j|\geq \lambda}\right]} &={{\mathbb{P}}\left[{\{|Z_1-X_1|\geq \lambda\} \cup \cdots \cup \{|Z_d-X_d|\geq \lambda\}}\right]} \nonumber\\ &\leq \sum_{j=1}^{d}{{\mathbb{P}}\left[{|Z_j-X_j|\geq \lambda}\right]} \nonumber\\ &\leq 2d\cdot \exp\left(\frac{-N\lambda^2}{O\left(\frac{dk}{(\epsilon+2\delta)^2}\right)+\lambda O\left(\frac{d}{\epsilon+2\delta}\right)}\right) . \nonumber\end{aligned}$$ To ensure that $\underset{j\in[1,d]}{\max}|Z_j-X_j|<\lambda$ holds with at least $1-\beta$ probability, it suffices to enforce $$\begin{aligned} 2d\cdot \exp\left(\frac{-N\lambda^2}{O\left(\frac{dk}{(\epsilon+2\delta)^2}\right)+\lambda O\left(\frac{d}{\epsilon+2\delta}\right)}\right) = \beta. \label{eqn-less-beta}\end{aligned}$$ Solving Eq. (\[eqn-less-beta\]), we obtain $\lambda=O\left( \frac{\sqrt{dk\log(d/\beta)}}{(\epsilon+2\delta)\sqrt{N}} \right)$, where $k$ is determined by Lemma \[choose-k\]. Since $k=O(1)$ as $\epsilon\to 0$ (by Lemma \[choose-k\], $k=1$ whenever $\epsilon<2.17$), $\lambda$ can also be written as $O\left( \frac{\sqrt{d\log(d/\beta)}}{(\epsilon+2\delta)\sqrt{N}} \right)$.
[$\blacksquare$]{} Comparison with Related Work ---------------------------- For collecting multi-dimensional numeric data, Duchi *et al.* [@duchi2018minimax] propose to perturb multi-dimensional numeric data under $\epsilon$-LDP, which provides strong privacy guarantees and an optimal asymptotic error bound, while the $(\epsilon, \delta)$-LDP setting remains unsolved. Inspired by Duchi *et al.*’s solution [@duchi2018minimax], we first introduced Algorithm \[ldp-algo-our-multi\], which focuses on achieving $(\epsilon, \delta)$-LDP with high data utility when handling multi-dimensional numeric data. However, Duchi *et al.*’s solution is sophisticated when handling multi-dimensional data. Afterward, Nguy[ê]{}n *et al.* [@nguyen2016collecting] proposed Harmony, which samples only one dimension to perturb; it is simpler and achieves the same optimal asymptotic error bound as [@duchi2018minimax]. Similarly, Wang *et al.* [@wang2019collecting] propose to uniformly select $k$ dimensions from $d$, which also yields the optimal asymptotic error bound. However, both [@nguyen2016collecting] and [@wang2019collecting] only achieve $\epsilon$-LDP and cannot handle the case of $(\epsilon, \delta)$-LDP. In this paper, our proposed Algorithm \[ldp-algo-our-one-d\] focuses on achieving $(\epsilon, \delta)$-LDP while ensuring high data utility. In particular, following the idea of [@wang2019collecting], Algorithm \[ldp-algo-our-one-d\] requires each user to randomly report only $k$ attributes uniformly selected from the $d$ attributes, which in turn reduces the total noise variance. Frequency Estimation for Categorical Attributes under $(\epsilon, \delta)$-LDP {#sec-categorical} ====================================================== This section investigates mechanisms $\mathcal{M}$ that achieve $(\epsilon, \delta)$-local differential privacy for categorical attributes, supporting accurate frequency estimation of each possible value in each categorical attribute’s domain.
So far most existing algorithms [@erlingsson2014rappor; @kairouz2014extremal; @bassily2015local; @wang2017locally; @wangtt2017locally; @wang2018locally] are designed for estimating the frequencies of categorical attributes while ensuring $\epsilon$-LDP. Wang *et al.* [@wangtt2017locally] have introduced a framework for pure LDP which can be used to analyze and optimize different protocols. They also proposed the optimized local hashing protocol to ensure better data utility under LDP. In this section, we first extend their framework to approximate LDP (i.e., $(\epsilon, \delta)$-LDP) and then analyze and optimize different protocols for frequency estimation on categorical attributes. \[local-protocol\] Consider two probabilities $p>q$. A local protocol given by $\mathcal{M}$, such that a user reports the true value with probability $p$ and reports each of the other values with probability $q$, satisfies $(\epsilon, \delta)$-LDP if and only if $p\leq q\cdot e^\epsilon +\delta$. We now consider that each of $N$ users independently executes the mechanism in Definition \[local-protocol\]. In this context, from Theorem 2 of [@wangtt2017locally], the variance of the estimated number of times that a value occurs among the $N$ users’ noisy values will be $$\begin{aligned} \label{var-ldp} \text{Var}=\frac{Nq(1-q)}{(p-q)^2}+\frac{Nf_v(1-p-q)}{p-q},\end{aligned}$$ where $f_v$ is the frequency of the value $v\in[1,k]$. Moreover, the variance in Eq. (\[var-ldp\]) is dominated by the first term when $f_v$ is small. Hence, the variance in Eq. (\[var-ldp\]) is approximated as $$\begin{aligned} \label{var-ldp-approx} \text{Var}^*=\frac{Nq(1-q)}{(p-q)^2}.\end{aligned}$$ In addition, it also holds that Var$^*$=Var when $p+q=1$. Recalling the problem statement in Section \[sec-preliminaries\], for a categorical attribute with domain $\{1,2,\cdots,k\}$, we use $[1,k]$ to denote the domain set $\{1,2,\cdots,k\}$.
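The condition in Definition \[local-protocol\] and the approximate variance $\text{Var}^*$ of Eq. (\[var-ldp-approx\]) are immediate to express in code. The helpers below are our own illustrative sketch (hypothetical names), useful for comparing the protocols of this section numerically:

```python
import math

def satisfies_aldp(p, q, eps, delta):
    # Definition [local-protocol]: with p > q, the protocol satisfies
    # (eps, delta)-LDP iff p <= q * e^eps + delta.
    return p > q and p <= q * math.exp(eps) + delta

def var_star(p, q, N):
    # Approximate variance of a frequency estimate, Eq. (var-ldp-approx).
    return N * q * (1.0 - q) / (p - q) ** 2
```

For instance, fixing $q$ and taking the largest admissible $p=qe^\epsilon+\delta$, `var_star` gives the leading error term for $N$ users; this is the computation underlying the variance formulas derived for each mechanism below.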
Then, based on Definition \[local-protocol\] and the existing protocols, we focus on proposing local algorithms under $(\epsilon, \delta)$-LDP in the following. **General Randomized Response under Approximate LDP (GRR-ALDP)**. General randomized response [@kairouz2014extremal] reports the true value with probability $p$, while reporting each incorrect value with probability $q=\frac{1-p}{k-1}$. Thus, in order to make GRR-ALDP satisfy $p\leq q\cdot e^\epsilon +\delta$, a general randomized local protocol $\mathcal{M}$ is required to output the perturbed value $y$, given any input value $v\in[1,k]$, with the following distribution: $$\begin{aligned} \label{grr-aldp} {{\mathbb{P}}\left[{\mathcal{M}(v)=y}\right]}= \begin{cases} p=\frac{e^\epsilon+(k-1)\delta}{e^\epsilon+k-1},&\text{~if~}y=v, \\ q=\frac{1-\delta}{e^\epsilon+k-1},&\text{~if~}y\neq v. \end{cases}\end{aligned}$$ Then, by plugging $p$ and $q$ of Eq. (\[grr-aldp\]) into Eq. (\[var-ldp-approx\]), the variance of GRR-ALDP is $$\begin{aligned} \text{Var}^*_{\text{GRR-ALDP}}=\frac{N(e^\epsilon+k-2+\delta)(1-\delta)}{(e^\epsilon+k\delta-1)^2}.\end{aligned}$$ **Parallel Randomized Response [@erlingsson2014rappor] under Approximate LDP (PRR-ALDP)** first encodes the value $v \in \{1,2,\ldots,k\}$ into a length-$k$ binary vector $B$ where the $v$-th bit is 1, that is, $B=[0,\cdots,0,1,0,\cdots,0]$. Then, PRR-ALDP perturbs each bit of $B$ with the following probability distribution: $$\begin{aligned} \label{prob-prr-aldp} {{\mathbb{P}}\left[{B^*[i]=1}\right]}= \begin{cases} p,\text{~if~}B[i]=1, \\ q,\text{~if~}B[i]=0, \end{cases}\end{aligned}$$ where $p>q$. Based on Eq.
(\[prob-prr-aldp\]), $(\epsilon, \delta)$-LDP requires, for any inputs $v_1\in \{1,2,\ldots,k\}$ and $v_2\in \{1,2,\ldots,k\}$ and any output $B^*$, that $$\begin{aligned} &{{\mathbb{P}}\left[{B^*|v_1}\right]}\leq e^\epsilon\cdot{{\mathbb{P}}\left[{B^*|v_2}\right]}+\delta \nonumber\\ \Rightarrow\quad &\prod_{i\in[k]}{{\mathbb{P}}\left[{B^*[i]|v_1}\right]} \leq e^\epsilon\cdot\prod_{i\in[k]}{{\mathbb{P}}\left[{B^*[i]|v_2}\right]} +\delta \nonumber\\ \Rightarrow\quad & \begin{cases} {{\mathbb{P}}\left[{B^*[v_1]=0|v_1}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=0|v_1}\right]} \leq e^\epsilon\cdot{{\mathbb{P}}\left[{B^*[v_1]=0|v_2}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=0|v_2}\right]}+\delta, \\ {{\mathbb{P}}\left[{B^*[v_1]=0|v_1}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=1|v_1}\right]} \leq e^\epsilon\cdot{{\mathbb{P}}\left[{B^*[v_1]=0|v_2}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=1|v_2}\right]}+\delta, \\ {{\mathbb{P}}\left[{B^*[v_1]=1|v_1}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=0|v_1}\right]} \leq e^\epsilon\cdot{{\mathbb{P}}\left[{B^*[v_1]=1|v_2}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=0|v_2}\right]}+\delta, \\ {{\mathbb{P}}\left[{B^*[v_1]=1|v_1}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=1|v_1}\right]} \leq e^\epsilon\cdot{{\mathbb{P}}\left[{B^*[v_1]=1|v_2}\right]}\cdot {{\mathbb{P}}\left[{B^*[v_2]=1|v_2}\right]}+\delta\end{cases} \nonumber\\ \Rightarrow\quad &p\cdot(1-q)\leq e^\epsilon\cdot q \cdot(1-p)+\delta~\text{(this last step uses $p>q$)}. \label{prr-aldp}\end{aligned}$$ Therefore, PRR-ALDP satisfies $(\epsilon, \delta)$-LDP if and only if Inequality (\[prr-aldp\]) holds. Letting equality hold in (\[prr-aldp\]), we set $p$ as follows: $$\begin{aligned} \label{prr-aldp-jz} p = \frac{qe^\epsilon+\delta}{1-q+qe^\epsilon}.\end{aligned}$$ Applying Eq. (\[prr-aldp-jz\]) to Eq. (\[var-ldp-approx\]), we obtain $$\begin{aligned} \label{var-prr} \text{Var}^*_{\text{PRR-ALDP}}=\frac{Nq(1-q)(1-q+qe^\epsilon)^2}{[q(1-q)(e^\epsilon-1)+\delta]^2}.\end{aligned}$$ **Symmetric PRR-ALDP (SPRR-ALDP)**.
In RAPPOR [@erlingsson2014rappor], $p$ and $q$ are chosen such that $p+q=1$, leading to a symmetric perturbation of 1 and 0. Based on this observation and Eq. (\[prr-aldp-jz\]), we derive $$\begin{aligned} p=\frac{e^\epsilon-\sqrt{e^\epsilon(1-\delta)+\delta}}{e^\epsilon-1},~ q=\frac{\sqrt{e^\epsilon(1-\delta)+\delta}-1}{e^\epsilon-1}.\end{aligned}$$ Then, the variance is $$\begin{aligned} &\text{Var}^*_{\text{SPRR-ALDP}}=\frac{N(\sqrt{e^\epsilon(1-\delta)+\delta}-1)(e^\epsilon-\sqrt{e^\epsilon(1-\delta)+\delta})}{(e^\epsilon-2\sqrt{e^\epsilon(1-\delta)+\delta}+1)^2}.\end{aligned}$$ **Local Hashing under Approximate LDP (LH-ALDP)** first hashes the input value into a domain $[g]$ such that $g<k$, and then perturbs the hashed value by the PRR-ALDP algorithm. Let $\mathbb{H}$ denote a universal hash function family such that each hash function $H\in\mathbb{H}$ hashes each input value into a value in $[g]$. Based on [@wangtt2017locally], the universal property requires that $$\begin{aligned} \forall v_1,v_2\in[k],v_1\neq v_2:\underset{H\in\mathbb{H}}{\mathbb{P}}[H(v_1)=H(v_2)]\leq \frac{1}{g}.\end{aligned}$$ Given any input value $v\in[k]$, LH-ALDP first outputs a value in $[g]$ by hashing, that is, $x=H(v)$. Then, LH-ALDP perturbs $x$ with the following distribution: $$\begin{aligned} \label{dis-lh-aldp} \forall i\in[g], {{\mathbb{P}}\left[{y=i}\right]}= \begin{cases} p=\frac{e^\epsilon+(g-1)\delta}{e^\epsilon+g-1},&\text{~if~}x=i,\\ q=\frac{1-\delta}{e^\epsilon+g-1},&\text{~if~}x\neq i. \end{cases}\end{aligned}$$ Based on Eq. (\[dis-lh-aldp\]), LH-ALDP satisfies $(\epsilon, \delta)$-LDP since it holds $p\leq qe^\epsilon+\delta$. Then, for aggregation on the server side, it holds that $$\begin{aligned} p^*=p,~q^*=\frac{1}{g}p+\frac{g-1}{g}q=\frac{1}{g}.\end{aligned}$$ Thus, by taking $p=p^*$ and $q=q^*$ into Eq.
(\[var-ldp\]), the variance of LH-ALDP is $$\begin{aligned} \label{var-lh-aldp} \text{Var}^*_{\text{LH-ALDP}}=\frac{N(e^\epsilon+g-1)^2}{(g-1)(e^\epsilon+g\delta-1)^2}.\end{aligned}$$ **Optimized LH-ALDP (OLH-ALDP)**. As can be seen from Eq. (\[var-lh-aldp\]), we can minimize the variance of LH-ALDP by setting the partial derivative of Eq. (\[var-lh-aldp\]) with respect to $g$ to 0. That is, we need to solve the following equation: $$\begin{aligned} & -\delta^2\cdot g^3 - 3(e^\epsilon-1)\delta^2\cdot g^2 + \left[ (e^\epsilon-1)^2+2(e^\epsilon-1)\delta(\delta-2e^\epsilon+1) \right]\cdot g \nonumber\\ & + (e^\epsilon-1)^2(2\delta -e^\epsilon -1) = 0. \label{derivative-equal-0}\end{aligned}$$ Hence, the optimal $g$ is the solution to the cubic Eq. (\[derivative-equal-0\]), that is, $$\begin{aligned} g= \frac{-3e^\epsilon\delta-\sqrt{e^\epsilon-1}\sqrt{(1-\delta)(e^\epsilon+\delta-9e^\epsilon\delta-1)}+e^\epsilon+3\delta-1}{2\delta}. \nonumber\end{aligned}$$ **Optimal Gaussian Mechanism (Opt-GM)**. When applying the Gaussian mechanism to categorical attributes, an input value $v\in[1,k]$ is also first encoded into a length-$k$ binary vector $B$. The vector $B$ has the same properties as described in PRR-ALDP. After encoding $v$ into a vector $B$, Opt-GM outputs the noisy vector $B^*$ such that each $B^*[i]$ is obtained by adding noise drawn from a Gaussian distribution $\mathcal{N}(0,\sigma^2)$ to $B[i] \in \{0, 1\}$, where $\sigma$ is computed by Eq. (\[eqn-DP-OPT\]). The $\ell_2$-sensitivity is $\sqrt{2}$ since the binary vectors of two different inputs $v$ and $v'$ differ in exactly two bits. After collecting the noisy vectors from $N$ users ($B^*(j)$ for user $j \in \{1,2,\ldots,N\}$), the aggregator simply computes $ \sum_{j=1}^N B^*(j)[v]$ as the count for $v$ (if the result is not an integer, rounding can be applied; a negative result can be treated as $0$).
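The Opt-GM pipeline just described (one-hot encoding, per-bit Gaussian noise, and server-side summation) can be sketched minimally in Python. Note that $\sigma$ is treated here as a given parameter, since its calibration via Eq. (\[eqn-DP-OPT\]) is not reproduced in this sketch, and the function names are our own:

```python
import random

def opt_gm_report(v, k, sigma):
    # Client: one-hot encode v in [1, k] and add N(0, sigma^2) noise per bit.
    # sigma is assumed to come from the optimal Gaussian calibration
    # (Eq. (eqn-DP-OPT)) with l2-sensitivity sqrt(2).
    B = [1.0 if i == v - 1 else 0.0 for i in range(k)]
    return [b + random.gauss(0.0, sigma) for b in B]

def opt_gm_count(reports, v):
    # Server: sum the v-th coordinate, round, and clamp negatives to 0.
    c = sum(r[v - 1] for r in reports)
    return max(0, round(c))
```

Ignoring rounding, the count estimate for a value $v$ accumulates $N$ independent Gaussian noise terms, so its variance is $N\sigma^2$, matching $\text{Var}^*_{\text{Opt-GM}}$.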
Although this method seems naive, its performance is not terrible thanks to the large $N$, as our experiments show later. If we ignore the effect of rounding, the variance of Opt-GM is $\text{Var}^*_{\text{Opt-GM}}=N\sigma^2$. ![The noise variances of mechanisms under $(\epsilon,\delta)$-LDP for categorical attributes versus $\epsilon$ when $\delta=10^{-6}$[]{data-label="Var-Cate"}](figures/Var-Cate.eps){height="3.5cm"} We compare the variances of the above mechanisms in Fig. \[Var-Cate\]. For GRR-ALDP, the domain size is set to $k=2, 10, 100$, respectively. It can be seen that the domain size $k$ has a big impact on the variance of GRR-ALDP when the privacy protection level is high (i.e., when $\epsilon$ is small): a larger domain $k$ leads to a bigger variance. This impact largely disappears at low privacy protection levels (e.g., when $\epsilon=10$). In particular, GRR-ALDP has the smallest variance among all mechanisms when $k=2$. Overall, GRR-ALDP is more appropriate when $k$ is small or when $\epsilon$ is extremely large. In addition, we can observe that Opt-GM has the largest variance when compared with SPRR-ALDP and OLH-ALDP. Besides, the variances of SPRR-ALDP and OLH-ALDP are very close to each other when $\epsilon$ is small (i.e., when $\epsilon \leq 1$), and OLH-ALDP outperforms SPRR-ALDP when $\epsilon$ becomes larger (i.e., when $\epsilon > 1$). To sum up, OLH-ALDP is better than SPRR-ALDP and Opt-GM over a wide range of $\epsilon$. Moreover, OLH-ALDP is more applicable than GRR-ALDP in practice since the performance of the latter mechanism depends heavily on the domain size. Experiments {#sec-experiments} =========== In this section, we evaluate the performance of our proposed mechanisms using two public datasets (denoted as BR and MX) which contain census records from Brazil and Mexico, both extracted from the Integrated Public Use Microdata Series[^2].
Both BR and MX have 4M tuples (e.g., users). Specifically, BR contains 16 attributes of which 6 are numeric attributes (e.g., income) and 10 are categorical attributes (e.g., gender); and MX contains 19 attributes of which 5 are numeric attributes and 14 are categorical attributes. Without loss of generality, we normalize the data domain of each numeric attribute into $[-1,1]$. As mentioned before, we demonstrate the accuracy of our proposed mechanisms from two perspectives, that is, (i) the accuracy on mean/frequency estimation and (ii) the accuracy on building machine learning models. We implement all algorithms and experiments using Python 2.7, running on a Windows 10 PC with Intel Xeon E5-1650 3.20 GHz CPU and 16G RAM. -------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------- ![Accuracy for mean estimation on numeric attributes.[]{data-label="real-mean-vs-epsilon"}](figures/Nu_MX_de_e-6.eps "fig:"){height="3cm"} ![Accuracy for mean estimation on numeric attributes.[]{data-label="real-mean-vs-epsilon"}](figures/Nu_BR_de_e-6.eps "fig:"){height="3cm"} ![Accuracy for mean estimation on numeric attributes.[]{data-label="real-mean-vs-epsilon"}](figures/Nu_MX_eps_1.eps "fig:"){height="3cm"} ![Accuracy for mean estimation on numeric attributes.[]{data-label="real-mean-vs-epsilon"}](figures/Nu_BR_eps_1.eps "fig:"){height="3cm"} \[-3mm\] (a) MX-Numeric ($\delta=10^{-6}$) \(b) BR-Numeric ($\delta=10^{-6}$) \(c) MX-Numeric ($\epsilon=1$) \(d) BR-Numeric ($\epsilon=1$) 
-------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------- Results on Mean/Frequency Estimation ------------------------------------ In our first experimental settings, we consider the scenario that each user reports her/his multi-dimensional data tuple based on local differential privacy mechanisms and then the server collects and aggregates all the perturbed data and computes the estimations of the mean value for numeric attributes and the frequency value for categorical attributes. In particular, to show the accuracy of our proposed mechanisms, we evaluate the mean square error (MSE) of the estimated mean values for numeric attributes and frequencies for categorical attributes. The accuracy for mean estimation on numeric attributes varying form different privacy parameters on both datasets MX and BR is shown in Fig. \[real-mean-vs-epsilon\]. On the whole, it can be seen that both our proposed two mechanisms significantly outperform the optimal Gaussian mechanism (i.e., Opt-GM) in all cases under . And the MSE of and is close to each other and much smaller than Opt-GM. This is because and are unbiased estimations on mean values, thus holding much smaller variances than Opt-GM. Besides, Figs. \[real-mean-vs-epsilon\](a) and (b) indicate the larger the privacy budget $\epsilon$ is, the lower MSE will be. In addition, it can be seen again from Figs. \[real-mean-vs-epsilon\](c) and (d) that Opt-GM always has the biggest MSE among three mechanisms. 
As $\delta$ increases, we can observe that the MSE of Opt-GM decreases gradually, while the MSEs of and are almost unchanged. This indicates that the privacy parameter $\delta$ has little impact on the accuracies of and . Furthermore, we also conduct extensive experiments on synthetic datasets to compare the effects of different parameters, i.e., the privacy parameters $\epsilon$ and $\delta$ and the dimension $d$. Specifically, each synthetic dataset contains $400,000$ tuples and is generated from a Gaussian distribution $\mathcal{N}(0,1/16)$ with mean 0 and variance 1/16. We consider four synthetic datasets with different dimensions in our experiments, i.e., $d=1,5,10,15$. Fig. \[syn-mean-vs-epsilon\] presents the accuracy of mean estimation on the synthetic datasets for different privacy budgets $\epsilon$ and different dimensions $d$. It can be seen from all figures that and achieve much higher accuracy than Opt-GM in all cases. Comparing the four figures in Fig. \[syn-mean-vs-epsilon\], the MSEs of all mechanisms increase as the dimension $d$ increases from 1 to 15. Nonetheless, the MSEs of our proposed and grow much more slowly than that of Opt-GM. This demonstrates that our proposed and scale better with the dimension, which makes them more practical in reality. ----------------------------------------------- ----------------------------------------------- ------------------------------------------------ ------------------------------------------------ ![image](figures/Nu_Syn_d1.eps){height="3cm"} ![image](figures/Nu_Syn_d5.eps){height="3cm"} ![image](figures/Nu_Syn_d10.eps){height="3cm"} ![image](figures/Nu_Syn_d15.eps){height="3cm"} \[-3mm\] (a) $d=1$ \(b) $d=5$ \(c) $d=10$ \(d) $d=15$ ----------------------------------------------- ----------------------------------------------- ------------------------------------------------ ------------------------------------------------ In addition, Fig.
\[syn-mean-vs-delta\] shows the accuracy of mean estimation for numeric attributes on the synthetic datasets for different privacy parameters $\delta$ and different dimensions $d$. It can be seen that the MSE of Opt-GM decreases as the privacy parameter $\delta$ increases. However, the MSEs of and are hardly affected by $\delta$. Comparing the four figures in Fig. \[syn-mean-vs-delta\], we find that the MSEs of our proposed and increase much more slowly than that of Opt-GM as the dimension $d$ increases. This demonstrates again that our proposed two mechanisms have better data utility and scale well with the dimension. ----------------------------------------------------- ----------------------------------------------------- ------------------------------------------------------ ------------------------------------------------------ ![image](figures/Nu_Syn_d1_delta.eps){height="3cm"} ![image](figures/Nu_Syn_d5_delta.eps){height="3cm"} ![image](figures/Nu_Syn_d10_delta.eps){height="3cm"} ![image](figures/Nu_Syn_d15_delta.eps){height="3cm"} \[-3mm\] (a) $d=1$ \(b) $d=5$ \(c) $d=10$ \(d) $d=15$ ----------------------------------------------------- ----------------------------------------------------- ------------------------------------------------------ ------------------------------------------------------ As for categorical attributes, Fig. \[real-fre-vs-epsilon\] shows the accuracy of frequency estimation for the different mechanisms under different privacy parameters. On the whole, the MSEs of the four mechanisms decrease as $\epsilon$ increases from 0.1 to 10. Among the four mechanisms ensuring $(\epsilon,\delta)$-LDP for categorical attributes, OLH-ALDP has the lowest MSE (i.e., the best data utility) in all cases, which corresponds to the theoretical analysis. In addition, by comparing Fig. \[real-fre-vs-epsilon\](a) and Fig. \[real-fre-vs-epsilon\](b) (or, Fig. \[real-fre-vs-epsilon\](c) and Fig.
\[real-fre-vs-epsilon\](d)), the MSEs of the four mechanisms are almost unchanged when $\delta$ changes from $10^{-6}$ to $10^{-7}$. That is, the privacy parameter $\epsilon$ primarily determines the accuracy, while the privacy parameter $\delta$ has only a small effect. -------------------------------------------------- -------------------------------------------------- -------------------------------------------------- -------------------------------------------------- ![image](figures/Ca_MX_de_e-6.eps){height="3cm"} ![image](figures/Ca_MX_de_e-7.eps){height="3cm"} ![image](figures/Ca_BR_de_e-6.eps){height="3cm"} ![image](figures/Ca_BR_de_e-7.eps){height="3cm"} \[-3mm\] (a) MX-Categorical ($\delta=10^{-6}$) \(b) MX-Categorical ($\delta=10^{-7}$) \(c) BR-Categorical ($\delta=10^{-6}$) \(d) BR-Categorical ($\delta=10^{-7}$) -------------------------------------------------- -------------------------------------------------- -------------------------------------------------- -------------------------------------------------- In addition, we also run the different algorithms on synthetic datasets to compare the effect of the domain size of categorical attributes. Each synthetic dataset contains 100,000 records and is generated following Zipf’s distribution with exponent parameter $s=1.3$. Fig. \[syn-fre-vs-k\] shows the accuracy of frequency estimation for categorical attributes on the synthetic datasets for different domain sizes $k$. It can be seen that the MSEs of , SPRR-ALDP, and OLH-ALDP remain almost unchanged as the domain size $k$ increases, in all cases. This is reasonable because these three methods do not depend on the domain size in theory. In contrast, the domain size has a great impact on the MSE of GRR-ALDP: the larger the domain size, the larger the MSE.
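The synthetic categorical datasets above (Zipf's distribution with $s=1.3$, 100,000 records) can be generated with a short sketch like the following; sampling with weights $\propto i^{-s}$ over the finite domain $[k]$ is our assumption of the intended truncated Zipf sampling, and the function name is ours.

```python
import random
from collections import Counter

def zipf_dataset(k, s, n, rng):
    # Sample n records from the domain [1, k] with probability proportional to 1 / i**s,
    # i.e., a Zipf distribution truncated to a finite domain.
    weights = [i ** (-s) for i in range(1, k + 1)]
    return rng.choices(range(1, k + 1), weights=weights, k=n)

rng = random.Random(42)
data = zipf_dataset(k=64, s=1.3, n=100_000, rng=rng)
counts = Counter(data)  # heavy head: small values dominate the frequency table
```

Datasets of this shape, with varying $k$, underlie the MSE comparison in Fig. \[syn-fre-vs-k\].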
This shows that GRR-ALDP yields low data utility for categorical attributes with large domain sizes, which limits its applications in reality. Furthermore, we can see from Fig. \[syn-fre-vs-k\](a) and Fig. \[syn-fre-vs-k\](c) that the MSE of GRR-ALDP grows more slowly when the privacy budget is $\epsilon=0.5$, and it increases much more quickly when $\epsilon=5$, as shown in Fig. \[syn-fre-vs-k\](b) and Fig. \[syn-fre-vs-k\](d). This shows that the domain size has a greater impact on the data utility when the privacy budget is relatively large. Thus, it demonstrates again that the data utility of GRR-ALDP suffers from both the privacy parameters and the domain size, leading to low data utility and poor applicability. In contrast, the MSEs of both SPRR-ALDP and OLH-ALDP are much smaller in all cases and are minimally affected by the domain size. ---------------------------------------------------------- ------------------------------------------------------ ------------------------------------------------------- ------------------------------------------------------ ![image](figures/Ca_Syn_eps05_de-6.eps){height="3cm"} ![image](figures/Ca_Syn_eps5_de-6.eps){height="3cm"} ![image](figures/Ca_Syn_eps05_de-7.eps){height="3cm"} ![image](figures/Ca_Syn_eps5_de-7.eps){height="3cm"} \[-3mm\] (a) Vary $k$ ($\epsilon = 0.5, \delta=10^{-6}$) \(b) Vary $k$ ($\epsilon = 5, \delta=10^{-6}$) \(c) Vary $k$ ($\epsilon = 0.5, \delta=10^{-7}$) \(d) Vary $k$ ($\epsilon = 5, \delta=10^{-7}$) ---------------------------------------------------------- ------------------------------------------------------ ------------------------------------------------------- ------------------------------------------------------ Results on Machine Learning Models ---------------------------------- In the second experimental setting, we build a class of machine learning models under $(\epsilon,\delta)$-LDP which are solved by stochastic gradient descent (SGD)
[@wang2019collecting]. We focus on three common learning tasks: linear regression, logistic regression, and support vector machine (SVM) classification. We take the numeric attribute “income” as the label attribute in all three tasks. In our experiments, each categorical attribute $A_j$ with $k$ values is transformed into $k-1$ binary bits with domain $\{-1,1\}$ such that each new binary vector satisfies: (*i*) for the $l$-th value of $A_j$ ($l<k$), the $l$-th bit is set to 1 and the other $k-2$ bits are set to -1; (*ii*) for the $k$-th value of $A_j$, all $k-1$ bits are set to -1. Then, the new datasets of BR and MX contain 42 and 85 dimensions, respectively. For logistic regression and SVM classification, we convert “income” into binary values such that a value larger than the mean is set to 1, and -1 otherwise. Note that one tuple may be used in multiple iterations of the learning algorithms in the non-private case. However, the works [@nguyen2016collecting; @wang2019collecting] have indicated that iterating over one tuple multiple times degrades the accuracy of the learning algorithms in the local private setting. Therefore, in the local private setting of SGD for machine learning, we assume each user (i.e., one tuple) participates in at most one iteration. In each iteration, each user in one batch submits her noisy gradient to the aggregator. Then, the learning parameters are updated using Eq. (\[eqn-sgd\]).
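As a concrete illustration, the categorical-to-binary transformation just described can be sketched as a small helper (the function name is ours):

```python
def encode_categorical(value, k):
    # Map a 1-based value in [1, k] to k-1 bits over {-1, 1}:
    # values 1..k-1 set only their own bit to 1; the k-th value is all -1.
    bits = [-1] * (k - 1)
    if value < k:
        bits[value - 1] = 1
    return bits
```

For example, for an attribute with $k=4$ values, value 2 maps to $[-1, 1, -1]$ and value 4 to $[-1, -1, -1]$, so each attribute contributes $k-1$ dimensions to the transformed dataset.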
--------------------------------------------------- --------------------------------------------------- ![image](figures/SGD_Linear_MX.eps){height="3cm"} ![image](figures/SGD_Linear_BR.eps){height="3cm"} \[-1pt\] (a) MX ($\delta=10^{-26}$) \(b) BR ($\delta=10^{-13}$) --------------------------------------------------- ---------------------------------------------------     ----------------------------------------------------- ----------------------------------------------------- ![image](figures/SGD_Logistic_MX.eps){height="3cm"} ![image](figures/SGD_Logistic_BR.eps){height="3cm"} \[-1pt\] (a) MX ($\delta=10^{-26}$) \(b) BR ($\delta=10^{-13}$) ----------------------------------------------------- ----------------------------------------------------- ------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------ ![Accuracy of SVM classification.[]{data-label="svm-vs-eps"}](figures/SGD_SVM_MX.eps "fig:"){height="3cm"} ![Accuracy of SVM classification.[]{data-label="svm-vs-eps"}](figures/SGD_SVM_BR.eps "fig:"){height="3cm"} \[-1pt\] (a) MX ($\delta=10^{-26}$) \(b) BR ($\delta=10^{-13}$) ------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------ Fig. \[linear-vs-eps\] shows the mean squared error (MSE) of the different mechanisms on the linear regression model with the privacy budget $\epsilon$ varying from 0.1 to 10. Note that we set $\delta=10^{-26}$ for the MX dataset and $\delta=10^{-13}$ for the BR dataset in order to ensure $\alpha <1$. It can be seen that our proposed and outperform Opt-GM in all cases.
This demonstrates that our proposed local differential privacy algorithms ensure much lower errors than the optimal Gaussian mechanism when applied to the linear regression model. Fig. \[logistic-vs-eps\] and Fig. \[svm-vs-eps\] present the misclassification rates of the different mechanisms on the logistic regression and SVM classification models, respectively. We can observe from both figures that, with the privacy budget $\epsilon$ varying from 0.1 to 10, our proposed two mechanisms always have a smaller misclassification rate than Opt-GM. Besides, the misclassification rates of our proposed mechanisms are close to that of the non-private method. In particular, when $\epsilon$ is large (i.e., $\epsilon \geq 5$), the accuracies of and approach the non-private case, which again demonstrates the high data utility of our proposed mechanisms. Conclusion {#sec-conclution} ========== This paper investigates multi-dimensional data collection and analysis with $(\epsilon, \delta)$-local differential privacy under an untrusted data curator. For both numeric and categorical data, we have proposed novel solutions which not only collect each user’s data record in a randomized way to provide strong privacy guarantees, but also compute accurate statistics, ensuring high accuracy on both mean/frequency estimation and machine learning models such as linear regression, logistic regression and SVM classification. Moreover, the theoretical analysis has shown that our solutions achieve a low asymptotic error bound and the minimum variance. Extensive experimental results on real and synthetic data have demonstrated the high accuracy of our proposed solutions on both simple data statistics and complex machine learning models. Acknowledgement {#acknowledgement .unnumbered} =============== This work was supported in part by the Natural Science Foundation of China (NSFC) under grants 61572398, 61772410 and 61802298.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies. References {#references .unnumbered} ========== Appendix ======== Proof of Duchi *et al.*’s Solution for Multi-dimensional Data {#appen-Duchi-solution} ------------------------------------------------------------- Algorithm \[algorithm-duchi\] presents Duchi *et al.*’s mechanism for achieving $\epsilon$-local differential privacy for multi-dimensional data. Here, $B$ is the scaling factor that ensures the expected value of the noisy data equals the original data. Before choosing $B$, we first compute $C_d$ as $$\begin{aligned} C_d= \begin{cases} 2^{d-1},&\text{~if}~d~\text{is odd},\\ 2^{d-1}-\frac{1}{2}\binom{d}{d/2},&\text{~otherwise}. \end{cases}\end{aligned}$$ Then, $B$ is calculated by $$\begin{aligned} B= \begin{cases} \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{(d-1)/2}\cdot(e^\epsilon-1)},&\text{~if}~d~\text{is odd},\\ \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{d/2}\cdot(e^\epsilon-1)},&\text{~otherwise}. \end{cases}\end{aligned}$$ Nguy[ê]{}n *et al.* [@nguyen2016collecting] have shown that Duchi *et al.*’s solution (i.e., Algorithm \[algorithm-duchi\]) does not guarantee local differential privacy when $d$ is even. However, they did not give the specific proofs. In the following, we prove that Algorithm \[algorithm-duchi\] satisfies local differential privacy when $d$ is odd and does not satisfy local differential privacy when $d$ is even. To achieve $\epsilon$-local differential privacy, one needs to guarantee $\mathbb{P}[\mathcal{M}(x) = x^*] \leq e^\epsilon \cdot \mathbb{P}[\mathcal{M}(x') = x^*]$.
Based on Lemma \[max-min-value\], we have $$\begin{aligned} \label{max-min-proof-ldp-1} \frac{\alpha}{|T^+|}\leq \frac{1-\alpha}{|T^-|}\cdot e^\epsilon.\end{aligned}$$ By combining (\[eqn-T-odd\]), (\[eqn-T-even\]) and (\[max-min-proof-ldp-1\]), we obtain $$\begin{aligned} \label{alpha-duchi} \alpha\leq \begin{cases} \frac{e^\epsilon}{e^\epsilon+1},&\text{~if~}d\text{~is odd,} \\ \frac{|T^+|\cdot e^\epsilon}{|T^+|\cdot e^\epsilon+|T^-|},&\text{~if~}d\text{~is even.} \end{cases}\end{aligned}$$ Therefore, when $d$ is odd, Algorithm \[algorithm-duchi\] satisfies $\epsilon$-local differential privacy. But when $d$ is even, Algorithm \[algorithm-duchi\] does not satisfy $\epsilon$-local differential privacy, since the probability $\frac{e^\epsilon}{e^\epsilon+1}$ it uses for the Bernoulli variable $u=1$ is no longer allowed by the bound in Eq. (\[alpha-duchi\]). \[algorithm-duchi\] Generate a random vector $V : = [V_1, V_2, \ldots, V_d] \in \{-1,1\}^d$ by sampling each $V_j$ independently from the following distribution: $$\begin{aligned} \mathbb{P}[V_j=v_j]=\begin{cases} \frac{1}{2}+\frac{1}{2}x_j,~~\text{if}~~v_j=1\\ \frac{1}{2}-\frac{1}{2}x_j,~~\text{if}~~v_j=-1 \end{cases}\nonumber \end{aligned}$$\ [In the case that $V$ is sampled as $v$, let $T^+(v)$ (resp. $T^-(v)$) be the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v > 0$ (resp. $x^*\cdot v\leq 0$)]{} [Sample a Bernoulli variable $u=1$ with probability $\frac{e^\epsilon}{e^\epsilon+1}$]{} Proof of Fixing Duchi *et al.*’s Mechanism to Satisfy LDP when $d$ is Even {#appen-fixing} -------------------------------------------------------------------------- Nguy[ê]{}n *et al.* [@nguyen2016collecting] have proposed one possible solution to fix Algorithm \[algorithm-duchi\] to satisfy LDP when $d$ is even. Their method is to re-define a Bernoulli variable $u$ such that $$\begin{aligned} \label{fixing-1} {{\mathbb{P}}\left[{u=1}\right]}=\frac{e^\epsilon\cdot C_d}{(e^\epsilon-1)C_d+2^d}.\end{aligned}$$ Note that Eq.
(\[fixing-1\]) only fixes Algorithm \[algorithm-duchi\] in the situation that $T^+$ (resp. $T^-$) is the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v > 0$ (resp. $x^*\cdot v\leq 0$), and the proof of Eq. (\[fixing-1\]) was not given. Thus, in the following, we first give the proof of Eq. (\[fixing-1\]), and then propose a solution to fix Algorithm \[algorithm-duchi\] in the situation that $T^+$ (resp. $T^-$) is the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v \geq 0$ (resp. $x^*\cdot v < 0$). Note that Algorithm \[algorithm-duchi\] violates LDP only when $d$ is even. Thus, without loss of generality, $d$ is always even in this subsection and we will no longer specify this. **Proof of Eq. (\[fixing-1\])**. Referring to Appendix \[appen-Duchi-solution\], to achieve $\epsilon$-local differential privacy, the probability of a Bernoulli variable $u=1$ should be $$\begin{aligned} \label{fixing-3} \alpha \leq \frac{|T^+|\cdot e^\epsilon}{|T^+|\cdot e^\epsilon+|T^-|}.\end{aligned}$$ From Eq. (\[eqn-T-even-val\]), we have $\left | T^+ \right |=\left(2^d-\binom{d}{d/2}\right)/2$ and $\left | T^- \right |=\left(2^d+\binom{d}{d/2}\right)/2$. Thus, Eq. (\[fixing-3\]) can be re-written as $$\begin{aligned} \label{fixing-4} \alpha\leq & \frac{\frac{2^d-\binom{d}{d/2}}{2}}{e^{-\epsilon}\cdot \frac{2^d+\binom{d}{d/2}}{2}+ \frac{2^d-\binom{d}{d/2}}{2}} \nonumber\\ &=\frac{e^\epsilon [2^d-\binom{d}{d/2}]}{2^d+\binom{d}{d/2} + e^\epsilon[2^d - \binom{d}{d/2}]} \nonumber\\ &=\frac{e^\epsilon\cdot C_d}{(e^\epsilon-1)C_d+2^d}.\end{aligned}$$ Therefore, we have proved that Algorithm \[algorithm-duchi\] can achieve $\epsilon$-local differential privacy when $d$ is even as long as the probability of the Bernoulli variable $u=1$ is $\frac{e^\epsilon\cdot C_d}{(e^\epsilon-1)C_d+2^d}$. [$\blacksquare$]{} As mentioned before, Eq. (\[fixing-1\]) can fix Algorithm \[algorithm-duchi\] only when $T^+$ (resp. $T^-$) is the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v > 0$ (resp. $x^*\cdot v\leq 0$).
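The bounds above can also be checked numerically. The sketch below recomputes $|T^+|$ and $|T^-|$ for an even $d$, uses an exact rational stand-in for $e^\epsilon$ (an assumption made purely for exact arithmetic), and verifies that the original probability $\frac{e^\epsilon}{e^\epsilon+1}$ exceeds the bound of Eq. (\[fixing-3\]) while the corrected value of Eq. (\[fixing-1\]) exactly attains it.

```python
from fractions import Fraction
from math import comb

def t_sizes(d):
    # |T+| = #{x* : x*.v > 0} and |T-| = #{x* : x*.v <= 0} for a fixed v;
    # for even d the comb(d, d//2) ties fall into T-.
    ties = comb(d, d // 2) if d % 2 == 0 else 0
    return (2**d - ties) // 2, (2**d + ties) // 2

d = 4                      # an even dimension
E = Fraction(5, 2)         # rational stand-in for e^epsilon
tp, tm = t_sizes(d)
bound = Fraction(tp) * E / (Fraction(tp) * E + tm)   # Eq. (fixing-3)
C_d = 2**(d - 1) - comb(d, d // 2) // 2
fixed = E * C_d / ((E - 1) * C_d + 2**d)             # Eq. (fixing-1)
```

Since $|T^+| < |T^-|$ for even $d$, the strict inequality $\frac{e^\epsilon}{e^\epsilon+1} > \text{bound}$ holds for any value of $e^\epsilon > 1$, which is exactly why the original choice of $u$ violates LDP.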
In this paper, we have proposed the solution to fix Algorithm \[algorithm-duchi\] when $T^+$ (resp, $T^-$) is the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v \geq 0$ (resp. $x^*\cdot v < 0$), that is re-defining a Bernoulli variable $u$ such that $$\begin{aligned} \label{fixing-5} {{\mathbb{P}}\left[{u=1}\right]}=\frac{e^\epsilon(2^d-C_d)}{e^\epsilon(2^d-C_d)+C_d}.\end{aligned}$$ **Proof of Eq. (\[fixing-5\])**. When $d$ is even and $T^+$ (resp, $T^-$) is the set of all tuples $x^*\in\{-B,B\}^d$ such that $x^*\cdot v \geq 0$ (resp. $x^*\cdot v < 0$), it holds $$\begin{aligned} \label{fixing-6} \begin{cases} \left | T^+ \right |=\sum_{j\geq d/2}\binom{d}{j},\\ \left | T^- \right |=\sum_{j\leq d/2-1}\binom{d}{j}. \end{cases}\end{aligned}$$ From Eq. (\[fixing-6\]), it holds $\left | T^+ \right | + \left | T^- \right |=2^d$ and $\left | T^+ \right | - \left | T^- \right |=\binom{d}{d/2}$. Then, we get $\left | T^+ \right |=\left(2^d+\binom{d}{d/2}\right)/2$ and $\left | T^- \right |=\left(2^d-\binom{d}{d/2}\right)/2$. Thus, Eq. (\[fixing-3\]) can be re-written as $$\begin{aligned} \label{fixing-7} \alpha\leq & \frac{\frac{2^d+\binom{d}{d/2}}{2}}{e^{-\epsilon}\cdot \frac{2^d-\binom{d}{d/2}}{2}+ \frac{2^d+\binom{d}{d/2}}{2}} \nonumber\\ &=\frac{e^\epsilon[2^d+\binom{d}{d/2}]}{2^d-\binom{d}{d/2} + e^\epsilon[2^d+\binom{d}{d/2}]}\nonumber\\ &=\frac{e^\epsilon(2^d-C_d)}{e^\epsilon(2^d-C_d)+C_d}.\end{aligned}$$ This has completed the proof that Algorithm \[algorithm-duchi\] can achieve when $d$ is even as long as the probability of a Bernoulli variable $u=1$ is $\frac{e^\epsilon(2^d-C_d)}{e^\epsilon(2^d-C_d)+C_d}$. 
[$\blacksquare$]{} Proof of Unbiased Estimation {#appen-proof-unbiased} ---------------------------- Based on Algorithm \[ldp-algo-our-multi\], the expectation of $\mathcal{M}(x)$ is computed as $$\begin{aligned} & \mathbb{E}[\mathcal{M}(x)] =\sum_{x^*\in \{-B, B\}^d} \left\{ x^* \mathbb{P}[\mathcal{M}(x)=x^*] \right\}.\end{aligned}$$ For $k \in \{1,2,\ldots, d\}$, the $k$-th dimension of $\mathbb{E}[\mathcal{M}(x)] $ is $$\begin{aligned} \sum_{ _{x^*_k = B}^{x^*\in \{-B, B\}^d:}} \left\{B \cdot \mathbb{P}[\mathcal{M}(x)=x^*] \right\}+ \sum_{ _{x^*_k = - B}^{x^*\in \{-B, B\}^d:}} \left\{(-B) \cdot\mathbb{P}[\mathcal{M}(x)=x^*] \right\} .\end{aligned}$$ To consider $k=1$, the first dimension of $\mathbb{E}[\mathcal{M}(x)] $ is $$\begin{aligned} \label{eqn-109} & \sum_{ _{x^*_1 = B}^{x^*\in \{-B, B\}^d:}} \left\{B \cdot \mathbb{P}[\mathcal{M}(x)=x^*] \right\} + \sum_{ _{x^*_1 = - B}^{x^*\in \{-B, B\}^d:}} \left\{(-B) \cdot\mathbb{P}[\mathcal{M}(x)=x^*] \right\} \nonumber\\ & = B \Bigg\{ \Bigg[ \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \mathbb{P}[\mathcal{M}(x)=[B, x^*_{2:d}]] \Bigg] - \Bigg[ \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \mathbb{P}[\mathcal{M}(x)=[-B, x^*_{2:d}]] \Bigg] \Bigg\}.\end{aligned}$$ From Eq. 
(\[eqn-109\]), defining $J(x_{2:d}, v_{2:d})=\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right)$, it will have $$\begin{aligned} \label{eqn-110} & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \mathbb{P}[\mathcal{M}(x)=[B, x^*_{2:d}]] = \nonumber\\ & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \Bigg\{ \Bigg[ \frac{\alpha}{|T^+|} \times \sum_{ _{x^*\cdot v>0,x^*_{1}=B}^{v \in \{-1,1\}^d:}} \left( \frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right)J(x_{2:d}, v_{2:d}) \Bigg] \nonumber\\ & \quad+\Bigg[ \frac{1-\alpha}{|T^-|} \times \sum_{ _{x^*\cdot v \leq 0,x^*_{1}=B}^{v \in \{-1,1\}^d:}} \left( \frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right) J(x_{2:d}, v_{2:d}) \Bigg] \Bigg\},\end{aligned}$$ where $$\begin{aligned} & \sum_{ _{x^*\cdot v>0,x^*_{1}=B}^{v \in \{-1,1\}^d:}} \left( \frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right)\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \sum_{ _{x^*\cdot v>0,x^*_{1}=B,v_{1}=1}^{v \in \{-1,1\}^d:}} \left( \frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right)\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & \quad + \sum_{ _{x^*\cdot v>0,x^*_{1}=B,v_{1}=-1}^{v \in \{-1,1\}^d:}} \left( \frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right)\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \left( \frac{1}{2}+\frac{1}{2} x_1 \right)\sum_{ _{x^*_{2:d}\cdot v_{2:d}>-B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} \prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d}>B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} \prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right),\end{aligned}$$ and $$\begin{aligned} & \sum_{ _{x^*\cdot v \leq 0,x^*_{1}=B}^{v \in \{-1,1\}^d:}} \left( \frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right)\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \sum_{ _{x^*\cdot v \leq 0,x^*_{1}=B,v_{1}=1}^{v \in \{-1,1\}^d:}} \left( 
\frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right)\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & \quad + \sum_{ _{x^*\cdot v \leq 0,x^*_{1}=B,v_{1}=-1}^{v \in \{-1,1\}^d:}} \left( \frac{1}{2}+\frac{1}{2} x_1\cdot v_1 \right)\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} \prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) .\end{aligned}$$ Therefore, Eq. (\[eqn-110\]) can be deduced as $$\begin{aligned} & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \mathbb{P}[\mathcal{M}(x)=[B, x^*_{2:d}]]= \nonumber\\ & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \Bigg\{ \Bigg[ \frac{\alpha}{|T^+|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}}J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d})\Bigg)\Bigg] \nonumber\\ & \quad+\Bigg[ \frac{1-\alpha}{|T^-|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}}J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d})\Bigg)\Bigg] \Bigg\} .\end{aligned}$$ Similarly, it also holds that $$\begin{aligned} & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \mathbb{P}[\mathcal{M}(x)=[-B, x^*_{2:d}]] = \nonumber\\ & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \Bigg\{ \Bigg[ \frac{\alpha}{|T^+|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > 
B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d})\Bigg)\Bigg] \nonumber\\ & \quad+\Bigg[ \frac{1-\alpha}{|T^-|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \Bigg\}.\end{aligned}$$ The first dimension of $\mathbb{E}[\mathcal{M}(x)] $ dividing $B$ is $$\begin{aligned} & \Bigg[ \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \mathbb{P}[\mathcal{M}(x)=[B, x^*_{2:d}]] \Bigg] \nonumber\\ & \quad - \Bigg[ \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \mathbb{P}[\mathcal{M}(x)=[-B, x^*_{2:d}]] \Bigg] = \nonumber\\ & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \Bigg\{ \Bigg[ \frac{\alpha}{|T^+|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \nonumber\\ & \quad+\Bigg[ \frac{1-\alpha}{|T^-|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \Bigg\}- \nonumber\\ & \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \Bigg\{ \Bigg[ \frac{\alpha}{|T^+|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} 
x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} > -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \nonumber\\ & \quad+\Bigg[ \frac{1-\alpha}{|T^-|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{x^*_{2:d}\cdot v_{2:d} \leq -B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \Bigg\} \nonumber\\ & = \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \Bigg\{ \Bigg[ \frac{\alpha}{|T^+|} \times \Bigg( \left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{-B<x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad - \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{-B<x^*_{2:d}\cdot v_{2:d}\leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \nonumber\\ & \quad+\Bigg[ \frac{1-\alpha}{|T^-|} \times \Bigg( -\left( \frac{1}{2}+\frac{1}{2} x_1 \right) \sum_{ _{-B < x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \nonumber\\ & \quad \quad + \left( \frac{1}{2}-\frac{1}{2} x_1 \right) \sum_{ _{-B < x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \Bigg\} \nonumber\\ & = \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \Bigg\{ \Bigg[ \frac{\alpha}{|T^+|} \times x_1 \sum_{ _{-B<x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg] \nonumber\\ & \quad+\Bigg[ \frac{1-\alpha}{|T^-|} \times \Bigg( - x_1 \sum_{ _{-B < x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) \Bigg)\Bigg] \Bigg\} \nonumber\\ & = x_1 \bigg( \frac{\alpha}{|T^+|} - \frac{1-\alpha}{|T^-|} \bigg) \times \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \sum_{ _{-B<x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}} J(x_{2:d}, v_{2:d}) .\end{aligned}$$ To ensure that the first dimension of $\mathbb{E}[\mathcal{M}(x)]$ equals $x_1$, we
set $B$ as $$\begin{aligned} & \left[\bigg( \frac{\alpha}{|T^+|} - \frac{1-\alpha}{|T^-|} \bigg) \times H \right]^{-1},\end{aligned}$$ where $$\begin{aligned} & H=\sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \sum_{ _{-B<x^*_{2:d}\cdot v_{2:d} \leq B}^{v_{2:d} \in \{-1,1\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \begin{cases} \mathlarger{ \sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \sum_{ _{x^*_{2:d}\cdot v_{2:d} = 0}^{v_{2:d} \in \{-1,1\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right)} , \\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{ if $d$ is odd}, \\ \mathlarger{\sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \sum_{ _{x^*_{2:d}\cdot v_{2:d} = B}^{v_{2:d} \in \{-1,1\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right), } \\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{ if $d$ is even}. \end{cases} \label{sum-117}\end{aligned}$$ If $d$ is odd, then we have $$\begin{aligned} &\sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} \sum_{ _{x^*_{2:d}\cdot v_{2:d} = 0}^{v_{2:d} \in \{-1,1\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right)\nonumber\\ & = \sum_{v_{2:d} \in \{-1,1\}^{d-1}} \sum_{ _{x^*_{2:d}\cdot v_{2:d} = 0}^{x^*_{2:d}\in \{-B, B\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \sum_{v_{2:d} \in \{-1,1\}^{d-1}} \left[ \binom{d-1}{\frac{d-1}{2}} \prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \right] \nonumber\\ & = \binom{d-1}{\frac{d-1}{2}} \sum_{v_{2:d} \in \{-1,1\}^{d-1}} \prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \binom{d-1}{\frac{d-1}{2}} \prod_{j=2}^d \left[ \left( \frac{1}{2}+\frac{1}{2} x_j \right) + \left( \frac{1}{2} - \frac{1}{2} x_j \right) \right] \nonumber\\ & = \binom{d-1}{\frac{d-1}{2}} \prod_{j=2}^d 1 = \binom{d-1}{\frac{d-1}{2}}.\end{aligned}$$ If $d$ is even, then we have $$\begin{aligned} &\sum_{x^*_{2:d}\in \{-B, B\}^{d-1}} 
\sum_{ _{x^*_{2:d}\cdot v_{2:d} = B}^{v_{2:d} \in \{-1,1\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right)\nonumber\\ & = \sum_{v_{2:d} \in \{-1,1\}^{d-1}} \sum_{ _{x^*_{2:d}\cdot v_{2:d} = B}^{x^*_{2:d}\in \{-B, B\}^{d-1}:}}\prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \sum_{v_{2:d} \in \{-1,1\}^{d-1}} \left[ \binom{d-1}{\frac{d}{2}} \prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \right] \nonumber\\ & = \binom{d-1}{\frac{d}{2}} \sum_{v_{2:d} \in \{-1,1\}^{d-1}} \prod_{j=2}^d \left( \frac{1}{2}+\frac{1}{2} x_j \cdot v_j \right) \nonumber\\ & = \binom{d-1}{\frac{d}{2}} \prod_{j=2}^d \left[ \left( \frac{1}{2}+\frac{1}{2} x_j \right) + \left( \frac{1}{2} - \frac{1}{2} x_j \right) \right] \nonumber\\ & = \binom{d-1}{\frac{d}{2}} \prod_{j=2}^d 1 = \binom{d-1}{\frac{d}{2}}.\end{aligned}$$ Therefore, the above result in Eq. (\[sum-117\]) equals $$\begin{aligned} H= \begin{cases} \binom{d-1}{(d-1)/2} , &\text{ if $d$ is odd}, \\ \binom{d-1}{d/2}, &\text{ if $d$ is even}. \end{cases} \end{aligned}$$ Thus, the $B$ can be calculated as $$\begin{aligned} B= \begin{cases} \left[ \left( \frac{\alpha}{|T^+|} - \frac{1-\alpha}{|T^-|} \right) \binom{d-1}{\frac{d-1}{2}} \right]^{-1} , &\text{ if $d$ is odd}, \\ \left[\left( \frac{\alpha}{|T^+|} - \frac{1-\alpha}{|T^-|} \right) \binom{d-1}{\frac{d}{2}}\right]^{-1}, &\text{ if $d$ is even}. \end{cases}\end{aligned}$$ Since $$\begin{aligned} \begin{cases} \begin{cases} |T^+|=2^{d-1},\\ |T^-|=2^{d-1}, \end{cases} &\text{ if $d$ is odd,}\\ \begin{cases} |T^+|=2^{d-1}-\frac{1}{2}\binom{d}{d/2},\\ |T^-|=2^{d-1}+\frac{1}{2}\binom{d}{d/2}, \end{cases} &\text{ if $d$ is even,} \end{cases}\end{aligned}$$ based on Eq. 
(\[alpha-our\]), we can obtain $$\begin{aligned} B= \begin{cases} \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{(d-1)/2}\cdot(e^\epsilon +2^d\cdot \delta-1)},&\text{~if}~d~\text{is odd},\\ \frac{2^d+C_d\cdot(e^\epsilon-1)}{\binom{d-1}{d/2}\cdot(e^\epsilon +2^d\cdot \delta -1)},&\text{~if}~d~\text{is even}. \end{cases}\end{aligned}$$ [^1]: For simplicity, in this paper, we use $i\in[1,N]$ and $j\in[1,d]$ to denote $i\in\{1,2,\cdots,N\}$ and $j\in\{1,2,\cdots,d\}$, respectively. [^2]: <https://www.ipums.org>
--- abstract: 'In this paper, we present fixed-parameter tractable algorithms for special cases of the shortest lattice vector, integer linear programming, and simplex width computation problems, when the matrices included in the problems’ formulations are near square. The parameter is the maximum absolute value of the rank minors in the corresponding matrices. Additionally, we present fixed-parameter tractable algorithms with respect to the same parameter for the problems, when the matrices have no singular rank submatrices.' author: - 'D. V. Gribanov' - 'D. S. Malyshev' - 'P. M. Pardalos' - 'S. I. Veselov' date: 'Received: date / Accepted: date' title: 'FPT-algorithms for some problems related to integer programming' --- Introduction ============ Let $A \in \mathbb{Z}^{d \times n}$ be an integer matrix. We denote by $A_{ij}$ the $ij$-th element of the matrix, by $A_{i*}$ its $i$-th row, and by $A_{*j}$ its $j$-th column. The set of integers starting from $i$ and ending in $j$ is denoted by $i:j=\left\{i, i+1, \ldots, j\right\}$. Additionally, for subsets $I \subseteq \{1,\dots,d\}$ and $J \subseteq \{1,\dots,n\}$, $A_{I\,J}$ denotes the submatrix of $A$ generated by all rows with indices in $I$ and all columns with indices in $J$. When $I$ or $J$ is replaced by $*$, all rows or columns (respectively) are selected. By $0_{m\times n}$ we mean the $m\times n$ matrix whose entries are all zero; $0$ also denotes the zero vector of the corresponding dimension. For example, $A_{I*}$ is the submatrix consisting of all rows in $I$ and all columns. Let $||A||_{\max}$ denote the maximum absolute value of any element in $A$. Let $\Delta_k(A)$ denote the greatest absolute value of the determinants of all $k \times k$ submatrices of $A$. Additionally, let $\Delta(A) = \Delta_{\operatorname{rank}(A)}(A)$. For a vector $b\in\mathbb{Z}^{d}$, by $P(A,b)$ we denote the polyhedron $\{ x \in \mathbb{R}^{n} : A x \leq b\}$.
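For intuition, $\Delta_k(A)$ can be computed directly from its definition by enumerating all $k \times k$ submatrices; the following Python sketch (ours, not part of the paper; exponential in $k$ and meant only for small matrices) uses exact integer cofactor expansion:

```python
from itertools import combinations

def det_int(M):
    # Exact determinant of a small integer matrix via cofactor expansion
    # along the first row (no floating point, so no rounding issues).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_int(minor)
    return total

def delta_k(A, k):
    # Delta_k(A): maximum |det| over all k x k submatrices of A.
    d, n = len(A), len(A[0])
    best = 0
    for rows in combinations(range(d), k):
        for cols in combinations(range(n), k):
            sub = [[A[i][j] for j in cols] for i in rows]
            best = max(best, abs(det_int(sub)))
    return best
```

For instance, `delta_k([[1, 0, 2], [0, 1, 3]], 2)` returns the largest absolute $2 \times 2$ minor, here $3$, and `delta_k(A, 1)` is simply $||A||_{\max}$.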
The set of all vertices of a polyhedron $P$ is denoted by $\operatorname{vert}(P)$. For a matrix $B \in \mathbb{R}^{d \times n}$, $\operatorname{cone.hull}(B) = \{B t : t \in \mathbb{R}_+^{n} \}$ is the *cone spanned by columns of* $B$, $\operatorname{conv.hull}(B) = \{B t : t \in \mathbb{R}_+^{n},\, \sum_{i=1}^{n} t_i = 1 \}$ is the *convex hull spanned by columns of* $B$, $\operatorname{par}(B) = \{x \in \mathbb{R}^d : x = B t,\, t \in [0,1)^n \}$ is the *parallelepiped spanned by columns of* $B$, and $\Lambda(B) = \{B t : t \in \mathbb{Z}^{n} \}$ is the *lattice spanned by columns of* $B$. We refer to [@CAS71; @GRUB87; @SIEG89] for mathematical introductions to lattices. The *width of a convex body* $P$ is defined as $$\operatorname{width}(P)=\min\limits_{c \in \mathbb{Z}^n\setminus\{0\}} (\max\limits_{x \in P} c^\top x - \min\limits_{x \in P} c^\top x).$$ A vector $c$ minimizing the difference $\max\limits_{x \in P} c^\top x - \min\limits_{x \in P} c^\top x$ on $\mathbb{Z}^n\setminus\{0\}$ is called the *flat direction of* $P$. Following [@SCHR98], we define the sizes of an integer number $x$, a rational number $r = \frac{p}{q}$, a rational vector $v \in \mathbb{Q}^n$, and a rational matrix $A \in \mathbb{Q}^{d \times n}$ in the following way: $$\begin{aligned} & \operatorname{size}(x) = 1 + \lceil \log_2 (x+1) \rceil,\\ & \operatorname{size}(r) = 1 + \lceil \log_2 (p+1) \rceil + \lceil \log_2 (q+1) \rceil,\\ & \operatorname{size}(v) = n + \sum_{i=1}^n \operatorname{size}(v_i),\\ & \operatorname{size}(A) = dn + \sum_{i=1}^d \sum_{j=1}^n \operatorname{size}(A_{i\,j}).\end{aligned}$$ An algorithm parameterized by a parameter $k$ is called *fixed-parameter tractable* (FPT-*algorithm*) if its complexity can be estimated by a function from the class $f(k)\, n^{O(1)}$, where $n$ is the input size and $f(k)$ is a computable function that depends on $k$ only. 
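The size functions above can be evaluated with exact integer arithmetic; a minimal sketch (ours, not from the paper), using the identity $\lceil \log_2 (x+1) \rceil = $ `x.bit_length()` for integers $x \geq 0$:

```python
def size_int(x):
    # size(x) = 1 + ceil(log2(x + 1)); bit_length() computes the ceiling exactly.
    return 1 + abs(x).bit_length()

def size_rat(p, q):
    # size(p/q) = 1 + ceil(log2(p + 1)) + ceil(log2(q + 1))
    return 1 + abs(p).bit_length() + abs(q).bit_length()

def size_vec(v):
    # size(v) = n + sum of the sizes of the components
    return len(v) + sum(size_int(x) for x in v)

def size_mat(A):
    # size(A) = d*n + sum of the sizes of the entries
    d, n = len(A), len(A[0])
    return d * n + sum(size_int(a) for row in A for a in row)
```

Using `bit_length()` instead of `math.log2` avoids floating-point rounding exactly at powers of two.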
A computational problem parameterized by a parameter $k$ is called *fixed-parameter tractable* (FPT-*problem*) if it can be solved by an FPT-algorithm. For more information about parameterized complexity theory, see [@PARAM15; @PARAM99]. [**The shortest lattice vector problem**]{} The *Shortest Lattice Vector Problem* (the SLVP) consists in finding $x \in \mathbb{Z}^n \setminus \{0\}$ minimizing $||H x||$, where $H \in \mathbb{Q}^{d \times n}$ is given as input. The SLVP is known to be NP-hard with respect to randomized reductions, cf. [@AJTAI96]. The first polynomial-time approximation algorithm for the SLVP was proposed by A. Lenstra, H. Lenstra Jr., and L. Lovász in [@LLL82]. Shortly afterwards, U. Fincke and M. Pohst [@FP83; @FP85] and R. Kannan [@KANN83; @KANN87] described the first exact SLVP solvers. Kannan’s solver has a computational complexity of $2^{O(n\,\log n)} \operatorname{poly}(\operatorname{size}(H))$, where $\operatorname{poly}(\cdot)$ means some polynomial in its argument. The first SLVP solvers that achieve the complexity $2^{O(n)} \operatorname{poly}(\operatorname{size}(H))$ were proposed by M. Ajtai, R. Kumar, and D. Sivakumar [@AJKSK01; @AJKSK02], and by D. Micciancio and P. Voulgaris [@MICCVOUL10]. The previously discussed SLVP solvers are designed for the Euclidean norm. Recent results about SLVP solvers for more general norms are presented in [@BLNAEW09; @DAD11; @EIS11]. The paper of G. Hanrot, X. Pujol, and D. Stehlé [@SVPSUR11] is a good survey of SLVP solvers. Recently, a novel polynomial-time approximation SLVP solver was proposed by J. Cheon and C. Lee in [@CHLEE15]. The algorithm is parameterized by the lattice determinant; its time complexity and approximation factor are the best to date for lattices with a sufficiently small determinant. In our work, we consider only integer lattices whose generating matrices are near square.
The first aim of this paper is to present an exact FPT-algorithm for the SLVP parameterized by the lattice determinant (see Section 3). Additionally, we develop an FPT-algorithm for lattices whose generating matrices have no singular rank submatrices. The proposed algorithms work for the $l_p$ norm for any finite $p \geq 1$ and also for the $l_\infty$ norm. [**The integer linear programming problem**]{} The *Integer Linear Programming Problem* (the ILPP) can be formulated as $\min\{ c^\top x : x \in P(H,b) \cap \mathbb{Z}^n\}$ for integer vectors $c,b$ and an integer matrix $H$. There are several polynomial-time algorithms for solving linear programs. We mention Khachiyan’s algorithm [@KHA80], Karmarkar’s algorithm [@KAR84], and Nesterov’s algorithm [@NN94; @PAR91]. Unfortunately, it is well known that the ILPP is NP-hard in the general case. Therefore, it would be interesting to reveal polynomially solvable cases of the ILPP. An example of this type is the ILPP with a fixed number of variables, for which a polynomial-time algorithm was given by H. Lenstra in [@LEN83]. Other examples can be obtained when we add restrictions on the structure of the constraints matrices. A square integer matrix is called *unimodular* if its determinant equals $+1$ or $-1$. An integer matrix is called *totally unimodular* if all its minors are $+1$, $-1$, or $0$. It is well known that all optimal solutions of any linear program with a totally unimodular constraints matrix are integer. Hence, for any linear program and the corresponding integer linear program with a totally unimodular constraints matrix, the sets of their optimal solutions coincide. Therefore, any polynomial-time linear optimization algorithm (like the ones in [@KAR84; @KHA80; @NN94; @PAR91]) is also an efficient algorithm for the ILPP. The next natural step is to consider the *totally bimodular* case, i.e. the ILPP with constraints matrices whose rank minors all have absolute values in the set $\{0, 1, 2\}$.
The first paper to discover fundamental properties of the bimodular ILPP is that of S. I. Veselov and A. Y. Chirkov [@VESCH09]. Very recently, using the results of [@VESCH09], the strong polynomial-time solvability of the bimodular ILPP was proved by S. Artmann, R. Weismantel, and R. Zenklusen in [@AW17]. A matrix will be called *totally $\Delta$-modular* if all its rank minors are at most $\Delta$ in absolute value. More generally, it would be interesting to investigate the computational complexity of these problems with bounded-minors constraints matrices. The maximum absolute value of the rank minors of an integer matrix can be interpreted as a measure of proximity to the class of totally unimodular matrices. Let the symbol ILPP$_{\Delta}$ denote the ILPP whose constraints matrix has each rank minor of absolute value at most $\Delta$. In [@SHEV96], a conjecture is presented that, for each fixed natural number $\Delta$, the ILPP$_{\Delta}$ can be solved in polynomial time. There are variants of this conjecture, where the augmented matrices $\dbinom{c^\top}{A}$ and $(A \; b)$ are considered [@AZ11; @SHEV96]. Unfortunately, not much is known about the computational complexity of the ILPP$_{\Delta}$. For example, the complexity status of the ILPP$_{3}$ is unknown. A step towards determining its complexity was made by Artmann et al. in [@AE16]. Namely, it has been shown that if the constraints matrix additionally has no singular rank submatrices, then the ILPP$_{\Delta}$ can be solved in polynomial time. Some results on the polynomial-time solvability of the boolean ILPP$_{\Delta}$ were obtained in [@AZ11; @BOCK14; @GRIBM17]. F. Eisenbrand and S. Vempala [@EIS16] presented a randomized simplex-type linear programming algorithm, whose expected running time is strongly polynomial if all minors of the constraints matrix are bounded by a fixed constant.
In [@GRIB13; @GRIBV16], it has been shown that any lattice-free polyhedron of the ILPP$_{\Delta}$ has a relatively small width, i.e., the width is bounded by a function that is linear in the dimension and exponential in $\Delta$. Interestingly, due to [@GRIBV16], the width of any empty lattice simplex can be estimated by $\Delta$ in this case. In [@GRIBC16], it has been shown that the width of any simplex induced by a system whose minors have absolute values bounded by a fixed constant can be computed by a polynomial-time algorithm. As was mentioned in [@AW17], due to E. Tardos’ results [@TAR86], linear programs with constraints matrices all of whose minors are bounded by a fixed constant can be solved in strongly polynomial time. N. Bonifas et al. [@BONY14] showed that any polyhedron defined by a totally $\Delta$-modular matrix has a diameter bounded by a polynomial in $\Delta$ and the number of variables. The second aim of our paper is to improve the results of [@AE16]. Namely, in Section 4, we will present an FPT-algorithm for the ILPP$_{\Delta}$ when the constraints matrix is close to a square matrix, i.e. it has a fixed number of additional rows. This gives us an FPT-algorithm for the case when the problem’s constraints matrix has no singular rank submatrices. Indeed, such matrices can have only one additional row if the dimension is sufficiently large, due to [@AE16]. In this paper, we present an algorithm with a better complexity bound. Additionally, we improve some inequalities established in [@AE16].\ [**Computing the simplex lattice width**]{} A. Sebö showed [@SEB99] that the problem of computing the width of rational simplices is NP-hard. A. Y. Chirkov and D. V. Gribanov [@GRIBC16] showed that the problem can be solved by a polynomial-time algorithm in the case when the simplex is defined by a bounded-minors constraints matrix. The final aim of this paper is to present an FPT-algorithm for the simplex width computation problem (see Section 5).
Some auxiliary results ====================== Let $H$ be a $d\times n$ matrix of rank $n$ that has already been reduced to the Hermite normal form (the HNF) [@SCHR98; @STORH96; @ZHEN05]. Let us assume, without loss of generality, that the matrix $H_B = H_{1:n\,*}$ is non-singular, and let $H_N$ be the $m \times n$ matrix generated by the remaining rows of $H$. In other words, $H = \dbinom{H_B}{H_N}$ and $d = n + m$. Using additional permutations of rows and columns, we can transform $H$ so that the matrix $H_B$ has the following form: $$\label{HNF} H_B = \begin{pmatrix} 1 & 0 & \dots & 0 & 0 & 0 & \dots & 0\\ 0 & 1 & \dots & 0 & 0 & 0 & \dots & 0\\ \hdotsfor{8} \\ 0 & 0 & \dots & 1 & 0 & 0 & \dots & 0\\ H_{s+1\,1} & H_{s+1\,2} & \dots & H_{s+1\,s} & H_{s+1\,s+1} & 0 & \dots & 0\\ \hdotsfor{8} \\ H_{n\,1} & H_{n\,2} & \hdotsfor{5} & H_{n\,n}\\ \end{pmatrix},$$ where $s$ is the number of 1’s on the diagonal. Hence, $H_{i\,i} \geq 2$, for $i \in (s+1) : n$. Let, additionally, $k = n - s$ be the number of diagonal elements that are not equal to $1$, $\Delta = \Delta(H)$, and $\delta = |\det(H_B)|$. The following properties are known for the HNF: - $0 \leq H_{i\,j} < H_{i\,i}$, for any $i \in 1:n$ and $j \in 1:(i-1)$, - $\Delta \geq \delta = \prod_{i=s+1}^n H_{i\,i}$, and, hence, $k \leq \log_2 \Delta$, - since $H_{i\,i} \geq 2$, for $i \in (s+1) : n$, we have $$\sum\limits_{i=s+1}^n H_{i\,i} \leq \frac{\delta}{2^{k-1}} + 2(k-1) \leq \delta.$$ In [@AE16], it was shown that $||H_N||_{\max} \leq a_q$, where $q = \lceil \log_2 \Delta \rceil$, and the sequence $\{a_i\}$ is defined, for $i \in 0:q$, as follows: $$a_0 = \Delta,\quad a_i = \Delta + \sum_{j=0}^{i-1} a_j \Delta^{\log_2 \Delta} (\log_2 \Delta)^{(\log_2 \Delta /2)}.$$ It is easy to see that $a_q = \Delta (\Delta^{\log_2 \Delta} (\log_2 \Delta)^{(\log_2 \Delta /2)}+1)^{\lceil \log_2 \Delta \rceil}$. We will show that the estimate on $||H_N||_{\max}$ can be significantly improved.
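The listed HNF properties are easy to verify mechanically for a given lower-triangular $H_B$; a minimal Python sketch (ours, not from the paper) that checks the off-diagonal bounds and returns $\delta$ and $k$:

```python
def hnf_stats(H_B):
    """Check the stated HNF properties of a lower-triangular H_B and
    return (delta, k), where delta = det(H_B) and k = #{i : H_ii != 1}."""
    n = len(H_B)
    delta, k = 1, 0
    for i in range(n):
        assert H_B[i][i] >= 1, "diagonal entries must be positive"
        for j in range(i):
            # property 1: 0 <= H_ij < H_ii
            assert 0 <= H_B[i][j] < H_B[i][i]
        for j in range(i + 1, n):
            assert H_B[i][j] == 0, "H_B must be lower triangular"
        delta *= H_B[i][i]
        if H_B[i][i] >= 2:
            k += 1
    # property 2: delta is the product of the diagonal, hence 2^k <= delta,
    # i.e. k <= log2(delta) <= log2(Delta)
    assert 2 ** k <= delta
    return delta, k
```

For example, `hnf_stats([[1, 0, 0], [0, 1, 0], [2, 3, 4]])` returns `(4, 1)`.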
\[HNFElem\] $$||H_N||_{\max} \leq \frac{\Delta}{\delta} ( \frac{\delta}{2^{k-1}} + k -1) \leq \Delta.$$ Hence, $||H||_{\max} \leq \Delta$. Let $h = H_{i\,*}$, for some $i \in (n+1) : d$, and let $h = t^\top H_B$, for some $t \in \mathbb{R}^n$. Let $H(j)$ be the matrix obtained from $H_B$ by replacing its $j$-th row with the row $h$. For any $j \in 1 : n$, we have $\det(H(j)) = t_j \det(H_B)$; hence, $|t_j| \leq \frac{\Delta}{\delta}$. Using the third property of the HNF, we have $$|H_{i\,j}| = |h_j| \leq \sum_{l=1}^n |t_l H_{l\,j}| < \frac{\Delta}{\delta} (1 + \sum_{ l = s+1}^n H_{l\,l} - k) \leq \frac{\Delta}{\delta} ( \frac{\delta}{2^{k-1}} + k -1).$$ We also need the following technical lemma: \[SubRankDet\] Let $H$ be an $(n+1) \times n$ integer matrix of rank $n$ that has already been reduced to the HNF and has the form (\[HNF\]). Then $\Delta_{n-1}(H) \leq \frac{\Delta^2}{2} (1 + \log_2 \Delta)$. Let the matrix $A$ be obtained from $H$ by deleting any two rows and any column. It is easy to see that $A$ is a lower triangular matrix with at most one additional diagonal. We can expand the determinant of $A$ by the first row, using the Laplace theorem. Then, $|\det(A)| \leq 2^k |d_1 d_2 \dots d_{k-1} c|$, where $k$ is the number of diagonal elements of $H_B$ that are not equal to $1$, $\{d_1,d_2,\dots,d_{k}\}$ is the corresponding sequence of diagonal elements, and $c = d_k$ or $c$ is some element of the last row of $H$. Since $|d_k| \geq 2$, we have $|d_1 d_2 \dots d_{k-1}| \leq \delta/2$. Lemma \[HNFElem\] provides us with an estimate on $|c|$. Finally, we have $$|\det(A)| \leq 2^{k-1} \Delta ( \frac{\delta}{2^{k-1}} + k -1) \leq \frac{\delta \Delta}{2} (1 + \log_2 \delta).$$ Let the matrix $H$ additionally have no singular $n \times n$ submatrices. One result of [@AE16] states that if $n \geq f(\Delta)$, then the matrix $H$ has at most $n+1$ rows, where $f(\Delta)$ is a function that depends on $\Delta$ only.
The paper [@AE16] contains a super-polynomial estimate on the value of $f(\Delta)$. Here, we will show the existence of a polynomial estimate. \[NRowsHNF\] If $n > \Delta (2 \Delta +1)^2 + \log_2 \Delta$, then $H$ has at most $n+1$ rows. Our proof of the theorem has the same structure and ideas as in [@AE16]. We employ Lemma \[HNFElem\] with a slight modification. Let the matrix $H$ be defined as illustrated in (\[HNF\]). Recall that $H$ has no singular $n \times n$ submatrices. For the purpose of deriving a contradiction, assume that $n > \Delta (2 \Delta +1)^2 + \log_2 \Delta$ and $H$ has exactly $n+2$ rows. Let again, as in [@AE16], $\bar H$ be the submatrix of $H$ without the rows indexed by $i$ and $j$, where $i,j \leq s$ and $i > j$. Observe that $$|\det \bar H| = |\det \underbrace{\begin{pmatrix} H_{s+1\,i} & H_{s+1\,j}& H_{s+1\,s+1} & & \\ \vdots &\vdots & & \ddots & \\ H_{n\,i}& H_{n\,j}& \hdotsfor{2} & H_{n\,n} \\ H_{n+1\,i}& H_{n+1\,j} & \hdotsfor{2} & H_{n+1\,n} \\ H_{n+ 2\,i}& H_{n+2\,j} & \hdotsfor{2} & H_{n+2\,n} \\ \end{pmatrix}}_{:={\bar H}^{ij}}|.$$ The matrix ${\bar H}^{ij}$ is a non-singular $(k+2)\times(k+2)$-matrix. This implies that the first two columns of ${\bar H}^{ij}$ must be different, for any $i$ and $j$. By Lemma \[HNFElem\] and the structure of the HNF, there are at most $\Delta \cdot (2 \Delta +1)^2$ possibilities to choose the first column of ${\bar H}^{ij}$. Consequently, since $n > \Delta (2 \Delta +1)^2 + \log_2 \Delta$, we have $s > \Delta (2 \Delta +1)^2$, and there must exist two indices $i \not= j$ such that $\det {\bar H}^{ij} = 0$. This is a contradiction. An FPT-algorithm for the shortest lattice vector problem ======================================================= Let $H \in \mathbb{Z}^{d \times n}$.
The SLVP related to the $l_p$ norm can be formulated as follows: $$\label{ISVP} \min\limits_{x \in \Lambda(H) \setminus \{0\} } ||x||_p,$$ or equivalently $$\begin{aligned} &||x||_p \to \min\\ &\begin{cases} x = H t \\ t \in \mathbb{Z}^n \setminus \{0\}. \end{cases}\end{aligned}$$ Since there is a polynomial-time algorithm to compute the HNF, we can assume that $H$ has already been reduced to the form (\[HNF\]). \[SimpleSVP\] If $n > \Delta (2 \Delta + 1)^m + \log_{2} \Delta$, then there exists a polynomial-time algorithm to solve the problem (\[ISVP\]) with a bit-complexity of $O(n \log n \cdot \log \Delta (m + \log \Delta))$. Since $n = s + k$ and $k \leq \log_2 \Delta$, we have $s > \Delta (2 \Delta + 1)^m$. Consider the matrix $\bar H = H_{*\,1:s}$ that consists of the first $s$ columns of the matrix $H$. By the form (\[HNF\]), each column of $\bar H$ coincides with a unit vector in its first $s$ coordinates, and, by Lemma \[HNFElem\], there are strictly fewer than $\Delta \cdot (2 \Delta + 1)^m$ possibilities for its remaining $k + m$ coordinates. So, if $s > \Delta (2 \Delta + 1)^m$, then two columns of $\bar H$, say the $j_1$-th and the $j_2$-th, coincide in these coordinates. Hence, the lattice $\Lambda(H)$ contains the vector $v = H (e_{j_1} - e_{j_2})$, where $e_j$ denotes the $j$-th standard unit vector, such that $||v||_p = \sqrt[p]{2}$ (and $||v||_\infty = 1$). We can find such a pair of columns, using any sorting algorithm with $O(n \log n)$ lexicographical comparisons, where the bit-complexity of one lexicographical comparison of two vectors is $O(\log \Delta (m + \log \Delta))$. Finally, it is easy to see that the lattice $\Lambda(H)$ contains a vector of the $l_p$ norm $1$ (for $p \not= \infty$) if and only if some column of $\bar H$ is zero in its last $k + m$ coordinates. In the case when $m = 0$ and $H$ is a square non-singular matrix, we have the following trivial corollary: If $n \geq \Delta +\log_2{\Delta}$, then there exists a polynomial-time algorithm to solve the problem (\[ISVP\]) with a bit-complexity of $O(n\log{n}\cdot \log^2{\Delta})$. Let $x^*$ be an optimal vector of the problem (\[ISVP\]).
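The column search behind Theorem \[SimpleSVP\] can be sketched in a few lines; the Python below (ours, not from the paper) hashes the last $d - s$ entries of each of the first $s$ columns instead of sorting them, which does not change the idea. It returns $t \in \mathbb{Z}^n \setminus \{0\}$ such that $H t$ has $l_\infty$ norm $1$:

```python
def find_short_vector(H, s):
    """Given H (d x n, in the HNF-like form of the paper) whose first s columns
    contain either a duplicate "tail" or an all-zero tail below row s, return a
    coefficient vector t with H t = e_j (norm 1) or H t = e_{j1} - e_{j2}."""
    d, n = len(H), len(H[0])
    seen = {}  # tail of column j (rows s..d-1) -> column index j
    for j in range(s):
        tail = tuple(H[i][j] for i in range(s, d))
        if all(v == 0 for v in tail):
            t = [0] * n
            t[j] = 1                 # H t = e_j, a lattice vector of norm 1
            return t
        if tail in seen:
            t = [0] * n
            t[seen[tail]] = 1
            t[j] = -1                # H t = e_{j1} - e_{j2}, two entries of +-1
            return t
        seen[tail] = j
    return None                      # no short vector found this way
```

For instance, with `H = [[1, 0, 0], [0, 1, 0], [1, 1, 2], [0, 0, 1]]` and `s = 2`, the first two columns share the tail `(1, 0)`, and the routine returns `t = [1, -1, 0]`.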
The classical Minkowski theorem in the geometry of numbers states that $$||x^*||_p \leq 2 \left(\frac{\det \Lambda(H)}{\operatorname{Vol}(B_p)}\right)^{1/n},$$ where $B_p$ is the unit ball of the $l_p$ norm. Using the inequalities $\det \Lambda(H) = \sqrt{\det H^\top H} \leq \Delta \sqrt{\dbinom{d}{n}} \leq \Delta \left(\cfrac{e d}{n}\right)^{n/2}$, we can conclude that $||x^*||_p \leq 2 \sqrt{\cfrac{e d}{n}} \sqrt[n]{\cfrac{\Delta}{\operatorname{Vol}(B_p)}}$. On the other hand, by Lemma \[HNFElem\], the last column of $H$ has $l_p$ norm at most $\Delta \sqrt[p]{m+1}$. Let $$\label{MConst} M = \min \Bigl\{\Delta \sqrt[p]{m+1},\, 2 \sqrt{\frac{e d}{n}} \sqrt[n]{\frac{\Delta}{\operatorname{Vol}(B_p)}}\Bigr\}$$ be the minimum of these two estimates. There is an algorithm with a complexity of $$O( (\log \Delta + m) \cdot n^{m+1} \cdot \Delta^{m+1} \cdot M^{m+1} \cdot \operatorname{mult}(\log \Delta + \log n + \log M) )$$ to solve the problem (\[ISVP\]). Since $M \leq \Delta \sqrt[p]{m+1}$ (cf. (\[MConst\])), the problem parameterized by $\Delta$ is included in the FPT-complexity class, for any fixed $m$. After splitting the variables $x$ into two groups $x_B$ and $x_N$, corresponding to $H_B$ and $H_N$, the problem becomes: $$\begin{aligned} & ||x||_p^p \to \min\\ &\begin{cases} x_B - H_B t = 0\\ x_N - H_N t = 0\\ x_B \in \mathbb{Z}^n,\, x_N \in \mathbb{Z}^m\\ t \in \mathbb{Z}^n \setminus \{0\}. \end{cases}\end{aligned}$$ Using the formula $t = H_B^{-1} x_B$, we can eliminate the variables $t$ from the restriction $x_N - H_N t = 0$. The restriction can additionally be multiplied by $\delta$ to become integer, where $H_B^* = \delta H_B^{-1}$ is the adjoint matrix for $H_B$. $$\begin{aligned} & ||x||_p^p \to \min\\ &\begin{cases} x_B - H_B t = 0\\ \delta x_N - H_N H_B^{*} x_B = 0\\ x_B \in \mathbb{Z}^n,\, x_N \in \mathbb{Z}^m\\ t \in \mathbb{Z}^n \setminus \{0\}.
\end{cases}\end{aligned}$$ Finally, we transform the matrix $H_B$ into the Smith normal form (the SNF) [@SCHR98; @STORS96; @ZHEN05], such that $H_B = P^{-1} S Q^{-1}$, where $P^{-1}$, $Q^{-1}$ are unimodular matrices and $S$ is the SNF of $H_B$. After applying the transformation $t \to Q t$, the initial problem becomes equivalent to the following problem: $$\begin{aligned} & ||x||_p^p \to \min\\ &\begin{cases} G x_B \equiv 0 \,(\text{mod}\, S)\\ R x_B = \delta x_N\\ x_B \in \mathbb{Z}^n \setminus \{0\},\, x_N \in \mathbb{Z}^m\\ ||x||_\infty \leq M, \end{cases}\end{aligned}$$ where $G = P\text{ mod }S$ and $R = H_N H_B^{*}$. The inequality $||x||_{\infty} \leq M$ is an additional tool to localize an optimal integer solution. We also have that $||R||_{\max} = ||H_N H_B^{*}||_{\max} \leq \Delta$. Actually, the considered problem is Gomory’s classical group minimization problem [@GOM65] (cf. [@HU70]) with additional linear constraints. As in [@GOM65], it can be solved using the dynamic programming approach. To this end, let us define the subproblems $Prob(l,\gamma,\eta)$: $$\begin{aligned} & ||x||_p^p \to \min\\ &\begin{cases} G_{*\,1:l} x \equiv \gamma \,(\text{mod}\, S)\\ R_{*\,1:l} x = \eta\\ x \in \mathbb{Z}^l \setminus \{0\},\\ \end{cases}\end{aligned}$$ where $l \in 1:n$, $\gamma \in \mathbb{Z}^n\text{ mod }S$, $\eta \in \mathbb{Z}^m$, and $||\eta||_{\infty} \leq n M \Delta$. Let $\sigma(l,\gamma,\eta)$ be the optimal value of the objective function of $Prob(l,\gamma,\eta)$. When the problem $Prob(l,\gamma,\eta)$ is infeasible, we put $\sigma(l,\gamma,\eta) = +\infty$. In the beginning, we put $\sigma(l,\gamma,\eta)=+\infty$, for all values of $l$ and all $(\gamma, \eta) \not= (0, 0)$, and we put $\sigma(l, 0, 0) = 0$.
Trivially, the optimum of the problem above is $$\min\limits_{\eta : ||\eta||_{\infty} \leq M} \{\sigma(n,0,\delta \eta) + ||\eta||^p_p\}.$$ The following formula gives the relation between $\sigma(l,\cdot,\cdot)$ and $\sigma(l-1,\cdot,\cdot)$: $$\sigma(l,\gamma,\eta) = \min \{f(z) : |z| \leq M\},$$ where $$f(z) = \begin{cases} \sigma(l-1,\gamma,\eta),\text{ for } z = 0\\ |z|^p + [z R_{*\,l} \not= \eta] \cdot \sigma(l-1,\gamma - z G_{*\,l},\eta - z R_{*\,l}), \end{cases}$$ where the symbol $[z R_{*\,l} \not= \eta]$ equals $1$ if and only if the condition $z R_{*\,l} \not= \eta$ is true. The value of $\sigma(1,\gamma,\eta)$ can be computed using the following formula: $$\sigma(1,\gamma,\eta) = \min \{|z|^p : z G_{*\,1} \equiv \gamma \,(\text{mod } S),\, z R_{*\,1} = \eta,\, 0 < |z| \leq M \}.$$ Both the computational complexity of computing $\sigma(1,\gamma,\eta)$ and the complexity of the reduction of $\sigma(l,\gamma,\eta)$ to $\sigma(l-1,\cdot,\cdot)$, for all $\gamma$ and $\eta$, can be roughly estimated as $$O( (\log \Delta + m) \cdot \Delta M \cdot (n M \Delta)^m \cdot \operatorname{mult}(\log \Delta + \log n + \log M) ).$$ The final complexity result can be obtained by multiplying the last formula by $n$. Let us consider the special case when all $n \times n$ submatrices of $H$ are non-singular. In this case, by Theorem \[NRowsHNF\], for $n > \Delta(2 \Delta + 1)^2 + \log_2 \Delta$, the matrix $H$ can have at most $n+1$ rows ($m \leq 1$), and we have the following corollary. Let $H$ be the matrix defined as illustrated in (\[HNF\]). Let also $H$ have no singular $n \times n$ submatrices. If $n > \Delta (2 \Delta + 1)^2 + \log_2 \Delta$, then there is an algorithm with a complexity of $O(n \log n \cdot \log^2 \Delta)$ that solves the problem (\[ISVP\]). We have $n > \Delta (2 \Delta + 1)^2 + \log_2 \Delta > \Delta (2 \Delta +1)^m + \log_2 \Delta$. The last inequality meets the conditions of Theorem \[SimpleSVP\], and the corollary follows.
Due to the separability of the objective function, it is easy to see that the same approach is applicable to the Closest Lattice Vector problem (cf. [@SVPSUR11]), which can be formulated as follows: $$\min\limits_{x \in \Lambda(H) } ||x-r||_p,$$ where $r \in \mathbb{Q}^n$. The resulting algorithm has the same complexity in $n$ and $\Delta$, and it is polynomial-time in $\operatorname{size}(H)$ and $\operatorname{size}(r)$. The integer linear programming problem ====================================== Let $H \in \mathbb{Z}^{d \times n}$, $c \in \mathbb{Z}^n$, $b \in \mathbb{Z}^d$, and $\operatorname{rank}(H) = n$. Let us consider the ILPP: $$\label{IPP} \max\{c^\top x : x \in P(H,b) \cap \mathbb{Z}^n \}.$$ Since there is a polynomial-time algorithm to compute the HNF, we can assume that $H$ has already been reduced to the form (\[HNF\]). \[IPPT\] The problem (\[IPP\]) can be solved by an algorithm with a complexity of $$O( (\log \Delta + m) \cdot n^{2(m+1)} \cdot \Delta^{2(m+1)} \cdot \operatorname{mult}(\operatorname{size}(c) + \log \Delta) ).$$ Let $v$ be an optimal solution of the linear relaxation of the problem (\[IPP\]). We can suppose without loss of generality that $H_B v = b_{1:n}$. As in [@AE16], after introducing the slack variables $y \in \mathbb{Z}^n_+$, the problem becomes: $$\begin{aligned} & c^\top x \to \max\\ &\begin{cases} H_B x + y = b_{1:n}\\ H_N x \leq b_{(n+1) : d}\\ x \in \mathbb{Z}^n,\, y \in \mathbb{Z}^n_+. \end{cases}\\\end{aligned}$$ Due to the classical result of W. Cook, A. Gerards, A. Schrijver, and E.
Tardos [@COGST86; @SCHR98], we have that $$\label{TardoshT} ||y||_\infty \leq n \Delta.$$ Now, using the formula $x = H_B^{-1} (b_{1:n} - y)$, we can eliminate the $x$ variables from the last constraint and from the objective function: $$\begin{aligned} & c^\top H_B^{-1} b_{1:n} - c^\top H_B^{-1} y \to \min\\ &\begin{cases} H_B x + y = b_{1:n}\\ -H_N H_B^{*} y \leq \delta b_{(n+1) : m} - H_N H_B^{*} b_{1:n}\\ x \in \mathbb{Z}^n,\, y \in \mathbb{Z}^n_+, \end{cases}\end{aligned}$$ where the inequality constraint was additionally multiplied by $\delta$ to become integral, and where $H_B^* = \delta H_B^{-1}$ is the adjoint (adjugate) matrix of $H_B$. Finally, we transform the matrix $H_B$ into the SNF, such that $H_B = P^{-1} S Q^{-1}$, where $P^{-1}$, $Q^{-1}$ are unimodular matrices and $S$ is the SNF of $H_B$. After making the transformation $x \to Q x$, the initial problem becomes equivalent to the following problem: $$\begin{aligned} & w^\top x \to \min \label{GroupMin}\\ &\begin{cases} G x \equiv g \,(\text{mod}\, S)\\ R x \leq r \\ x \in \mathbb{Z}_+^n,\, ||x||_{\infty} \leq n \Delta, \end{cases}\notag\end{aligned}$$ where $w^\top = - c^\top H_B^{-1}$, $G = P\text{ mod }S$, $g = P b_{1:n}\text{ mod }S$, $R = -H_N H_B^{*}$, and $r = \delta b_{(n+1) : m} - H_N H_B^{*} b_{1:n}$. The inequality $||x||_{\infty} \leq n \Delta$ is an additional tool to localize an optimal integer solution; it follows from inequality . Additionally, we have that $||R||_{\max} = ||H_N H_B^{*}||_{\max} \leq \Delta$. In fact, the problem is Gomory’s classical group minimization problem [@GOM65] (cf. [@HU70]) with additional linear constraints. As in [@GOM65], it can be solved using the dynamic programming approach. 
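As a concrete illustration of this dynamic programming approach in its simplest scalar form — one congruence modulo $S$ and no side constraints $R x \leq r$ — the following sketch may be kept in mind. The function names, the bound $B$ (standing in for the localization $||x||_{\infty} \leq n \Delta$), and the brute-force check are ours, not from the text.

```python
import itertools
import math

def gomory_dp(w, G, g, S, B):
    """min w·x s.t. sum(G_i x_i) ≡ g (mod S), x integer, 0 <= x_i <= B.
    B is an assumed a-priori bound, standing in for ||x||_inf <= n*Delta."""
    best = {0: 0}                        # residue reached -> cheapest cost so far
    for Gi, wi in zip(G, w):
        nxt = {}
        for r, cost in best.items():
            for z in range(B + 1):
                r2 = (r + z * Gi) % S
                c2 = cost + z * wi
                if c2 < nxt.get(r2, math.inf):
                    nxt[r2] = c2
        best = nxt
    return best.get(g % S, math.inf)

def gomory_brute(w, G, g, S, B):
    # exhaustive reference implementation for small instances
    return min((sum(wi * zi for wi, zi in zip(w, x))
                for x in itertools.product(range(B + 1), repeat=len(G))
                if sum(Gi * zi for Gi, zi in zip(G, x)) % S == g % S),
               default=math.inf)
```

The DP is exact because the objective is separable and the coordinates interact only through the residue class modulo $S$.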
To this end, let us define the subproblems $Prob(l,\gamma,\eta)$: $$\begin{aligned} & w_{1:l}^\top x \to \min\\ &\begin{cases} G_{*\,1:l} x \equiv \gamma \,(\text{mod}\, S)\\ R_{*\,1:l} x \leq \eta \\ x \in \mathbb{Z}_+^l, \end{cases}\end{aligned}$$ where $l \in 1:n$, $\gamma \in \Lambda(G)\text{ mod }S$, $\eta \in \mathbb{Z}^m$, and $||\eta||_{\infty} \leq n^2 \Delta^2$. Let $\sigma(l,\gamma,\eta)$ be the objective function optimal value of $Prob(l,\gamma,\eta)$. When the problem $Prob(l,\gamma,\eta)$ is infeasible, we put $\sigma(l,\gamma,\eta) = +\infty$. Initially, we set $\sigma(l,\gamma,\eta)=+\infty$, for all values of $l$, $\gamma \not= 0$, $\eta \not= 0$. Trivially, the optimum of is $$\sigma(n,g,\min\{r,\,n^2 \Delta^2 \vec 1\}).$$ The following formula gives the relation between $\sigma(l,\cdot,\cdot)$ and $\sigma(l-1,\cdot,\cdot)$: $$\sigma(l,\gamma,\eta) = \min \{\sigma(l-1,\gamma - z G_{*\,l},\eta-z R_{*\,l}) +z w_l : |z| \leq n \Delta \}.$$ The value of $\sigma(1,\gamma,\eta)$ can be computed using the following formula: $$\sigma(1,\gamma,\eta) = \min \{z w_1 : z G_{*\,1} \equiv \gamma \,(\text{mod } S),\, z R_{*\,1} \leq \eta,\, |z| \leq n \Delta \}.$$ Both the computational complexity of computing $\sigma(1,\gamma,\eta)$ and the reduction complexity of $\sigma(l,\gamma,\eta)$ to $\sigma(l-1,\cdot,\cdot)$, for all $\gamma$ and $\eta$, can be roughly estimated as: $$O( (\log \Delta + m) \cdot n \Delta^2 \cdot (n^2 \Delta^2)^m \cdot \operatorname{mult}(\log \Delta + \log n + \log ||w||_\infty) ).$$ By Lemma \[SubRankDet\], $||w||_\infty \leq ||c||_1 \delta \log \delta$ and $\log ||w||_\infty = O(\log \Delta + \operatorname{size}(c))$. Finally, the result can be obtained by multiplying the last formula by $n$. Let us consider the special case when all $n \times n$ submatrices of $H$ are non-singular. 
In this case, by Lemma \[NRowsHNF\], for $n > \Delta(2 \Delta + 1)^2 + \log_2 \Delta$, the matrix $H$ can have at most $n+1$ rows ($m \leq 1$), and we have the following corollary. If all $n \times n$ submatrices of $H$ are non-singular and $n > \Delta(2 \Delta + 1)^2 + \log_2 \Delta$, then the problem can be solved by an algorithm with a complexity of $$O( \log \Delta \cdot n^4 \cdot \Delta^4 \cdot \operatorname{mult}(\operatorname{size}(c) + \log \Delta) ).$$ Simplex width computation ========================= Let $H \in \mathbb{Z}^{(n+1)\times n}$, $b \in \mathbb{Z}^{n+1}$, $\operatorname{rank}(H) = n$, and $P(H,b)$ be a simplex. Let us consider the problem of finding $\operatorname{width}(P(H,b))$ and a flat direction of $P(H,b)$. The main result in [@GRIBC16] states that $\operatorname{width}(P(H,b))$ can be computed by an algorithm with a complexity of $$O(n^{2 \log \Delta_{n-1}(H)} \cdot \Delta(H) \cdot \Delta(H,b) \cdot \operatorname{poly}(n,\, \log \Delta(H,b)) ),$$ where $\Delta(H,b)$ is the maximum absolute value of $n \times n$ minors of the extended matrix $(H\,b)$. In this section, we are going to develop an FPT-algorithm for the simplex width computation problem. Let us discuss our main tool. Let $C \in \mathbb{Z}^{n \times n}$, $p \in \mathbb{Q}^n$, $\det(C) \not= 0$, $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^n$, and $c \in \mathbb{Z}^n$. Suppose, for any $i \in 1:m$, one of the following equivalent conditions is true. 
$$\begin{aligned} 1)\,& \label{ConeCond1} ({A_{i\,*})}^\top \in \operatorname{cone.hull}({(C^{-1})}^\top) \text{ and } c \in \operatorname{cone.hull}(-{(C^{-1})}^\top),\\ 2)\,& \label{ConeCond2} \quad p = \arg\min\{(A_{i\,*}) x : x \in p + \operatorname{cone.hull}(C)\} = \\ &= \arg\max\{c^\top x : x \in p + \operatorname{cone.hull}(C)\}, \notag \\ 3)\,& \label{ConeCond3} \quad c^\top C \leq 0\text{ and }AC \geq 0_{m \times n}.\end{aligned}$$ Let us consider the following problem, which depends on the input vectors and matrices $p,\,C,\,A,\,b,\,c$ satisfying the conditions –. $$\begin{aligned} &c^\top x \to \max \label{ConeProg} \\ &\begin{cases} x \in p + \operatorname{cone.hull}(C) \\ x \in P(A,b) \cap \mathbb{Z}^n \\ \end{cases}\notag\end{aligned}$$ The following lemma was proved in [@GRIBC16], and it gives an algorithm for the problem . \[ConeProgLmOld\] There is an algorithm with a complexity of $$O(n^{2 \log \Delta(C)} \cdot \operatorname{poly}(n,\, \log \Delta(C),\, \operatorname{size}(A),\, \log ||b||_\infty,\, \log ||c||_\infty) )$$ to solve the problem . The main idea of the algorithm is the unimodular decomposition procedure from [@GRIBC16]. In fact, the technique based on the unimodular decomposition is unnecessarily involved, and it is better to use a simple procedure that enumerates the integer points of a suitable rational $n$-dimensional parallelepiped. The following lemma (and the corresponding proof) is required to estimate the complexity of the enumeration procedure. \[ParEnumLm\] Let $A \in \mathbb{Q}^{n \times n}$, $p \in \mathbb{Q}^n$, $|\det(A)| = \Delta > 0$, and $M = p + \operatorname{par}(A)$. Let, additionally, $A = Q H$, where $Q \in \mathbb{Z}^{n \times n}$ is a unimodular matrix and $H^\top$ is the HNF for $A^\top$ of the form . 
Then $$\label{ParNum} \prod_{i = 1}^{n} \lfloor H_{i\,i} \rfloor \leq |M \cap \mathbb{Z}^n| \leq \prod_{i = 1}^{n} \lceil H_{i\,i} \rceil.$$ After the unimodular map $x \to Q^{-1} x$ the set $M$ becomes $M = r + \{ x \in \mathbb{R}^n : x = H t,\, t \in [0,1)^n \}$, where $r = Q^{-1} p$. Let $y \in M \cap \mathbb{Z}^n$, then $$y_n = r_n + H_{n\,n} t_n, \qquad t_n = \frac{y_n - r_n}{H_{n\,n}},$$ $$t_n \in S_n =\{ \frac{\lceil r_n \rceil - r_n}{H_{n\,n}},\, \frac{\lceil r_n \rceil - r_n+1}{H_{n\,n}},\, \dots,\, \frac{ \lceil r_n \rceil - r_n + \lfloor H_{n\,n} \rfloor }{H_{n\,n}} \}.$$ If $\lceil r_n \rceil - r_n \geq \{H_{n\,n}\}$, then the last element must be deleted from the set $S_n$, and $\lfloor H_{n\,n} \rfloor \leq |S_n| \leq \lceil H_{n\,n} \rceil$. Let $s = n - k$, for $k \in 1:n$. Then $$y_s = H_{s\,s} t_s + \tau_s, \qquad t_s = \frac{y_s - \tau_s}{H_{s\,s}},$$ where $\tau_s = r_s + \sum_{i=1}^{k} H_{s\,s+i} t_{s+ i}$. Finally, we have: $$t_s \in S_s = \{ \frac{\lceil \tau_s \rceil - \tau_s}{H_{s\,s}},\, \frac{\lceil \tau_s \rceil - \tau_s + 1}{H_{s\,s}},\,\dots,\, \frac{\lceil \tau_s \rceil - \tau_s + \lfloor H_{s\,s} \rfloor}{H_{s\,s}} \}.$$ If $\lceil \tau_s \rceil - \tau_s \geq \{H_{s\,s}\}$, then the last element must be deleted from the set $S_s$, and $\lfloor H_{s\,s} \rfloor \leq |S_s| \leq \lceil H_{s\,s} \rceil$. \[EnumComplx\] Let $A$ be an integral $n \times n$ matrix, $p \in \mathbb{Q}^n$, $|\det(A)| = \Delta > 0$. Then there is an algorithm with a complexity of $$O(\log \Delta \cdot n \Delta \cdot \operatorname{mult}(n \operatorname{size}(p) + \operatorname{size}(A) + n \log \Delta) + T_H(A))$$ to enumerate all integer points of the set $M = p + \operatorname{par}(A)$, where $T_H(\cdot)$ is the HNF computational complexity. The proof of the previous Lemma \[ParEnumLm\] contains the enumeration algorithm, so we only need to estimate its complexity. Let $A = Q H$ and $r = Q^{-1} p$ as in the proof of Lemma \[ParEnumLm\]. 
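To see the bound (\[ParNum\]) in action: for an integral matrix $A$ the diagonal entries $H_{i\,i}$ are integers, both sides of (\[ParNum\]) coincide, and the half-open parallelepiped $\operatorname{par}(A)$ contains exactly $|\det(A)|$ integer points. The following brute-force check (exact rational arithmetic; the example matrix is our own illustrative choice) confirms this in dimension two.

```python
from fractions import Fraction
from itertools import product

# Columns of A span par(A) = {A t : t in [0,1)^n}; here A is already
# upper triangular, i.e. of HNF shape, with diagonal 2, 3 and det(A) = 6.
A = [[2, 1],
     [0, 3]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

def in_par(y):
    # solve A t = y exactly and test whether t lies in [0,1)^2
    t2 = Fraction(y[1], A[1][1])
    t1 = (Fraction(y[0]) - A[0][1] * t2) / A[0][0]
    return 0 <= t1 < 1 and 0 <= t2 < 1

# every integer point of par(A) lies in a small box, so brute force suffices
points = [y for y in product(range(-6, 7), repeat=2) if in_par(y)]
print(len(points))  # prints 6 = |det(A)|
```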
Since $Q = A H^{-1}$, by Lemma \[SubRankDet\], we have $\operatorname{size}(r) = O(n \log \Delta + n \operatorname{size}(p) + \operatorname{size}(A))$. Since $|y_i| \leq |H_{i\,*} t| \leq i |H_{i\,i}|$, we have $\operatorname{size}(y) = O(n \log n + \log \Delta)$ and $\operatorname{size}(y-r) = O(\operatorname{size}(r)+n \log n+\log \Delta) = O(\operatorname{size}(A) + n \operatorname{size}(p) + n \log \Delta)$. Let $H^\prime$ be the matrix obtained from $H$ by replacing the $j$-th column with the column $y-r$. By Lemma \[SubRankDet\], we have $\operatorname{size}(\det H^\prime) = O(n \operatorname{size}(p) + \operatorname{size}(A) + n \log \Delta)$. Since $t_j = \frac{\det(H^\prime)}{\det(H)}$, we have $\operatorname{size}(t_j) = O(n \log \Delta + n \operatorname{size}(p) + \operatorname{size}(A))$, for any $j \in 1:n$. Let $k$ be the number of diagonal elements of $H$ that are not equal to $1$, and $s = n - k$. Due to the proof of Lemma \[ParEnumLm\], we need $$O\Bigl(\sum_{i = 0}^{k} i \prod_{j=n-i}^n H_{j\,j}\Bigr) = O(\Delta k^2)$$ arithmetic operations to determine all possible values of the variables $y_i$ and $\tau_i$, for any $i \in (s+1):n$. Once the values of $y_i$ have been determined, for all $i \in (s+1):n$, we can determine the values of $\tau_i$ and $y_i = \lceil \tau_i \rceil$, for all $i \in 1:s$. The number of arithmetic operations for the last observation is $O(\Delta s k) = O(\Delta (n-k)k)$. In total, we have $$O(\Delta k^2 + \Delta (n-k)k) = O(\log \Delta \cdot \Delta n)$$ arithmetic operations with values of a size of $O(n \operatorname{size}(p) + \operatorname{size}(A) + n \log \Delta)$. So, the total complexity becomes $O(\log \Delta \cdot n \Delta \cdot \operatorname{mult}(n \operatorname{size}(p) + \operatorname{size}(A) + n \log \Delta))$. Now, we can give a simple algorithm to determine the feasibility of the problem . 
\[ConeProgLm\] There is an algorithm with a complexity of $$O(\Delta \cdot n^2 \cdot \operatorname{mult}(n \operatorname{size}(p) + \operatorname{size}(C) + \log ||A||_{\max} + n \log \Delta) + \Delta \operatorname{size}(b) + T_H(C))$$ to determine the feasibility of the problem , where $\Delta = |\det(C)|$ and $m = O(n)$. Let us show that the set $p + \operatorname{par}(C)$ contains an optimal point of the problem , provided that the set of feasible integer points is nonempty. Let us consider the following decomposition: $$p + \operatorname{cone.hull}(C) = \bigcup\limits_{z \in \mathbb{Z}^n_+} (p + C z + \operatorname{par}(C)).$$ For the purpose of deriving a contradiction, assume that the set $p + \operatorname{par}(C)$ contains no optimal points. Let $x^*$ be an optimal point of the problem and $x^* \in p + C z + \operatorname{par}(C)$, for $z \not= 0$. Then we have $y \in p + \operatorname{par}(C)$, for the point $y = x^* - C z$. By the condition , we have $c^\top C \leq 0$ and $A C \geq 0_{m \times n}$. Since $A C \geq 0_{m \times n}$ and $x^* \in P(A,b)$, we have $y \in P(A,b)$. Since $c^\top C \leq 0$, we have $c^\top y \geq c^\top x^*$. Hence, $y \in p + \operatorname{par}(C)$ is a feasible point with $c^\top y \geq c^\top x^*$, i.e. an optimal one, which gives the desired contradiction. Finally, we can use Lemma \[EnumComplx\] to find an optimal point in the set $p + \operatorname{par}(C)$. Each point $x \in p + \operatorname{par}(C)$ must be checked against the condition $x \in P(A,b)$. The total complexity of the checking procedure is $$O(\Delta \cdot n m \cdot \operatorname{mult}(\log ||A||_{\max} + \log \Delta) + \Delta \operatorname{size}(b) ).$$ It was shown in [@GRIBC16] (cf. 
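The shift $y = x^* - C z$ used in the proof can be illustrated numerically: subtracting $C \lfloor C^{-1}(x - p) \rfloor$ moves any point of $p + \operatorname{cone.hull}(C)$ into the fundamental parallelepiped $p + \operatorname{par}(C)$. The matrix, the point and the helper function below are our own illustrative choices.

```python
from fractions import Fraction as F
import math

C = [[3, 1],
     [1, 2]]                                # det(C) = 5
p = [F(1, 2), F(0)]
t = [F(9, 2), F(29, 4)]                     # x = p + C t lies in p + cone.hull(C)
x = [p[i] + C[i][0] * t[0] + C[i][1] * t[1] for i in range(2)]

def solve2(M, v):
    """exact solution s of M s = v for a 2x2 integer matrix M (Cramer's rule)."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / F(d),
            (M[0][0] * v[1] - M[1][0] * v[0]) / F(d)]

z = [math.floor(ti) for ti in solve2(C, [x[i] - p[i] for i in range(2)])]
y = [x[i] - C[i][0] * z[0] - C[i][1] * z[1] for i in range(2)]   # the shift y = x - C z
s = solve2(C, [y[i] - p[i] for i in range(2)])
assert z == [4, 7] and all(0 <= si < 1 for si in s)              # y lies in p + par(C)
```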
Theorem 8 and Lemmas 4,5) that the width computation problem for the simplex $P(H,b)$ is equivalent to $O(n^2)$ feasibility problems of the following type: $$\label{NormProb} (p^{(i)} + \operatorname{cone.hull}(C)) \cap (q^{(i)} - \operatorname{cone.hull}(C)) \cap \mathbb{Z}^{n-1},$$ where $p^{(i)},q^{(i)} \in \mathbb{Q}^{n-1}$, for $i \in 1:\gamma$, $C \in \mathbb{Z}^{(n-1)\times(n-1)}$ and $$\begin{aligned} &\gamma = O(n \Delta(H,b) \Delta(H)),\label{GammaTag}\\ &|\det(C)| \leq \Delta_{n-1}(H).\label{DetCTag}\end{aligned}$$ The sizes of $p^{(i)}$, $q^{(i)}$, and $C$ satisfy the following formulae: $$\begin{aligned} 1)\,& \operatorname{size}(p^{(i)}) = O(n \log n + n \log \Delta(H,b)),\label{SizePQTag}\\ 2)\,& \text{the same relation is true for }\operatorname{size}(q^{(i)}),\notag\\ 3)\,& ||C||_{\max} \leq n\Delta^4(H,b),\label{SizeCTag}\\ 4)\,& \operatorname{size}(C) = O(n^2 \log \Delta(H,b)).\notag\end{aligned}$$ Now, we can prove the main result of the section. Let $H$ be an $(n+1) \times n$ integral matrix of rank $n$ that has already been reduced to the HNF. Let $P(H,b)$ be a simplex, for $b \in \mathbb{Z}^{n+1}$, $\Delta = \Delta(H)$, and $\Delta(H,b)$ be the maximum absolute value of $n \times n$ minors of the augmented matrix $(H\,b)$. The problem to compute $\operatorname{width}(P(H,b))$ and a flat direction of $P(H,b)$ can be solved by an algorithm with a complexity of $$O( \log \Delta \cdot n^5 \cdot \Delta^3 \cdot \Delta(H,b) \cdot \operatorname{mult}(n^3 \log \Delta(H,b) + n^3 \log n)).$$ Let $C^* = \det(C) C^{-1}$ be the adjoint matrix of $C$. 
Since $$q^{(i)} - \operatorname{cone.hull}(C) = P(C^*,C^*q^{(i)}),$$ the problem is equivalent to the problem $$\label{NormProbDual} (p^{(i)} + \operatorname{cone.hull}(C)) \cap P(C^*,C^* q^{(i)}) \cap \mathbb{Z}^{n-1}.$$ By Lemma \[SubRankDet\] and the estimates , , we have $$||C^*||_{\max} \leq \Delta^2_{n-1}(H) \log \Delta_{n-1}(H) \leq 3 \Delta^4 \log^3 \Delta,$$ $\operatorname{size}(C^*) = O(n^2 \log \Delta)$ and $$\operatorname{size}(C^*q^{(i)}) = O(n \log\Delta + n \operatorname{size}(q^{(i)})) = O(n^2 \log n + n^2 \log \Delta(H,b)).$$ Hence, by Lemma \[ConeProgLm\], the feasibility problem can be solved by an algorithm with a complexity of $$O(T_H(C) + \log \Delta \cdot n^2 \cdot \Delta^2 \cdot \operatorname{mult}(n^3 \log \Delta(H,b) + n^3 \log n)).$$ Note that the computational complexity of computing $C^*$ is $O(T_H(C))$, so we did not include it in the formula. There are $\gamma = O(n \Delta(H,b) \Delta)$ (cf. ) problems of that type, one for each $i \in 1:\gamma$, and we need to compute the HNF only once for each $C$. Therefore, the complexity becomes: $$O(T_H(C) + \log \Delta \cdot n^3 \cdot \Delta^3 \cdot \Delta(H,b) \cdot \operatorname{mult}(n^3 \log \Delta(H,b) + n^3 \log n)).$$ Due to [@STORH96], $T_H(C) = O^{\sim}(n^{\Theta} \operatorname{mult}(n \log ||C||_{\max}))$, where $\Theta$ is the matrix multiplication exponent and the symbol $O^{\sim}$ means that we omit logarithmic factors. Hence, we can eliminate $T_H(C)$ from the complexity estimation. The final complexity result can be obtained by multiplying the last formula by $n^2$, since the problem is equivalent to $O(n^2)$ subproblems of the type . Due to [@GRIBC16] (cf. Theorem 9), if, additionally, the simplex $P(H,b)$ is empty, that is, $P(H,b) \cap \mathbb{Z}^n = \emptyset$, then $\gamma \leq \Delta$ (cf. ). This fact makes it possible to avoid an exponential dependence on $\operatorname{size}(b)$. 
If $P(H,b) \cap \mathbb{Z}^n = \emptyset$, then the problem to compute $\operatorname{width}(P(H,b))$ and a flat direction of $P(H,b)$ can be solved by an algorithm with a complexity of $$O( \log \Delta \cdot n^4 \cdot \Delta^4 \cdot \operatorname{mult}(n^3 \log \Delta(H,b) + n^3 \log n) ).$$ Conclusion {#conclusion .unnumbered} ========== In Section 3, we presented FPT-algorithms for SLVP instances parameterized by the lattice determinant on lattices induced by near-square matrices and on lattices induced by matrices without singular submatrices. Both algorithms can be applied to the $l_p$ norm, for any $p > 0$, and to the $l_\infty$ norm. In future work, it could be interesting to develop FPT-algorithms for the SLVP for more general classes of norms defined by gauge functions $||\cdot||_K$, where $||x||_K = \inf\{s \geq 0: x \in s K\}$, $K$ is a convex body and $0 \in \operatorname{int}(K)$. In Section 4, we presented an FPT-algorithm for ILPP instances with near-square constraint matrices, parameterized by the maximum absolute value of rank minors of the constraint matrices. Additionally, the last result gives us an FPT-algorithm for the case when the ILPP constraint matrix has no singular rank submatrices, since these matrices can have only one additional row if the dimension is sufficiently large, due to [@AE16]. It is an interesting open problem to remove the restriction that the constraint matrices be almost square and to develop an FPT-algorithm for the general case. It was mentioned in [@AW17] that the ILPP is NP-hard for values of the parameter $\Delta = \Omega(n^\epsilon)$, for $\epsilon > 0$. So, the existence of an FPT-algorithm for the general class of matrices is unlikely. In Section 5, we presented an FPT-algorithm for the simplex width computation problem, parameterized by the maximum absolute value of rank minors of the augmented constraint matrix. The dependence on the augmented matrix minors can be avoided for empty lattice simplices. 
In future work, it could be interesting to develop polynomial-time algorithms or FPT-algorithms for wider classes of polyhedra. Acknowledgments {#acknowledgments .unnumbered} =============== Results of Section 3 were obtained under financial support of Russian Science Foundation grant No 14-41-00039. Results of Section 4 were obtained under financial support of Russian Science Foundation grant No 17-11-01336. Results of Section 5 were obtained under financial support of Russian Foundation for Basic Research, grant No 16-31-60008-mol-a-dk, and LATNA laboratory, NRU HSE. [99]{} Ajtai, M. (1996) Generating hard instances of lattice problems. Proceedings of 28th Annual ACM Symposium on the Theory of Computing 99–108. Ajtai, M., Kumar, R., Sivakumar, D. (2001) A sieve algorithm for the shortest lattice vector problem. Proceedings of the 33rd Annual ACM Symposium on Theory of Computing 601–610. Ajtai, M., Kumar, R., Sivakumar, D. (2002) Sampling short lattice vectors and the closest lattice vector problem. Proceedings of 17th IEEE Annual Conference on Computational Complexity 53–57. Alekseev, V. V., Zakharova, D. (2011) Independent sets in the graphs with bounded minors of the extended incidence matrix. Journal of Applied and Industrial Mathematics 5:14–18. Artmann, S., Eisenbrand, F., Glanzer, C., Timm, O., Vempala, S., Weismantel, R. (2016) A note on non-degenerate integer programs with small subdeterminants. Operations Research Letters 44(5):635–639. Artmann, S., Weismantel, R., Zenklusen, R. (2017) A strongly polynomial algorithm for bimodular integer linear programming. Proceedings of 49th Annual ACM Symposium on Theory of Computing 1206–1219. Blömer, J., Naewe, S. (2009) Sampling methods for shortest vectors, closest vectors and successive minima. Theoretical Computer Science 410(18):1648–1665. Bock, A., Faenza, Y., Moldenhauer, C., Vargas, R., Jacinto, A. (2014) Solving the stable set problem in terms of the odd cycle packing number. 
Proceedings of 34th Annual Conference on Foundations of Software Technology and Theoretical Computer Science 187–198. Bonifas, N., Di Summa, M., Eisenbrand, F., Hähnle, N., Niemeier, M. (2014) On subdeterminants and the diameter of polyhedra. Discrete & Computational Geometry 52(1):102–115. Cassels, J. W. S. (1971) An introduction to the geometry of numbers, 2nd edition. Springer. Cheon, J. H., Lee, C. (2015) Approximate algorithms on lattices with small determinant. Cryptology ePrint Archive, Report 2015/461, http://eprint.iacr.org/2015/461. Cook, W., Gerards, A. M. H., Schrijver, A., Tardos, E. (1986) Sensitivity theorems in integer linear programming. Mathematical Programming 34:251–264. Cygan, M., Fomin, F. V., Kowalik, L., Lokshtanov, D., Marx, D., Pilipczuk, M., Pilipczuk, M., Saurabh, S. (2015) Parameterized algorithms. Springer. Dadush, D., Peikert, C., Vempala, S. (2011) Enumerative algorithms for the shortest and closest lattice vector problems in any norm via M-ellipsoid coverings. 52nd IEEE Annual Symposium on Foundations of Computer Science 580–589. Eisenbrand, F., Hähnle, N., Niemeier, M. (2011) Covering cubes and the closest vector problem. Proceedings of 27th Annual Symposium on Computational Geometry 417–423. Eisenbrand, F., Vempala, S. (2016) Geometric random edge. https://arxiv.org/abs/1404.1568v5. Downey, R. G., Fellows, M. R. (1999) Parameterized complexity. Springer. Fincke, U., Pohst, M. (1983) A procedure for determining algebraic integers of given norm. Lecture Notes in Computer Science 162:194–202. Fincke, U., Pohst, M. (1985) Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Mathematics of Computation 44(170):463–471. Gomory, R. E. (1965) On the relation between integer and non-integer solutions to linear programs. Proceedings of the National Academy of Sciences of the United States of America 53(2):260–265. Gribanov, D. V. 
(2013) The flatness theorem for some class of polytopes and searching an integer point. Springer Proceedings in Mathematics & Statistics 104:37–45. Gribanov, D. V., Malyshev, D. S. (2017) The computational complexity of three graph problems for instances with bounded minors of constraint matrices. Discrete Applied Mathematics 227:13–20. Gribanov, D. V., Chirkov, A. J. (2016) The width and integer optimization on simplices with bounded minors of the constraint matrices. Optimization Letters 10(6):1179–1189. Gribanov, D. V., Veselov, S. I. (2016) On integer programming with bounded determinants. Optimization Letters 10(6):1169–1177. Gruber, M., Lekkerkerker, C. G. (1987) Geometry of numbers. North-Holland. Hanrot, G., Pujol, X., Stehle, D. (2011) Algorithms for the shortest and closest lattice vector problems. Lecture Notes in Computer Science 6639:159–190. Hu, T. C. (1970) Integer programming and network flows. Addison-Wesley Publishing Company. Kannan, R. (1983) Improved algorithms for integer programming and related lattice problems. Proceedings of 15th Annual ACM Symposium on Theory of Computing 99–108. Kannan, R. (1987) Minkowski’s convex body theorem and integer programming. Mathematics of Operations Research 12(3):415–440. Karmarkar, N. (1984) A new polynomial time algorithm for linear programming. Combinatorica 4(4):373–391. Khachiyan, L. G. (1980) Polynomial algorithms in linear programming. Computational Mathematics and Mathematical Physics 20(1):53–72. Lenstra, A. K., Lenstra, H. W. Jr., Lovasz, L. (1982) Factoring polynomials with rational coefficients. Mathematische Annalen 261:515–534. Lenstra, H. W. (1983) Integer programming with a fixed number of variables. Mathematics of Operations Research 8(4):538–548. Micciancio, D., Voulgaris, P. (2010) A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. Proceedings of 42nd Annual ACM Symposium on Theory of Computing 351–358. Nesterov, Y. 
E., Nemirovsky, A. S. (1994) Interior point polynomial methods in convex programming. Society for Industrial and Applied Math, USA. Papadimitriou, C. H. (1981) On the complexity of integer programming. Journal of the Association for Computing Machinery 28:765–768. Pardalos, P. M., Han, C. G., Ye, Y. (1991) Interior point algorithms for solving nonlinear optimization problems. COAL Newsl. 19:45–54. Sebö, A. (1999) An introduction to empty lattice simplices. Lecture Notes in Computer Science 1610:400–414. Siegel, C. L. (1989) Lectures on the geometry of numbers. Springer. Shevchenko, V. N. (1996) Qualitative topics in integer linear programming (translations of mathematical monographs). AMS Book. Schrijver, A. (1998) Theory of linear and integer programming. John Wiley & Sons. Storjohann, A. (1996) Near optimal algorithms for computing Smith normal forms of integer matrices. Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation 267–274. Storjohann, A., Labahn, G. (1996) Asymptotically fast computation of Hermite normal forms of integer matrices. Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation 259–266. Tardos, E. (1986) A strongly polynomial algorithm to solve combinatorial linear programs. Operations Research 34(2):250–256. Veselov, S. I., Chirkov, A. J. (2009) Integer program with bimodular matrix. Discrete Optimization 6(2):220–222. Zhendong, W. (2005) Computing the Smith forms of integer matrices and solving related problems. University of Delaware Newark, USA.
--- abstract: 'It is well known that by dualizing the Bochner–Lichnerowicz–Weitzenböck formula, one obtains Poincaré-type inequalities on Riemannian manifolds equipped with a density, which satisfy the Bakry–Émery Curvature-Dimension condition (combining the Ricci curvature with the “curvature" of the density). When the manifold has a boundary, the Reilly formula and its generalizations may be used instead. By systematically dualizing this formula for various combinations of boundary conditions of the domain (convex, mean-convex) and the function (Neumann, Dirichlet), we obtain new Poincaré-type inequalities on the manifold and on its boundary. For instance, we may handle Neumann conditions on a mean-convex domain, and obtain generalizations to the weighted-manifold setting of a purely Euclidean inequality of Colesanti, yielding a Brunn–Minkowski concavity result for geodesic extensions of convex domains in the manifold setting. All other previously known Poincaré-type inequalities of Lichnerowicz, Brascamp–Lieb, Bobkov–Ledoux and Veysseire are recovered, extended to the Riemannian setting and generalized into a single unified formulation, and their appropriate versions in the presence of a boundary are obtained. Finally, a new geometric evolution equation is proposed which extends to the Riemannian setting the Minkowski addition operation of convex domains, a notion previously confined to the linear setting, and for which a novel Brunn–Minkowski inequality in the weighted-Riemannian setting is obtained. Our framework allows us to encompass the entire class of Borell’s convex measures, including heavy-tailed measures, and extends the latter class to weighted-manifolds having negative “dimension".' author: - 'Alexander V. 
Kolesnikov^1^ and Emanuel Milman^2^' bibliography: - '../ConvexBib.bib' title: 'Poincaré and Brunn–Minkowski inequalities on weighted Riemannian manifolds with boundary' --- Introduction ============ Throughout the paper we consider a compact *weighted-manifold* $(M,g,\mu)$, namely a compact, smooth, complete, connected and oriented $n$-dimensional Riemannian manifold $(M,g)$ with boundary $\partial M$, equipped with a measure: $$\mu = \exp(-V) d {{\textrm{Vol}}}_M ~,$$ where ${\textrm{Vol}}_M$ is the Riemannian volume form on $M$ and $V \in C^2(M)$ is twice continuously differentiable. The boundary $\partial M$ is assumed to be a $C^2$ manifold with outer unit-normal $\nu = \nu_{\partial M}$. The corresponding symmetric diffusion operator with invariant measure $\mu$, which is called the weighted-Laplacian, is given by: $$L = L_{(M,g,\mu)} := \exp(V) {\text{div}}( \exp(-V) \nabla) = \Delta - {\left \langle \nabla V,\nabla \right \rangle} ~,$$ where ${\left \langle \cdot,\cdot \right \rangle}$ denotes the Riemannian metric $g$, $\nabla = \nabla_g$ denotes the Levi-Civita connection, ${\text{div}}= {\text{div}}_g = tr(\nabla \cdot)$ denotes the Riemannian divergence operator, and $\Delta = {\text{div}}\nabla$ is the Laplace-Beltrami operator. Indeed, note that with these generalized notions, the usual integration by parts formula is satisfied for $f,g \in C^2(M)$: $$\int_M L(f) g d\mu = \int_{\partial M} f_\nu g d\mu - \int_M {\left \langle \nabla f,\nabla g \right \rangle} d\mu = \int_{\partial M} (f_\nu g - g_\nu f) d\mu + \int_M L(g) f d\mu ~,$$ where $u_\nu = {\left \langle \nabla u,\nu \right \rangle}$ denotes differentiation in the direction of the outer normal, and integration on $\partial M$ with respect to $\mu$ means with respect to $\exp(-V) d{\textrm{Vol}}_{\partial M}$. The second fundamental form ${\text{II}}= {\text{II}}_{\partial M}$ of $\partial M \subset M$ at $x \in \partial M$ is as usual (up to sign) defined by ${\text{II}}_x(X,Y) = {\left \langle \nabla_X \nu, Y \right \rangle}$, $X,Y \in T \partial M$. 
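For intuition, the integration by parts formula can be verified numerically in the simplest one-dimensional case $M = [0,1]$ with $V(x) = x$, so that $L f = f'' - f'$ and the boundary term is $f'(1) g(1) e^{-1} - f'(0) g(0)$. The discretization below is only an illustrative sketch of ours, not part of the text.

```python
import math

# 1D weighted manifold M = [0,1], V(x) = x, so L f = f'' - V' f' = f'' - f'
# and d\mu = exp(-x) dx; take f(x) = x^2, g(x) = x.
f_p = lambda x: 2 * x              # f'
L_f = lambda x: 2 - 2 * x          # f'' - f'
g   = lambda x: x
g_p = lambda x: 1.0                # g'
w   = lambda x: math.exp(-x)       # the density exp(-V)

def integrate(h, n=100000):
    """composite trapezoid rule for the integral of h over [0,1]"""
    ys = [h(i / n) for i in range(n + 1)]
    return (sum(ys) - 0.5 * (ys[0] + ys[-1])) / n

lhs = integrate(lambda x: L_f(x) * g(x) * w(x))
boundary = f_p(1) * g(1) * w(1) - f_p(0) * g(0) * w(0)   # outer normal: +1 at x=1, -1 at x=0
rhs = boundary - integrate(lambda x: f_p(x) * g_p(x) * w(x))
print(abs(lhs - rhs))  # close to 0: both sides equal 6/e - 2 ≈ 0.2073
```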
The quantities $$H_g(x) := tr({\text{II}}_x) ~,~ H_\mu (x) := H_g(x) - {\left \langle \nabla V(x) ,\nu(x) \right \rangle} ~,$$ are called the Riemannian mean-curvature and *generalized* mean-curvature of $\partial M$ at $x \in \partial M$, respectively. It is well-known that $H_g$ governs the first variation of ${\textrm{Vol}}_{\partial M}$ under the normal-map $t \mapsto \exp(t \nu)$, and similarly $H_\mu$ governs the first variation of $\exp(-V) d{\textrm{Vol}}_{\partial M}$ in the weighted-manifold setting, see e.g. [@EMilmanGeometricApproachPartI] or Subsection \[subsec:Full-BM\]. In the purely Riemannian setting, it is classical that positive lower bounds on the Ricci curvature tensor ${\mbox{\rm{Ric}}}_g$ and upper bounds on the topological dimension $n$ play a fundamental role in governing various Sobolev-type inequalities on $(M,g)$, see e.g. [@ChavelEigenvalues; @GallotBourbaki; @GallotIsoperimetricInqs; @LiYauEigenvalues; @YauIsoperimetricConstantsAndSpectralGap] and the references therein. In the weighted-manifold setting, the pertinent information on *generalized* curvature and *generalized* dimension may be incorporated into a single tensor, which was put forth by Bakry and Émery [@BakryEmery; @BakryStFlour] following Lichnerowicz [@Lichnerowicz1970GenRicciTensorCRAS; @Lichnerowicz1970GenRicciTensor]. The $N$-dimensional Bakry–Émery Curvature tensor ($N \in (-\infty,\infty]$) is defined as (setting $\Psi = \exp(-V)$): $$\mbox{\rm{Ric}}_{\mu,N} := \rm{Ric}_g + \nabla^2 V - \frac{1}{N-n} d V\otimes d V = \rm{Ric}_g - (N-n) \frac{\nabla^2 \Psi^{1/(N-n)}}{\Psi^{1/(N-n)}} ~,$$ and the Bakry–Émery Curvature-Dimension condition $CD(\rho,N)$, $\rho \in {\mathbb{R}}$, is the requirement that as 2-tensors on $M$: $$\mbox{\rm{Ric}}_{\mu,N} \geq \rho g ~.$$ Here $\nabla^2 V$ denotes the Riemannian Hessian of $V$. Note that the case $N=n$ is only defined when $V$ is constant, i.e. 
in the classical non-weighted Riemannian setting where $\mu$ is proportional to ${\textrm{Vol}}_M$, in which case ${\mbox{\rm{Ric}}}_{\mu,n}$ boils down to the usual Ricci curvature tensor. When $N= \infty$ we set: $${\mbox{\rm{Ric}}}_\mu := {\mbox{\rm{Ric}}}_{\mu,\infty} = {\mbox{\rm{Ric}}}_g + \nabla^2 V ~.$$ It is customary to only treat the case when $N \in [n,\infty]$, with the interpretation that $N$ is an upper bound on the “generalized dimension" of the weighted-manifold $(M,g,\mu)$; however, our method also applies with no extra effort to the case when $N \in (-\infty,0]$, and so our results are treated in this greater generality, which in the Euclidean setting encompasses the entire class of Borell’s convex (or “$1/N$-concave") measures [@BorellConvexMeasures] (cf. [@BrascampLiebPLandLambda1; @BobkovLedouxWeightedPoincareForHeavyTails]). It will be apparent that the more natural parameter is actually $1/N$, with $N=0$ interpreted as $1/N = -\infty$, and so our results hold in the range $1/N \in [-\infty,1/n]$. Clearly, the $CD(\rho,N)$ condition is monotone in $1/N$ in that range, so for all $N_+ \in [n,\infty], N_- \in (-\infty,0]$: $$CD(\rho,n) \Rightarrow CD(\rho,N_+) \Rightarrow CD(\rho,\infty) \Rightarrow CD(\rho,N_-) \Rightarrow CD(\rho,0) ~;$$ note that $CD(\rho,0)$ is the weakest condition in this hierarchy. It seems that outside the Euclidean setting, this extension of the Curvature-Dimension condition to the range $N \leq 0$ has not attracted much attention in the weighted-Riemannian and more general metric-measure space setting (cf. [@SturmCD12; @LottVillaniGeneralizedRicci]); an exception is the work of Ohta and Takatsu [@OhtaTakatsuEntropies1; @OhtaTakatsuEntropies2]. We expect this gap in the literature to be quickly filled (in fact, concurrently with the posting of our work on the arXiv, Ohta [@OhtaNegativeN] has posted a first attempt at a systematic treatise of the range $N \leq 0$). 
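As a concrete example to keep in mind (standard, and not taken from the text): for the Gaussian weight $V(x) = |x|^2/2$ on (a compact piece of) $M = \mathbb{R}^n$, one has $\mbox{\rm{Ric}}_g = 0$, $\nabla^2 V = \mathrm{Id}$ and $d V \otimes d V = x \otimes x$, so that:

```latex
% Gaussian weight V(x) = |x|^2/2 on (a compact piece of) R^n:
\mbox{\rm{Ric}}_{\mu} = \mathrm{Id} \geq g \quad \Longrightarrow \quad CD(1,\infty) ~;
% while for finite N > n, on the ball \{|x| \leq R\}:
\mbox{\rm{Ric}}_{\mu,N} = \mathrm{Id} - \frac{x \otimes x}{N - n}
  \geq \Bigl(1 - \frac{R^2}{N - n}\Bigr) g
  \quad \Longrightarrow \quad CD\Bigl(1 - \tfrac{R^2}{N - n},\, N\Bigr) ~.
```

In particular, the finite-dimensional condition $CD(\rho,N)$ with $\rho > 0$ only holds on bounded pieces of Gaussian space, while $CD(1,\infty)$ holds globally.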
A convenient equivalent form of the $CD(\rho,N)$ condition may be formulated as follows. Let $\Gamma_2$ denote the iterated carré-du-champ operator of Bakry–Émery: $$\Gamma_2(u) := {\left\Vert\nabla^2 u\right\Vert}^2 + {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u,\nabla u \right \rangle} ~,$$ where ${\left\Vert\nabla^2 u\right\Vert}$ denotes the Hilbert-Schmidt norm of $\nabla^2 u$. Then the $CD(\rho,N)$ condition is equivalent when $1/N \in (-\infty,1/n]$ (see [@BakryStFlour Section 6] for the case $N \in [n,\infty]$ or Lemma \[lem:CS\] in the general case) to the requirement that: $$\label{cdrhon} \Gamma_2(u) \geq \rho {\left\vert\nabla u\right\vert}^2 + \frac{1}{N} (L u)^2 \;\;\; \forall u \in C^2(M) ~.$$ Denote by ${\mathcal{S}}_0(M)$ the class of functions $u$ on $M$ which are $C^2$ smooth in the interior of $M$ and $C^1$ smooth on the entire compact $M$. Denote by ${\mathcal{S}}_N(M)$ the subclass of functions which in addition satisfy that $u_\nu$ is $C^1$ smooth on $\partial M$. The main tool we employ in this work is the following: \[thm:Reilly\] For any function $u \in {\mathcal{S}}_N(M)$: $$\begin{gathered} \label{Reilly} \int_M (L u)^2 d\mu = \int_M {\left\Vert\nabla^2 u\right\Vert}^2 d\mu + \int_M {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u, \nabla u \right \rangle} d\mu + \\ \int_{\partial M} H_\mu (u_\nu)^2 d\mu + \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} d\mu - 2 \int_{\partial M} {\left \langle \nabla_{\partial M} u_\nu, \nabla_{\partial M} u \right \rangle} d\mu ~.\end{gathered}$$ Here $\nabla_{\partial M}$ denotes the Levi-Civita connection on $\partial M$ with its induced Riemannian metric. This natural generalization of the (integrated) Bochner–Lichnerowicz–Weitzenböck formula for manifolds with boundary was first obtained by R.C. Reilly [@ReillyOriginalFormula] in the classical Riemannian setting ($\mu={\textrm{Vol}}_M$). 
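As a quick sanity check (ours, not from the paper), formula (\[Reilly\]) may be verified by direct quadrature in dimension one: for $M = [a,b]$ the boundary consists of two points, ${\text{II}}_{\partial M}$ and $\nabla_{\partial M}$ vanish, ${\mbox{\rm{Ric}}}_\mu = V''$, and $H_\mu = -{\left \langle \nabla V,\nu \right \rangle}$.

```python
import math

# 1-D instance of the generalized Reilly formula: with dmu = exp(-V) dx on
# [0,1], formula (Reilly) reduces to
#   int (Lu)^2 dmu = int (u'')^2 dmu + int V'' (u')^2 dmu
#                    + sum_{x in {0,1}} H_mu(x) u_nu(x)^2 exp(-V(x)),
# where Lu = u'' - V'u', H_mu = -V' * nu, and nu is the outer normal.

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
    return s * h / 3

# test data: V(x) = x, u(x) = x^2 on [0, 1]
V, Vp, Vpp = (lambda x: x), (lambda x: 1.0), (lambda x: 0.0)
u, up, upp = (lambda x: x**2), (lambda x: 2*x), (lambda x: 2.0)
Lu = lambda x: upp(x) - Vp(x) * up(x)        # weighted Laplacian
w  = lambda x: math.exp(-V(x))               # density of mu

lhs  = simpson(lambda x: Lu(x)**2 * w(x), 0, 1)
bulk = simpson(lambda x: (upp(x)**2 + Vpp(x) * up(x)**2) * w(x), 0, 1)
bdry = 0.0
for x, nu in ((0.0, -1.0), (1.0, +1.0)):     # outer normals at the endpoints
    H_mu = -Vp(x) * nu                       # tr(II) = 0 in dimension one
    u_nu = up(x) * nu
    bdry += H_mu * u_nu**2 * w(x)

assert abs(lhs - (bulk + bdry)) < 1e-9
print("1-D Reilly identity verified:", lhs, "=", bulk + bdry)
```

For this data both sides equal $4 - 8/e$, as one can also check in closed form.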
The version above (with a minor modification) is due to L. Ma and S.-H. Du in [@MaDuGeneralizedReilly]. For completeness, we sketch in Section \[sec:prelim\] the proof of the version (\[Reilly\]) which we require for deriving our results. Poincaré-type inequalities on $M$ --------------------------------- It is known that by dualizing the Bochner–Lichnerowicz–Weitzenböck formula, various Poincaré-type inequalities such as the Lichnerowicz [@LichnerowiczBook], Brascamp–Lieb [@BrascampLiebPLandLambda1; @LedouxSpinSystemsRevisited] and Veysseire [@VeysseireSpectralGapEstimateCRAS] inequalities may be obtained under appropriate bounds on curvature and dimension. Recently, heavy-tailed versions of the Brascamp–Lieb inequalities have been obtained in the Euclidean setting by Bobkov–Ledoux [@BobkovLedouxWeightedPoincareForHeavyTails] and sharpened by Nguyen [@NguyenDimensionalBrascampLieb]. By employing the generalized Reilly formula, we begin this work by unifying, extending and generalizing many of these previously known results to various new combinations of boundary conditions on the domain (locally convex, mean-convex) and the function (Neumann, Dirichlet) in the weighted-Riemannian setting. We mention in passing another celebrated application of the latter duality argument in the complex setting, namely Hörmander’s $L^2$ estimate [@Hormander1965L2EstimatesAndDBarProblem], but we refrain from attempting to generalize it here; further recent applications may be found in [@Helffer-DecayOfCorrelationsViaWittenLaplacian; @LedouxSpinSystemsRevisited; @KlartagUnconditionalVariance; @BartheCorderoVariance; @KlartagMomentMap]. Given a finite measure $\nu$ on a measurable space $\Omega$ and a $\nu$-integrable function $f$ on $\Omega$, we denote: $$\dashint_\Omega f d\nu := \frac{1}{\nu(\Omega)} \int_\Omega f d\nu ~,~ Var_\nu(f) := \int_{\Omega} {\left(f - \dashint_\Omega f d\nu\right)}^2 d\nu ~.$$ The following result is proved in Section \[sec:BLN\].
\[thm:intro-BLN\] Assume that ${\mbox{\rm{Ric}}}_{\mu,N} > 0$ on $M$ with $1/N \in (-\infty,1/n]$. The generalized Reilly formula implies all of the inequalities below for any $f \in C^{1}(M)$: 1. (Neumann Dimensional Brascamp–Lieb inequality on locally convex domain) Assume that ${\text{II}}_{\partial M}\geq 0$ ($M$ is locally convex). Then: $$\frac{N}{N-1} Var_\mu(f) \leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu ~.$$ 2. (Dirichlet Dimensional Brascamp–Lieb inequality on generalized mean-convex domain) Assume that $H_\mu \geq 0$ ($M$ is generalized mean-convex), $f \equiv 0$ on $\partial M \neq \emptyset$. Then: $$\frac{N}{N-1} \int_M f^2 d\mu \leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu ~.$$ 3. (Neumann Dimensional Brascamp–Lieb inequality on strictly generalized mean-convex domain) Assume that $H_\mu > 0$ ($M$ is strictly generalized mean-convex). Then for any $C \in {\mathbb{R}}$: $$\frac{N}{N-1} \text{Var}_\mu(f) \leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu + \int_{\partial M} \frac{1}{H_\mu} \Bigl(f - C\Bigr)^2 d\mu ~.$$ In particular, if $\int_{\partial M} \frac{1}{H_\mu} d\mu < \infty$, we have: $$\frac{N}{N-1} \text{Var}_\mu(f) \leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu + \text{Var}_{\mu / H_\mu}(f|_{\partial M}) ~.$$ In the Euclidean setting with $1/N=0$, recall that ${\mbox{\rm{Ric}}}_{\mu,\infty} = \nabla^2 V$, reducing Case (1) to an inequality obtained by H. J. Brascamp and E. H. Lieb [@BrascampLiebPLandLambda1] as an infinitesimal version of the Prékopa–Leindler inequality, a functional infinite-dimensional version of the Brunn–Minkowski inequality (see Section \[sec:BM\]). When ${\mbox{\rm{Ric}}}_{\mu,N} \geq \rho g$ with $\rho > 0$ (i.e.
$(M,g,\mu)$ satisfies the $CD(\rho,N)$ condition), by replacing the $\int_M {\langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \; \nabla f,\nabla f \rangle} d\mu$ term with the larger $\frac{1}{\rho} \int_M {\left\vert\nabla f\right\vert}^2 d\mu$ one in all occurrences above, we obtain a generalization of the classical Lichnerowicz estimate [@LichnerowiczBook] on the spectral-gap of the weighted-Laplacian $-L$. When $N \leq -1$, Case (1) was obtained in the *Euclidean setting* (and under the stronger assumption that ${\mbox{\rm{Ric}}}_{\mu,\infty} = \nabla^2 V > 0$) with a constant better than $\frac{N}{N-1}$ on the left-hand-side above by V. H. Nguyen [@NguyenDimensionalBrascampLieb], improving a previous estimate of S. Bobkov and M. Ledoux [@BobkovLedouxWeightedPoincareForHeavyTails] valid when $N \leq 0$. However, on a general *weighted Riemannian manifold*, our constant $\frac{N}{N-1}$ is best possible in Case (1) for the entire range $N \in (-\infty,-1] \cup [n,\infty]$, see Subsection \[subsec:sharp-constant\]. We refer to Subsection \[subsec:prev-known\] for a long exposition on some of the previously known generalizations in these directions; with few exceptions, Cases (2) and (3) and also Case (1) when $1/N \neq 0$ seem new. When ${\mbox{\rm{Ric}}}_\mu \geq \rho g$ for a function $\rho : M \rightarrow {\mathbb{R}}_+$ which is not necessarily bounded away from zero, we also extend in Section \[sec:Vey\] a result of L. Veysseire [@VeysseireSpectralGapEstimateCRAS] who obtained a spectral-gap estimate of $1 / \dashint_M (1/\rho) d\mu$, to the case of Neumann boundary conditions when $M$ is locally convex. 
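In the simplest Euclidean instance of Case (1) — the Gaussian weight $V(x) = x^2/2$ on ${\mathbb{R}}$ with $1/N = 0$, where ${\mbox{\rm{Ric}}}_{\mu,\infty} = V'' = 1$ and $N/(N-1) \to 1$ — the inequality reduces to the classical Gaussian Poincaré inequality $Var_\gamma(f) \leq \int (f')^2 d\gamma$, with equality for the linear function. The following snippet (our own sanity check, not from the text) verifies this by quadrature:

```python
import math

# Gaussian Poincare / Brascamp-Lieb check in 1-D: for the standard Gaussian
# measure gamma, Var_gamma(f) <= int (f')^2 dgamma, equality for f(x) = x.

def gauss_int(f, a=-10.0, b=10.0, n=4000):
    """Simpson quadrature of f against the standard Gaussian density."""
    h = (b - a) / n
    g = lambda x: f(x) * math.exp(-x*x/2) / math.sqrt(2*math.pi)
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k*h) for k in range(1, n))
    return s * h / 3

def check(f, fp):
    mean = gauss_int(f)
    var = gauss_int(lambda x: (f(x) - mean)**2)
    energy = gauss_int(lambda x: fp(x)**2)
    return var, energy

var, energy = check(lambda x: x, lambda x: 1.0)
assert abs(var - energy) < 1e-7            # equality for the linear function
var, energy = check(lambda x: x**2, lambda x: 2*x)
assert var <= energy                       # 2 <= 4
assert abs(var - 2.0) < 1e-7 and abs(energy - 4.0) < 1e-7
print("Gaussian Poincare inequality verified")
```
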
\[rem:non-compact\] Although all of our results are formulated for compact weighted-manifolds with boundary, the results easily extend to the non-compact case, if the manifold $M$ can be exhausted by compact submanifolds ${\left\{M_k\right\}}$ so that each $(M_k,g|_{M_k},\mu|_{M_k})$ has an appropriate boundary (locally-convex or generalized mean-convex, in accordance with the desired result). In the Dirichlet case, the asserted inequalities then extend to all functions in $C^1_0(M)$ having compact support and vanishing on the boundary $\partial M$. In the Neumann cases, the asserted inequalities extend to all functions $f \in C^1_{loc}(M) \cap L^2(M,\mu)$ when $\mu$ is a finite measure. Poincaré-type inequalities on $\partial M$ ------------------------------------------ Next, we obtain various Poincaré-type inequalities on the boundary of $(M,g,\mu)$. \[thm:Colesanti-intro\] Assume that $(M,g,\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$) and that ${\text{II}}_{\partial M} > 0$ ($M$ is locally strictly-convex). Then the following inequality holds for any $f \in C^1(\partial M)$: $$\label{eq:Colesanti-intro} \int_{\partial M} H_\mu f^2 d\mu - \frac{N-1}{N}\frac{{\left(\int _{\partial M} f d\mu\right)}^2}{\mu(M)} \leq \int_{\partial M} {\left \langle {\text{II}}_{\partial M}^{-1} \;\nabla_{\partial M} f,\nabla_{\partial M} f \right \rangle} d\mu ~.$$ Theorem \[thm:Colesanti-intro\] was obtained by A. Colesanti in [@ColesantiPoincareInequality] with $N=n$ for a compact subset $M$ of Euclidean space ${\mathbb{R}}^n$ endowed with the Lebesgue measure ($V=0$) and having a $C^2$ strictly convex boundary. Colesanti derived this inequality as an infinitesimal version of the celebrated Brunn-Minkowski inequality, and so his method is naturally confined to the Euclidean setting. 
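Before proceeding, here is a quick numerical verification (ours, not from the text) of (\[eq:Colesanti-intro\]) in Colesanti's original setting: $M$ the unit disk in ${\mathbb{R}}^2$ with $V = 0$ and $N = n = 2$, so that $H_\mu = {\text{II}}_{\partial M} = 1$ on the unit circle and $\mu(M) = \pi$.

```python
import math

# Colesanti's inequality on the unit disk (V = 0, N = n = 2):
#   int f^2 ds - (1/2) (int f ds)^2 / pi  <=  int (df/ds)^2 ds
# over the unit circle, with arclength parameter t in [0, 2*pi).

def circle_int(f, n=20000):
    h = 2 * math.pi / n
    return sum(f(k * h) for k in range(n)) * h   # periodic trapezoid rule

def colesanti_sides(f, fp):
    lhs = circle_int(lambda t: f(t)**2) - 0.5 * circle_int(f)**2 / math.pi
    rhs = circle_int(lambda t: fp(t)**2)
    return lhs, rhs

# f = 1: both sides vanish (2*pi - 2*pi = 0), i.e. equality
l, r = colesanti_sides(lambda t: 1.0, lambda t: 0.0)
assert abs(l) < 1e-9 and abs(r) < 1e-9
# f = cos(t): restriction of a linear function, again equality (pi = pi)
l, r = colesanti_sides(math.cos, lambda t: -math.sin(t))
assert abs(l - r) < 1e-9
# f = cos(2t): strict inequality, pi <= 4*pi
l, r = colesanti_sides(lambda t: math.cos(2*t), lambda t: -2*math.sin(2*t))
assert l < r and abs(l - math.pi) < 1e-6 and abs(r - 4*math.pi) < 1e-6
print("Colesanti inequality verified on the unit disk")
```
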
In contrast, we derive in Section \[sec:Col\] Theorem \[thm:Colesanti-intro\] directly from the generalized Reilly formula, and thus obtain in the Euclidean setting another proof of the Brunn–Minkowski inequality for convex domains (see more on this in the next subsection). We also obtain a dual-version of Theorem \[thm:Colesanti-intro\], which in fact applies to mean-convex domains (a slightly more general version is given in Theorem \[thm:dual-Colesanti\]): \[thm:dual-Colesanti-intro\] Assume that $(M,g,\mu)$ satisfies the $CD(\rho,0)$ condition, $\rho \in {\mathbb{R}}$, and that $H_\mu > 0$ on $\partial M$ ($M$ is strictly generalized mean-convex). Then for any $f \in C^{2,\alpha}(\partial M)$ and $C \in {\mathbb{R}}$: $$\int_{\partial M} {\left \langle {\text{II}}_{\partial M} \;\nabla_{\partial M} f,\nabla_{\partial M} f \right \rangle} d\mu \leq \int_{\partial M} \frac{1}{H_\mu} \Bigl(L_{\partial M} f + \frac{\rho (f-C) }{2} \Bigr)^2 d\mu ~.$$ Here $L_{\partial M} = L_{(\partial M, g|_{\partial M}, \exp(-V) d{\textrm{Vol}}_{\partial M})}$ is the weighted-Laplacian on the boundary. By specializing to the constant function $f \equiv 1$, various mean-curvature inequalities for convex and mean-convex boundaries of $CD(0,N)$ weighted-manifolds are obtained in Section \[sec:Col-App\], immediately recovering (when $N \in [n,\infty]$) and extending (when $N \leq 0$) recent results of Huang–Ruan [@HuangRuanMeanCurvatureEstimates]. Under various combinations of non-negative lower bounds on $H_\mu$, ${\text{II}}_{\partial M}$ and $\rho$, spectral-gap estimates on convex boundaries of $CD(\rho,0)$ weighted-manifolds are deduced in Sections \[sec:Col-App\] and \[sec:boundaries\]. For instance, we show: \[thm:IIHRho-Poincare-intro\] Assume that $(M,g,\mu)$ satisfies $CD(\rho,0)$, $\rho \geq 0$, and that ${\text{II}}_{\partial M} \ge \sigma g|_{\partial M}$, $H_\mu \ge \xi$ on $\partial M$ with $\sigma,\xi > 0$. 
Then: $$\lambda_1 Var_\mu(f) \leq \int_{\partial M} |\nabla_{\partial M} f|^2 d \mu ~,~ \forall f \in C^1(\partial M) ~,$$with: $$\lambda_1 \geq \frac{\rho + a + \sqrt{2 a \rho + a^2}}{2} \geq \max{\left(a,\frac{\rho}{2}\right)} ~,~ a := \sigma \xi ~.$$ This extends and refines the estimate $\lambda_1 \geq (n-1) \sigma^2$ of Xia [@XiaSpectralGapOnConvexBoundary] in the classical unweighted Riemannian setting ($V \equiv 0$) when $Ric_g \geq 0$ ($\rho=0$), since in that case $\xi \geq (n-1) \sigma$. Other estimates where $\sigma$ and $\xi$ are allowed to vary on $\partial M$ are obtained in Section \[sec:boundaries\]. To this end, we show that the boundary $(\partial M,g|_{\partial M}, \exp(-V) d{\textrm{Vol}}_{\partial M})$ satisfies the $CD(\rho_0,N-1)$ condition for an appropriate $\rho_0$. Connections to the Brunn–Minkowski Theory ----------------------------------------- Recall that the classical Brunn–Minkowski inequality in Euclidean space [@Schneider-Book; @GardnerSurveyInBAMS] asserts that: $$\label{eq:BM-intro} Vol((1-t) K + t L)^{1/n} \geq (1-t) Vol(K)^{1/n} + t Vol(L)^{1/n} ~,~ \forall t \in [0,1] ~,$$ for all convex $K,L \subset {\mathbb{R}}^n$; it was extended to arbitrary Borel sets by Lyusternik. Here $Vol$ denotes Lebesgue measure and $A + B := {\left\{a + b \; ; \; a \in A , b \in B\right\}}$ denotes Minkowski addition. We refer to the excellent survey by R. Gardner [@GardnerSurveyInBAMS] for additional details and references. By homogeneity of $Vol$, (\[eq:BM-intro\]) is equivalent to the concavity of the function $t \mapsto Vol(K + t L)^{1/n}$. By Minkowski’s theorem, extending Steiner’s observation for the case that $L$ is the Euclidean ball, $Vol(K+t L)$ is an $n$-degree polynomial $\sum_{i=0}^n {n \choose i} W_{n-i}(K,L) t^i$, whose coefficients $$\label{eq:W-def} W_{n-i}(K,L) := \frac{(n-i)!}{n!} {\left(\frac{d}{dt}\right)}^{i} Vol(K + t L)|_{t=0} ~,$$ are called mixed-volumes. 
The above concavity thus amounts to the following “Minkowski’s second inequality", which is a particular case of the Alexandrov–Fenchel inequalities: $$\label{eq:Mink-II} W_{n-1}(K,L)^2 \geq W_{n-2}(K,L) W_n(K,L) = W_{n-2}(K,L) Vol(K) ~.$$ It was shown by Colesanti [@ColesantiPoincareInequality] that (\[eq:Mink-II\]) is equivalent to (\[eq:Colesanti-intro\]) in the Euclidean setting. In fact, a Poincaré-type inequality on the sphere, which is a reformulation of (\[eq:Colesanti-intro\]) obtained via the Gauss-map, was established already by Hilbert (see [@BusemannConvexSurfacesBook; @HormanderNotionsOfConvexityBook]) in his proof of (\[eq:Mink-II\]) and thus the Brunn–Minkowski inequality for convex sets. Going in the other direction, the Brunn–Minkowski inequality was used by Colesanti to establish (\[eq:Colesanti-intro\]). See e.g. [@BrascampLiebPLandLambda1; @BobkovLedoux; @Ledoux-Book] for further related connections. In view of our generalization of (\[eq:Colesanti-intro\]) to the weighted-Riemannian setting, it is all but natural to wonder whether there is a Riemannian Brunn–Minkowski theory lurking in the background. Note that when $L$ is the Euclidean unit-ball $D$, then $K+ t D$ coincides with $K_t := {\left\{ x \in {\mathbb{R}}^n \; ; \; d(x,K) \leq t \right\}}$, where $d$ is the Euclidean distance. The corresponding distinguished mixed-volumes $W_{n-i}(K) = W_{n-i}(K,D)$, which are called intrinsic-volumes or quermassintegrals, are obtained (up to normalization factors) as the $i$-th variation of $t \mapsto Vol(K_t)$. 
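The Steiner polynomial, the extraction of mixed volumes via (\[eq:W-def\]), and Minkowski's second inequality (\[eq:Mink-II\]) are simple to verify numerically in the plane. The following snippet (our own illustration, not from the text) does so for the unit square $K = [0,1]^2$ with $L = D$, where (\[eq:Mink-II\]) reduces to the isoperimetric inequality $4 \geq \pi$:

```python
import math

# For the unit square K in R^2 and L = D the unit disk, Steiner's formula gives
#   Vol(K + tD) = 1 + 4t + pi t^2 = W_2 + 2 W_1 t + W_0 t^2,
# so W_2 = Area = 1, W_1 = Perimeter/2 = 2, W_0 = pi.

def vol(t):                       # Vol(K + tD) for the unit square
    return 1 + 4*t + math.pi * t * t

# recover the mixed volumes W_{n-i} = ((n-i)!/n!) (d/dt)^i Vol|_{t=0}, n = 2,
# by finite differences:
h = 1e-5
W2 = vol(0.0)                                         # i = 0
W1 = 0.5 * (vol(h) - vol(-h)) / (2*h)                 # i = 1, factor 1!/2!
W0 = 0.5 * (vol(h) - 2*vol(0.0) + vol(-h)) / h**2     # i = 2, factor 0!/2!
assert abs(W2 - 1) < 1e-9 and abs(W1 - 2) < 1e-9 and abs(W0 - math.pi) < 1e-3

# Minkowski's second inequality W_1^2 >= W_0 W_2 (here: 4 >= pi), equivalently
# the concavity of t -> Vol(K + tD)^{1/2}:
assert W1**2 >= W0 * W2
f = lambda t: math.sqrt(vol(t))
hc = 1e-3
for t in (0.0, 0.5, 1.0, 5.0):
    assert f(t + hc) - 2*f(t) + f(t - hc) <= 0        # discrete concavity
print("Minkowski's second inequality verified for the square: 4 >= pi")
```
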
Analogously, we may define $K_t$ on a general Riemannian manifold with $d$ denoting the geodesic distance, and given $1/N \in (-\infty,1/n]$, define the following *generalized* quermassintegrals of $K$ as the $i$-th variations of $t \mapsto \mu(K_t)$, $i=0,1,2$ (up to normalization): $$W_N(K) := \mu(K) ~,~ W_{N-1}(K) := \frac{1}{N} \int_{\partial K} d\mu ~,~ W_{N-2}(K) := \frac{1}{N(N-1)} \int_{\partial K} H_\mu d\mu ~.$$ Applying (\[eq:Colesanti-intro\]) to the constant function $f \equiv 1$, we obtain in Section \[sec:BM\] the following interpretation of the resulting inequality: [(Riemannian Geodesic Brunn-Minkowski for Convex Sets)]{} Let $K$ denote a submanifold of $(M,g,\mu)$ having $C^2$ boundary and bounded away from $\partial M$. Assume that $(K,g|_K,\mu|_K)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$) and that ${\text{II}}_{\partial K} > 0$ ($K$ is locally strictly-convex). Then the following generalized Minkowski’s second inequality for geodesic extensions holds: $$W_{N-1}(K)^2 \geq W_{N}(K) W_{N-2}(K) ~.$$ Equivalently, $(d/dt)^2 N \mu(K_t)^{1/N} |_{t=0} \leq 0$, so that the function $t \mapsto N \mu(K_t)^{1/N}$ is concave on any interval $[0,T]$ so that for all $t \in [0,T)$, $K_t$ is $C^2$ smooth, locally strictly-convex, bounded away from $\partial M$, and $(K_t,g|_{K_t},\mu|_{K_t})$ satisfies $CD(0,N)$. A greater challenge is to find an extension of the Minkowski sum $K + t L$ beyond the linear setting for a general convex $L$. Observe that due to lack of homogeneity, this is *not* the same as extending the operation of Minkowski interpolation $(1-t)K + t L$, a trivial task on any geodesic metric space by using geodesic interpolation. 
Motivated by the equivalence between (\[eq:Colesanti-intro\]) and (\[eq:Mink-II\]) which should persist in the weighted-Riemannian setting, we propose in Section \[sec:BM\] a generalization of $K + t L$ based on a seemingly novel geometric-flow we dub “Parallel Normal Flow", which is characterized by having parallel normals to the evolving surface along the trajectory. Given a locally strictly-convex $K$ and $\varphi \in C^2(\partial K)$, this flow produces a set denoted by $K_{\varphi,t} := K + t \varphi$ (which coincides in the Euclidean setting with $K + t L$ when $\varphi$ is the support function of $L$). We do not go into justifications here for the existence of such a flow on an interval $[0,T]$, but rather observe in Theorem \[thm:Full-BM\] that in such a case and under the $CD(0,N)$ condition, $t \mapsto N \mu(K_{\varphi,t})^{1/N}$ is concave on $[0,T]$; indeed, the latter concavity turns out to be precisely equivalent to our generalized Colesanti inequality (\[eq:Colesanti-intro\]). In view of the remarks above, this observation should be interpreted as a version of the Brunn–Minkowski inequality in the weighted Riemannian setting. Furthermore, this leads to a natural way of defining the mixed-volumes of $K$ and $\varphi$ in this setting, namely as variations of $t \mapsto \mu(K + t \varphi)$. Yet another natural flow producing the aforementioned concavity is also suggested in Section \[sec:BM\]; however, this flow does not seem to produce Minkowski summation in the Euclidean setting. See Remark \[rem:other-gen-BM\] for a comparison with other known extensions of the Brunn–Minkowski inequality to the metric-measure space setting. To conclude this work, we provide in Section \[sec:Apps\] some further applications of our results to the study of isoperimetric inequalities on weighted Riemannian manifolds. Additional applications will be developed in a subsequent work. 
**Acknowledgements.** We thank Andrea Colesanti, Dario Cordero-Erausquin, Bo’az Klartag, Michel Ledoux, Frank Morgan, Van Hoang Nguyen and Shin-ichi Ohta for their comments and interest. Generalized Reilly Formula and Other Preliminaries {#sec:prelim} ================================================== Notation -------- We denote by $int(M)$ the interior of $M$. Given a compact differentiable manifold $\Sigma$ (which is at least $C^k$ smooth), we denote by $C^{k}(\Sigma)$ the space of real-valued functions on $\Sigma$ with continuous derivatives ${\left(\frac{\partial}{\partial x}\right)}^a f$, for every multi-index $a$ of order $|a| \leq k$ in a given coordinate system. Similarly, the space $C^{k,\alpha}(\Sigma)$ denotes the subspace of functions whose $k$-th order derivatives are Hölder continuous of order $\alpha$ on the $C^{k,\alpha}$ smooth manifold $\Sigma$. When $\Sigma$ is non-compact, we may use $C_{loc}^{k,\alpha}(\Sigma)$ to denote the class of functions $u$ on $\Sigma$ so that $u|_{\Sigma_0} \in C^{k,\alpha}(\Sigma_0)$ for all compact subsets $\Sigma_0 \subset \Sigma$. These spaces are equipped with their usual corresponding topologies. Throughout this work we employ the Einstein summation convention. By abuse of notation, we denote different covariant and contravariant versions of a tensor in the same manner. So for instance, $Ric_\mu$ may denote the $2$-covariant tensor $(Ric_\mu)_{\alpha,\beta}$, but also may denote its $1$-covariant $1$-contravariant version $(Ric_\mu)^{\alpha}_{\beta}$, as in: $${\left \langle {\mbox{\rm{Ric}}}_\mu \nabla f , \nabla f \right \rangle} = g_{i,j} ({\mbox{\rm{Ric}}}_{\mu})^i_k \nabla^k f \nabla^j f = ({\mbox{\rm{Ric}}}_\mu)_{i,j} \nabla^i f \nabla^j f = {\mbox{\rm{Ric}}}_\mu(\nabla f,\nabla f) ~.$$ Similarly, reciprocal tensors are interpreted according to the appropriate context.
For instance, the $2$-contravariant tensor $(\text{II}^{-1})^{\alpha,\beta}$ is defined by: $$(\text{II}^{-1})^{i,j} \text{II}_{j,k} = \delta^i_k ~.$$ We freely raise and lower indices by contracting with the metric when there is no ambiguity regarding which underlying metric is being used; this is indeed the case throughout this work, with the exception of Subsection \[subsec:strange-flow\]. Since we mostly deal with $2$-tensors, the only possible contraction is often denoted by using the trace notation $tr$. In addition to the already mentioned notation in the weighted-Riemannian setting, we will also make use of ${\text{div}}_{g,\mu} = {\text{div}}_{(M,g,\mu)}$ to denote the weighted-divergence operator on the weighted-manifold $(M,g,\mu)$, so that if $\mu = \exp(-V) d{\textrm{Vol}}_{M}$ then: $$\text{div}_{g,\mu}(X) := \exp(V) {\text{div}}_{g} (\exp(-V) X) = {\text{div}}_{g}(X) - g(\nabla_{g} V , X) ~,~ \forall X \in T M ~;$$ this is the natural notion of divergence in the weighted-manifold setting, satisfying the usual integration by parts formula (say if $M$ is closed): $$\int_{M} f \cdot \text{div}_{g,\mu}(X) d\mu = - \int_{M} g(\nabla_{g} f,X) d\mu ~,~\forall X \in T M ~.$$ Finally, when studying consequences of the $CD(\rho,N)$ condition, the various expressions in which $N$ appears are interpreted in the limiting sense when $1/N= 0$. For instance, $N/(N-1)$ is interpreted as $1$, and $N f^{1/N}$ is interpreted as $\log f$ (since $\lim_{1/N \rightarrow 0} N (x^{1/N} -1) = \log(x)$; the constant $-1$ in the latter limit does not influence our application of this convention). Proof of the Generalized Reilly Formula --------------------------------------- For completeness, we sketch the proof of our main tool, Theorem \[thm:Reilly\] from the Introduction, following the proof given in [@MaDuGeneralizedReilly]. 
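Before turning to the proof, we note that the weighted integration-by-parts formula for ${\text{div}}_{g,\mu}$ recorded above admits a quick one-dimensional numerical check (ours, not from the text), with $d\mu = \exp(-V)\,dx$ on $[0,1]$ and a vector field vanishing on the boundary:

```python
import math

# 1-D check of weighted integration by parts: with dmu = exp(-V) dx and
# div_{g,mu}(X) = X' - V' X, one has
#   int f * div_{g,mu}(X) dmu = - int f' X dmu
# whenever the vector field X vanishes at the endpoints.

def simpson(g, a=0.0, b=1.0, n=2000):
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k*h) for k in range(1, n))
    return s * h / 3

V, Vp = (lambda x: x**2), (lambda x: 2*x)
f, fp = (lambda x: math.cos(x)), (lambda x: -math.sin(x))
X, Xp = (lambda x: x*(1-x)), (lambda x: 1 - 2*x)     # X(0) = X(1) = 0

w = lambda x: math.exp(-V(x))                        # density of mu
div_mu = lambda x: Xp(x) - Vp(x) * X(x)              # weighted divergence

lhs = simpson(lambda x: f(x) * div_mu(x) * w(x))
rhs = -simpson(lambda x: fp(x) * X(x) * w(x))
assert abs(lhs - rhs) < 1e-10
print("weighted integration by parts verified:", lhs, "~", rhs)
```
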
The generalized Bochner–Lichnerowicz–Weitzenböck formula [@Lichnerowicz1970GenRicciTensorCRAS; @BakryEmery] states that for any $u \in C^3_{loc}(int(M))$, we have: $$\label{eq:Bochner} \frac{1}{2} L {\left\vert\nabla u\right\vert}^2 = {\left\Vert\nabla^2 u\right\Vert}^2 + {\left \langle \nabla L u, \nabla u \right \rangle} + {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u , \nabla u \right \rangle} ~.$$ We introduce an orthonormal frame of vector fields $e_1,\ldots,e_n$ so that $e_n = \nu$ on $\partial M$, and denote $u_i = du(e_i)$, $u_{i,j} = \nabla^2 u(e_i,e_j)$. Assuming in addition that $u \in C^2(M)$, we may integrate by parts: $$\int_M \frac{1}{2} L {\left\vert\nabla u\right\vert}^2 d\mu = \int_{\partial M} \sum_{i=1}^n u_i u_{i,n} d\mu ~,~ \int_M {\left \langle \nabla L u, \nabla u \right \rangle} d\mu = \int_{\partial M} u_n (L u) d\mu - \int_{M} (L u)^2 d\mu ~.$$ Consequently, integrating (\[eq:Bochner\]) over $M$, we obtain: $$\int_M {\left((L u)^2 - {\left\Vert\nabla^2 u\right\Vert}^2 - {\left \langle {\mbox{\rm{Ric}}}_\mu \nabla u , \nabla u \right \rangle}\right)} d\mu = \int_{\partial M} {\left(u_n (Lu) - \sum_{i=1}^n u_i u_{i,n}\right)} d\mu ~.$$ Now: $$u_n (Lu) - \sum_{i=1}^n u_i u_{i,n} = \sum_{i=1}^{n-1} {\left(u_n u_{i,i} - u_i u_{i,n}\right)} - u_n {\left \langle \nabla u,\nabla V \right \rangle} ~.$$ Computing the different terms: $$\begin{aligned} \sum_{i=1}^{n-1} u_{i,i} &=& \sum_{i=1}^{n-1} {\left(e_i (e_i u) - (\nabla_{e_i} e_i) u\right)} = \sum_{i=1}^{n-1} {\left(e_i (e_i u) - ((\nabla_{\partial M})_{e_i} e_i) u\right)} + {\left(\sum_{i=1}^{n-1} (\nabla_{\partial M})_{e_i} e_i - \nabla_{e_i} e_i \right)} u \\ &=& \Delta_{\partial M} u + {\left(\sum_{i=1}^{n-1} {\text{II}}_{i,i}\right)} e_n u = \Delta_{\partial M} u + tr({\text{II}}) u_n ~;\end{aligned}$$ $$\sum_{i=1}^{n-1} u_i u_{i,n} = \sum_{i=1}^{n-1} u_i {\left(e_i (e_n u) - (\nabla_{e_i} e_n) u\right)} = {\left \langle \nabla_{\partial M} u,\nabla_{\partial M} u_n \right
\rangle} - {\left \langle {\text{II}}\;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} ~.$$ Putting everything together: $$\begin{aligned} & & \int_M {\left((L u)^2 - {\left\Vert\nabla^2 u\right\Vert}^2 - {\left \langle {\mbox{\rm{Ric}}}_\mu \nabla u , \nabla u \right \rangle}\right)} d\mu = \int_{\partial M} {\left(u_n (\Delta_{\partial M} u - {\left \langle \nabla u,\nabla V \right \rangle}) + \text{tr(II)} (u_n)^2\right)} d\mu \\ &-& \int_{\partial M} {\left \langle \nabla_{\partial M} u,\nabla_{\partial M} u_n \right \rangle} d\mu + \int_{\partial M} {\left \langle {\text{II}}\;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} d\mu ~.\end{aligned}$$ This is the formula obtained in [@MaDuGeneralizedReilly] for smooth functions. To conclude the proof, simply note that: $${\left \langle \nabla u,\nabla V \right \rangle} = {\left \langle \nabla_{\partial M} u, \nabla_{\partial M} V \right \rangle} + u_n V_n ~,~ L_{\partial M} = \Delta_{\partial M} - {\left \langle \nabla_{\partial M} V,\nabla_{\partial M} \right \rangle} ~,~ H_\mu = tr({\text{II}}) - V_n ~,$$ and thus: $$\int_{\partial M} {\left(u_n (\Delta_{\partial M} u - {\left \langle \nabla u,\nabla V \right \rangle}) + tr({\text{II}}) (u_n)^2\right)} d\mu = \int_{\partial M} {\left(u_n L_{\partial M} u + H_\mu u_n^2\right)} d\mu ~.$$ Integrating by parts one last time, this time on $\partial M$, we obtain: $$\int_{\partial M} u_n L_{\partial M} u \; d\mu = - \int_{\partial M} {\left \langle \nabla_{\partial M} u_n , \nabla_{\partial M} u \right \rangle} d\mu ~.$$ Finally, plugging everything back, we obtain the asserted formula for $u$ as above: $$\begin{aligned} & & \int_M {\left((L u)^2 - {\left\Vert\nabla^2 u\right\Vert}^2 - {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u , \nabla u \right \rangle}\right)} d\mu \\ & = & \int_{\partial M} H_\mu u_n^2 d\mu - 2 \int_{\partial M} {\left \langle \nabla_{\partial M} u_n , \nabla_{\partial M} u \right \rangle} d\mu + \int_{\partial M} 
{\left \langle {\text{II}}\;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} d\mu ~.\end{aligned}$$ To conclude that the assertion in fact holds for $u \in {\mathcal{S}}_N(M)$, we employ a standard approximation argument using a partition of unity and mollification. Since the metric is assumed at least $C^3$ and $\partial M$ is $C^2$, we may approximate any $u \in {\mathcal{S}}_N(M)$ by functions $u_k \in C^3_{loc}(int(M)) \cap C^2(M)$, so that $u_k \rightarrow u$ in $C^2_{loc}(int(M))$ and $C^1(M)$, and $(u_k)_\nu \rightarrow u_\nu$ in $C^1(\partial M)$. The assertion then follows by passing to the limit. For minor technical reasons, it will be useful to record the following variants of the generalized Reilly formula, which are obtained by analogous approximation arguments to the one given above: - If $u_\nu$ or $u$ are constant on $\partial M$ and $u \in {\mathcal{S}}_0(M)$ (recall ${\mathcal{S}}_0(M) := C^2_{loc}(int(M)) \cap C^1(M)$), then: $$\begin{gathered} \label{Reilly3} \int_M (L u)^2 d\mu = \int_M {\left\Vert\nabla^2 u\right\Vert}^2 d\mu + \int_M {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u, \nabla u \right \rangle} d\mu + \\ \int_{\partial M} H_\mu (u_\nu)^2 d\mu + \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} d\mu ~.\end{gathered}$$ - If $u \in {\mathcal{S}}_D(M) := {\mathcal{S}}_0(M) \cap C^2(\partial M)$, then integration by parts yields: $$\begin{gathered} \label{Reilly2} \int_M (L u)^2 d\mu = \int_M {\left\Vert\nabla^2 u\right\Vert}^2 d\mu + \int_M {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u, \nabla u \right \rangle} d\mu + \\ \int_{\partial M} H_\mu (u_\nu)^2 d\mu + \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} d\mu + 2 \int_{\partial M} {\left \langle u_\nu, L_{\partial M} u \right \rangle} d\mu ~.\end{gathered}$$ Throughout this work, when integrating by parts, we employ a slightly 
more general version of the textbook Stokes Theorem $\int_M d\omega = \int_{\partial M} \omega$, in which one only assumes that $\omega$ is a continuous differential $(n-1)$-form on $M$ which is differentiable on $int(M)$ (and so that $d\omega$ is integrable there); a justification may be found in [@MacdonaldGeneralizedStokes]. This permits us to work with the classes $C^k_{loc}(int(M))$ occurring throughout this work. The $CD(\rho,N)$ condition for $1/N \in [-\infty,1/n]$ ------------------------------------------------------ The results in this subsection for $1/N \in [0,1/n]$ are due to Bakry (e.g. [@BakryStFlour Section 6]). \[lem:CS\] For any $u \in C^2_{loc}(M)$ and $1/N \in [-\infty,1/n]$: $$\label{eq:Bakry-CS} \Gamma_2(u) = {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u, \nabla u \right \rangle} + {\left\Vert\nabla^2 u\right\Vert}^2 \geq {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}\; \nabla u, \nabla u \right \rangle} + \frac{1}{N} (Lu)^2 ~.$$ Our convention throughout this work is that $-\infty \cdot 0 = 0$, and so if $Lu = 0$ at a point $p \in M$, the assertion when $\frac{1}{N} = -\infty$ is that: $$\Gamma_2(u) \geq {\left \langle {\mbox{\rm{Ric}}}_{\mu,0}\; \nabla u, \nabla u \right \rangle} ~,$$ at that point. Recalling the definitions, this is equivalent to showing that: $${\left\Vert\nabla^2 u\right\Vert}^2 + \frac{1}{N-n} {\left \langle \nabla u,\nabla V \right \rangle}^2 \geq \frac{1}{N} (Lu)^2 ~.$$ Clearly the case that $1/N = 0$ ($N = \infty$) follows. But by Cauchy–Schwarz: $${\left\Vert\nabla^2 u\right\Vert}^2 \geq \frac{1}{n} (\Delta u)^2 ~,$$ and so the case $N=n$, which corresponds to a constant function $V$ so that ${\mbox{\rm{Ric}}}_\mu = {\mbox{\rm{Ric}}}_{\mu,n} = {\mbox{\rm{Ric}}}_g$ and $L = \Delta$, also follows.
It remains to show that: $$\frac{1}{n} (\Delta u)^2 + \frac{1}{N-n} {\left \langle \nabla u,\nabla V \right \rangle}^2 \geq \frac{1}{N} (Lu)^2 ~.$$ The case $1/N = -\infty$ ($N=0$) follows since when $0 = L u = \Delta u - {\left \langle \nabla u,\nabla V \right \rangle}$ then: $$\frac{1}{n} (\Delta u)^2 - \frac{1}{n} {\left \langle \nabla u,\nabla V \right \rangle}^2 = \frac{1}{n} (\Delta u + {\left \langle \nabla u,\nabla V \right \rangle}) (\Delta u - {\left \langle \nabla u,\nabla V \right \rangle}) = 0 ~.$$ In all other cases, the assertion follows from another application of Cauchy–Schwarz: $$\frac{1}{\alpha} A^2 + \frac{1}{\beta} B^2 \geq \frac{1}{\alpha + \beta} (A+B)^2 \;\;\; \forall A,B \in {\mathbb{R}}~,$$ valid as soon as $(\alpha,\beta)$ lie in either the set ${\left\{\alpha, \beta > 0\right\}}$ or the set ${\left\{ \alpha + \beta < 0 \text{ and }\alpha \beta < 0\right\}}$. It is immediate to deduce from Lemma \[lem:CS\] that for $1/N \in (-\infty,1/n]$, ${\mbox{\rm{Ric}}}_{\mu,N} \geq \rho g$ on $M$, $\rho \in {\mathbb{R}}$, if and only if: $$\Gamma_2(u) \geq \rho {\left\vert\nabla u\right\vert}^2 + \frac{1}{N} (Lu)^2 ~,~ \forall u \in C^2_{loc}(M) ~.$$ Indeed, the necessity follows from Lemma \[lem:CS\]. The sufficiency follows by locally constructing, given $p \in M$ and $X \in T_p M$, a function $u$ so that $\nabla u = X$ at $p$ and equality holds in both applications of the Cauchy–Schwarz inequality in the proof above, as this implies that ${\mbox{\rm{Ric}}}_{\mu,N}(X,X) \geq \rho {\left\vert X \right\vert}^2$. Indeed, equality in the first application implies that $\nabla^2 u$ is a multiple of $g$ at $p$, whereas the equality in the second implies when $1/N \notin {\left\{0,1/n\right\}}$ that ${\left \langle \nabla u,\nabla V \right \rangle}$ and $\Delta u$ are appropriately proportional at $p$; clearly all three requirements can be simultaneously met. The cases $1/N \in {\left\{0,1/n\right\}}$ follow by approximation.
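The two applications of the Cauchy–Schwarz inequality in the proof above are elementary finite-dimensional facts, which can be spot-checked numerically (our own illustration, not part of the text); note that the numerator identity $\frac{1}{\alpha}A^2 + \frac{1}{\beta}B^2 - \frac{(A+B)^2}{\alpha+\beta} = \frac{(\beta A - \alpha B)^2}{\alpha\beta(\alpha+\beta)}$ explains both admissible parameter regimes.

```python
import random

# Spot-checks of the two Cauchy-Schwarz facts used in the proof of the lemma:
#  (i)  ||H||_HS^2 >= (tr H)^2 / n for any symmetric n x n matrix H;
#  (ii) A^2/alpha + B^2/beta >= (A+B)^2/(alpha+beta) whenever alpha,beta > 0,
#       or alpha + beta < 0 and alpha*beta < 0.

random.seed(1)

def check_trace(n):
    H = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    H = [[(H[i][j] + H[j][i]) / 2 for j in range(n)] for i in range(n)]
    hs2 = sum(H[i][j]**2 for i in range(n) for j in range(n))
    tr = sum(H[i][i] for i in range(n))
    assert hs2 >= tr**2 / n - 1e-12

def check_scalar(A, B, alpha, beta):
    assert A**2/alpha + B**2/beta >= (A + B)**2/(alpha + beta) - 1e-9

for _ in range(1000):
    check_trace(random.randint(1, 6))
    A, B = random.uniform(-5, 5), random.uniform(-5, 5)
    # regime {alpha, beta > 0}:
    check_scalar(A, B, random.uniform(0.1, 5), random.uniform(0.1, 5))
    # regime {alpha + beta < 0 and alpha*beta < 0}: take alpha > 0 > beta
    alpha = random.uniform(0.1, 2)
    beta = -alpha - random.uniform(0.1, 3)     # ensures alpha + beta < 0
    check_scalar(A, B, alpha, beta)
print("Cauchy-Schwarz inequalities verified")
```
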
Solution to Poisson Equation on Weighted Riemannian Manifolds ------------------------------------------------------------- As our manifold is smooth, connected, compact, with $C^2$ smooth boundary and strictly positive $C^2$-density all the way up to the boundary, all of the classical elliptic existence, uniqueness and regularity results immediately extend from the Euclidean setting to our weighted-manifold one (see e.g. [@Taylor-PDEBook-I Chapter 5] and [@MorreyBook]); for more general situations (weaker regularity of metric, Lipschitz domains, etc.) see e.g. [@MitreaTaylor-PDEonLipManifolds] and the references therein. We summarize the results we require in the following: Given a weighted-manifold $(M,g,\mu)$, $\mu = \exp(-V) d{\textrm{Vol}}_M$, we assume that $\partial M$ is $C^2$ smooth. Let $\alpha \in (0,1)$, and assume that $g$ is $C^{2,\alpha}$ smooth and $V \in C^{1,\alpha}(M)$. Let $f \in C^{\alpha}(M)$, $\varphi_D \in C^{2,\alpha}(\partial M)$ and $\varphi_N \in C^{1,\alpha}(\partial M)$. Then there exists a function $u \in C^{2,\alpha}_{loc}(int(M)) \cap C^{1,\beta}(M)$ for all $\beta \in (0,1)$, which solves: $$L u = f ~ \text{on $M$} ~,$$ with either of the following boundary conditions on $\partial M$: 1. Dirichlet: $u|_{\partial M} = \varphi_D$, assuming $\partial M \neq \emptyset$. 2. Neumann: $u_\nu|_{\partial M} = \varphi_N$, assuming the following compatibility condition is satisfied: $$\int_{M} f d\mu = \int_{\partial M} \varphi_N d\mu ~.$$ In particular, $u \in {\mathcal{S}}_0(M)$ in either case. Moreover, $u \in {\mathcal{S}}_N(M)$ in the Neumann case and $u \in {\mathcal{S}}_D(M)$ in the Dirichlet case. Since $\partial M$ is only assumed $C^2$, when writing $\varphi_D \in C^{2,\alpha}(\partial M)$ we mean that $\varphi_D$ may be extended to a $C^{2,\alpha}$ function on the entire $M$.
If we were to assume that $\partial M$ is $C^{2,\alpha}$ smooth instead of merely $C^2$, we could conclude that $u \in C^{2,\alpha}(M)$ and hence $u \in {\mathcal{S}}_N(M)$ for both Neumann and Dirichlet boundary conditions. But we make the extra effort to stay with the $C^2$ assumption. For future reference, we remark that it is enough to only assume in the proof of the generalized Reilly formula (including the final approximation argument) that the metric $g$ is $C^3$ smooth, so in particular the above regularity results apply. We will not require the uniqueness of $u$ above, but for completeness we mention that this is indeed the case for Dirichlet boundary conditions, and up to an additive constant in the Neumann case. Spectral-gap on Weighted Riemannian Manifolds --------------------------------------------- Let $\lambda_1^N$ denote the best constant in the Neumann Poincaré inequality: $$\lambda_1^N Var_\mu(f) \leq \int_M {\left\vert\nabla f\right\vert}^2 d\mu ~,~ \forall f \in H^1(M) ~,$$ and let $\lambda_1^D$ denote the best constant in the Dirichlet Poincaré inequality: $$\lambda_1^D \int_M f^2 d\mu \leq \int_M {\left\vert\nabla f\right\vert}^2 d\mu ~,~ \forall f \in H^1_0(M) ~.$$ Here $H^1(M)$ and $H^1_0(M)$ denote the Sobolev spaces obtained by completing $C^1(M)$ and $C^1_0(M)$ in the $H^1$-norm $\sqrt{\int_M f^2 d{\textrm{Vol}}+ \int_M {\left\vert\nabla f\right\vert}^2 d{\textrm{Vol}}}$. It is well-known that $\lambda_1^N$ and $\lambda_1^D$ coincide with the spectral-gaps of the self-adjoint positive semi-definite extensions of $-L$ to the appropriate dense subspaces of $L^2(M)$; furthermore, since $M$ is assumed compact, both instances have an orthonormal complete basis of eigenfunctions with corresponding discrete non-negative spectra. 
In the first case, $\lambda_1^N$ is the first positive eigenvalue of $-L$ with zero Neumann boundary conditions: $$-L u = \lambda^N_1 u \text{ on $M$ } ~,~ u_\nu \equiv 0 \text{ on $\partial M$} ~;$$ the zero eigenvalue corresponds to the eigenspace of constant functions, and so only functions $u$ orthogonal to constants are considered. In the second case, $\lambda_1^D$ is the first (positive) eigenvalue of $-L$ with zero Dirichlet boundary conditions: $$-L u = \lambda^D_1 u \text{ on $M$ } ~,~ u \equiv 0 \text{ on $\partial M$} ~.$$ Our assumptions on the smoothness of $M$, its boundary, and the density $\exp(-V)$, guarantee by elliptic regularity theory that in either case, all eigenfunctions are in ${\mathcal{S}}_0(M)$ (in fact, in ${\mathcal{S}}_N(M)$ in the Neumann case and in ${\mathcal{S}}_D(M)$ in the Dirichlet case). Generalized Poincaré-type inequalities on $M$ {#sec:BLN} ============================================= In this section we provide a proof of Theorem \[thm:intro-BLN\] from the Introduction, which we repeat here for convenience: \[thm:BLN\] Assume that ${\mbox{\rm{Ric}}}_{\mu,N} > 0$ on $M$ with $1/N \in (-\infty,1/n]$. The generalized Reilly formula implies all of the inequalities below for any $f \in C^{1}(M)$: 1. (Neumann Dimensional Brascamp–Lieb inequality on locally convex domain) Assume that ${\text{II}}_{\partial M}\geq 0$ ($M$ is locally convex). Then: $$\frac{N}{N-1} Var_\mu(f) \leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu ~.$$ 2. (Dirichlet Dimensional Brascamp–Lieb inequality on generalized mean-convex domain) Assume that $H_\mu \geq 0$ ($M$ is generalized mean-convex), $f \equiv 0$ on $\partial M \neq \emptyset$. Then: $$\frac{N}{N-1} \int_M f^2 d\mu\leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu ~.$$ 3. 
(Neumann Dimensional Brascamp–Lieb inequality on strictly generalized mean-convex domain) Assume that $H_\mu > 0$ ($M$ is strictly generalized mean-convex). Then for any $C \in {\mathbb{R}}$: $$\frac{N}{N-1} \text{Var}_\mu(f) \leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu + \int_{\partial M} \frac{1}{H_\mu} \Bigl(f - C\Bigr)^2 d\mu ~.$$ In particular, if $\int_{\partial M} \frac{1}{H_\mu} d\mu < \infty$, we have: $$\frac{N}{N-1} \text{Var}_\mu(f) \leq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \nabla f, \nabla f \right \rangle} d\mu + Var_{\mu / H_\mu}(f|_{\partial M}) ~.$$ Previously Known Partial Cases {#subsec:prev-known} ------------------------------ ### $1/N=0$ - Generalized Brascamp–Lieb Inequalities Recall that when $1/N = 0$, ${\mbox{\rm{Ric}}}_{\mu,N} = {\mbox{\rm{Ric}}}_\mu$, and $\frac{N}{N-1} = 1$. When $(M,g)$ is Euclidean space ${\mathbb{R}}^n$ and $\mu = \exp(-V) dx$ is a finite measure, the Brascamp–Lieb inequality [@BrascampLiebPLandLambda1] asserts that: $$Var_{\mu}(f) \leq \int_{{\mathbb{R}}^n}{\left \langle (\nabla^2 V)^{-1} \; \nabla f , \nabla f \right \rangle} d\mu ~,~ \forall f \in C^1({\mathbb{R}}^n) ~.$$ Observe that in this case, ${\mbox{\rm{Ric}}}_{\mu} = \nabla^2 V$, and so taking into account Remark \[rem:non-compact\], we see that the Brascamp–Lieb inequality follows from Case (1). The latter is easily seen to be sharp, as witnessed by the Gaussian measure in Euclidean space. The extension to the weighted-Riemannian setting for $1/N = 0$, at least when $(M,g)$ has no boundary, is well-known to experts, although we do not know to whom this should be attributed (see e.g. the Witten Laplacian method of Helffer–Sjöstrand [@Helffer-DecayOfCorrelationsViaWittenLaplacian] as exposed by Ledoux [@LedouxSpinSystemsRevisited]).
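To make the Gaussian sharpness witness concrete, here is a small symbolic check of our own (a sketch, not from the text): for the standard Gaussian on ${\mathbb{R}}$ one has $\nabla^2 V \equiv 1$, linear functions attain equality in the Brascamp–Lieb inequality, and non-linear test functions give a strict inequality.

```python
import sympy as sp

t = sp.symbols('t', real=True)
gauss = sp.exp(-t**2 / 2) / sp.sqrt(2 * sp.pi)   # V = t^2/2, so Hess V == 1

def var(f):
    # Var_mu(f) for the standard Gaussian measure mu
    m = sp.integrate(f * gauss, (t, -sp.oo, sp.oo))
    return sp.integrate((f - m)**2 * gauss, (t, -sp.oo, sp.oo))

def bl_rhs(f):
    # Brascamp-Lieb right-hand side: int (Hess V)^{-1} (f')^2 dmu, with Hess V == 1
    return sp.integrate(sp.diff(f, t)**2 * gauss, (t, -sp.oo, sp.oo))

assert var(t) == 1 and bl_rhs(t) == 1            # linear f: equality (sharpness witness)
assert var(t**2) == 2 and bl_rhs(t**2) == 4      # quadratic f: strict inequality
```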
The case of a locally-convex boundary with Neumann boundary conditions (Case 1 above) can easily be justified in Euclidean space by a standard approximation argument, but this is less clear in the Riemannian setting; probably this can be achieved by employing the Bakry–Émery semi-group formalism (see Qian [@QianGradientEstimateWithBoundary] and Wang [@Wang2010SemiGroupOnManifoldsWithBoundary; @WangYanConvexManifolds]). To the best of our knowledge, the other two Cases (2) and (3) are new even for $1/N = 0$. ### ${\mbox{\rm{Ric}}}_{\mu,N} \geq \rho g$ with $\rho > 0$ - Generalized Lichnerowicz Inequalities {#subsec:Lich} Assume that ${\mbox{\rm{Ric}}}_{\mu,N} \geq \rho g$ with $\rho > 0$, so that $(M,g,\mu)$ satisfies the $CD(\rho,N)$ condition. It follows that: $$\label{eq:Lich-silly} \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \; \nabla f, \nabla f \right \rangle} d\mu \leq \frac{1}{\rho} \int_M {\left\vert\nabla f\right\vert}^2 d\mu ~,$$ and so we may replace in all three cases of Theorem \[thm:BLN\] every occurrence of the left-hand term in (\[eq:Lich-silly\]) by the right-hand one. So for instance, Case (1) implies that: $$\label{eq:Lich-Estimate} \frac{N}{N-1} \text{Var}_\mu(f) \leq \frac{1}{\rho} \int_M {\left\vert\nabla f\right\vert}^2 d\mu ~,$$ and similarly for the other two cases; we refer to the resulting inequalities as Cases (1’), (2’) and (3’). Clearly, Cases (1’) and (2’) are spectral-gap estimates for $-L$ with Neumann and Dirichlet boundary conditions, respectively. Recall that in the non-weighted Riemannian setting ($\mu = {\textrm{Vol}}_M$ and $N=n$), ${\mbox{\rm{Ric}}}_{\mu,N} = {\mbox{\rm{Ric}}}_g$. In this classical setting, the above spectral-gap estimates are due to the following authors: when $\partial M = \emptyset$ all three cases degenerate to a single statement, due to Lichnerowicz [@LichnerowiczBook], and by Obata’s theorem [@Obata-EqualityInLichnerowicz] equality is attained if and only if $M$ is the $n$-sphere. 
When $\partial M \neq \emptyset$, Case (1’) is due to Escobar [@EscobarLichnerowiczWithConvexBoundary] and independently Xia [@XiaLichnerowiczWithConvexBoundary]; Case (2’) is due to Reilly [@ReillyOriginalFormula]; in both cases, one has equality if and only if $M$ is the $n$-hemisphere; Case (3’) seems new even in the classical case. On weighted-manifolds with $N \in [n,\infty]$, Case (1’) is certainly known, see e.g. [@LiWei-SpectralGapEstimatesAndRigidityForWeightedManifolds] (in fact, a stronger log-Sobolev inequality goes back to Bakry and Émery [@BakryEmery]); Case (2’) was recently obtained under a slightly stronger assumption by Ma and Du [@MaDuGeneralizedReilly Theorem 2]; for an adaptation to the $CD(\rho,N)$ condition see Li and Wei [@LiWei-SpectralGapEstimatesAndRigidityForWeightedManifolds Theorem 3], who also showed that in both cases one has equality if and only if $N=n$ and $M$ is the $n$-sphere or $n$-hemisphere endowed with its Riemannian volume form, corresponding to whether $\partial M$ is empty or non-empty, respectively. As already mentioned, Case (3’) seems new. To the best of our knowledge, the case of $N<0$ has not been previously treated in the Riemannian setting. ### Generalized Bobkov–Ledoux–Nguyen Inequalities In the *Euclidean setting* with $N \leq -1$ (and under the stronger assumption that ${\mbox{\rm{Ric}}}_\mu = \nabla^2 V > 0$), Case (1) with a better constant of $\frac{n-N-1}{n-N}$ instead of our $\frac{N}{N-1} = \frac{-N}{-N+1}$ is due to Nguyen [@NguyenDimensionalBrascampLieb Proposition 10], who generalized and sharpened a previous version valid for $N \leq 0$ by Bobkov–Ledoux [@BobkovLedouxWeightedPoincareForHeavyTails]. However, on a general *weighted Riemannian manifold*, our constant $\frac{N}{N-1}$ is best possible in the range $N \in (-\infty,-1] \cup [n,\infty]$, see Subsection \[subsec:sharp-constant\] below.
Note that in the Euclidean case, the $CD(0,N)$ condition with $N \in {\mathbb{R}}$ corresponds to Borell’s class of convex measures [@BorellConvexMeasures], also known as “$1/N$-concave measures” (cf. [@EMilmanRotemHomogeneous]). When $N<0$, these measures are heavy-tailed, having tails decaying to zero only polynomially fast, and consequently the corresponding generator $-L$ may not have a strictly positive spectral-gap. This is compensated by the weight ${\mbox{\rm{Ric}}}_{\mu,N}^{-1}$ in the resulting Poincaré-type inequality. A prime example is given by the Cauchy measure in ${\mathbb{R}}^n$, which satisfies $CD(0,0)$ (it is $-\infty$-concave). See [@BobkovLedouxWeightedPoincareForHeavyTails; @NguyenDimensionalBrascampLieb] for more information. Still in the Euclidean setting with $N \geq n$ (in fact $N > n-1$), a dimensional version of the Brascamp–Lieb inequality which is reminiscent of Case (1) was obtained by Nguyen [@NguyenDimensionalBrascampLieb Theorem 9]. The Bobkov–Ledoux results were obtained as an infinitesimal version of the Borell–Brascamp–Lieb inequality [@BorellConvexMeasures; @BrascampLiebPLandLambda1] (see Subsection \[subsec:BBL\]) - a generalization of the Brunn–Minkowski inequality, which is strictly confined to the Euclidean setting. Nguyen’s approach is already more similar to our own, dualizing an ad-hoc Bochner formula obtained for a non-stationary diffusion operator. In any case, our unified formulation (and treatment) of both regimes $N \leq 0$ and $N \in [n,\infty]$, the weaker assumption that ${\mbox{\rm{Ric}}}_{\mu,N} > 0$, the extension to the Riemannian setting with sharp constant $\frac{N}{N-1}$ and the treatment of the different boundary conditions in Cases (1), (2) and (3) seem new.
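The Cauchy example can be checked symbolically in the simplest case; the following sketch (our own illustration) takes $n=1$, $N=0$ and uses the one-dimensional form of the generalized Ricci tensor, ${\mbox{\rm{Ric}}}_{\mu,N} = V'' - (V')^2/(N-n)$, which for the density $e^{-V} = (1+t^2)^{-1}$ comes out strictly positive.

```python
import sympy as sp

t = sp.symbols('t', real=True)
n, N = 1, 0                       # one-dimensional case, CD(0,0) claim for the Cauchy measure
V = sp.log(1 + t**2)              # density exp(-V) = 1/(1+t^2): unnormalized Cauchy measure

# 1D generalized Ricci tensor: Ric_{mu,N} = V'' - (V')^2 / (N - n)
ric = sp.simplify(sp.diff(V, t, 2) - sp.diff(V, t)**2 / (N - n))

# Ric_{mu,0} = 2/(1+t^2) > 0 everywhere, so CD(0,0) holds
assert sp.simplify(ric - 2 / (1 + t**2)) == 0
```

Note that the spectral-gap of this measure on ${\mathbb{R}}$ degenerates, in line with the discussion of heavy tails above, so the weight ${\mbox{\rm{Ric}}}_{\mu,N}^{-1}$ is genuinely needed.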
Sharpness of the $\frac{N}{N-1}$ constant in the Riemannian setting {#subsec:sharp-constant} ------------------------------------------------------------------- We briefly comment on the sharpness of the constant $\frac{N}{N-1}$ for the range $N \in (-\infty,-1] \cup [n,\infty]$ in the more traditional setting of Case (1); the sharpness of Case (2) is also shown for $N \geq n$. This constant is no longer sharp in Case (1) for $N < 0$ with ${\left\vert N\right\vert} \ll 1$, since under the $CD(\rho,N)$ condition with $\rho > 0$, the spectral-gap remains bounded below as $N<0$ increases to $0$, see [@EMilmanNegativeDimension]. As described in Subsection \[subsec:Lich\], it is classical that equality in the Lichnerowicz estimate (\[eq:Lich-Estimate\]) is attained by the $n$-sphere and $n$-hemisphere in Cases (1) (and (3)) and by the $n$-hemisphere in Case (2), both endowed with the usual Riemannian volume. This demonstrates the sharpness of the constant $\frac{N}{N-1}$ when $N=n$. For general $N \in (-\infty,-1] \cup (n,\infty]$, the sharpness may be shown as follows.
Given $\rho > 0$, set $\delta = \frac{\rho}{N-1}$ and: $$\beta := \begin{cases} \frac{\pi}{2 \sqrt{\delta}} & \delta > 0 \\ \infty & \delta < 0 \end{cases} \; , \; \alpha := \begin{cases} -\beta & \text{Case (1)} \\ 0 & \text{Case (2)} \end{cases} .$$ Define the following functions of $t \in [\alpha,\beta]$: $$R(t) := \begin{cases}\cos(\sqrt{\delta} t) & \delta > 0 \\ \cosh(\sqrt{-\delta} t) & \delta < 0 \end{cases} ~,~ \Psi_{N-1}(t) := R^{N-1}(t) ~.$$ If we extend our setup to include the case of one-dimensional ($n=1$) weighted manifolds, namely the case of the real line endowed with a density, then it is immediate to check that $([\alpha,\beta],{\left\vert\cdot\right\vert},\mu = \Psi_{N-1}(t) dt)$ satisfies the $CD(\rho,N)$ condition, since: $${\mbox{\rm{Ric}}}_{\mu,N} = -(N-1) \frac{(\Psi_{N-1}^{\frac{1}{N-1}})''}{\Psi_{N-1}^{\frac{1}{N-1}}} = -(N-1) \frac{R''}{R} = (N-1) \delta = \rho ~.$$ Note that when $n=1$, our constant $\frac{N}{N-1}$ and Nguyen’s one $\frac{n-N-1}{n-N}$ coincide. As we have learned from Nguyen, his constant is sharp in the Euclidean setting for any $n \geq 1$. One consequently verifies the sharpness for $n=1$ by using the same test function used by Nguyen in [@NguyenDimensionalBrascampLieb], namely $f(t) = \frac{d}{dt} R(t)$. 
Indeed, when $N < -1$ or $N > 1$ (to ensure convergence of the integrals below) we have: $$\int f(t) d\mu = \int_{-\beta}^\beta R'(t) R^{N-1}(t) dt = \frac{1}{N} \int_{-\beta}^\beta (R^N(t))' dt = 0 ~,$$ since $\lim_{t \rightarrow \beta} R^N(t) = 0$, and since also $f(0) = R'(0) = 0$ (so that the Dirichlet boundary condition at $t=0$ is satisfied in Case (2)), we may integrate by parts: $$\int f^2(t) d\mu = \frac{1}{N} \int_{\alpha}^\beta R'(t) (R^N(t))' dt = -\frac{1}{N}\int_{\alpha}^\beta R''(t) R^N(t) dt = \frac{\rho}{N (N-1)} \int_{\alpha}^\beta R^{N+1}(t) dt ~.$$ On the other hand: $$\int {\mbox{\rm{Ric}}}_{\mu,N}^{-1} f'(t)^2 d\mu = \frac{1}{\rho} \int_{\alpha}^\beta (R''(t))^2 R^{N-1}(t) dt = \frac{\rho}{(N-1)^2} \int_{\alpha}^\beta R^{N+1}(t) dt ~.$$ Comparing the last two expressions, we conclude the sharpness of the constant $\frac{N}{N-1}$ for $n=1$ in Case (1) when ${\left\vertN\right\vert} > 1$ and in Case (2) when $N > 1$ (the function $f(t)$ does not vanish at infinity when $N < 0$ so this range is excluded in Case (2)). When $N = -1$, one uses an appropriately truncated version of the above test function. In any case, to assert sharpness for a *compact* weighted manifold with strictly positive density, we truncate the above construction at a finite $\beta_{\epsilon}\in (0,\beta)$, and let $\beta_{\epsilon}$ tend to $\beta$. To see the sharpness for $n \geq 2$, we proceed by repeating the construction from [@EMilmanSharpIsopInqsForCDD], which emulates the above $1$-dimensional model space on a thin weighted $n$-dimensional manifold of revolution. 
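The preceding one-dimensional computation is easily confirmed symbolically; here is a sketch for the sample parameters $N=3$, $\rho=2$ (so $\delta = 1$, $R(t) = \cos t$, $\beta = \pi/2$) in Case (1), a choice made purely for illustration.

```python
import sympy as sp

t = sp.symbols('t', real=True)
N, rho = 3, 2                          # sample parameters; delta = rho/(N-1) = 1
delta = sp.Rational(rho, N - 1)
R = sp.cos(sp.sqrt(delta) * t)         # delta > 0 branch
beta = sp.pi / (2 * sp.sqrt(delta))
alpha = -beta                          # Case (1)
f = sp.diff(R, t)                      # Nguyen's test function f = R'
mu = R**(N - 1)                        # density of mu = Psi_{N-1}(t) dt

# the model satisfies CD(rho, N): Ric_{mu,N} = -(N-1) R''/R = rho
assert sp.simplify(-(N - 1) * sp.diff(R, t, 2) / R - rho) == 0

# f has zero mean with respect to mu, as required
assert sp.integrate(f * mu, (t, alpha, beta)) == 0

lhs = sp.integrate(f**2 * mu, (t, alpha, beta))                       # int f^2 dmu
rhs = sp.integrate(sp.diff(f, t)**2 * mu, (t, alpha, beta)) / rho     # int Ric^{-1} (f')^2 dmu

# equality in Case (1): N/(N-1) * int f^2 dmu = int Ric^{-1} (f')^2 dmu
assert sp.simplify(sp.Rational(N, N - 1) * lhs - rhs) == 0
```

The same script, with $\alpha = 0$, verifies the Dirichlet Case (2) computation.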
For $n \geq 3$, define: $$\Psi_{N-n}(t) := R^{N-n}(t) ,$$ and given ${\epsilon}> 0$, consider the $n$-dimensional manifold $M := [\alpha,\beta] \times S^{n-1}$ endowed with the metric $g_{\epsilon}$ and measure $\mu_{\epsilon}$ given by: $$\begin{aligned} g_{\epsilon}& := dt^2 + {\epsilon}^2 R(t)^2 g_{S^{n-1}} ~; \\ \mu_{\epsilon}& := \Psi(t,\theta) dvol_{g_{\epsilon}}(t,\theta) ~,~ \Psi(t,\theta) = \Psi_{N-n}(t) ~,~ (t,\theta) \in [\alpha,\beta] \times S^{n-1} ~.\end{aligned}$$ The intuition behind this construction is that when ${\epsilon}> 0$ is small enough, the geometry of $(M,g_{\epsilon})$ will contribute (at least) $(n-1)\delta$ to the generalized Ricci curvature tensor ${\mbox{\rm{Ric}}}_{g,\mu,N}$, and a factor of $R^{n-1}(t)$ to the density $d\mu_{{\epsilon}} {\left((-\infty,t] \times S^{n-1}\right)}/dt$, whereas the measure $\mu_{\epsilon}$ will contribute $(N-n) \delta g_{\epsilon}$ to the former and a factor of $\Psi_{N-n}(t) = R^{N-n}(t)$ to the latter, totaling $(N-1) \delta = \rho$ and $R^{N-1}(t) = \Psi_{N-1}(t)$, respectively. Consequently $(M,g_{\epsilon},\mu_{\epsilon})$ satisfies the $CD(\rho,N)$ condition for small enough ${\epsilon}> 0$, and its measure projection onto the axis of revolution is $c_{\epsilon}\Psi_{N-1}(t)$; the sharpness of the constant then follows from our previous one-dimensional analysis. Note that in Case (2), the boundary component ${\left\{0\right\}} \times S^{n-1}$ is totally geodesic and hence satisfies our boundary curvature assumptions. In practice, when $N \geq n$ (and thus $\beta < \infty$), we need to ensure that the resulting compact weighted manifold is smooth at its vertices (at $t \in {\left\{-\beta,\beta\right\}}$ in Case (1) and $t = \beta$ in Case (2)), and this is achieved as in [@EMilmanSharpIsopInqsForCDD] by gluing appropriate caps. 
When $N \leq -1$ (and thus $\beta = \infty$), in order to obtain a compact manifold as in the formulation of Theorem \[thm:BLN\], we also need to truncate the above construction at a finite $\beta_{\epsilon}> 0$; the resulting boundary ${\left\{-\beta_{\epsilon},\beta_{\epsilon}\right\}} \times S^{n-1}$ turns out to indeed be locally convex since $R'(\beta_{\epsilon}) = - R'(-\beta_{\epsilon}) > 0$, according to the calculation in [@EMilmanSharpIsopInqsForCDD]. The construction is even more complicated for the case $n=2$; we refer to [@EMilmanSharpIsopInqsForCDD] for further precise details and rigorous justifications. Proof of Theorem \[thm:BLN\] ---------------------------- Plugging (\[eq:Bakry-CS\]) into the generalized Reilly formula, we obtain for any $u \in {\mathcal{S}}_N(M)$: $$\begin{gathered} \label{eq:CD-Reilly-BLN} \frac{N-1}{N} \int_M (L u)^2 d\mu \geq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N} \; \nabla u, \nabla u \right \rangle} d\mu + \\ \int_{\partial M} H_\mu (u_\nu)^2 d\mu + \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} d\mu - 2 \int_{\partial M} {\left \langle \nabla_{\partial M} u_\nu, \nabla_{\partial M} u \right \rangle} d\mu ~.\end{gathered}$$ Recall that this remains valid for $u \in {\mathcal{S}}_0(M)$ if $u$ or $u_\nu$ are constant on $\partial M$. 
Lastly, note that if $Lu = f$ in $M$ with $f \in C^{1}(M)$ and $u \in {\mathcal{S}}_0(M)$, then: $$\label{eq:base-BLN} \int_{M} f^2 d\mu = \int_M (Lu)^2 d\mu = \int_M f Lu \; d\mu = -\int_M {\left \langle \nabla f ,\nabla u \right \rangle} d\mu + \int_{\partial M} f u_\nu d\mu ~.$$ Consequently, by Cauchy–Schwartz: $$\label{eq:duality-BLN} \int_M f^2 d\mu \leq {\left(\int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N} \; \nabla u, \nabla u \right \rangle} d\mu\right)}^{1/2} {\left( \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \; \nabla f, \nabla f \right \rangle} d\mu\right)}^{1/2} + \int_{\partial M} f u_\nu d\mu ~.$$ We now proceed to treat the individual three cases. 1. Assume that $\int_M f d\mu = 0$ and solve the Neumann Poisson problem for $u \in {\mathcal{S}}_0(M)$: $$Lu =f \text{ on $M$ } ~,~ u_\nu \equiv 0 \text{ on $\partial M$} ~;$$ note that the compatibility condition $\int_{\partial M} u_\nu d\mu = \int_M f d\mu = 0$ is indeed satisfied, so a solution exists. Since $u_\nu|_{\partial M} \equiv 0$ and ${\text{II}}_{\partial M}\geq 0$, we obtain from (\[eq:CD-Reilly-BLN\]): $$\label{eq:CD-BLN} \frac{N}{N-1}\int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N} \; \nabla u, \nabla u \right \rangle} d\mu \leq \int_M (Lu)^2 d\mu = \int_M f^2 d\mu ~.$$ Plugging this back into (\[eq:duality-BLN\]) and using that $u_\nu \equiv 0$ yields the assertion of Case (1). 2. Assume that $f|_{\partial M} \equiv 0$ and solve the Dirichlet Poisson problem for $u \in {\mathcal{S}}_0(M)$: $$Lu =f \text{ on $M$ } ~,~ u \equiv 0 \text{ on $\partial M$} ~.$$ Observe that (\[eq:CD-BLN\]) still holds since $u|_{\partial M} \equiv 0$ and $H_{\mu} \geq 0$. Plugging (\[eq:CD-BLN\]) back into (\[eq:duality-BLN\]) and using that $f|_{\partial M}\equiv 0$ yields the assertion of Case (2). 3. 
Assume that $\int_M f d\mu = 0$ and solve the Dirichlet Poisson problem: $$Lu =f \text{ on $M$ } ~,~ u \equiv 0 \text{ on $\partial M$} ~.$$ The difference with the previous case is that the $\int f u_\nu d\mu$ term in (\[eq:base-BLN\]) does not vanish since we do not assume that $f|_{\partial M} \equiv 0$. Consequently, we cannot afford to omit the positive contribution of $\int_{\partial M} H_{\mu} (u_\nu)^2 d\mu$ in (\[eq:CD-Reilly-BLN\]): $$\frac{N-1}{N} \int_M f^2 d\mu \geq \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N} \; \nabla u, \nabla u \right \rangle} d\mu + \int_{\partial M} H_\mu u_\nu^2 d\mu ~.$$ Applying the duality argument, this time in additive form, we obtain for any $\lambda > 0$: $$\begin{aligned} \int_M f^2 d\mu & = & -\int_M {\left \langle \nabla f,\nabla u \right \rangle} d\mu + \int_{\partial M} f u_\nu d\mu \\ &\leq & \frac{1}{2 \lambda} \int_M {\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \; \nabla f, \nabla f \right \rangle} d\mu + \frac{\lambda}{2} \int_M{\left \langle {\mbox{\rm{Ric}}}_{\mu,N} \; \nabla u, \nabla u \right \rangle} d\mu + \int_{\partial M} f u_\nu d\mu ~. \end{aligned}$$ Since $\int_{\partial M} u_\nu d\mu = \int_M f d\mu = 0$, we may as well replace the last term by $\int_{\partial M} (f-C) u_\nu d\mu$. Plugging in the previous estimate and applying the Cauchy–Schwartz inequality again to eliminate $u_\nu$, we obtain: $$\begin{aligned} {\left(1 - \frac{\lambda}{2} \frac{N-1}{N}\right)} \int f^2 d\mu &\leq& \frac{1}{2 \lambda} \int _M{\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \; \nabla f, \nabla f \right \rangle} d\mu + \int_{\partial M} (f-C) u_\nu d\mu - \frac{\lambda}{2} \int_{\partial M} H_\mu u_\nu^2 d\mu \\ &\leq &\frac{1}{2 \lambda} \int _M{\left \langle {\mbox{\rm{Ric}}}_{\mu,N}^{-1} \; \nabla f, \nabla f \right \rangle} d\mu + \frac{1}{2 \lambda} \int_{\partial M} \frac{1}{H_\mu} (f - C)^2 d\mu ~. 
\end{aligned}$$ Multiplying by $2 \lambda$ and using the optimal $\lambda = \frac{N}{N-1}$, we obtain the assertion of Case (3). Generalized Veysseire Spectral-gap inequality on convex $M$ {#sec:Vey} =========================================================== The next result was recently obtained by L. Veysseire [@VeysseireSpectralGapEstimateCRAS] for compact weighted-manifolds without boundary. It may be thought of as a spectral-gap version of the Generalized Brascamp–Lieb inequality. We provide an extension in the case that $M$ is locally convex. \[thm:Veysseire\] Assume that as $2$-tensors on $M$: $${\mbox{\rm{Ric}}}_\mu \geq \rho g ~,$$ for some measurable function $\rho : M \rightarrow {\mathbb{R}}_+$. The generalized Reilly formula implies that for any $f \in C^{1}(M)$: 1. (Neumann Veysseire inequality on locally convex domain) Assume that ${\text{II}}_{\partial M}\geq 0$ ($M$ is locally convex). Then: $$Var_\mu(f) \leq \dashint_M \frac{1}{\rho} d\mu \; \int_M {\left\vert\nabla f\right\vert}^2 d\mu.$$ We do not know whether the analogous results for Dirichlet or Neumann boundary conditions (Cases (2) and (3) in the previous section) hold on a generalized mean-convex domain, as the proof given below breaks down in those cases. As in Veysseire’s work [@VeysseireSpectralGapEstimateCRAS], further refinements are possible. For instance, if in addition the $CD(\rho_0,N)$ condition is satisfied for $\rho_0 > 0$ and $1/N \in [-\infty,1/n]$, then one may obtain an estimate on the corresponding spectral-gap $\lambda_1^N$ of the form: $$\lambda_1^N \geq \frac{N}{N-1} \rho_0 + \frac{1}{\dashint_M \frac{1}{\rho - \rho_0} d\mu} ~.$$ As explained in [@VeysseireSpectralGapEstimateCRAS], this may be obtained by using an appropriate convex combination of the Lichnerowicz estimate (Case (1) of Theorem \[thm:intro-BLN\] after replacing $Ric_{\mu,N}^{-1}$ with $1/\rho_0$) and the estimates obtained in this section, with a final application of the Cauchy–Schwartz inequality. 
Similarly, it is possible to interpolate between the Lichnerowicz estimates and the Dimensional Brascamp–Lieb ones of Theorem \[thm:intro-BLN\]. We leave this to the interested reader. Veysseire’s proof in [@VeysseireSpectralGapEstimateCRAS] is based on the Bochner formula and the following observation, valid for any $u \in C^2(M)$ at any point where $\nabla u \neq 0$: $$\label{eq:Vey} {\left\Vert D^2 u\right\Vert} \geq {\left\vert\nabla {\left\vert\nabla u\right\vert}\right\vert} ~.$$ At a point where $\nabla u = 0$, we define ${\left\vert\nabla {\left\vert\nabla u\right\vert}\right\vert} := 0$. Plugging (\[eq:Vey\]) into the generalized Reilly formula and integrating the $\int_M (L u)^2 d\mu$ term by parts, we obtain for any $u \in {\mathcal{S}}_N(M)$ so that $L u \in C^1(M)$: $$\begin{gathered} \label{eq:VeyReilly} \int_{\partial M} u_\nu Lu d\mu - \int_M {\left \langle \nabla u,\nabla Lu \right \rangle} d\mu \geq \int_M {\left\vert\nabla {\left\vert\nabla u\right\vert}\right\vert}^2 d\mu + \int_M {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u, \nabla u \right \rangle} d\mu + \\ \int_{\partial M} H_\mu (u_\nu)^2 d\mu + \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \;\nabla_{\partial M} u,\nabla_{\partial M} u \right \rangle} d\mu - 2 \int_{\partial M} {\left \langle \nabla_{\partial M} u_\nu, \nabla_{\partial M} u \right \rangle} d\mu ~.\end{gathered}$$ 1. Let $u \in {\mathcal{S}}_N(M)$ denote an eigenfunction of $-L$ with zero Neumann boundary conditions corresponding to $\lambda_1^N$, so that in particular $Lu = -\lambda_1^N u \in C^1(M)$, and denote $h = {\left\vert\nabla u\right\vert} \in H^1(M)$.
Applying (\[eq:VeyReilly\]) to $u$, using that ${\text{II}}_{\partial M} \geq 0$, and that $\int_{{\left\{h=0\right\}}} {\left\vert\nabla h\right\vert}^2 d{\textrm{Vol}}_M = 0$ for any $h \in H^1(M)$, we obtain: $$\lambda_1^N \int_M h^2 d\mu \geq \int_M {\left\vert\nabla h\right\vert}^2 d\mu + \int_M \rho h^2 d\mu ~.$$ Applying the Neumann Poincaré inequality to the function $h$, we obtain: $$\lambda_1^N \int_M h^2 d\mu \geq \lambda_1^N {\left( \int_M h^2 d\mu - \frac{1}{\mu(M)} (\int_M h d\mu)^2\right)} + \int_M \rho h^2 d\mu ~.$$ It follows by Cauchy–Schwartz that: $$\lambda_1^N \geq \frac{\mu(M) \int_M \rho h^2 d\mu}{(\int_M h d\mu)^2} \geq \frac{\mu(M)}{\int_M \frac{1}{\rho} d\mu} ~,$$ concluding the proof. The proof above actually yields a meaningful estimate on the spectral-gap $\lambda_1^N$ even when ${\text{II}}_{\partial M}$ is only bounded from below by a negative constant. However, this estimate depends on upper bounds on ${\left\vert\nabla u\right\vert}$, where $u$ is the first non-trivial Neumann eigenfunction, both in $M$ and on its boundary. Poincaré-type inequalities on $\partial M$ {#sec:Col} ========================================== Generalized Colesanti Inequality -------------------------------- \[thm:Colesanti\] Assume that $(M,g,\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in [-\infty,1/n]$) and that ${\text{II}}_{\partial M} > 0$ ($M$ is locally strictly-convex). Then the following inequality holds for any $f \in C^{1}(\partial M)$: $$\label{eq:gen-full0} \int_{\partial M} H_\mu f^2 d\mu - \frac{N-1}{N}\frac{{\left(\int _{\partial M} f d\mu\right)}^2}{\mu(M)} \leq \int_{\partial M} {\left \langle {\text{II}}_{\partial M}^{-1} \;\nabla_{\partial M} f,\nabla_{\partial M} f \right \rangle} d\mu ~.$$ Theorem \[thm:Colesanti\] was obtained by A. Colesanti in [@ColesantiPoincareInequality] with $N=n$ for a compact subset $M$ of Euclidean space ${\mathbb{R}}^n$ having a $C^2$ strictly convex boundary and endowed with the Lebesgue measure ($V=0$).
Colesanti was mainly interested in the case that $f$ has zero mean $\int_{\partial M} f \ d \mu=0$, but his proof yields the additional second term above. Colesanti derived this inequality as an infinitesimal version of the Brunn–Minkowski inequality, and so his method is naturally confined to the Euclidean setting; see [@ColesantiEugenia-PoincareFromAF] for further possible extensions in the Euclidean setting. As observed in [@ColesantiPoincareInequality], Theorem \[thm:Colesanti\] yields a sharp Poincaré inequality on $S^{n-1}$ when $M$ is a Euclidean ball in ${\mathbb{R}}^n$. By applying the Cauchy–Schwartz inequality to the last term in the generalized Reilly formula: $$2 {\left \langle \nabla_{\partial M} u_\nu, \nabla_{\partial M} u \right \rangle} \leq {\left \langle {\text{II}}_{\partial M} \; \nabla_{\partial M} u , \nabla_{\partial M} u \right \rangle} + {\left \langle {\text{II}}_{\partial M}^{-1} \nabla_{\partial M} u_\nu , \nabla_{\partial M} u_\nu \right \rangle} ~,$$ we obtain for any $u \in {\mathcal{S}}_N(M)$: $$\int_M (L u)^2 d\mu \geq \int_M {\left({\left\Vert\nabla^2 u\right\Vert}^2 + {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u, \nabla u \right \rangle}\right)} d\mu + \int_{\partial M} H_\mu (u_\nu)^2 d\mu - \int_{\partial M} {\left \langle {\text{II}}_{\partial M}^{-1} \;\nabla_{\partial M} u_\nu,\nabla_{\partial M} u_\nu \right \rangle} d\mu ~.$$ Using the $CD(0,N)$ condition as in Lemma \[lem:CS\] with the convention that $-\infty \cdot 0 = 0$, we conclude: $$\frac{N-1}{N} \int_M (L u)^2 d\mu \geq \int_{\partial M} H_\mu (u_\nu)^2 d\mu - \int_{\partial M} {\left \langle {\text{II}}_{\partial M}^{-1} \;\nabla_{\partial M} u_\nu,\nabla_{\partial M} u_\nu \right \rangle} d\mu ~.$$ Now assume $f \in C^{1,\alpha}(\partial M)$ and solve the following Neumann Poisson problem for $u \in {\mathcal{S}}_N(M)$ satisfying: $$L u \equiv \frac{1}{\mu(M)} \int_{\partial M} f d\mu \text{ on $M$ } ~,~ u_\nu = f \text{ on $\partial M$} ~;$$ note that the
compatibility condition $\int_{\partial M} u_\nu d\mu = \int_M (L u) d\mu$ is indeed satisfied, so that a solution exists. Plugging this back into the previous estimate, the generalized Colesanti inequality follows for $f \in C^{1,\alpha}(\partial M)$. The result for $f \in C^{1}(\partial M)$ follows by approximating $f$ in $C^1(\partial M)$ by $C^{1,\alpha}$ functions using a standard partition of unity and mollification argument (this is possible since $\partial M$ is assumed $C^2$ smooth). Peculiarly, it is possible to strengthen this inequality by using it for $f+z$ and optimizing over $z \in {\mathbb{R}}$; alternatively and equivalently, we may solve in the last step above: $$L u \equiv z \text{ on $M$ } ~,~ u_\nu = f - \dashint_{\partial M} f d\mu + z \frac{\mu(M)}{\mu(\partial M)} \text{ on $\partial M$} ~.$$ This results in the following stronger inequality: $$\int_{\partial M} H_\mu f^2 d\mu - \frac{N-1}{N}\frac{{\left(\int _{\partial M} f d\mu\right)}^2}{\mu(M)} + \frac{{\left( \int_{\partial M} f \beta d\mu \right)}^2}{\int_{\partial M} \beta d\mu} \leq \int_{\partial M} {\left \langle {\text{II}}_{\partial M}^{-1} \;\nabla_{\partial M} f,\nabla_{\partial M} f \right \rangle} d\mu ~,$$ where: $$\beta(x) := \frac{N-1}{N} \frac{\mu(\partial M)}{\mu(M)} - H_\mu(x) ~.$$ Note that $\int_{\partial M} \beta d\mu \geq 0$ by testing (\[eq:gen-full0\]) on the constant function $f \equiv 1$. It may be shown that this integral is in fact strictly positive, unless $M$ is isometric to a Euclidean ball and $V$ is constant - see Remark \[rem:HR-equality\]; so in all other cases, this yields a strict improvement over (\[eq:gen-full0\]). By Colesanti’s argument in the Euclidean setting, the weaker (\[eq:gen-full0\]) inequality constitutes an infinitesimal version of the (sharp) Brunn–Minkowski inequality (for convex sets), and so one cannot hope to improve (\[eq:gen-full0\]) in the corresponding cases where Brunn–Minkowski is sharp. 
On the other hand, it would be interesting to integrate back the stronger inequality and obtain a refined version of Brunn–Minkowski, which would perhaps be better suited for obtaining delicate stability results. A Dual Version -------------- Next, we establish a dual version of Theorem \[thm:Colesanti\], which in fact applies whenever $M$ is only assumed *mean-convex* and under a general $CD(\rho,N)$ condition; however, the price we pay is that we do not witness the dependence on $N$ in the resulting inequality, so we might as well assume $CD(\rho,0)$. \[thm:dual-Colesanti\] Assume that $(M,g,\mu)$ satisfies the $CD(\rho,0)$ condition, $\rho \in {\mathbb{R}}$, and that $H_\mu > 0$ on $\partial M$ ($M$ is strictly generalized mean-convex). Then for any $f \in C^{2,\alpha}(\partial M)$ and $C \in {\mathbb{R}}$: $$\int_{\partial M} {\left \langle {\text{II}}_{\partial M} \;\nabla_{\partial M} f,\nabla_{\partial M} f \right \rangle} d\mu \leq \int_{\partial M} \frac{1}{H_\mu} \Bigl(L_{\partial M} f + \frac{\rho (f-C) }{2} \Bigr)^2 d\mu ~.$$ Moreover, if we assume that $\partial M$ is $C^{2,\alpha}$ smooth, then the result holds for $f \in C^2(\partial M)$. First, assume that $f \in C^{2,\alpha}(\partial M)$. 
This time, we solve the Dirichlet Laplace problem for $u \in {\mathcal{S}}_D(M)$ satisfying: $$Lu \equiv 0 \text{ on $M$ } ~,~ u = f \text{ on $\partial M$} ~.$$ By the generalized Reilly formula as in (\[Reilly2\]) and the $CD(\rho,0)$ condition: $$0 \ge \rho \int_{M} {\left\vert\nabla u\right\vert}^2 d \mu + \int_{\partial M} H_\mu (u_\nu)^2 d \mu + \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \; \nabla_{\partial M} f, \nabla_{\partial M} f \right \rangle} d \mu + 2 \int_{\partial M} L_{\partial M} f \cdot u_\nu d \mu ~.$$ Integrating by parts we obtain: $$\begin{aligned} 0 \ge \rho \int_{\partial M} f u_\nu d \mu + \int_{\partial M} H_\mu (u_\nu)^2 d \mu + \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \; \nabla_{\partial M} f, \nabla_{\partial M} f \right \rangle} d \mu + 2 \int_{\partial M} L_{\partial M} f \cdot u_\nu d \mu ~.\end{aligned}$$ Since $\int_{\partial M} u_\nu d\mu = \int_M (L u) d\mu = 0$, we may as well replace the first term above by $\int_{\partial M} (f - C) u_\nu d \mu$. The asserted inequality for $f \in C^{2,\alpha}(\partial M)$ is obtained following an application of the Cauchy–Schwarz inequality: $$H_\mu u^2_\nu + 2 u_\nu \Bigl( L_{\partial M} f + \frac{\rho (f-C)}{2} \Bigr) \ge - \frac{1}{H_\mu} \Bigl(L_{\partial M} f + \frac{\rho (f-C)}{2} \Bigr)^2 ~.$$ When $\partial M$ is $C^{2,\alpha}$ smooth, the result remains valid for $f \in C^2(\partial M)$ by approximating $f$ in $C^2(\partial M)$ by functions in $C^{2,\alpha}(\partial M)$ via the usual partition of unity and mollification argument. When ${\text{II}}_{\partial M} > 0$ and $\rho=0$, Theorem \[thm:dual-Colesanti\] for the $CD(0,\infty)$ condition may be heuristically obtained from Theorem \[thm:Colesanti\] by a non-rigorous duality argument: $$\begin{aligned} \int_{\partial M} {\left \langle {\text{II}}_{\partial M} \; \nabla_{\partial M} f , \nabla_{\partial M} f \right \rangle} d\mu & =^? 
& \sup_{g} \frac{{\left(\int_{\partial M} {\left \langle \nabla_{\partial M} f,\nabla_{\partial M} g \right \rangle} d\mu \right)}^2 }{ \int_{\partial M} {\left \langle {\text{II}}_{\partial M}^{-1} \; \nabla_{\partial M} g, \nabla_{\partial M} g \right \rangle} d\mu} \\ &\leq& \sup_{g} \frac{{\left(\int_{\partial M} g L_{\partial M} f d\mu \right)}^2 }{ \int_{\partial M} H_\mu g^2 d\mu} \leq \int_{\partial M} \frac{1}{H_\mu} (L_{\partial M} f)^2 d\mu ~,\end{aligned}$$ where the supremum above is over all functions $g \in C^{1}(\partial M)$ with $\int_{\partial M} g d\mu = 0$. The delicate point is justifying the equality in question above: Cauchy–Schwarz implies the $\geq$ direction, and so given $f \in C^{1}(\partial M)$ it remains to find a function $g \in C^{1}(\partial M)$ so that $\nabla_{\partial M} g = {\text{II}}_{\partial M} \; \nabla_{\partial M} f$ on $\partial M$. It is well known that on a simply-connected manifold (and more generally, with vanishing first homology), a vector field is a gradient field if and only if its covariant derivative is a symmetric tensor, but this does not seem to be the case for us. Applications of Generalized Colesanti Inequalities {#sec:Col-App} ================================================== Topological Consequences ------------------------ Assume that $(M,g,\mu)$ satisfies the $CD(0,0)$ condition and that ${\text{II}}_{\partial M} > 0$ ($M$ is locally strictly-convex). Then $\partial M$ is connected. Assume otherwise, so that $\partial M$ has at least two connected components. By constructing a function $f \in C^1(\partial M)$ which is equal to an appropriate constant on each of the components so that $\int_{\partial M} f d\mu = 0$, we obtain a contradiction to (\[eq:gen-full0\]). Observe that one cannot relax most of the conditions of the theorem. 
For instance, taking $M$ to be $[0,1] \times T^{n-1}$ with the product metric, where $T^{n-1}$ is the flat $(n-1)$-dimensional torus, we see that the strict convexity condition cannot be relaxed to ${\text{II}}_{\partial M}\geq 0$. In addition, take $M$ to be the submanifold of hyperbolic space $H$ which, in the Poincaré model in the open unit-disc in ${\mathbb{R}}^n$, is represented by: $$M := {\left\{ x \in {\mathbb{R}}^n \; ; \; {\left\vert x\right\vert} < 1 ~,~ {\left\vert x + 10 e_n\right\vert} < 10.5 ~,~ {\left\vert x - 10 e_n\right\vert} < 10.5 \right\}} ~.$$ Since $M$ is strictly convex as a subset of Euclidean space, it is also strictly convex in $H$, and yet $\partial M$ has two connected components. Consequently, we see that the $CD(0,0)$ condition cannot be relaxed to $CD(-1,0)$ and hence (by scaling the metric) neither to $CD(-{\epsilon},0)$. Mean-Curvature Inequalities --------------------------- Setting $f\equiv 1$ in Theorem \[thm:Colesanti\], we recover and generalize to the entire range $1/N \in [-\infty,1/n]$ the following recent result of Huang and Ruan [@HuangRuanMeanCurvatureEstimates Theorem 1.3] for $N \in [n,\infty]$, who in turn generalized the result obtained by Reilly [@ReillyMeanCurvatureEstimate] in the classical Riemannian volume case ($V=0$ and $N=n$). \[cor:HuangRuan1\] Assume that $(M,g,\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$) and that ${\text{II}}_{\partial M} > 0$ ($M$ is locally strictly-convex). Then: $$\label{eq:HR-1} \int_{\partial M} H_\mu d\mu \leq \frac{N-1}{N} \frac{\mu(\partial M)^2}{\mu(M)} ~.$$ Applying Cauchy–Schwarz, it immediately follows that in the above setting: $$\label{eq:int-1-H} \int_{\partial M} \frac{1}{H_\mu} d\mu \geq \frac{\mu(\partial M)^2}{\int_{\partial M} H_\mu d\mu} \geq \frac{N}{N-1} \mu(M) ~.$$ Interestingly, it was shown by A. 
Ros [@RosMeanCurvatureEstimateAndApplication] in the classical non-weighted case, and generalized by Huang and Ruan [@HuangRuanMeanCurvatureEstimates Theorem 1.1] to the weighted-Riemannian setting for $N \in [n,\infty]$, that it is enough to assume that $M$ is strictly (generalized) mean-convex for the inequality between the first and last terms in (\[eq:int-1-H\]) to hold. We extend this to the entire range $1/N \in (-\infty,1/n]$: \[thm:HuangRuan2\] Assume that $(M,g,\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$) and that $H_{\mu} > 0$ ($M$ is strictly generalized mean-convex). Then: $$\label{eq:HR-2} \int_{\partial M} \frac{1}{H_\mu} d\mu \geq \frac{N}{N-1} \mu(M) ~.$$ This is very much related to our dual version of the generalized Colesanti inequality (Theorem \[thm:dual-Colesanti\]), and in fact both inequalities may be obtained simultaneously from the generalized Reilly formula by invoking the Cauchy–Schwarz inequality in two different ways. In a sense, this explains why we lose the dependence on $N$ in Theorem \[thm:dual-Colesanti\] and why we lose the dependence on $\rho$ in Theorem \[thm:HuangRuan2\]. The idea for proving Theorem \[thm:HuangRuan2\] is the same as in [@HuangRuanMeanCurvatureEstimates], but our argument is somewhat more direct. 
Let us solve for $u \in {\mathcal{S}}_0(M)$ the following Dirichlet Poisson equation: $$Lu \equiv 1 \text{ on $M$ } ~,~ u \equiv 0 \text{ on $\partial M$} ~.$$ By the generalized Reilly formula and the $CD(0,N)$ condition: $$\begin{aligned} \mu(M) = \int_M (Lu)^2 d\mu &=& \int_M {\left\Vert\nabla^2 u\right\Vert}^2 d\mu + \int_M {\left \langle {\mbox{\rm{Ric}}}_\mu \; \nabla u, \nabla u \right \rangle} d\mu + \int_{\partial M} H_\mu (u_\nu)^2 d \mu \\ &\geq & \frac{1}{N} \int_M (Lu)^2 d\mu + \int_{\partial M} H_\mu (u_\nu)^2 d \mu ~.\end{aligned}$$ Coupled with an application of the Cauchy–Schwarz inequality, this yields: $$\mu(M)^2 = (\int_M (L u) d\mu)^2 = (\int_{\partial M} u_\nu d\mu)^2 \leq \int_{\partial M} H_\mu (u_\nu)^2 d\mu \int_{\partial M} \frac{1}{H_\mu} d\mu \leq \frac{N-1}{N} \mu(M) \int_{\partial M} \frac{1}{H_\mu} d\mu ~,$$ and the assertion follows. \[rem:HR-equality\] It may be shown, by analyzing the cases of equality in all of the inequalities used above, that when $N \in [n,\infty]$, equality occurs in (\[eq:HR-1\]) or (\[eq:HR-2\]) if and only if $M$ is isometric to a Euclidean ball and $V$ is constant. See [@ReillyMeanCurvatureEstimate; @RosMeanCurvatureEstimateAndApplication; @HuangRuanMeanCurvatureEstimates] for more details. Spectral-Gap Estimates on $\partial M$ -------------------------------------- Next, we recall a result of Xia [@XiaSpectralGapOnConvexBoundary] in the classical non-weighted Riemannian setting ($V=0$), stating that when $Ric_g \geq 0$ on $M$ and ${\text{II}}_{\partial M}\geq \sigma g|_{\partial M}$ on $\partial M$ with $\sigma > 0$, then: $$\label{eq:Xia} Var_{Vol_{\partial M}}(f) \leq \frac{1}{(n-1) \sigma^2} \int_{\partial M} |\nabla_{\partial M} f|^2 dVol_{\partial M} ~,~ \forall f \in C^1(\partial M) ~.$$ In other words, the spectral-gap of $-L_{\partial M}$ on $(\partial M,g|_{\partial M},Vol_{\partial M})$ away from the trivial zero eigenvalue is at least $(n-1) \sigma^2$. 
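Sharpness of (\[eq:Xia\]) can be checked directly on the unit circle $S^1 = \partial B^2 \subset {\mathbb{R}}^2$ (so $n = 2$ and $\sigma = 1$): the first non-trivial eigenfunction $\cos\theta$ attains equality, while higher modes satisfy the bound strictly. A minimal quadrature sketch (the helper name `circle_check` is ours):

```python
import math

def circle_check(f, df, m: int = 4000):
    """Compare Var(f) with (1/((n-1)*sigma^2)) * int |f'|^2 on the unit
    circle S^1 (n = 2, sigma = 1), using m-point quadrature."""
    ts = [2 * math.pi * k / m for k in range(m)]
    w = 2 * math.pi / m
    mean = sum(f(t) for t in ts) * w / (2 * math.pi)
    var = sum((f(t) - mean) ** 2 for t in ts) * w
    dirichlet = sum(df(t) ** 2 for t in ts) * w
    return var, dirichlet  # Xia's bound here reads var <= dirichlet

# first non-trivial eigenfunction: equality (both integrals equal pi)
v, d = circle_check(math.cos, lambda t: -math.sin(t))
print(round(v, 6), round(d, 6))
# a higher mode: the inequality is strict
v2, d2 = circle_check(lambda t: math.cos(2 * t), lambda t: -2 * math.sin(2 * t))
print(v2 < d2)
```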
Since in that case we have $H_g = tr({\text{II}}_{\partial M})\geq (n-1) \sigma$, our next result, which is an immediate corollary of Theorem \[thm:Colesanti\] applied to $f$ with $\int_{\partial M} f \ d \mu=0$, is both a refinement and an extension of Xia’s estimate to the more general $CD(0,0)$ condition in the weighted Riemannian setting: \[cor:IIH-Poincare\] Assume that $(M,g,\mu)$ satisfies $CD(0,0)$, and that ${\text{II}}_{\partial M} \ge \sigma g|_{\partial M}$, $H_\mu \ge \xi$ on $\partial M$ with $\sigma,\xi > 0$. Then: $$Var_\mu(f) \leq \frac{1}{\sigma \xi} \int_{\partial M} |\nabla_{\partial M} f|^2 d \mu ~,~ \forall f \in C^1(\partial M) ~.$$ In the Euclidean setting, and more generally when all sectional curvatures are non-negative, an improved bound will be obtained in the next section. The next result extends the previous one to the $CD(\rho,0)$ setting: \[cor:IIHRho-Poincare\] Assume that $(M,g,\mu)$ satisfies $CD(\rho,0)$, $\rho \geq 0$, and that ${\text{II}}_{\partial M} \ge \sigma g|_{\partial M}$, $H_\mu \ge \xi$ on $\partial M$ with $\sigma,\xi > 0$. Then: $$\lambda_1 Var_\mu(f) \leq \int_{\partial M} |\nabla_{\partial M} f|^2 d \mu ~,~ \forall f \in C^1(\partial M) ~,$$ with: $$\lambda_1 \geq \frac{\rho + a + \sqrt{2 a \rho + a^2}}{2} \geq \max{\left(a, \frac{\rho}{2}\right)} ~,~ a := \sigma \xi ~.$$ Let $u$ denote the first non-trivial eigenfunction of $-L_{\partial M}$, satisfying $-L_{\partial M} u = \lambda_1 u$ with $\lambda_1 > 0$ the spectral-gap (we already know it is positive by Corollary \[cor:IIH-Poincare\]). Note that since $g|_{\partial M}$ is $C^2$ smooth, $\nabla_{\partial M} V \in C^1(\partial M)$ and $\partial M$ has no boundary, we have $u \in C^{2,\beta}(\partial M)$ for all $\beta \in (0,1)$. 
Plugging the estimates ${\text{II}}_{\partial M} \ge \sigma g|_{\partial M}$, $H_\mu \ge \xi$ into the dual generalized Colesanti inequality (Theorem \[thm:dual-Colesanti\]) and applying it to the function $u$, we obtain: $$\sigma \lambda_1 \int_{\partial M} u^2 d\mu \leq \frac{1}{\xi} \int_{\partial M} (- \lambda_1 u + \frac{\rho_1}{2} u)^2 d\mu ~,~ \forall \rho_1 \in [0,\rho] ~.$$ Expanding the square, this yields: $$\lambda_1^2 - (\rho_1 + \xi \sigma) \lambda_1 + \frac{\rho_1^2}{4} \geq 0 ~,~ \forall \rho_1 \in [0,\rho] ~.$$ The assertion then follows by optimizing over $\rho_1 \in [0,\rho]$. In the next section, we extend our spectral-gap estimates on $(\partial M,g|_{\partial M},\mu)$ to the case of varying lower bounds $\sigma$ and $\xi$. Boundaries of $CD(\rho,N)$ weighted-manifolds {#sec:boundaries} ============================================= Curvature-Dimension of the Boundary ----------------------------------- Denote the full Riemann curvature $4$-tensor on $(M,g)$ by $R^M_g$, and let ${\mbox{\rm{Ric}}}^{\partial M}_\mu$ denote the weighted Ricci tensor on $(\partial M,g|_{\partial M}, \exp(-V) d{\textrm{Vol}}_{\partial M})$. \[lem:boundary-Ric\] Set $g_0 := g|_{\partial M}$, the induced metric on $\partial M$. Then: $$Ric^{\partial M}_{\mu} = (Ric^M_{\mu} - R^M_g(\cdot,\nu,\cdot,\nu))|_{T \partial M}+ (H_\mu g_0 - \text{II}_{\partial M}) \text{II}_{\partial M} ~.$$ Let $e_1,\ldots,e_n$ denote an orthonormal frame of vector fields in $M$ so that $e_n$ coincides on $\partial M$ with the outer normal $\nu$. 
The Gauss formula asserts that for any $i,j,k,l \in {\left\{1,\ldots,n-1\right\}}$: $$R^{\partial M}_{g_0}(e_i,e_j,e_k,e_l) = R^{M}_{g}(e_i,e_j,e_k,e_l) + {\text{II}}_{\partial M}(e_i,e_k) {\text{II}}_{\partial M}(e_j,e_l) - {\text{II}}_{\partial M}(e_j,e_k) {\text{II}}_{\partial M}(e_i,e_l) ~.$$ Contracting by applying $g_0^{j,l}$ and using the orthogonality, we obtain: $$\label{eq:calc1} Ric^{\partial M}_{g_0} = (Ric^M_g - R^M_g(\cdot,\nu,\cdot,\nu))|_{T \partial M} + (H_g g_0 - \text{II}_{\partial M}) \text{II}_{\partial M} ~.$$ In addition we have: $$\begin{aligned} \nabla^2_{g_0} V(e_i,e_j) &=& e_i (e_j(V)) - ((\nabla_{\partial M})_{e_i} e_j)(V) \\ &= & e_i (e_j(V)) - ((\nabla_{M})_{e_i} e_j)(V) - {\text{II}}_{\partial M}(e_i,e_j) e_n(V) \\ & = & \nabla^2_g V(e_i,e_j) - {\text{II}}_{\partial M}(e_i,e_j) \; g(\nabla V , \nu) ~.\end{aligned}$$ In other words: $$\label{eq:calc2} \nabla^2_{g_0} V = \nabla^2_g V|_{T \partial M} - {\text{II}}_{\partial M} g(\nabla V, \nu) ~.$$ Adding (\[eq:calc1\]) and (\[eq:calc2\]) and using that $H_\mu = H_g - g(\nabla V,\nu)$, the assertion follows. The induced metric $g_0$ on $\partial M$ is only as smooth as the boundary, namely $C^2$. Observe that in order to apply our previously described results to $(\partial M,g_0,\exp(-V) d{\textrm{Vol}}_{\partial M})$, we would need to assume that $g_0$ and hence $\partial M$ are $C^3$ smooth, as noted in Section \[sec:prelim\]. We continue to denote the measure $\exp(-V) d{\textrm{Vol}}_{\partial M}$ on $\partial M$ by $\mu$. \[cor:boundary-CD\] Assume that $0 \leq \text{II}_{\partial M} \leq H_\mu g_0$ and $R^M_g(\cdot,\nu,\cdot,\nu) \leq \kappa g_0$ as $2$-tensors on $\partial M$. If $(M,g,\mu)$ satisfies $CD(\rho,N)$ then $(\partial M ,g_0 ,\mu)$ satisfies $CD(\rho-\kappa,N-1)$. 
The first assumption ensures that: $$(H_\mu g_0 - \text{II}_{\partial M}) \text{II}_{\partial M} \geq 0 ~,$$ since the product of two commuting positive semi-definite matrices is itself positive semi-definite. It follows by Lemma \[lem:boundary-Ric\] that: $$\begin{aligned} & & Ric^{\partial M}_{\mu,N-1} = Ric^{\partial M}_\mu - \frac{1}{N-1-(n-1)} dV \otimes dV |_{T \partial M}\\ &\geq & (Ric^M_\mu - R^M_g(\cdot,\nu,\cdot,\nu) - \frac{1}{N-n} dV \otimes dV) |_{T \partial M} = (Ric^M_{\mu,N} - R^M_g(\cdot,\nu,\cdot,\nu))|_{T \partial M} ~.\end{aligned}$$ The assertion follows from our assumption that $Ric^M_{\mu,N} \geq \rho g$ and $R^M_g(\cdot,\nu,\cdot,\nu)|_{T \partial M} \leq \kappa g_0$. An immediate modification of the above argument yields: Assume that $(M,g,\mu)$ satisfies $CD(\rho,N)$ and that $R^M_g(\cdot,\nu,\cdot,\nu) \leq \kappa g_0$ as $2$-tensors on $\partial M$. If $\sigma_1 g_0 \leq {\text{II}}_{\partial M} \leq \sigma_2 g_0$, for some functions $\sigma_1,\sigma_2 : \partial M \rightarrow {\mathbb{R}}$, then: $$Ric^{\partial M}_{\mu,N-1} \geq (\rho -\kappa + \min(\sigma_1(H_\mu -\sigma_1) , \sigma_2 (H_\mu -\sigma_2))) g_0 ~.$$ In particular, if $H_\mu \geq \xi $ and $\sigma g_0 \leq \text{II}_{\partial M} \leq (H_\mu -\sigma) g_0$ for some constants $\xi,\sigma \in {\mathbb{R}}$, then: $$Ric^{\partial M}_{\mu,N-1} \geq (\rho - \kappa + \sigma (\xi - \sigma)) g_0 ~.$$ When $\text{II}_{\partial M} \geq \sigma g_0$ with $\sigma \geq 0$, we obviously have $\text{II}_{\partial M} \leq (H_g - (n-2) \sigma) g_0$ and $H_g \geq (n-1) \sigma$. In addition if ${\left \langle \nabla V,\nu \right \rangle} \leq 0$ on $\partial M$, we have $H_g \leq H_\mu$. 
Consequently, if $n \geq 3$ we obtain: $$\label{eq:boundary-Ric-estimate} (H_\mu g_0 - {\text{II}}_{\partial M}) {\text{II}}_{\partial M} \geq \sigma (H_g - \sigma) g_0 \geq (n-2) \sigma^2 g_0 ~.$$ Putting all of this together, we obtain: \[prop:CD-boundary\] Assume that $n \geq 3$, $(M^n,g,\mu)$ satisfies $CD(\rho,N)$, $\text{II}_{\partial M} \geq \sigma g_0$ with $\sigma \geq 0$ and ${\left \langle \nabla V,\nu \right \rangle} \leq 0$ and $R^M_g(\cdot,\nu,\cdot,\nu) \leq \kappa g_0$ on $T \partial M$. Then $(\partial M,g_0,\mu)$ satisfies $CD(\rho - \kappa +(n-2)\sigma^2,N-1)$. Observe that this is sharp for the sphere $S^{n-1}$, both as a hypersurface of Euclidean space ${\mathbb{R}}^n$, and as a hypersurface in a sphere $R S^{n}$ with radius $R \geq 1$. Spectral-Gap Estimates on $\partial M$ -------------------------------------- We can now apply the known results for weighted-manifolds (without boundary!) satisfying the $CD(\rho_0,N-1)$ condition to $(\partial M, g_0,\mu)$. The first estimate is an immediate consequence of the Bakry–Émery criterion [@BakryEmery] for log-Sobolev inequalities, see [@Ledoux-Book] for definitions and more details: \[cor:LS\] With the same assumptions as in Proposition \[prop:CD-boundary\] and for $N \in [n,\infty]$, $(\partial M,g_0,\mu)$ satisfies a log-Sobolev inequality with constant $\lambda_{LS} := {\left(\rho - \kappa + (n-2) \sigma^2\right)} \frac{N-1}{N-2}$, assuming that the latter is positive. In particular, the spectral-gap is at least $\lambda_{LS}$. The latter yields an improvement over Xia’s spectral-gap estimate (\[eq:Xia\]) for the boundary of a strictly convex manifold of non-negative sectional curvature. For concreteness, we illustrate this below for geodesic balls: Assume that $\partial M = \emptyset$, $n \geq 3$, and that $(M^n,g)$ has sectional curvatures in the interval $[\kappa_0,\kappa_1]$. 
Let $B_r$ denote a geodesic ball around $p \in M$ of radius $0 < r \leq \text{inj}_p$, where $\text{inj}_p$ denotes the radius of injectivity at $p$, and consider $(\partial B_r,g_0,{\textrm{Vol}}_{\partial B_r})$ where $g_0 = g|_{\partial B_r}$. By [@PetersenBook2ndEd Chapter 6, Theorem 27], ${\text{II}}_{\partial B_r} \geq \sqrt{\kappa_1} \cot(\sqrt{\kappa_1} r) g_0$. Consequently, by Lemma \[lem:boundary-Ric\]: $${\mbox{\rm{Ric}}}^{\partial B_r}_{g_0}\geq (n-2) \kappa_0 g_0 + (H_{g} g_0 - {\text{II}}_{\partial B_r}) {\text{II}}_{\partial B_r} \geq \rho_0 g_0 ~,~ \rho_0 := (n-2) {\left(\kappa_0 + \kappa_1 \cot^2(\sqrt{\kappa_1} r)\right)} ~.$$ It follows that $(\partial B_r,g_0,{\textrm{Vol}}_{\partial B_r})$ satisfies $CD(\rho_0,n-1)$, and hence by the Bakry–Émery criterion as above this manifold satisfies a log-Sobolev inequality with constant $\lambda_{LS} \geq \frac{n-1}{n-2} \rho_0 = (n-1) (\kappa_0 + \kappa_1 \cot^2(\sqrt{\kappa_1} r))$ whenever the latter is positive, strengthening Xia’s result for the spectral-gap (\[eq:Xia\]) in the case of non-negative sectional curvature ($\kappa_0 = 0$). Furthermore, if we replace the lower bound assumption on the sectional curvatures by the assumption that ${\mbox{\rm{Ric}}}^{B_r}_g \geq \rho g$, we obtain by Corollary \[cor:LS\] that $\lambda_{LS} \geq (n-1) (\frac{\rho - \kappa_1}{n-2} + \kappa_1 \cot^2(\sqrt{\kappa_1} r))$ whenever the latter is positive. Spectral-Gap Estimates on $\partial M$ involving varying curvature ------------------------------------------------------------------ Proceeding onward, we formulate our next results in Euclidean space with constant density, since then the assumptions of the previous subsection are the easiest to enforce. We continue to denote by $g_0$ the induced Euclidean metric on $\partial M$. 
By Lemma \[lem:boundary-Ric\] we know that in this case: $$\label{eq:Ric-Gauss} Ric^{\partial M}_{g_0} = (H_g g_0 - {\text{II}}_{\partial M}) {\text{II}}_{\partial M} ~,$$ and so if $\text{II}_{\partial M} \geq 0$, we verify as in Corollary \[cor:boundary-CD\] that $(\partial M, g_0,{\textrm{Vol}}_{\partial M})$ satisfies $CD(0,n-1)$. Consequently, the following spectral-gap estimates immediately follow from (\[eq:Ric-Gauss\]), (\[eq:boundary-Ric-estimate\]) and the Lichnerowicz and Veysseire Theorems (see Theorems \[thm:BLN\] and \[thm:Veysseire\]), at least for $C^3$ boundaries. The general case of a $C^2$ boundary follows by a standard Euclidean approximation argument. The first estimate below improves (in the Euclidean setting) the spectral-gap estimate given by Corollary \[cor:IIH-Poincare\]: \[thm:Lich-On-Boundary\] Let $n \geq 3$ and let $M$ denote a compact subset of Euclidean space $({\mathbb{R}}^n,g)$ with $C^2$-smooth boundary. Assume that $\text{II}_{\partial M} \geq \sigma g_0$ and $H = tr(\text{II}_{\partial M}) \geq \xi$ for some $\sigma,\xi > 0$. Then: $$Var_{Vol_{\partial M}}(f) \leq \frac{n-2}{n-1} \frac{1}{(\xi-\sigma)\sigma} \int_{\partial M} {\left\vert\nabla_{\partial M} f\right\vert}^2 dVol_{\partial M} ~,~ \forall f \in C^1(\partial M) ~.$$ \[thm:Vey-On-Boundary\] Let $n \geq 3$ and let $M$ denote a compact subset of Euclidean space $({\mathbb{R}}^n,g)$ with $C^2$-smooth boundary. Assume that: $$\text{II}_{\partial M} \geq \sigma g_0$$ for some positive measurable function $\sigma : \partial M \rightarrow {\mathbb{R}}_+$, and set $H = tr(\text{II}_{\partial M})$. 
Then: $$Var_{Vol_{\partial M}}(f) \leq \dashint_{\partial M} \frac{1}{ (H-\sigma) \sigma} dVol_{\partial M} \int_{\partial M} {\left\vert\nabla_{\partial M} f\right\vert}^2 dVol_{\partial M} ~,~ \forall f \in C^1(\partial M) ~.$$ We conclude this section with a similar estimate to the one above, by employing the generalized Colesanti inequality: \[thm:Col-On-Boundary\] With the same assumptions as in the previous theorem, we have: $$Var_{Vol_{\partial M}}(f) \leq C {\left(\dashint_{\partial M} \frac{1}{H} dVol_{\partial M} \dashint_{\partial M} \frac{1}{\sigma} dVol_{\partial M}\right)} \int_{\partial M} {\left\vert\nabla_{\partial M} f\right\vert}^2 dVol_{\partial M} ~,~ \forall f \in C^1(\partial M) ~,$$ where $C>1$ is some universal (dimension-independent) numeric constant. Given a $1$-Lipschitz function $f : \partial M \rightarrow {\mathbb{R}}$ with $\int_{\partial M} f dVol_{\partial M} = 0$, we may estimate using Cauchy–Schwarz and Theorem \[thm:Colesanti\]: $$\begin{aligned} \nonumber \Bigl(\dashint_{\partial M} {\left\vert f\right\vert} dVol_{\partial M} \Bigr)^2 &\leq& \dashint_{\partial M} \frac{1}{H} dVol_{\partial M} \dashint_{\partial M} H f^2 dVol_{\partial M} \\ \nonumber &\leq& \dashint_{\partial M} \frac{1}{H} dVol_{\partial M} \dashint_{\partial M} {\left \langle {\text{II}}_{\partial M}^{-1} \;\nabla_{\partial M} f,\nabla_{\partial M} f \right \rangle} dVol_{\partial M} \\ \label{eq:FM-estimate} & \leq & \dashint_{\partial M} \frac{1}{H} dVol_{\partial M} \dashint_{\partial M} \frac{1}{\sigma} dVol_{\partial M} ~.\end{aligned}$$ It follows by a general result of the second-named author [@EMilman-RoleOfConvexity], which applies to any weighted-manifold satisfying the $CD(0,\infty)$ condition, and in particular to $(\partial M, g_0 , Vol_{\partial M})$, that up to a universal constant, the same estimate as in (\[eq:FM-estimate\]) holds for the variance of any function $f \in C^{1}(\partial M)$ with $\dashint_{\partial M} {\left\vert\nabla_{\partial M} f\right\vert}^2 dVol_{\partial M} \leq 1$, and so the assertion follows. We remark that the results in [@EMilman-RoleOfConvexity] were proved assuming the metric in question is $C^\infty$ smooth, but an inspection of the proof, which builds upon the regularity results in [@MorganRegularityOfMinimizers], verifies that it is enough to have a $C^2$ metric, so the results are indeed applicable to $(\partial M, g_0 , Vol_{\partial M})$. Note that the estimate given by Theorem \[thm:Lich-On-Boundary\] is sharp and that the ones in Theorems \[thm:Vey-On-Boundary\] and \[thm:Col-On-Boundary\] are sharp up to constants, as witnessed by $S^{n-1} \subset {\mathbb{R}}^n$. The Euclidean setting was only used to easily establish that $(\partial M, g_0, Vol_{\partial M})$ satisfies $CD(0,\infty)$. The above results remain valid whenever both $(M,g,\exp(-V) Vol_{M})$ and $(\partial M,g_0,\exp(-V) Vol_{\partial M})$ satisfy the $CD(0,\infty)$ condition, if we assume that $\partial M$ is $C^3$-smooth. Connections to the Brunn–Minkowski Theory {#sec:BM} ========================================= It was shown by Colesanti [@ColesantiPoincareInequality] that in the Euclidean case, the inequality (\[eq:gen-full0\]) is *equivalent* to the statement that the function $t \mapsto Vol(K + t L)^{1/n}$ is concave at $t=0$ when $K,L$ are strictly convex and $C^2$ smooth. Here $A + B := {\left\{a + b \; ; \; a \in A , b \in B\right\}}$ denotes Minkowski addition. Using homogeneity of the volume and a standard approximation procedure of arbitrary convex sets by ones as above, this is in turn equivalent to: $$Vol((1-t) K + t L)^{1/n} \geq (1-t) Vol(K)^{1/n} + t Vol(L)^{1/n} ~,~ \forall t \in [0,1] ~,$$ for all convex $K,L \subset {\mathbb{R}}^n$. This is precisely the content of the celebrated Brunn–Minkowski inequality in Euclidean space (e.g. [@Schneider-Book; @GardnerSurveyInBAMS]), at least for convex domains. 
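For axis-aligned boxes the Minkowski combination $(1-t)K + tL$ is again a box, with sides $(1-t)a_i + t b_i$, so the inequality can be checked by direct computation. A small numeric sketch (the helper names `gm` and `bm_gap` are ours):

```python
import math

def gm(xs):
    """geometric mean of the side lengths = Vol(box)^{1/n} for an axis-aligned box"""
    n = len(xs)
    return math.prod(xs) ** (1.0 / n)

def bm_gap(a, b, t):
    """Brunn-Minkowski gap for boxes with sides a_i, b_i:
    Vol((1-t)K + tL)^{1/n} - [(1-t) Vol(K)^{1/n} + t Vol(L)^{1/n}];
    non-negativity of the gap is exactly the inequality above."""
    mix = [(1 - t) * ai + t * bi for ai, bi in zip(a, b)]
    return gm(mix) - ((1 - t) * gm(a) + t * gm(b))

a, b = [1.0, 4.0, 2.0], [3.0, 0.5, 5.0]
gaps = [bm_gap(a, b, t / 10) for t in range(11)]
print(all(g >= -1e-12 for g in gaps))  # the gap is non-negative at every t
```

For boxes the non-negativity of the gap reduces to the concavity of the geometric mean (AM–GM), mirroring the convex-body statement.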
Consequently, Theorem \[thm:Colesanti\] provides yet another proof of the Brunn–Minkowski inequality in Euclidean space via the generalized Reilly formula. Conceptually, this is not surprising since the Bochner formula is a dual version of the Brascamp–Lieb inequality (see Section \[sec:BLN\]), and the latter is known to be an infinitesimal form of the Prékopa–Leindler inequality, which in turn is a functional (essentially equivalent) form of the Brunn–Minkowski inequality. So all of these inequalities are intimately intertwined and essentially equivalent to one another; see [@BrascampLiebPLandLambda1; @BobkovLedoux; @Ledoux-Book] for more on these interconnections. We also mention that as a by-product, we may obtain all the well-known consequences of the Brunn–Minkowski inequality (see [@GardnerSurveyInBAMS]); for instance, by taking the first derivative in $t$ above, we may deduce the (anisotropic) isoperimetric inequality (for convex sets $K$). Since our generalization of Colesanti’s Theorem holds on any weighted-manifold satisfying the $CD(0,N)$ condition, it is then natural to similarly try and obtain a Brunn–Minkowski or isoperimetric-type inequality in the latter setting. The main difficulties arising with such an attempt in this generality are the lack of homogeneity, the lack of a previously known generalization of Minkowski addition, and the fact that enlargements of convex sets are in general non-convex (consider geodesic balls on the sphere which are extended past the equator). At least some of these issues are addressed in what follows. Riemannian Brunn–Minkowski for Geodesic Extensions -------------------------------------------------- Let $K$ denote a compact subset of $(M^n,g)$ with $C^2$ smooth boundary ($n \geq 2$). Denote: $$\delta^0(K) := \mu(K) ~,~ \delta^1(K) := \mu(\partial K) := \int_{\partial K} d\mu ~,~ \delta^2(K) := \int_{\partial K} H_\mu d\mu ~.$$ It is well-known (see e.g. 
[@EMilmanGeometricApproachPartI] or Subsection \[subsec:Full-BM\]) that $\delta^i$, $i=0,1,2$, are the $i$-th variations of $\mu(K_t)$, where $K_t$ is the $t$-neighborhood of $K$, i.e. $K_t := {\left\{x \in M ; d(x,K) \leq t\right\}}$ with $d$ denoting the geodesic distance on $(M,g)$. Given $1/N \in (-\infty,1/n]$, denote in analogy to the Euclidean case the “generalized quermassintegrals" by: $$W_N(K) = \delta^0(K) ~,~ W_{N-1}(K) = \frac{1}{N} \delta^1(K) ~,~ W_{N-2}(K) = \frac{1}{N (N-1)} \delta^2(K) ~.$$ Observe that when $\mu = {\textrm{Vol}}_M$ and $N=n$, these quermassintegrals coincide with the Lipschitz–Killing invariants in Weyl’s celebrated tube formula, namely the coefficients of the polynomial $\mu(K_t) = \sum_{i=0}^n {n \choose i} W_{n-i}(K) t^i$ for $t \in [0,{\epsilon}_K]$ and small enough ${\epsilon}_K > 0$. In particular, when $(M,g)$ is Euclidean and $K$ is convex, these generalized quermassintegrals coincide with their classical counterparts, discovered by Steiner in the 19th century (see e.g. [@BernigCurvatureSurvey] for a very nice account). As an immediate consequence of Corollary \[cor:HuangRuan1\] we obtain: [**(Riemannian Brunn-Minkowski for Geodesic Extensions)**]{}. \[cor:Geodesic-BM\] Assume that $(K,g|_K,\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$) and that ${\text{II}}_{\partial K} > 0$ ($K$ is locally strictly-convex). Then the following holds: 1. (Generalized Minkowski’s second inequality for geodesic extensions) $$\label{eq:Alexandrov} W_{N-1}(K)^2 \geq W_{N}(K) W_{N-2}(K) ~,$$ or in other words: $$\delta^1(K)^2 \geq \frac{N}{N-1} \delta^0(K) \delta^2(K) ~.$$ 2. (Infinitesimal Geodesic Brunn–Minkowski) $(d/dt)^2 N \mu(K_t)^{1/N} |_{t=0} \leq 0$. 3. 
(Global Geodesic Brunn–Minkowski) The function $t \mapsto N \mu(K_t)^{1/N}$ is concave on any interval $[0,T]$ so that for all $t \in [0,T)$, $K_t$ is $C^2$ smooth, locally strictly-convex, bounded away from $\partial M$, and $(K_t,g|_{K_t},\mu|_{K_t})$ satisfies $CD(0,N)$. The first assertion is precisely the content of Corollary \[cor:HuangRuan1\]. The second follows since: $$(d/dt)^2 N \mu(K_t)^{1/N} |_{t=0} = \mu(K)^{1/N-2} {\left(\delta^2(K) \delta^0(K) - \frac{N-1}{N} \delta^1(K)^2\right)} ~.$$ The third is an integrated version of the second. In the non-weighted Riemannian setting, the interpretation of Corollary \[cor:HuangRuan1\] as a Riemannian version of Minkowski’s second inequality was already noted by Reilly [@ReillyMeanCurvatureEstimate]. We also mention that in Euclidean space, a related Alexandrov–Fenchel inequality was shown to hold by Guan and Li [@GuanLiAlexandrovFenchelForNonConvexUsingFlows] for arbitrary mean-convex star-shaped domains. Riemannian Brunn–Minkowski via Parallel Normal Flow {#subsec:Full-BM} --------------------------------------------------- Let $F_0 : \Sigma^{n-1} \rightarrow M^n$ denote a smooth embedding of an oriented submanifold $\Sigma_0 := F_0(\Sigma)$ in $(M,g)$, where $\Sigma$ is an $(n-1)$-dimensional compact smooth oriented manifold without boundary. The following geometric evolution equation for $F : \Sigma \times [0,T] \rightarrow M$ has been well-studied in the literature: $$\label{eq:flow0} \frac{d}{dt} F(y,t) = \varphi(y,t) \nu_{\Sigma_t}(F(y,t)) ~,~ F(y,0) = F_0 ~,~ y \in \Sigma ~,~ t \in [0,T] ~.$$ Here $\nu_{\Sigma_t}$ is the unit-normal (in accordance with the chosen orientation) to $\Sigma_t := F_t(\Sigma)$, $F_t := F(\cdot,t)$, and $\varphi : \Sigma \times [0,T] \rightarrow {\mathbb{R}}_+$ denotes a function depending on the extrinsic geometry of $\Sigma_t \subset M$ at $F(y,t)$. 
Typical examples for $\varphi$ include the mean-curvature, the inverse mean-curvature, the Gauss curvature, and other symmetric polynomials in the principal curvatures (see [@HuiskenPoldenSurvey] and the references therein). Motivated by the DeTurck trick in the analysis of Ricci-flow (e.g. [@ToppingRicciFlowBook]), we propose to add another tangential component $\tau_t$ to (\[eq:flow0\]). Let $\varphi : \Sigma \rightarrow {\mathbb{R}}$ denote a $C^2$ function which is fixed throughout the flow. Assume that ${\text{II}}_{\Sigma_t} > 0$ for $t \in [0,T]$ along the following flow: $$\begin{aligned} \label{eq:Mink-flow} \frac{d}{dt} F(y,t) &=& \omega_t(F(y,t)) ~,~ F(y,0) = F_0 ~,~ y \in \Sigma ~,~ t \in [0,T] ~, \\ \nonumber \omega_t &:=& \varphi_t \nu_{\Sigma_t} + \tau_t ~ \text{ on $\Sigma_t$} ~,~ \tau_t := {\text{II}}_{\Sigma_t}^{-1} \nabla_{\Sigma_t} (\varphi_t) ~,~ \varphi_t := \varphi \circ F_t^{-1} ~.\end{aligned}$$ For many flows, the tangential component $\tau_t$ would be considered an inconsequential diffeomorphism term, which does not alter the set $\Sigma_t = F_t(\Sigma)$, only the individual trajectories $t \mapsto F(y,t)$ for a given $y \in \Sigma$. However, contrary to most flows where $\varphi(y,t)$ depends solely on the geometry of $\Sigma_t$ at $F_t(y)$, for our flow $\varphi$ plays a different role, and in particular its value along every trajectory is fixed throughout the evolution. Consequently, this tangential term creates a desirable geometric effect as we shall see below. Before proceeding, it is useful to note that (\[eq:Mink-flow\]) is clearly parametrization invariant: if $\zeta: \Sigma' \rightarrow \Sigma$ is a diffeomorphism and $F$ satisfies (\[eq:Mink-flow\]) on $\Sigma$, then $F'(z,t) := F(\zeta(z),t)$ also satisfies (\[eq:Mink-flow\]) with $\varphi'(z) := \varphi(\zeta(z))$. Consequently, we see that (\[eq:Mink-flow\]) defines a semi-group of pairs $(\Sigma_t, \varphi_t)$, so it is enough to perform calculations at time $t=0$. 
In addition, we are allowed to use a convenient parametrization $\Sigma'$ for our analysis. We now claim that in Euclidean space, Minkowski summation can indeed be parametrized by the evolution equation (\[eq:Mink-flow\]). Given a convex compact set $K$ in ${\mathbb{R}}^n$ containing the origin in its interior (a “convex body”) with $C^2$ smooth boundary and outer unit-normal $\nu_K$, identifying $T_x {\mathbb{R}}^n$ with ${\mathbb{R}}^n$ makes $\nu_K: \partial K \rightarrow S^{n-1}$ the Gauss-map. Note that when $K$ is strictly convex (${\text{II}}_K > 0$), the Gauss-map is a diffeomorphism. Finally, the support function $h_K$ is defined by $h_K(x) := \sup_{y \in K} {\left \langle x,y \right \rangle}$, so that ${\left \langle x,\nu_K(x) \right \rangle} = h_K(\nu_K(x))$ and ${\left \langle \nu_K^{-1}(\nu),\nu \right \rangle} = h_K(\nu)$. \[prop:Euclidean-Flow\] Let $K$ and $L$ denote two strictly-convex bodies in ${\mathbb{R}}^n$ with $C^2$ smooth boundaries. Let $F : S^{n-1} \times {\mathbb{R}}_+ \rightarrow {\mathbb{R}}^n$ be defined by $F(\nu,t) := \nu_{K + tL}^{-1}(\nu) $, so that $\partial (K + t L) = F_t(S^{n-1})$ for all $t \geq 0$. Then $F$ satisfies (\[eq:Mink-flow\]) with $\varphi = h_L$ and $F_0 := \nu_K^{-1}$. As the support function is additive with respect to Minkowski addition, so is the inverse Gauss-map: $\nu_{K + tL}^{-1} = \nu_K^{-1} + t \nu_L^{-1}$. 
Consequently: $$\frac{d}{dt} F_t(\nu) = \nu_L^{-1}(\nu) = h_L(\nu) \nu + {\left( \nu_L^{-1}(\nu) - {\left \langle \nu_L^{-1}(\nu) , \nu \right \rangle} \nu\right)} ~.$$ Since $\nu_{F_t(S^{n-1})}(F_t(\nu)) = \nu$, it remains to show (in fact, just for $t=0$) with the usual identification between $T_x {\mathbb{R}}^n$ and ${\mathbb{R}}^n$ that: $$\nu_{K + t L}(x) = \nu \;\;\; \Rightarrow \;\;\; {\text{II}}_{\partial(K + t L)}^{-1} \nabla_{\partial(K + tL)} (h_L \circ \nu_{K+tL}) (x) = \nu_L^{-1}(\nu) - {\left \langle \nu_L^{-1}(\nu) , \nu \right \rangle} \nu ~.$$ Indeed by the chain-rule: $$\nabla_{\partial K}(h_L(\nu_K(x))) = \nabla_{S^{n-1}} h_L(\nu) \nabla_{\partial K} \nu_K(x) = {\text{II}}_{\partial K}(x) \nabla_{S^{n-1}} h_L(\nu) ~,$$ so our task reduces to showing that: $$\nabla_{S^{n-1}} h_L(\nu) = \nu_L^{-1}(\nu) - {\left \langle \nu_L^{-1}(\nu) , \nu \right \rangle} \nu ~,~ \forall \nu \in S^{n-1} ~.$$ This is indeed the case, and moreover: $$\nabla_{{\mathbb{R}}^n} h_L(\nu) = \nu_L^{-1}(\nu) ~,~ \forall \nu \in S^{n-1} ~.$$ The reason is that $\nu_L(x) = \frac{\nabla_{{\mathbb{R}}^n} {\left\Vert x\right\Vert}_L}{{\left\vert\nabla_{{\mathbb{R}}^n}{\left\Vert x\right\Vert}_L\right\vert}}$, and since $\nabla_{{\mathbb{R}}^n} h_L$ is $0$-homogeneous, we obtain: $$\nabla_{{\mathbb{R}}^n} h_L \circ \nu_L(x) = \nabla_{{\mathbb{R}}^n} h_L \circ \nabla_{{\mathbb{R}}^n} {\left\Vert x\right\Vert}_L = x ~,~ \forall x \in \partial L ~,$$ where the last equality follows since $h_L$ and ${\left\Vert\cdot\right\Vert}_L$ are dual norms. This concludes the proof. The latter observation gives a clear geometric interpretation of what the flow (\[eq:Mink-flow\]) is doing in the Euclidean setting: normals to the evolving surface remain constant along trajectories. 
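The two support-function identities used in the proof are easy to confirm numerically. The following sketch (an illustration added here, not part of the original argument; the ellipse semi-axes and the angle are arbitrary choices) checks $\nabla_{{\mathbb{R}}^n} h_L(\nu) = \nu_L^{-1}(\nu)$ and ${\left \langle x,\nu_L(x) \right \rangle} = h_L(\nu_L(x))$ for a planar ellipse, using a finite-difference gradient:

```python
import math

# Check grad h_L(nu) = nu_L^{-1}(nu) in the plane, for the ellipse
# L = {x : x1^2/a^2 + x2^2/b^2 <= 1}, whose support function is
# h_L(u) = sqrt(a^2 u1^2 + b^2 u2^2).  (Illustrative values of a, b, theta.)

a, b = 2.0, 1.0

def h_L(u1, u2):
    return math.sqrt(a * a * u1 * u1 + b * b * u2 * u2)

def grad_h_L(u1, u2, eps=1e-6):
    """Central finite-difference gradient of h_L in R^2."""
    d1 = (h_L(u1 + eps, u2) - h_L(u1 - eps, u2)) / (2 * eps)
    d2 = (h_L(u1, u2 + eps) - h_L(u1, u2 - eps)) / (2 * eps)
    return d1, d2

theta = 0.7
x = (a * math.cos(theta), b * math.sin(theta))     # boundary point of L
m = (math.cos(theta) / a, math.sin(theta) / b)     # unnormalized outer normal at x
s = math.hypot(*m)
nu = (m[0] / s, m[1] / s)                          # unit outer normal, nu_L(x)

g = grad_h_L(*nu)
# grad h_L at nu recovers the boundary point with outer normal nu:
assert abs(g[0] - x[0]) < 1e-5 and abs(g[1] - x[1]) < 1e-5
# <x, nu_L(x)> = h_L(nu_L(x)):
assert abs(x[0] * nu[0] + x[1] * nu[1] - h_L(*nu)) < 1e-9
```

Since $\nabla_{{\mathbb{R}}^n} h_L$ is $0$-homogeneous, evaluating at the unit vector $\nu$ entails no loss of generality.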
In the more general Riemannian setting, where one cannot identify $M$ with $T_x M$ and where the Gauss map is not available, we have the following geometric characterization of the flow (\[eq:Mink-flow\]) which extends the latter property: normals to the evolving surface remain *parallel* along trajectories. Consequently, we dub (\[eq:Mink-flow\]) the “Parallel Normal Flow”. Consider the following geometric evolution equation along a time-dependent vector-field $\omega_t$: $$\frac{d}{dt} F_t(y) = \omega_t(F_t(y)) ~,~ y \in \Sigma ~,~ t \in [0,T] ~,$$ and assume that $F_t: \Sigma \rightarrow \Sigma_t$ is a local diffeomorphism and that ${\text{II}}_{\Sigma_t} > 0$ for all $t \in [0,T]$. Then the unit-normal field is parallel along the flow: $$\frac{d}{dt} \nu_{\Sigma_t}(F_t(y)) = 0 ~,~ \forall y \in \Sigma ~,~ \forall t \in [0,T] ~,$$ if and only if there exists a family of functions $f_t : \Sigma_t\rightarrow {\mathbb{R}}$, $t \in [0,T]$, so that: $$\label{eq:char} \omega_t = f_t \nu_{\Sigma_t} + {\text{II}}_{\Sigma_t}^{-1} \nabla_{\Sigma_t} f_t ~.$$ Furthermore, the entire normal component of $\omega_t$, denoted $\omega^\nu_t := {\left \langle \omega_t,\nu_{\Sigma_t} \right \rangle} \nu_{\Sigma_t}$, is parallel along the flow: $$\frac{d}{dt} \omega^\nu_{t}(F_t(y)) = 0 ~,~ \forall y \in \Sigma ~,~ \forall t \in [0,T] ~,$$ if and only if there exists a function $\varphi : \Sigma \rightarrow {\mathbb{R}}$ so that $f_t = \varphi \circ F_t^{-1}$ in (\[eq:char\]). Recall that the derivative of a vector-field $X$ along a path $t \mapsto \gamma(t)$ is interpreted by employing the connection $\frac{d}{dt} X(\gamma(t)) = \nabla_{\gamma'(t)} X$. First, observe that ${\left \langle \frac{d}{dt} \nu_{\Sigma_t}(F_t(y)) , \nu_{\Sigma_t} \right \rangle} = \frac{1}{2} \frac{d}{dt} {\left \langle \nu_{\Sigma_t},\nu_{\Sigma_t} \right \rangle}(F_t(y)) = 0$, so $\frac{d}{dt} \nu_{\Sigma_t}(F_t(y))$ is tangent to $\Sigma_t$. 
Given $y \in \Sigma$ let $e \in T_y\Sigma$ and set $e_t := dF_t(e) \in T_{F_t(y)} \Sigma_t$. Since ${\left \langle \nu_{\Sigma_t} , e_t \right \rangle} = 0$, we have: $${\left \langle \frac{d}{dt} \nu_{\Sigma_t}(F_t(y)) , e_t \right \rangle} = - {\left \langle \nu_{\Sigma_t}, \frac{d}{dt} dF_t(e) \right \rangle} = -{\left \langle \nu_{\Sigma_t} , \nabla_{e_t} \frac{d}{dt} F_t(y) \right \rangle} = -{\left \langle \nu_{\Sigma_t} , \nabla_{e_t} \omega_t \right \rangle} ~.$$ Decomposing $\omega_t$ into its normal $\omega^\nu_t = f_t \nu_{\Sigma_t}$ and tangential $\omega^\tau_t$ components, we calculate: $$-{\left \langle \nu_{\Sigma_t} , \nabla_{e_t} \omega_t \right \rangle} = - \nabla_{e_t} f_t - f_t {\left \langle \nu_{\Sigma_t}, \nabla_{e_t} \nu_{\Sigma_t} \right \rangle} - {\left \langle \nu_{\Sigma_t} , \nabla_{e_t} \omega^\tau_t \right \rangle} ~.$$ Since ${\left \langle \nu_{\Sigma_t}, \nabla_{e_t} \nu_{\Sigma_t} \right \rangle} = \frac{1}{2} \nabla_{e_t} {\left \langle \nu_{\Sigma_t}, \nu_{\Sigma_t} \right \rangle} = 0$, ${\left \langle \nu_{\Sigma_t} , \nabla_{e_t} \omega^\tau_t \right \rangle} = -{\left \langle {\text{II}}_{\Sigma_t} \omega^\tau_t,e_t \right \rangle}$ and since $e$ and hence $e_t$ were arbitrary, we conclude that: $$\frac{d}{dt} \nu_{\Sigma_t}(F_t(y)) = -\nabla_{\Sigma_t} f_t + {\text{II}}_{\Sigma_t} \omega^\tau_t ~,$$ and so the first assertion follows. The second assertion follows by calculating: $$\frac{d}{dt} \omega^\nu_{t}(F_t(y)) = \frac{d}{dt} (f_t \nu_{\Sigma_t})(F_t(y)) = (\frac{d}{dt} f_t(F_t(y))) \nu_{\Sigma_t}(F_t(y)) + f_t(F_t(y)) \frac{d}{dt} \nu_{\Sigma_t}(F_t(y)) ~.$$ We see that $\omega^\nu_{t}$ is parallel along the flow if and only if both normal and tangential components on the right-hand-side above vanish, reducing to the first assertion in conjunction with the requirement that $f_t$ remain constant along the flow, i.e. $f_t = \varphi \circ F_t^{-1}$. 
Consequently, given a locally strictly-convex compact set $\Omega \subset (M,g)$ with $C^2$ smooth boundary which is bounded away from $\partial M$, the region bounded by $F_t(\partial \Omega) \subset (M,g)$ with initial conditions $F_0 = Id$ and $\varphi \in C^2(\partial \Omega)$, if it exists, will be referred to as the “Riemannian Minkowski Extension of $\Omega$ by $t \varphi$” and denoted by $\Omega + t \varphi$. Note that this makes sense as long as the Parallel Normal Flow is a diffeomorphism which preserves the aforementioned convexity and boundedness away from $\partial M$ up until time $t$; in that case we will say that the Riemannian Minkowski extension is “well-posed”. When $\varphi \equiv 1$ on $\partial \Omega$, we obtain the usual geodesic extension $\Omega_t$. Note that $\varphi$ need not be positive to make sense of this operation, and that multiplying $\varphi$ by a positive constant is just a time re-parametrization of the flow. Also note that this operation only depends on the geometry of $(M,g)$, and not on the measure $\mu$, in accordance with the classical Euclidean setting. We have seen that Riemannian Minkowski extension coincides with Minkowski summation in the Euclidean setting: $K + t \cdot h_L = K + t L$. We do not go here into analyzing the well-posedness of this operation, but rather concentrate on using this operation to derive the following Riemannian generalization of the Brunn-Minkowski inequality. \[thm:Full-BM\] Let $\Omega \subset (M,g)$ denote a locally strictly-convex compact set with $C^2$ smooth boundary which is bounded away from $\partial M$, and let $\varphi \in C^2(\partial \Omega)$. Let $\Omega_t$ denote the Riemannian Minkowski extension $\Omega_t := \Omega + t \varphi$, and assume that it is well-posed for all $t \in [0,T]$. Assume that $(M,g,\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$). Then the function $$t \mapsto N \mu(\Omega_t)^{1/N}$$ is concave on $[0,T]$. 
Set $\Sigma := \partial \Omega$, $F_0 = Id$, and recall that our evolution equation is: $$\label{eq:Jac-Flow} \frac{d}{dt} F_t(y) = \omega_t(F_t(y)) := \varphi(y) \nu_{\Sigma_t}(F_t(y)) + \tau_t(F_t(y)) ~,~ F_0(y) = y ~,~ y \in \Sigma ~,~ t \in [0,T] ~.$$ As previously explained, it is enough to perform all the analysis at time $t=0$. Clearly, the first variation of $\mu(\Omega_t)$ only depends on the normal velocity $\varphi$, and so we have: $$\frac{d}{dt} \mu(\Omega_t)|_{t=0} = \int_{\Sigma} \varphi \exp(-V) d{\textrm{Vol}}_{\Sigma} ~.$$ By the semi-group property, it follows that: $$\frac{d}{dt} \mu(\Omega_t) = \int_{\Sigma_t} \varphi \circ F_t^{-1} \exp(-V) d{\textrm{Vol}}_{\Sigma_t} ~.$$ Since $F_t$ is a diffeomorphism for small $t \geq 0$, we obtain by the change-of-variables formula: $$\label{eq:Jac-base} \frac{d}{dt} \mu(\Omega_t) = \int_{\Sigma} \varphi \exp(-V \circ F_t) \; {\text{Jac}}F_t \; d{\textrm{Vol}}_{\Sigma} ~,$$ where ${\text{Jac}}F_t(y)$ denotes the Jacobian of $(\Sigma,g|_{\Sigma}) \ni y \mapsto F_t(y) \in (\Sigma_t, g|_{\Sigma_t})$, i.e. the determinant of $d_y F_t : (T_y \Sigma,g_y) \rightarrow (T_{F_t(y)} \Sigma_t, g_{F_t(y)})$. As is well known, $\frac{d}{dt} {\text{Jac}}F_t = {\text{div}}_{\Sigma_t} \frac{d}{dt} F_t$; we briefly sketch the argument. It is enough to show this for $t=0$ and for a $y \in \Sigma$ so that ${\left \langle \frac{d}{dt} F_t(y) , \nu_{\Sigma}(y) \right \rangle} \neq 0$. Fix an orthonormal frame $e_1,\ldots,e_n$ in $T M$ so that $e_n$ coincides with $\nu_{\Sigma_t}$ in a neighborhood of $(y,0)$ in $M \times {\mathbb{R}}$, and hence $e_1,\ldots,e_{n-1}$ is a basis for $T_{F(y,t)} \Sigma_t$. 
Since $d F_0 = Id$, it follows that: $$\frac{d}{dt} {\text{Jac}}F_t(y) = tr {\left(\frac{d}{dt} d_y F_t \right)} = \sum_{i=1}^{n-1} \frac{d}{dt} {\left \langle d_y F_t(e_i(y)) , e_i(F_t(y)) \right \rangle} ~.$$ Now as $F_0 = Id$ and $\frac{d}{dt} F_t|_{t=0} = \omega_0$, we have at $(y,0)$ (denoting $\omega = \omega_0$): $$\frac{d}{dt} {\left \langle d_y F_t(e_i) , e_i(F_t(y)) \right \rangle}|_{t=0} = {\left \langle \frac{d}{dt} d_y F_t(e_i) |_{t=0} , e_i \right \rangle} + {\left \langle e_i , \nabla_{\frac{d}{dt} F_t|_{t=0}} e_i \right \rangle} = {\left \langle \nabla_{e_i} \omega, e_i \right \rangle} + {\left \langle e_i , \nabla_{\omega} e_i \right \rangle} ~.$$ But ${\left \langle e_i, \nabla_{\omega} e_i \right \rangle} = \frac{1}{2} \nabla_{\omega} {\left \langle e_i,e_i \right \rangle} = 0$, and so we confirm that $\frac{d}{dt} {\text{Jac}}F_t = {\text{div}}_{\Sigma_t} \omega_t$. Now, taking the derivative of (\[eq:Jac-base\]) in $t$, we obtain: $$\frac{d^2}{(dt)^2} \mu(\Omega_t)|_{t=0} = \int_{\Sigma} \varphi ({\text{div}}_{\Sigma} \omega - {\left \langle \nabla V,\omega \right \rangle}) \exp(-V) d{\textrm{Vol}}_{\Sigma} ~.$$ Recall that $\omega = \varphi \nu_{\Sigma} + \tau$ (denoting $\tau = \tau_0$), so: $${\text{div}}_{\Sigma} \omega - {\left \langle \nabla V,\omega \right \rangle} = \varphi (H_{\Sigma,g} - {\left \langle \nabla V,\nu_\Sigma \right \rangle}) + {\text{div}}_{\Sigma} \tau - {\left \langle \nabla_{\Sigma} V, \tau \right \rangle} = \varphi H_{\Sigma,\mu} + {\text{div}}_{\Sigma,\mu} \tau ~.$$ Plugging this above and integrating by parts, we obtain: $$\frac{d^2}{(dt)^2} \mu(\Omega_t)|_{t=0} = \int_{\Sigma} H_{\Sigma,\mu} \varphi^2 d\mu - \int_{\Sigma} {\left \langle \nabla_{\Sigma} \varphi,\tau \right \rangle} d\mu .$$ Recalling that $\tau = {\text{II}}_{\Sigma}^{-1} \nabla_{\Sigma} \varphi$ and applying Theorem \[thm:Colesanti\], we deduce that: $$\frac{d^2}{(dt)^2} \mu(\Omega_t) |_{t=0} \leq \frac{N-1}{N} \frac{ (\int_{\Sigma} \varphi 
d\mu)^2 }{\mu(\Omega)} = \frac{N-1}{N} \frac{{\left(\frac{d}{dt} \mu(\Omega_t)|_{t=0}\right)}^2}{\mu(\Omega)} ~,$$ which is precisely the content of the assertion. \[rem:other-gen-BM\] Other more standard generalizations of the Brunn–Minkowski inequality in the weighted Riemannian setting and in the even more general metric-measure space setting, for spaces satisfying the $CD(\rho,N)$ condition with $N \in [n,\infty]$, have been obtained by Cordero-Erausquin–McCann–Schmuckenschläger [@CMSInventiones; @CMSManifoldWithDensity], Sturm [@SturmCD12] and Lott–Villani [@LottVillaniGeneralizedRicci], using the theory of optimal-transport. In those versions, Minkowski interpolation $(1-t) K + t L$ is replaced by geodesic interpolation of two domains, an operation whose existence does not require any a-priori justification, and which is not confined to convex domains. However, our version has the advantage of extending Minkowski summation $K + t L$ as opposed to interpolation, so we just need a single domain $\Omega_0$ and an initial condition $\varphi_0$ on the normal derivative to $\partial \Omega_0$; this may consequently be better suited for compensating the lack of homogeneity in the Riemannian setting and obtaining isoperimetric inequalities. There seem to be some interesting connections between the Parallel Normal Flow and an appropriate optimal-transport problem and Monge–Ampère equation, but this is a topic for a separate note. In this context, we mention the work by V. I. Bogachev and the first named author [@BogachevKolesnikovFlow], who showed a connection between the Gauss curvature flow and an appropriate optimal transport problem. 
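In the simplest Euclidean specialization of Theorem \[thm:Full-BM\] ($M = {\mathbb{R}}^2$, $\mu$ = Lebesgue measure, $\varphi \equiv 1$, so that $\Omega_t$ is the outer parallel body), the planar Steiner formula makes the asserted concavity explicit and easy to check numerically. A minimal sketch (an illustration added here, not from the paper; the unit square is an arbitrary choice of domain):

```python
import math

# Euclidean sanity check (N = n = 2, mu = Lebesgue): for a convex body K,
# the parallel body K_t has area |K_t| = |K| + t*per(K) + pi*t^2 (Steiner
# formula), and t -> N*|K_t|^(1/N) should be concave.

def parallel_area(area, perimeter, t):
    """Area of the outer parallel body K_t via the planar Steiner formula."""
    return area + t * perimeter + math.pi * t ** 2

def profile(area, perimeter, t, N=2):
    """The quantity N * mu(K_t)^(1/N) whose concavity is asserted."""
    return N * parallel_area(area, perimeter, t) ** (1.0 / N)

# K = unit square (area 1, perimeter 4); sample the profile on a grid.
ts = [0.1 * k for k in range(51)]
vals = [profile(1.0, 4.0, t) for t in ts]

# Discrete concavity: all second differences are non-positive.
second_diffs = [vals[i + 1] - 2 * vals[i] + vals[i - 1]
                for i in range(1, len(vals) - 1)]
assert all(d <= 1e-12 for d in second_diffs)
```

The sign of the second differences here reduces to the planar isoperimetric inequality $\mathrm{per}(K_t)^2 \geq 4\pi \, |K_t|$, in line with the role of $CD(0,N)$ in the general statement.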
While we do not go into this here, it is clear that in analogy to the Euclidean setting, one may use the Riemannian Minkowski extension operation to define the $k$-th Riemannian mixed volume of a locally strictly-convex $K$ and $\varphi \in C^2(\partial K)$ by taking the $k$-th variation of $t \mapsto \mu(K + t \varphi)$ and normalizing appropriately. It is then very plausible to expect that these mixed volumes should satisfy Alexandrov–Fenchel type inequalities, in analogy to the original inequalities in the Euclidean setting. Comparison with the Borell–Brascamp–Lieb Theorem {#subsec:BBL} ------------------------------------------------ Let $\mu$ denote a Borel measure with convex support $\Omega$ in Euclidean space $({\mathbb{R}}^n,{\left\vert\cdot\right\vert})$. In this Euclidean setting, it was shown by Borell [@BorellConvexMeasures] and independently by Brascamp and Lieb [@BrascampLiebPLandLambda1], that if $(\Omega,{\left\vert\cdot\right\vert},\mu)$ satisfies $CD(0,N)$, $1/N \in [-\infty,1/n]$, then for all Borel subsets $A,B \subset {\mathbb{R}}^n$ with $\mu(A),\mu(B) > 0$: $$\label{eq:BBL} \mu((1-t) A + t B) \geq {\left((1-t) \mu(A)^{\frac{1}{N}} + t \mu(B)^{\frac{1}{N}}\right)}^N ~,~ \forall t \in [0,1] ~.$$ Consequently, since $(1-t) K + t K = K$ when $K$ is convex, by using $A = K + t_1 L$ and $B = K + t_2 L$ for two convex subsets $K,L\subset \Omega$, it follows that the function: $$t \mapsto N \mu(K + t L)^{\frac{1}{N}}$$ is concave on ${\mathbb{R}}_+$. Clearly, Corollary \[cor:Geodesic-BM\] and Theorem \[thm:Full-BM\] are generalizations to the Riemannian setting of this fact, and in particular provide an alternative proof in the Euclidean setting. The above reasoning perhaps provides some insight as to the reason behind the restriction to convex domains in the concavity results of this section. 
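For Lebesgue measure on ${\mathbb{R}}^2$ ($V \equiv 0$, $N = n = 2$) and axis-parallel boxes, the Minkowski combination $(1-t)A + tB$ is again a box with interpolated sides, so (\[eq:BBL\]) can be checked directly. A minimal sketch (the particular boxes are arbitrary illustrative choices):

```python
# Direct check of the Borell-Brascamp-Lieb / Brunn-Minkowski inequality
# for Lebesgue measure on R^2 restricted to axis-parallel boxes:
# vol((1-t)A + tB)^(1/n) >= (1-t) vol(A)^(1/n) + t vol(B)^(1/n).

def vol(sides):
    v = 1.0
    for s in sides:
        v *= s
    return v

A = (1.0, 4.0)   # side lengths of box A
B = (3.0, 2.0)   # side lengths of box B
n = len(A)

for k in range(11):
    t = k / 10.0
    # Minkowski combination of boxes: sides interpolate linearly.
    C = tuple((1 - t) * sa + t * sb for sa, sb in zip(A, B))
    lhs = vol(C) ** (1.0 / n)
    rhs = (1 - t) * vol(A) ** (1.0 / n) + t * vol(B) ** (1.0 / n)
    assert lhs >= rhs - 1e-12
```

For boxes the inequality reduces to the Cauchy–Schwarz (more generally, AM–GM) inequality applied to the interpolated side lengths.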
We mention in passing that when the measure $\mu$ is homogeneous (in the Euclidean setting), one does not need to restrict to convex domains, simply by rescaling $A$ in (\[eq:BBL\]). See [@EMilmanRotemHomogeneous] for isoperimetric applications. The Weingarten Curvature Wave Equation {#subsec:strange-flow} -------------------------------------- To conclude this section, we observe that there is another natural evolution equation which yields the concavity of $N \mu(\Omega_t)^{1/N}$. Assume that $\varphi$ in (\[eq:flow0\]) evolves according to the following heat-equation on the evolving weighted-manifold $(\Sigma_t,\text{II}_{\Sigma_t},\mu_{\Sigma_t})$ equipped with the Weingarten metric $\text{II}_{\Sigma_t}$ and the measure $\mu_{\Sigma_t} := \exp(-V) dVol_{g_{\Sigma_t}}$, $g_{\Sigma_t} := g|_{\Sigma_t}$: $$\label{eq:flow1} \frac{d}{dt} \log \varphi(y,t) = L_{(\Sigma_t,\text{II}_{\Sigma_t}, \mu_{\Sigma_t})}(\varphi_t)(F_t(y)) ~,~ \varphi_t := \varphi(F_t^{-1}(\cdot),t) ~,~ \varphi(\cdot,0) = \varphi_0 ~.$$ Here $L = L_{(\Sigma_t,\text{II}_{\Sigma_t}, \mu_{\Sigma_t})}$ denotes the weighted-Laplacian operator associated to this weighted-manifold, namely: $$\label{eq:flow-L} L(\psi) = \text{div}_{\text{II}_{\Sigma_t},\mu}(\nabla_{\text{II}_{\Sigma_t}} \psi) = \text{div}_{g_{\Sigma_t},\mu}(\text{II}^{-1}_{\Sigma_t} \nabla_{g_{\Sigma_t}} \psi) ~.$$ The last transition in (\[eq:flow-L\]) is justified since for any test function $f$: $$\begin{gathered} \int_{\Sigma_t} f \cdot \text{div}_{\text{II}_{\Sigma_t},\mu}(\nabla_{\text{II}_{\Sigma_t}} \psi) d\mu = - \int_{\Sigma_t} \text{II}_{\Sigma_t}(\nabla_{\text{II}_{\Sigma_t}} f,\nabla_{\text{II}_{\Sigma_t}} \psi) d\mu \\ = - \int_{\Sigma_t} g_{\Sigma_t}(\nabla_{g_{\Sigma_t}} f, \text{II}^{-1}_{\Sigma_t} \nabla_{g_{\Sigma_t}} \psi) d\mu = \int_{\Sigma_t} f \cdot \text{div}_{g_{\Sigma_t},\mu}({\text{II}}_{\Sigma_t}^{-1} \nabla_{g_{\Sigma_t}} \psi) d\mu ~.\end{gathered}$$ Note that (\[eq:flow1\]) is precisely the (logarithmic) 
gradient flow in $L^2(\Sigma_t,\mu_{\Sigma_t})$ for the Dirichlet energy functional on $(\Sigma_t,\text{II}_{\Sigma_t}, \mu_{\Sigma_t})$: $$\varphi \mapsto E(t,\varphi) := \frac{1}{2} \int_{\Sigma_t} \text{II}_{\Sigma_t}(\nabla_{\text{II}_{\Sigma_t}} (\varphi_t) , \nabla_{\text{II}_{\Sigma_t}} (\varphi_t)) d\mu_{\Sigma_t} ~.$$ Coupling (\[eq:flow0\]) and (\[eq:flow1\]), it seems that an appropriate name for the resulting flow would be the “Weingarten Curvature Wave Equation”, since the second derivative in time of $F_t$ in the normal direction to the evolving surface $\Sigma_t$ is equal to the weighted Laplacian on $(\Sigma_t,{\text{II}}_{\Sigma_t},\mu_{\Sigma_t})$. We do not attempt here to justify the existence of such a flow, but rather observe the following: Assume that there exists a smooth solution $(F,\varphi)$ to the system of coupled equations (\[eq:flow0\]) and (\[eq:flow1\]) on $\Sigma \times [0,T]$, so that $F_t : \Sigma \rightarrow \Sigma_t \subset (M,g)$ is a diffeomorphism for all $t \in [0,T]$. Assume that $(M,g,\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$), that $\Sigma_t$ are strictly locally-convex (${\text{II}}_{\Sigma_t} > 0$) and bounded away from $\partial M$ for all $t \in [0,T]$. Assume that $\Sigma_t$ are the boundaries of compact domains $\Omega_t$ having $\nu_t$ as their exterior unit-normal field. Then the function $$t \mapsto N \mu(\Omega_t)^{1/N}$$ is concave on $[0,T]$. Denote $\Phi_t := \frac{d}{dt} \varphi(F_t^{-1}(\cdot),t)$. 
It is easy to verify as in the proof of Theorem \[thm:Full-BM\] that: $$\frac{d}{dt} \mu(\Omega_t) = \int_{\partial \Omega_t} \varphi_t d\mu ~,~ \frac{d}{dt} \mu(\partial \Omega_t) = \int_{\partial \Omega_t} H_\mu \varphi_t d\mu ~,$$ and that: $$\frac{d^2}{(dt)^2} \mu(\Omega_t) = \int_{\partial \Omega_t} (\Phi_t + H_\mu \varphi_t^2) d\mu ~.$$ Plugging the evolution equation (\[eq:flow1\]) above and integrating by parts, we obtain: $$\begin{aligned} \frac{d^2}{(dt)^2} \mu(\Omega_t) &=& \int_{\partial \Omega_t} ( \varphi_t \text{div}_{g_{\Sigma_t},\mu} {\left(\text{II}^{-1}_{\Sigma_t} \nabla_{g_{\Sigma_t}} \varphi_t\right)} + H_\mu \varphi_t^2) d\mu \\ & = & \int_{\partial \Omega_t} (H_\mu \varphi_t^2 - {\left \langle {\text{II}}^{-1}_{\Sigma_t} \nabla_{g_{\Sigma_t}} \varphi_t, \nabla_{g_{\Sigma_t}} \varphi_t \right \rangle}) d\mu ~.\end{aligned}$$ Applying Theorem \[thm:Colesanti\], we deduce that: $$\frac{d^2}{(dt)^2} \mu(\Omega_t) \leq \frac{N-1}{N} \frac{ (\int_{\partial \Omega_t} \varphi_t d\mu)^2 }{\mu(\Omega_t)} = \frac{N-1}{N} \frac{{\left(\frac{d}{dt} \mu(\Omega_t)\right)}^2}{\mu(\Omega_t)} ~,$$ which is precisely the content of the assertion. Isoperimetric Applications {#sec:Apps} ========================== We have seen in the previous section that under the $CD(0,N)$ condition and for various geometric evolution equations, including geodesic extension, the function $t \mapsto N \mu(\Omega_t)^{1/N}$ is concave as long as $\Omega_t$ remain strictly locally-convex, $C^2$ smooth, and bounded away from $\partial M$. Consequently, the following derivative exists in the wide-sense: $$\mu^+(\Omega) := \frac{d}{dt} \mu(\Omega_t)|_{t=0} = \lim_{t \rightarrow 0}\frac{\mu(\Omega_{t}) - \mu(\Omega)}{t} ~.$$ $\mu^+(\Omega)$ is the induced “boundary measure" of $\Omega$ with respect to $\mu$ and the underlying evolution $t \mapsto \Omega_t$. 
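In the Euclidean model case of geodesic (parallel) extension of the unit square, the Steiner formula makes this one-sided derivative explicit: the difference quotients decrease to the perimeter as $t \to 0$. A small sketch (illustrative, added here; not from the paper):

```python
import math

# Boundary measure as a limit of difference quotients, for geodesic
# (parallel) extension of the unit square in the plane:
# |Omega_t| = 1 + 4t + pi t^2, so (|Omega_t| - |Omega|)/t = 4 + pi t,
# which converges from above to the perimeter mu(boundary Omega) = 4.

def area_parallel_square(t):
    return 1.0 + 4.0 * t + math.pi * t * t

quotients = [(area_parallel_square(t) - 1.0) / t for t in (0.1, 0.01, 0.001)]
assert all(q2 < q1 for q1, q2 in zip(quotients, quotients[1:]))
assert abs(quotients[-1] - 4.0) < 0.01
```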
It is well-known and easy to verify (as in the proof of Theorem \[thm:Full-BM\]) that in the case of geodesic extension, $\mu^+(\Omega)$ coincides with $\mu(\partial \Omega) = \int_{\partial \Omega} d\mu$. We now mention several useful isoperimetric consequences of the latter concavity. For simplicity, we illustrate this in the Euclidean setting, but note that all of the results remain valid in the Riemannian setting as long as the corresponding generalizations described in the previous section are well-posed. Denote by $\mu^+_L(K)$ the boundary measure of $K$ with respect to $\mu$ and the Minkowski extension $t \mapsto K + t L$, where $L$ is a compact convex set having the origin in its interior. Let Euclidean space $({\mathbb{R}}^n,{\left\vert\cdot\right\vert})$ be endowed with a measure $\mu$ with convex support $\Omega$, so that $(\Omega,{\left\vert\cdot\right\vert},\mu)$ satisfies the $CD(0,N)$ condition ($1/N \in (-\infty,1/n]$). Let $K \subset \Omega$ and $L \subset {\mathbb{R}}^n$ denote two strictly convex compact sets with non-empty interior and $C^2$ boundary. Then: 1. The function $t \mapsto N \mu(K + t L)^{1/N}$ is concave on ${\mathbb{R}}_+$. 2. The following isoperimetric inequality holds: $$\mu^+_L(K) \geq \mu(K)^{\frac{N-1}{N}} \sup_{t > 0} N \frac{\mu(K + t L)^{1/N} - \mu(K)^{1/N}}{t} ~.$$ In particular, if the $L$-diameter of $\Omega$ is bounded above by $D < \infty$ ($\Omega - \Omega \subset D L$), we have: $$\mu^+_L(K) \geq \frac{N}{D} \mu(K)^{\frac{N-1}{N}} {\left(\mu(\Omega)^{1/N} - \mu(K)^{1/N}\right)} ~.$$ Alternatively, if $\mu(\Omega) = \infty$ and $N \in [n,\infty]$, we have: $$\label{eq:for-hom} \mu^+_L(K) \geq \mu(K)^{\frac{N-1}{N}} \limsup_{t \rightarrow \infty} \frac{N \mu(t L)^{1/N}}{t} ~.$$ 3. 
Define the following “convex isoperimetric profile": $${\mathcal{I}}^c_L(v) := \inf {\left\{\mu^+_L( K) \; ;\; \mu(K) = v \text{ , $K \subset \Omega$ has $C^2$ smooth boundary and ${\text{II}}_{\partial K} > 0$ } \right\}} ~.$$ Then the function $v \mapsto ({\mathcal{I}}^c_L(v))^{\frac{N}{N-1}}/v$ is non-increasing on its domain. Given a weighted-manifold $(M,g,\mu)$, recall that the usual isoperimetric profile is defined as: $${\mathcal{I}}(v) := \inf {\left\{\mu(\partial A) \;;\; \mu(A) = v \text{ , $A \subset M$ has $C^2$ smooth boundary}\right\}} ~.$$ When $(M,g,\mu)$ satisfies the $CD(0,N)$ condition with $N \in [n,\infty]$ and ${\text{II}}_{\partial M} \geq 0$ ($M$ is locally convex), it is known that $v \mapsto {\mathcal{I}}(v)^{\frac{N}{N-1}}$ is in fact concave on its domain, implying that $v \mapsto {\mathcal{I}}(v)^{\frac{N}{N-1}}/v$ is non-increasing (see [@EMilman-RoleOfConvexity; @EMilmanGeometricApproachPartI] and the references therein). The proof of this involves crucial use of regularity results from Geometric Measure Theory, and a major challenge is to give a softer proof. In particular, even in the Euclidean setting, an extension of these results to a non-Euclidean boundary measure $\mu^+_L(A)$ is not known and seems technically challenging. The last assertion provides a soft proof for the class of *convex* isoperimetric minimizers, which in fact remains valid for $N < 0$. As explained in Subsection \[subsec:BBL\], it is possible to prove the above assertions using the Borell–Brascamp–Lieb theorem. Another possibility is to invoke the localization method (see [@KLS; @BobkovZegarlinski]). However, these two approaches would be confined to the Euclidean setting, whereas the proof we give below is not. 1. The first assertion is almost an immediate consequence of the concavity calculation performed in the previous section for the classical Minkowski extension operation $t \mapsto K + t L$. 
However, in that section we assumed that $K + t L$ is bounded away from the boundary $\partial \Omega$, and we now explain how to remove this restriction. Note that if $y \in \partial K$, then $F_t(y) = y + t \nu^{-1}_L(\nu_K(y))$ is a straight line, as verified in Proposition \[prop:Euclidean-Flow\]. By the convexity of $\Omega$, this means that this line can at most exit $\Omega$ once, never to return. It is easy to verify that this incurs a non-positive contribution to the calculation of the second variation of $t \mapsto \mu(K + t L)$ in the proof of Theorem \[thm:Full-BM\]; the rest of the proof remains the same (with the first variation interpreted as the left-derivative). More generally, we note here that the concavity statement remains valid if instead of using $\varphi$ which remains constant on the trajectories of the flow, it is allowed to decrease along each trajectory. 2. By the concavity from the first assertion, it follows that for every $0<s\leq t$: $$N \frac{\mu(K + sL)^{1/N} - \mu(K)^{1/N}}{s} \geq N \frac{\mu(K + tL)^{1/N} - \mu(K)^{1/N}}{t} ~.$$ Taking the limit as $s \rightarrow 0$, the second assertion follows. 3. Given $K \subset \Omega$ with $C^2$ smooth boundary and ${\text{II}}_{\partial K} > 0$, denote $V(t) := \mu(K + t L )$ and set ${\mathcal{I}}_{K} := V' \circ V^{-1}$, expressing the boundary measure of $K + t L$ as a function of its measure. Note that ${\mathcal{I}}_{K}^{\frac{N}{N-1}}(v) / v$ is non-increasing on its domain. Indeed, assuming that $V$ is twice-differentiable, we calculate: $$\frac{d}{dv} \frac{{\mathcal{I}}_{K}^{\frac{N}{N-1}}(v)}{v} = {\left({\left(\frac{N}{N-1} \frac{V V''}{V'} - V'\right)}\frac{(V')^{\frac{1}{N-1}}}{V^2}\right)} \circ V^{-1}(v) \leq 0 ~,$$ and the general case follows by approximation. But since ${\mathcal{I}}^c_L := \inf_{K} {\mathcal{I}}_K$ where the infimum is over $K$ as above, the third assertion readily follows. When in addition $N \mu^{1/N}$ is homogeneous, i.e. 
$N \mu(t L)^{1/N} = t N \mu(L)^{1/N}$ for all $t > 0$, it follows by (\[eq:for-hom\]) that for convex $K$ and $N \in [n,\infty]$: $$\mu^+_L(K) \geq \mu(K)^{\frac{N-1}{N}} N \mu(L)^{1/N} ~.$$ In particular, among all convex sets, homothetic copies of $L$ are isoperimetric minimizers. As already alluded to in Subsection \[subsec:BBL\], this is actually known to hold for arbitrary Borel sets $K$ (see [@CabreRosOtonSerra; @EMilmanRotemHomogeneous]).
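In the plane with Lebesgue measure ($N = n = 2$) and $L$ the unit disk, this inequality is Minkowski's first inequality, and the equality case for homothets of $L$ can be checked by hand. A short numerical sketch (the square and the radius are arbitrary illustrative choices):

```python
import math

# Planar check (N = n = 2, Lebesgue measure, L = unit disk) of
# mu_L^+(K) >= mu(K)^{(N-1)/N} * N * mu(L)^{1/N},
# where mu_L^+(K) = lim (|K + tL| - |K|)/t.

# K = unit square: |K + tL| = 1 + 4t + pi t^2, so mu_L^+(K) = perimeter = 4.
lhs_square = 4.0
rhs_square = 2.0 * math.sqrt(1.0) * math.sqrt(math.pi)
assert lhs_square >= rhs_square            # 4 >= 2*sqrt(pi) ~ 3.545

# K = disk of radius r (a homothet of L): |K + tL| = pi (r + t)^2, so
# mu_L^+(K) = 2 pi r, and the inequality becomes an equality, confirming
# that homothetic copies of L are the isoperimetric minimizers.
r = 1.7
lhs_disk = 2.0 * math.pi * r
rhs_disk = 2.0 * math.sqrt(math.pi * r * r) * math.sqrt(math.pi)
assert abs(lhs_disk - rhs_disk) < 1e-12
```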
--- abstract: 'We demonstrate experimentally that a hybrid single-electron transistor with superconducting leads and a normal-metal island can be refrigerated by an alternating voltage applied to the gate electrode. The simultaneous measurement of the dc current induced by the rf gate through the device at a small bias voltage serves as an in-situ thermometer.' author: - 'S. Kafanov$^{1}\footnote{Electronic address: sergey.kafanov@ltl.tkk.fi}$, A. Kemppinen$^{2}$, Yu.A. Pashkin$^{3\footnote{On leave from P.N. Lebedev Physical Institute of the Russian Academy of Sciences, Moscow 119991, Russia}}$, M. Meschke$^1$, J.S. Tsai$^3$ and J.P. Pekola$^{1}$' title: 'Electronic Radio-Frequency Refrigerator' --- Local cooling has become an interesting topic as nanodevices are getting more diverse. Mesoscopic electron systems [@ApplPhysLett.65.3123.Nahum; @ApplPhysLett.68.1996.Leivo; @ApplPhysLett.86.173508.Clark; @RevModPhys.78.217.Giazotto; @PhysRevLett.99.047004.Rajauria], superconducting qubits [@Science.314.1589.Valenzuela; @NaturePhys.4.612.Ploeg] and nanomechanical oscillators [@Nature.443.193.Naik; @NaturePhys.4.415.Schliesser] are among the systems of interest in this respect. The electron cooler holds promise for applications, for instance in spaceborne radioastronomy, where it would present an easy-to-use, light-weight solution for noise reduction, with the further benefit of saving energy. In all realizations to date, the electronic refrigerator has been operated by a dc bias voltage. Single-electron Coulomb blockade opens, however, a way to manipulate heat flow on the level of individual electrons [@PhysRevLett.99.027203.Saira], and to synchronize the refrigerator operation to the frequency of an external ac drive, as was predicted in [@PhysRevLett.98.037201.Pekola]. 
Although the ac operation may not produce more efficient refrigeration than devices with a constant bias [@PhysRevLett.98.037201.Pekola; @PhysRevB.77.104517.Kopnin], the former has a number of important benefits: (*i*) in some instances ac operation is the only available operation mode, (*ii*) non-galvanic continuous drive becomes possible and (*iii*) by ac drive one can produce a thermodynamic refrigeration cycle with electrons as the medium. In this Letter we demonstrate a device, the hybrid single-electron turnstile, which makes use of all the features (*i*)-(*iii*), and whose operating temperature can be lowered by almost a factor of two by the ac drive at the gate. The hybrid single-electron transistor (SET) has been intensively studied during the past few years to produce quantized current for metrological applications [@NaturePhys.4.120.Pekola; @PhysRevLett.101.066801.Averin; @ApplPhysLett.94.172108.Kemppinen; @arXiv.0803.1563v2.Kemppinen; @arXiv.0905.3402v1.Lotkhov]. The rf refrigerator is based on the very same device concept: it is composed of superconducting source and drain leads tunnel coupled to a very small normal-metal island in the Coulomb blockade regime (see Fig.\[Fig1\](a)). A small bias voltage applied over the SET defines a preferred direction for single-electron tunneling. However, for the bias voltages $|V|<2\Delta/e$, the dc current through the whole structure is strongly suppressed, due to the superconducting energy gap in the leads. The situation becomes different when a periodic variation of the gate charge of amplitude $A_\mathrm{rf}$ drives the transistor between the stability regions corresponding to two adjacent island charge states (see Fig.\[Fig1\](b)). The drive transfers a single electron through the turnstile in each cycle, and as a result creates a detectable dc current proportional to the driving frequency ($I\propto f$). 
The process is associated with heat transport from the island into the bias leads, which is the main topic of the present Letter (see Fig.\[Fig1\](c, d) for the principle). The quantitative analysis of the rf refrigerator operation is based on the orthodox theory, where the electron transport is considered as a sequence of instantaneous tunneling events [@JETP.62.623.Kulik; @Averin], under the assumption that the tunneling electrons do not exchange energy with the environment [@Grabert.Devoret]. In the quasi-equilibrium limit [@RevModPhys.78.217.Giazotto], the electron energy distribution in the island and in the leads is given by the Fermi-Dirac distribution $f_{N(S)}(\varepsilon)$ with temperature $T_{N(S)}$. In general these temperatures are different from each other and from that of the cryostat, $T_0$. Due to the large volume of the bias leads and the tiny heat flux, we assume that electrons in the leads are well thermalized with lattice phonons ($T_S=T_0$). The rates $\Gamma_{j,\,n}^{\pm}$ of electrons tunneling onto $(+)$ and off $(-)$ the island through junction $j$, with $n$ excess electrons on the island, are given by the standard expressions $$\Gamma_{j,\,n}^{\pm}=\frac{1}{e^2R_{j}}\int n_{S}(\varepsilon)f_{S}(\varepsilon) \left[1-f_{N}(\varepsilon-\delta\mathcal{E}^{\pm}_{j,\,n})\right]d\varepsilon,$$ with $$\delta \mathcal{E}^{\pm}_{j,\,n}=\frac{e^2}{C_\Sigma}\left(n\pm \frac{1}{2}+\frac{V_{g}C_g}{e}\right)+\left(-1\right)^{j}e\frac{C_1C_2}{C_{j}C_\Sigma}V_{b}$$ where $C_{j}$, $R_{j}$ are the capacitance and the resistance of the tunnel junction $j=1,\,2$ and $C_\Sigma=C_1+C_2+C_{g}+C_\mathrm{env}$ is the total capacitance of the island, which includes the capacitance to the gate $C_g$ and that to the environment $C_\mathrm{env}$. The density of states (DOS) in the superconductor is denoted by $n_S(\varepsilon)$. The energy gain $\delta\mathcal{E}_{j,\,n}^{\pm}$ is the decrease of Gibbs energy of the system due to the corresponding tunneling event. 
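As a numerical illustration (added here, not part of the Letter), the rate integral can be evaluated with a Dynes-type smeared superconducting DOS, the same regularization adopted below in the text. The gap and smearing values follow the sample parameters quoted later; the temperature and the integration grid are assumed. The computed quantity is the dimensionless combination $e^2 R_j \Gamma$ in micro-eV:

```python
import cmath
import math

# Tunneling-rate integral  e^2 R Gamma = int n_S(e) f_S(e) [1 - f_N(e - dE)] de
# with a Dynes-smeared BCS DOS.  Energies in micro-eV; T = 100 mK is assumed.

DELTA = 210.0      # superconducting gap (micro-eV), as quoted in the text
GAMMA = 1e-4       # Dynes smearing parameter, as quoted in the text
KB = 86.17         # Boltzmann constant (micro-eV per kelvin)

def dynes_dos(e):
    """Smeared DOS |Re{(e - i*gamma*Delta)/sqrt((e - i*gamma*Delta)^2 - Delta^2)}|."""
    z = e - 1j * GAMMA * DELTA
    return abs((z / cmath.sqrt(z * z - DELTA * DELTA)).real)

def fermi(e, T):
    """Overflow-safe Fermi function 1/(1 + exp(e/kT))."""
    return 0.5 * (1.0 - math.tanh(e / (2.0 * KB * T)))

def rate(dE, T=0.1, emax=2000.0, n=8000):
    """Trapezoid-rule evaluation of the rate integral (both electrodes at T)."""
    h = 2.0 * emax / n
    total = 0.0
    for i in range(n + 1):
        e = -emax + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * dynes_dos(e) * fermi(e, T) * (1.0 - fermi(e - dE, T))
    return total * h

# In this sign convention, transitions whose energy window clears the gap are
# orders of magnitude faster than deep sub-gap transitions:
assert rate(-2 * DELTA) > 1000.0 * rate(0.0)
assert abs(dynes_dos(0.0) - GAMMA) < 1e-6   # residual sub-gap DOS ~ gamma
```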
The dynamics of electron tunneling through this device is given by the standard master equation for the probability $\sigma_{n,\,t}$ to have $n$ excess electrons on the island [@Averin]. Non-ideality of the superconducting leads can be taken into account by assuming a finite quasiparticle density of states, parametrized by $\gamma$, inside the BCS superconducting gap, *e.g.*, due to inelastic electron scattering in the superconductor [@PhysRevLett.53.2437.Dynes; @Kopnin] or by inverse proximity effect due to nearby normal metals. We model this smeared DOS as $n_{S}(\varepsilon)=\left|\Re\{(\varepsilon-i\Delta\gamma)/ \sqrt{(\varepsilon-i\Delta\gamma)^2-\Delta^2}\}\right|$. A typical experimental value of the effective smearing parameter $\gamma$ for aluminum thin films near the tunnel junction is $\sim 10^{-4}$ [@ApplPhysLett.94.172108.Kemppinen; @arXiv.0803.1563v2.Kemppinen]. Heat transport through the junctions associated with each electron tunneling process is given by $$\dot{Q}_{j,\,n}^{\pm}=\frac{1}{e^2R_{j}}\int(\varepsilon-\delta\mathcal{E}^{\pm}_{j,\,n}) n_{S}(\varepsilon)f_{S}(\varepsilon) \left[1-f_{N}(\varepsilon-\delta\mathcal{E}_{j,\,n}^{\pm})\right]d\varepsilon.$$ The charge current and the cooling power of the rf refrigerator are then given by averaging the corresponding quantities over an operation cycle: $$I=(-1)^{j}ef\int_{0}^{f^{-1}}\sum_{n} \left(\Gamma_{j,\,n}^{-}-\Gamma_{j,\,n}^{+}\right)\sigma_{n,\,t}dt$$ and $$\dot{Q}=f\int_0^{f^{-1}} \sum_{j,\,n}(\dot{Q}_{j,\,n}^{-}-\dot{Q}_{j,\,n}^{+})\sigma_{n,\,t}dt.$$ The cooling power is counterbalanced by the heat loads from the relaxation processes. The load from electromagnetic coupling to the environment, *i.e.*, electron-photon relaxation, can be ignored due to poor matching between the island and environment [@PhysRevLett.93.045901.Schmidt; @Nature.444.187.Meschke]. 
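As a quick numerical check of the smeared-DOS expression above, the sketch below (our illustration, not the authors' code) evaluates $n_S(\varepsilon)$ at three representative energies: deep inside the gap it reduces to $\approx\gamma$, at the gap edge the BCS divergence is cut off at a value of order $1/(2\sqrt{\gamma})$, and well above the gap it approaches the normal-state value $\varepsilon/\sqrt{\varepsilon^2-\Delta^2}$.

```python
import numpy as np

# Illustrative evaluation of the Dynes-smeared BCS density of states
# n_S(E) = |Re{(E - i*Delta*gamma) / sqrt((E - i*Delta*gamma)^2 - Delta^2)}|.
# Delta and gamma are the values quoted in the text; the checks are our own.
Delta = 210e-6   # superconducting gap, eV
gamma = 1.0e-4   # effective smearing (Dynes) parameter

def n_S(E):
    z = E - 1j * Delta * gamma
    # +0j forces the complex branch of the square root
    return abs(np.real(z / np.sqrt(z**2 - Delta**2 + 0j)))

print(n_S(0.0))        # deep inside the gap: ~ gamma
print(n_S(Delta))      # gap edge: finite peak of order 1/(2*sqrt(gamma)) ~ 50
print(n_S(2 * Delta))  # well above the gap: ~ 2/sqrt(3) ~ 1.15
```

The finite subgap value $n_S\approx\gamma$ is what limits the leakage current in the blocked state, while the cutoff of the gap-edge singularity controls the sharpness of the tunneling threshold.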
The heat load by electron-phonon interaction dominates in our experiment; the corresponding power is given by [@PhysRevLett.55.422.Roukes] $$\label{Power_balance} P_{el-ph}=\Sigma\mathcal{V}(T_{N}^{5}-T_{0}^{5})$$ where $\Sigma$ is the electron-phonon coupling constant of the normal-metal and $\mathcal{V}$ is the island volume. The mean temperature of the island $T_N$ is obtained from $\dot{Q}=P_{el-ph}$. In order to cool the island, the frequency $f$ has to be high enough to prevent full relaxation toward the lattice temperature, $\tau_{el-ph}^{-1}\ll f$. On the other hand, to secure the quasiequilibrium state of the electron gas we require $f\ll \tau_{el-el}^{-1}$. The samples were fabricated by electron beam lithography and shadow deposition technique [@PhysRevLett.59.109.Fulton; @ApplPhysLett.76.2256.Pashkin], and they were measured in a dilution refrigerator with a base temperature of $40\,\mathrm{mK}$. For the characterization of the rf refrigerator, we measured the IV curves of the device at the base temperature of the cryostat, with simultaneous fast ramping of the gate voltage (see Fig.\[Fig2\](a)). The solid lines are the calculated IV curves for the two extreme gate positions: gate-open $Q_g=V_gC_g=e/2$ and gate-closed $Q_g=0$. From these fits we get the following parameters of the sample: asymptotic resistance $R_{\infty}=R_1+R_2=315\,\mathrm{k\Omega}$; charging energy $E_c=e^2/(2C_\Sigma)=7\,\mathrm{K}$; superconducting energy gap $\Delta=210\,\mathrm{\mu eV}$; gap smearing parameter $\gamma=2\left.dI/dV\right|_{V=0}R_{\infty}=9.4\times 10^{-5}$. In order to obtain the value of $\Sigma$, we measured and fitted the IV characteristics in the subgap region at the gate-open state at different cryostat temperatures, see the inset of Fig.\[Fig2\](a). The corresponding electron temperatures extracted from the fitting are shown in the inset of Fig.\[Fig2\](b). 
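To make the power balance $\dot{Q}=P_{el-ph}$ concrete, here is a small sketch (ours, not the authors') that inverts it for the steady-state island temperature. The values of $\Sigma$ and $\mathcal{V}$ are the ones reported for this sample; the cooling power in the example call is an assumed, illustrative number.

```python
# Steady-state island temperature from Q = Sigma * V * (T0^5 - T_N^5).
# SIGMA and VOL are the sample values quoted in the text; the cooling
# power used in the example call is an assumed, illustrative number.
SIGMA = 4e9                    # electron-phonon coupling, W K^-5 m^-3
VOL = 30e-9 * 50e-9 * 80e-9    # island volume, m^3 (= 1.2e-22)

def island_temperature(Q_cool, T0, sigma=SIGMA, vol=VOL):
    """T_N at which the cooling power Q_cool equals the phonon heat load."""
    x = T0**5 - Q_cool / (sigma * vol)
    if x <= 0:
        raise ValueError("cooling power exceeds what the balance allows")
    return x ** 0.2

print(island_temperature(0.0, 0.240))      # no cooling: T_N = T0
print(island_temperature(3.5e-16, 0.240))  # ~0.35 fW cools the island to ~0.15 K
```

The steep $T^5$ dependence is why sub-femtowatt cooling powers are enough to pull such a tiny island well below the bath temperature.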
In the gate-open state, the turnstile functions similarly to a regular SINIS cooler [@ApplPhysLett.68.1996.Leivo; @PhysRevLett.99.027203.Saira], with a maximum cooling power at the bias voltage $V_b\simeq\pm 2\Delta/e$. At higher bias voltages, the turnstile operates in the regime where the temperature rapidly increases with bias voltage. Figure\[Fig2\](b) presents the extracted cooling power $\dot{Q}$ vs. $T_0^5-T_N^5$ matched with the heat load from the electron-phonon relaxation. Using the dimensions of the island, $\mathcal{V}=30\times 50 \times 80\,\mathrm{nm}^3$, we then obtain $\Sigma=4\times 10^9\,\mathrm{WK^{-5}m^{-3}}$ from the linear fit of the data, which is in good agreement with the values obtained for the same Au-Pd alloy in another experiment [@PhysRevLett.Timoveev]. For the demonstration of rf refrigeration, we measured the charge current through the device at different operation frequencies ($f=2^k\,\mathrm{MHz},\,k=1\dots 7$), and at different bath temperatures $100\,\mathrm{mK}\apprle T_0\apprle 500\,\mathrm{mK}$. In order to distinguish between the ordinary dc cooling and rf cooling, we biased the turnstile at a low voltage $V_b=50\,\mathrm{\mu V}\simeq 0.25\Delta/e$, where dc cooling is small. This bias is indicated by the arrows in the insets of Fig.\[Fig2\]. Generally, the dc bias is not needed for rf cooling, but it makes in-situ thermometry possible. The measured current in the gate-open state vs. the rf amplitude at different frequencies is shown in Fig.\[Fig3\](a). The cryostat temperature was $T_0=240\,\mathrm{mK}$ in this case. With a small bias voltage $V_b$ applied, the rates of tunneling in the forward and backward directions differ by a factor of $\sim\exp\left(-eV_b/(k_{\mathrm{B}}T_{N})\right)$ [@NaturePhys.4.120.Pekola]. Thus, measuring the dc current $I$ through the device serves as a thermometer of the island. 
By using the parameters of the cooler obtained from the dc measurements, and taking into account the balance between the cooling power and the heat flow due to the electron-phonon relaxation, we have calculated the corresponding current $I$ as a function of rf amplitude; the simulation results are shown by a continuous line in Fig.\[Fig3\](a). As a reference we also show (dashed lines) the corresponding curves calculated for fixed temperature ($T_N=T_0$). We should mention that we used $Q_g$ as a fitting parameter in Fig.\[Fig3\]. Good agreement between the experiment and simulations with non-constant $T_N$ allows us to extract the temperature of the island. Figure\[Fig3\](b) shows the mean temperature $T_N$ thus obtained (open symbols), and the corresponding predicted temperature (continuous lines) from the numerical simulations with the independently determined parameters. We note that the instantaneous electron temperature in the rf refrigerator island is expected to fluctuate around its mean value $T_N$, due to fundamental principles of thermodynamics. These fluctuations are inversely proportional to the volume of the island, $\langle \delta T_N^2\rangle=k_{\mathrm{B}}T_N^2/C_\mathrm{el}\propto T_N/\mathcal{V}$, where $C_\mathrm{{el}}$ is the heat capacity of the electron gas in the island [@Landau]. For our samples, with a very small island, we obtain $\langle \delta T_N^2\rangle^{1/2}\sim 10\,\mathrm{mK}$ at $T_N\simeq 200\,\mathrm{mK}$. Figure\[Fig4\](a) shows the calculated cooling power (gray lines) and the corresponding minimum temperature of the island (black lines) for two different dc gate charges ($Q_g=0.5e$ and $0.48e$). The highest cooling power is achieved exactly in the gate-open state. The cooling power decreases rapidly, even for small offsets from this position, because cooling is no longer optimized for tunneling through both junctions. 
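An order-of-magnitude version of the fluctuation estimate above can be sketched as follows (our illustration; the Sommerfeld coefficient $\gamma_s$ is an assumed typical-metal value, not a number from the Letter):

```python
# Order-of-magnitude estimate of the island temperature fluctuations,
# <dT^2> = k_B * T^2 / C_el, with C_el = gamma_s * V * T for a degenerate
# electron gas.  gamma_s is an ASSUMED typical Sommerfeld coefficient for
# a metal; it is not a value quoted in the text.
kB = 1.381e-23                 # Boltzmann constant, J/K
gamma_s = 1.0e2                # Sommerfeld coefficient, J m^-3 K^-2 (assumption)
vol = 30e-9 * 50e-9 * 80e-9    # island volume, m^3

def temp_fluctuation(T):
    C_el = gamma_s * vol * T           # electronic heat capacity, J/K
    return (kB * T**2 / C_el) ** 0.5   # rms temperature fluctuation, K

print(temp_fluctuation(0.200))  # ~0.015 K, the ~10 mK scale quoted in the text
```

Since $C_\mathrm{el}\propto T$, the relative fluctuation scales as $\mathcal{V}^{-1/2}T^{1/2}$, which is why it becomes appreciable only for islands this small.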
Therefore, background charge fluctuations reduce the cooling power of the refrigerator, and affect its temperature. However, the cooling power increases with operation frequency. For frequencies lower than the characteristic electron-phonon relaxation rate, the electron temperature is close to the lattice temperature. At higher frequencies, the cooling power rises monotonically and eventually saturates due to the finite $R_{\infty}C_{\Sigma}$ time constant of the device. Because of the small drive amplitude of the rf refrigerator, the frequency dependence of the cooling power does not turn into heating at high frequencies, which, on the other hand, is predicted for multi-electron cycles [@PhysRevLett.98.037201.Pekola]. The rf refrigeration plays an important role in the development of the current standard based on the hybrid turnstile. This effect allows one to cool down the island also in the metrologically interesting range of the operation parameters. The experimental pumping curve measured at $T_0=300\,\mathrm{mK}$ with a plateau at $I=ef$ and the extracted electron temperature at $f=64\,\mathrm{MHz}$ are shown in Fig.\[Fig4\](b). Here, the turnstile is in the gate-open state and biased at the optimum bias point for pumping, $V_b\simeq\Delta/e$. In conclusion, we have experimentally demonstrated rf refrigeration using a single-electron transistor with superconducting leads and a normal-metal island, by applying an rf signal to the gate electrode. The cooling power rises monotonically with operation frequency until it saturates. In practice the demonstrated rf cooling effect may be useful, *e.g.*, in the development of a standard for electric current. 
This work was partially supported by the Academy of Finland, Japan Science and Technology Agency through the CREST Project, the European Community’s Seventh Framework Program under Grant Agreement No.218783 (SCOPE), the NanoSciERA project “NanoFridge” and EURAMET joint research project REUNIAM, the Technology Industries of Finland Centennial Foundation. [10]{} M. Nahum, T. M. Eiles, and J. M. Martinis, Appl. Phys. Lett. [**65**]{}, (1994). M. M. Leivo, J. P. Pekola, and D. V. Averin, Appl. Phys. Lett. [**68**]{}, (1996). A. M. Clark et al., Appl. Phys. Lett. [**86**]{}, (2005). F. Giazotto, T. T. Heikkilä, A. Luukanen, A. M. Savin, and J. P. Pekola, Rev. Mod. Phys. [**78**]{}, (2006). S. Rajauria et al., Phys. Rev. Lett. [**99**]{}, (2007). S. O. Valenzuela et al., Science [**314**]{}, (2006). S. H. W. van der Ploeg et al., Nature Phys. [**4**]{}, (2008). A. Naik et al., Nature [**443**]{}, (2006). A. Schliesser, R. Riviere, G. Anetsberger, O. Arcizet, and T. J. Kippenberg, Nature Phys. [**4**]{}, (2008). O. P. Saira et al., Phys. Rev. Lett. [**99**]{}, (2007). J. P. Pekola, F. Giazotto, and O. P. Saira, Phys. Rev. Lett. [**98**]{}, (2007). N. B. Kopnin, F. Taddei, J. P. Pekola, and F. Giazotto, Phys. Rev. B [**77**]{}, (2008). J. P. Pekola et al., Nature Phys. [**4**]{}, (2007). D. V. Averin and J. P. Pekola, Phys. Rev. Lett. [**101**]{}, (2008). A. Kemppinen et al., Appl. Phys. Lett. [**94**]{}, (2009). A. Kemppinen, M. Meschke, M. Möttönen, D. V. Averin, and J. P. Pekola, (2008). S. V. Lotkhov, A. Kemppinen, S. Kafanov, J. P. Pekola, and A. B. Zorin, (2009), Appl. Phys. Lett. submitted. I. O. Kulik and R. I. Shekhter, JETP [**62**]{}, 623 (1975). D. V. Averin and K. K. Likharev, , chapter Single Electronics: A Correlated Transfer of Single Electrons and Cooper Pairs in System of Small Tunnel Junctions., North-Holland, Amsterdam, 1991. G. L. Ingold and Yu. V. Nazarov, , chapter Charge Tunneling Rates in Ultrasmall Junctions, Plenum Press, New York, 1992. R. C. 
Dynes, J. P. Garno, G. B. Hertel, and T. P. Orlando, Phys. Rev. Lett. [**53**]{}, (1984). N. B. Kopnin, , Oxford University Press, USA, 2001. D. R. Schmidt, R. J. Schoelkopf, and A. N. Cleland, Phys. Rev. Lett. [**93**]{}, (2004). M. Meschke, W. Guichard, and J. P. Pekola, Nature [**444**]{}, (2006). M. L. Roukes, M. R. Freeman, R. S. Germain, R. C. Richardson, and M. B. Ketchen, Phys. Rev. Lett. [**55**]{}, (1985). T. A. Fulton and G. J. Dolan, Phys. Rev. Lett. [**59**]{}, (1987). Yu. A. Pashkin, Y. Nakamura, and J. S. Tsai, Appl. Phys. Lett. [**76**]{}, (2000). A. V. Timofeev, M. Helle, M. Meschke, M. Möttönen, and J. P. Pekola, Phys. Rev. Lett. [**102**]{}, (2009). L. D. Landau and E. M. Lifshitz, , volume 5; Statistical Physics (Part 1), Pergamon Press, Oxford, 3rd edition, 1980.
--- abstract: | For a given $\omega$-operad $A$ on globular sets we introduce a sequence of symmetric operads on $Set$ called slices of $A$ and show how the connected limit preserving properties of slices are related to the property of the category of $n$-computads of $A$ being a presheaf topos. 1991 Math. Subj. Class. 18C20, 18D05 author: - | M.A. Batanin[^1]\ Macquarie University, North Ryde, NSW 2109, Australia\ mbatanin@math.mq.edu.au date: 4 August 2002 title: 'Computads and slices of operads.' --- Introduction. ============= Computads were invented by Street [@StL] as a tool for the presentation of strict $n$-categories. They attracted a new wave of interest in recent years due to the development of the theory of weak higher categories. It also became evident that we often need some more general types of computads than Street’s computads. For example, the theory of surface diagrams in 3D-space naturally leads to the use of so-called Gray-computads [@MT]. In our paper [@BatP] computads for magma-type globular theories were used. In our paper [@BatC] we construct a general theory of computads for finitary monads on globular sets. An important class of such monads consists of the so-called analytic monads [@BS], which can be identified with higher operads in $Span$ in the sense of [@BatN]. The examples in the previous paragraph all belong to this class of monads. In [@BatC] some properties of computads for analytic monads were established. In particular, it was claimed that computads form a presheaf topos. This statement in the case of Street’s $2$-computads was proved by Schanuel and then reproved by Carboni and Johnstone [@CJ]. Unfortunately, the proof we gave in [@BatC] and [@BatP] turned out to be incorrect. In [@MZ] Makkai and Zawadowski observed that the category of Street’s $3$-computads cannot be a presheaf topos. 
In this paper we study this question more carefully. We find a sufficient condition under which computads for a given analytic monad on globular sets do form a presheaf category. The condition is given in terms of a sequence of symmetric operads in the category of sets which we can construct from the analytic monad. We call this sequence the sequence of [*slices of the operad*]{}. We also show that if the slices are [*normalised*]{} then the condition is even necessary. We also give examples of monads for which this condition is satisfied. A surprising result is that $n$-computads for weak $n$-categories do form a presheaf category for any $n$. This result is also true for $3$-computads for Gray-categories. It seems to us that the slices of operads are closely related to the coherence problem for weak $n$-categories and we suggest a couple of conjectures about it in section \[slice\]. [**Acknowledgements.**]{} I would like to thank Ross Street for stimulating discussion during my work on this paper. I am also grateful to Michael Makkai and Marek Zawadowski for informing me about their example, which was a starting point for this work. Finally, I acknowledge the financial support of the Scott Russell Johnson Memorial Foundation and the Macquarie University Research Committee. Computads. ========== By an $n$-globular (globular if $n=\omega$) set we mean a sequence (infinite if $n=\omega$) of sets $$X_{0},X_{1},\ldots,X_{k},\ldots, X_n$$ together with source and target maps $$s_{r-1},t_{r-1}:X_{r}\longrightarrow X_{r-1}$$ satisfying the equations $$s_{r-1}\cdot s_{r} = s_{r-1}\cdot t_{r} \ , \ t_{r-1}\cdot s_{r} = t_{r-1}\cdot t_{r}.$$ The set $X_r$ is called the set of $r$-cells of $X$. Sometimes we will also use the notation $(X)_r$ for this set. Every $(n-1)$-globular set can be considered as an $n$-globular set with an empty set of $n$-cells. 
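A tiny computational sketch (ours, not from the paper) may help fix the definition: a finite $2$-globular set encoded as dictionaries of source/target maps, with a check of the globular identities $s_{r-1}\cdot s_{r} = s_{r-1}\cdot t_{r}$ and $t_{r-1}\cdot s_{r} = t_{r-1}\cdot t_{r}$. The cell names are of course arbitrary.

```python
# A finite 2-globular set: two 0-cells x, y; two parallel 1-cells f, g : x -> y;
# one 2-cell a : f => g.  s[r] and t[r] send r-cells to (r-1)-cells.
X = {0: ['x', 'y'], 1: ['f', 'g'], 2: ['a']}
s = {1: {'f': 'x', 'g': 'x'}, 2: {'a': 'f'}}
t = {1: {'f': 'y', 'g': 'y'}, 2: {'a': 'g'}}

def is_globular(X, s, t):
    """Check s_{r-1}.s_r = s_{r-1}.t_r and t_{r-1}.s_r = t_{r-1}.t_r."""
    for r in X:
        if r < 2:
            continue  # the identities involve two consecutive dimensions
        for c in X[r]:
            if s[r-1][s[r][c]] != s[r-1][t[r][c]]:
                return False
            if t[r-1][s[r][c]] != t[r-1][t[r][c]]:
                return False
    return True

print(is_globular(X, s, t))  # True: a 2-cell must have parallel source and target
```

The identities say exactly that the source and target of any $r$-cell are parallel $(r-1)$-cells, which is what makes the pasting diagrams of higher category theory well formed.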
So we have a chain of inclusion functors $$Set = Glob_0\subseteq Glob_1 \subseteq \ldots \subseteq Glob_k \subseteq Glob_{k+1} \ldots \subseteq Glob$$ and each of the inclusion functors $$L_k:Glob_k \longrightarrow Glob_n$$ has a right adjoint $$tr_{k}: Glob_{n} \rightarrow Glob_k .$$ Let $A=(A,\mu,\epsilon)$ be a finitary monad on $Glob$. We denote by $A_n$ the $n$-truncation of $A$, i.e. the restriction of $A$ to the category $Glob_n$ of $n$-globular sets. The category of algebras of $A_n$ will be denoted by $Alg_n$ and the corresponding forgetful functor will be denoted by $$W_n:Alg_n \longrightarrow Glob_n .$$ We now make the following inductive definition [@BatC]: The category $Comp_0$ of $A_{0}$-computads is $Glob_0$. The functors $${\cal W}_0 = W_{0}:Alg_0 \rightarrow Comp_0$$ $${\cal F}_{0}= F_{0}:Comp_{0}\rightarrow Alg_{0}$$ are the forgetful and free $A_{0}$-algebra functors, respectively. Let us suppose now that the category $Comp_{n-1}$ of $A_{n-1}$-computads is already defined together with two functors: $${\cal W}_{n-1}:Alg_{n-1}\rightarrow Comp_{n-1}$$ $${\cal F}_{n-1}:Comp_{n-1}\rightarrow Alg_{n-1}$$ such that ${\cal F}_{n-1}$ is left adjoint to ${\cal W}_{n-1}$. An $A_{n}$-computad $ \cal C$ is a triple $(C,\phi,{ \cal C}')$ consisting of an $n$-globular set $C$, an $A_{n-1}$-computad ${ \cal C}'$ and an isomorphism $$\phi:W_{n-1}({\cal F}_{n-1}{ \cal C}')\rightarrow {tr}_{n-1}C$$ in $Glob_{n-1}$. Let $G$ be an object of $Alg_{n}$. The counit of the adjunction ${\mbox{$\cal F$}}_{n-1}\dashv {\mbox{$\cal W$}}_{n-1}$ gives a morphism $$r_{n-1}:{\mbox{$\cal F$}}_{n-1}{\mbox{$\cal W$}}_{n-1}{\mbox{$tr$}}_{n-1}G\rightarrow {\mbox{$tr$}}_{n-1}G.$$ Define an $n$-globular set ${\mbox{$\mathcal G$}}$ in the following way. 
The $(n-1)$-skeleton of ${\mbox{$\mathcal G$}}$ coincides with $W_{n-1}{\mbox{$\cal F$}}_{n-1}{\mbox{$\cal W$}}_{n-1}{\mbox{$tr$}}_{n-1}G$ and $${\mbox{$\mathcal G$}}_{n}= \{(\xi,a,\eta) \in {\mbox{$\mathcal G$}}_{n-1}\times G_{n}\times {\mbox{$\mathcal G$}}_{n-1} \ | \ s_{n-2}\xi = s_{n-2}\eta, \ t_{n-2}\xi = t_{n-2}\eta ,$$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ s_{n-1}a=r_{n-1}(\xi), \ t_{n-1}a = r_{n-1}(\eta) \}.$$ Define $$s_{n-1}(\xi,a,\eta) = \xi \ , \ t_{n-1}(\xi,a,\eta) = \eta .$$ Then put $${\mbox{$\cal W$}}_{n}G=({\mbox{$\mathcal G$}},id, {\mbox{$\cal W$}}_{n-1}{\mbox{$tr$}}_{n-1}G).$$ For an $A_{n}$-computad ${\mbox{$\cal C$}}=(C,\phi,{ {\mbox{$\cal C$}}}'), $ define $$V_{n}({\mbox{$\cal C$}}) = C$$ and $V_{0}= id$ for $n=0$. Define a natural transformation $$\Theta_{n}: V_{n}{\mbox{$\cal W$}}_{n}\rightarrow W_{n},$$ to be the morphism of $n$-globular sets which coincides with $$W_{n-1}r_{n-1}:W_{n-1}{\mbox{$\cal F$}}_{n-1}{\mbox{$\cal W$}}_{n-1}{\mbox{$tr$}}_{n-1}G\rightarrow W_{n-1}{\mbox{$tr$}}_{n-1}G$$ up to dimension $n-1$ and has $$\Theta_{n}(\xi,a,\eta)=a$$ in dimension $n$. Let us define a new monad $I_A$ on globular sets by means of the following pushout. [pushout square diagram] The algebras of $I_A$ are globular sets together with an $A_{n-1}$-algebra structure on their $(n-1)$-truncations. Notice that the categories of $A_n$-computads and $(I_A)_n$-computads are canonically isomorphic. Moreover, the functor $V$ together with the $A_{n-1}$-algebra structure on ${\mbox{$tr$}}_{n-1}VC \simeq W_{n-1}({\cal F}_{n-1}{ \cal C}')$ is left adjoint to the forgetful functor from the category of $I_A$-algebras to $A_n$-computads and $\Theta_n$ is the counit of this adjunction. 
So, the functor ${\cal F}_n$ is canonically isomorphic to a composite of $V$ and $\Gamma$ which is left adjoint to the restriction functor $$l^{\star}: Alg_n \longrightarrow Alg_{I_A}$$ induced by an obvious morphism of monads $$l:I_A\rightarrow A_n.$$ This left adjoint exists due to the finitary assumption [@Kelly]. We can also talk about $\omega$-computads. Recall [@BatC] that the $n$-truncation of an $(n+1)$-computad $(C,\phi,{\mbox{$\cal C$}})$ is the $n$-computad ${\mbox{$\cal C$}}$. Let $A$ be a finitary monad on $Glob$. An ${\omega}$-computad for $A$ is a sequence $ {\mbox{$\cal C$}}_n$ of $n$-computads for $A$ together with a sequence of isomorphisms $$c_n: tr_n({\mbox{$\cal C$}}_{n+1}) \rightarrow {\mbox{$\cal C$}}_n .$$ A morphism of $\omega$-computads is a sequence of morphisms of $n$-computads which commutes in the obvious sense with the structure isomorphisms. We use the techniques of [@Kelly] for an explicit construction of the left adjoint $\Gamma$ into the category of $A_n$-algebras. Let $X=M_0$ be an $I_A$-algebra and let $M_1$ be the following coequalizer in $Glob_n$ [coequaliser diagram] where $k$ is the $I_A$-algebra structure morphism for $X$ and $\eta$ is the composite $\mu\cdot A_n(l)$. Notice that $k$ is an identity in dimension $n$. Suppose that the globular set $M_{r}$, together with the morphism $$\pi_r: A_n M_{r-1} \rightarrow M_r,$$ have already been constructed. Then define $M_{r+1}$ to be the following coequalizer. 
[coequaliser diagram] Then we have the following sequence of morphisms $$M_{0}\stackrel{\epsilon_{n}}{\longrightarrow} A_{n}M_{0} \stackrel{\pi_{1}}{\longrightarrow} M_{1} \stackrel{\epsilon_{n}}{\longrightarrow} A_{n}M_{1} \stackrel{\pi_{2}}{\longrightarrow} \ldots$$ We denote its colimit by $M_{\infty}X$. According to [@Kelly], $M_{\infty}X$ has a natural $A_n$-algebra structure given by $\pi_{\infty}= \mbox{colim}\hspace{0.5mm} \pi_r$, and this is indeed the free $A_n$-algebra generated by $X$. \[slice\]Suspensions and slices of globular operads. ==================================================== Every strict $\omega$-category has an underlying globular set. This functor has a left adjoint $$D:Glob \longrightarrow \omega\mbox{\it -Cat} .$$ We will also denote by $(D,\mu,\epsilon)$ the monad generated by this adjunction (notice that in [@BatN] this monad was denoted by $D_s$). In [@BatN] a description of $D$ in terms of plain trees was presented. Recall [@StP] that a natural transformation $p:R\rightarrow Q$ between two functors is called [*cartesian*]{} if for every morphism $f:X\rightarrow Y$ the naturality square [naturality square diagram] is a pullback. Recall also that an endofunctor $A$ on $Glob$ is called [*analytic*]{} if it is equipped with a cartesian natural transformation (augmentation) $p:A\rightarrow D$. Such an endofunctor is determined up to isomorphism by a collection $$p(1):A(1)\rightarrow D(1),$$ where $1$ is the terminal globular set, and it preserves connected limits. 
A monad on $Glob$ is called analytic if its functor part is analytic and its unit and multiplication are cartesian natural transformations. The category of analytic monads is equivalent to the category of $\omega$-operads in $Span$. The following definition is due to Joyal [@J]. An endofunctor ${\mbox{\LARGE\it a}}$ on $Set$ is called [*analytic*]{} if it can be represented as a ‘Taylor series’ $${\mbox{\LARGE\it a}}(X)= \sum_{n\ge 0} A[n]\times_{_{\Sigma_n}} X^n ,$$ where $A[n], n\ge 0$, is a symmetric collection, i.e. a family of sets equipped with an action of the symmetric group $\Sigma_n$ on $A[n]$. The analytic functors are closed under composition and the monoids in this monoidal category are called [*symmetric operads*]{}. Symmetric operads are a special case of algebraic theories in $Set$. Another special case of algebraic theories, called [*strongly regular*]{} theories, was considered by Carboni and Johnstone in [@CJ]. These are theories which can be given by equations without permutations and repetitions of symbols. For example, the theory of monoids is such a theory, but the theory of commutative monoids is not. In [@CJ] a characterisation of strongly regular theories is established. They are exactly the theories given by [*nonsymmetric operads*]{} in $Set$. The latter are monoids with respect to composition in the monoidal category of endofunctors of the form $$\hspace{40mm} {\mbox{\LARGE\it a}}(X)= \sum_{n\ge 0} A[n]\times X^n , \hspace{40mm} *$$ where $A[n], n\ge 0,$ is a nonsymmetric collection, i.e. just a sequence of sets. We will call the functors of the form $(*)$ [*strongly analytic*]{}. It was also proved in [@CJ] that strongly analytic functors preserve connected limits. An $n$-globular set $X$ is called $k$-terminal if its $k$-th truncation is a terminal $k$-globular set. An algebra of a monad on $n$-globular sets is called $k$-terminal if its underlying globular set is $k$-terminal. 
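To make the two shapes of ‘Taylor series’ concrete, here is a small sketch (ours, not from the paper) in which every $A[n]$ is a single point: the strongly analytic series $\sum_n X^n$ then returns words in $X$ (the free monoid), while quotienting each $X^n$ by the $\Sigma_n$-action returns multisets (the free commutative monoid), whose theory is precisely the standard example of one that is not strongly regular.

```python
from itertools import product, combinations_with_replacement

# Truncated 'Taylor series' of an endofunctor of Set, with each A[n] a point.
# strongly_analytic: sum_n X^n  -> words of length <= N (free monoid on X).
# analytic: sum_n X^n / Sigma_n -> multisets of size <= N (free commutative
# monoid on X); the quotient is realised by listing sorted tuples only.
def strongly_analytic(X, N):
    return [w for n in range(N + 1) for w in product(X, repeat=n)]

def analytic(X, N):
    return [m for n in range(N + 1)
            for m in combinations_with_replacement(sorted(X), n)]

print(len(strongly_analytic(['a', 'b'], 2)))  # 1 + 2 + 4 = 7 words
print(len(analytic(['a', 'b'], 2)))           # 1 + 2 + 3 = 6 multisets
```

Note that in degree 2 the quotient identifies the words $ab$ and $ba$, which is exactly the permutation that strong regularity forbids; this is the combinatorial shadow of the free/non-free $\Sigma_n$-action discussed below.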
We denote by $Glob_n^{(k)}$ the category of $k$-terminal $n$-globular sets. Clearly, $Glob_n^{(k)}$ is isomorphic to $Glob_{n-k-1}.$ For a monad $A$ on $Glob$ we denote by $Alg_n^{(k)}$ the category of $k$-terminal algebras of $A_n$. We have a restriction of the forgetful functor $W$ $$W^{(k)}:Alg_n^{(k-1)} \rightarrow Glob_n^{(k-1)},\ k\ge 1 .$$ It is not hard to prove that this functor is monadic, at least for a finitary monad $A$ [@W]. Hence, we have a monad $S^k A_n $ on $Glob_{n-k}$ such that its category of algebras is equivalent to $Alg_n^{(k-1)}$. We also put $S^0 A = A$. [@W] $S^k A_n$ is called the $k$-fold suspension of $A_n$. Now if $k=n$ then $S^k A_k $ is a monad on $Glob_0 = Set$. Proposition 2.1 and Theorem 10.2 of [@BatEH] assert that this monad is actually a symmetric operad on $Set$. The symmetric operad $S^k A_k $ will be called the $k$-th slice of $A$. We will denote this operad by ${\mbox{$\cal P$}}_k(A)$. For any operad $A$ its $0$-slice is given by a symmetric operad whose underlying collection consists of a monoid $A[U_0]$ in dimension $1$ and empty sets in the other dimensions. The tree $U_0$ is the only tree of height $0$. The first slice of the terminal operad $D$ is the free monoid operad. All the higher slices are the free commutative monoid operad. It is proved in [@BatEH Theorem 10.1] that the first slice of an operad is always a free symmetric operad on some nonsymmetric operad [@BatEH] and, hence, is always a strongly regular theory. For the bicategory operad, the first slice is the nonsymmetric operad freely generated by a pointed collection which has exactly one operation in dimensions $0,1,2$. The second slice is the free commutative monoid operad. For the Gray operad $G$ [@BatN] the first slice is the free monoid operad, and the second slice is the double-monoid-with-common-unit operad, i.e. a set with two independent monoid structures and a common unit. So ${\mbox{$\cal P$}}_2(G)$ is a strongly regular theory. 
The third slice is the free commutative monoid operad. For a free operad on a globular collection, the slices are free symmetric operads on some nonsymmetric collections and are, therefore, strongly regular theories. The categories of $\omega$-computads for such operads were used in [@BatP]. For the universal contractible $\omega$-operad $K$ from [@BatN] the slices are free symmetric operads on nonsymmetric collections. This can easily be seen from the construction of $K$ given in [@BatN]. Hence, all the slices of $K$ are strongly regular theories. Recall that the algebras of $K$ are by definition weak $\omega$-categories. For the universal contractible $n$-operad, its slices up to dimension $n-1$ are free symmetric operads on some nonsymmetric collections, but its $n$-th slice is the free commutative monoid operad. The algebras of this operad are weak $n$-categories. In the theory of symmetric operads a very important condition is freeness of the action of the symmetric groups. For example, $E_{\infty}$-operads are exactly those operads which are contractible and have a free action of the symmetric groups. If the action is not free it usually means that the corresponding algebras have some homotopy degeneracy, like the vanishing of some Whitehead products or Postnikov invariants. From the examples above we see that slices carry with them some information about the homotopy behaviour of the higher operads. It seems to us that the condition for slices to be strongly regular theories is the correct analogue of the condition of freeness of the action of the symmetric groups. So our conjecture is Suppose that an $n$-operad $A$ is contractible, contains a system of binary compositions [@BatN], and all its slices up to dimension $n-1$ are strongly regular theories. Then every weak $n$-category is weakly equivalent to an $A$-algebra. At the time of writing it is not completely clear what the right notion of ‘semistrict’ $n$-category should be. 
The desirable properties are: - every weak $n$-category must be equivalent to a semistrict one; - the notion of ‘semistrict’ $n$-category is ‘minimal’ with the above property. In dimension $2$ this is just the notion of strict $2$-category. In dimension $3$ it is the notion of Gray-category [@GPS]. Crans has a candidate for dimension $4$ and some ideas about higher dimensions [@Crans]. Here we risk suggesting a conjecture. There is a unique contractible $n$-operad $G_n$ with the property that ${\mbox{$\cal P$}}_k(G_n), \ 0\le k \le n-1,$ is the free $k$-fold monoid operad. A semistrict $n$-category is an algebra for this operad. Weak limits and coequalisers ============================ This section has a technical character and contains some elementary facts about weak pullbacks and coequalisers which we will need in the next section. Let $F:\Lambda\rightarrow C$ be a functor between two categories and let $W\stackrel{p_{\lambda}}{\rightarrow} F(c_{\lambda})$ be a cone over $F$. It is called a weak limit of $F$ if for any other cone $V\stackrel{q_{\lambda}}{\rightarrow}F(c_{\lambda})$ there exists a morphism $r:V\rightarrow W$ such that $q_{\lambda}=p_{\lambda}\cdot r$. It is obvious that if the limit of a functor $F$ exists then it is a retract of any weak limit of $F$. Moreover, in order to prove that $W$ is a weak limit it is enough to construct a section of the canonical morphism from $W$ to the limit of $F$ which makes some obvious diagrams commutative. We will use this simple observation extensively. Following [@J] and [@W] we call a natural transformation between two functors [*weakly cartesian*]{} provided every naturality square is a weak pullback. \[wcart\] Suppose [diagram: a coequaliser $B\rightrightarrows A\stackrel{p}{\rightarrow}C$ with parallel maps $\chi,\zeta$] is a coequaliser of two weakly cartesian transformations between functors $A,B:\Lambda\rightarrow Set$. Then $p$ is weakly cartesian. [[**Proof.  
**]{}]{}Let $f:X\rightarrow Y$ be a map of sets and let $P$ be the pullback of $C(f)$ and $p_Y$ i.e. $$P = \{(c,a) | C(f)(c)= p_{Y}(a) \}.$$ We have to prove that there is a section $s$ of the canonical map $A(X)\rightarrow P$ which makes the following diagram commutative [commutative diagram] Let us take $(c,a)\in P$ and let $a'\in A(X)$ be such that $p_X(a')= c$. Put $y=A(f)(a')$. Then $p_Y(y) = p_Y(a)$. The last equality means that $y$ and $a$ are equivalent with respect to the equivalence relation generated by $\chi$ and $\zeta$. Without loss of generality we can assume that there is a finite sequence $b_1,\ldots,b_k$ of elements of $B(Y)$ such that $$y=\chi(b_1)\ , \ a = \zeta(b_k), \ \zeta(b_i) = \chi(b_{i+1}).$$ Since $\chi$ is weakly cartesian we can find a $b'_1\in B(X)$ such that $B(f)(b'_1) = b_1$ and $\chi(b'_1)= a'$. Then consider the element $\zeta(b'_1)$. We have $p_X(\zeta(b'_1))= c $ and $$A(f)(\zeta(b'_1)) = \zeta(B(f)(b'_1)) = \zeta(b_1) = \chi(b_2).$$ Therefore, we can find $b'_2$ such that $B(f)(b'_2) = b_2$ and $\chi(b'_2) = \zeta(b'_1).$ Then again $p_X(\zeta(b'_2))= c $ and $$A(f)(\zeta(b'_2)) = \zeta(B(f)(b'_2)) = \zeta(b_2) = \chi(b_3).$$ We can continue this process and finally we get $$p_X(\zeta(b'_{k}))= c$$ and $$A(f)(\zeta(b'_{k})) = \zeta(b_k) = a.$$ Hence, we can put $s(c,a)= \zeta(b'_k) $. The lemma is therefore proved. \[sc\] Sequential colimits in $Set$ preserve weak pullbacks. [[**Proof.  
It is well known that sequential colimits in $Set$ preserve pullbacks. So it is enough to prove that in a sequential colimit of weak pullbacks we can choose the sections of the retractions from pullbacks to weak pullbacks naturally. Let us fix a section $q_i:P_i\rightarrow W_i$ of the canonical retraction $W_i\rightarrow P_i$ for every $i\ge 0$. We will construct a new section $s_i$ inductively. We take $s_0=q_0$. Now suppose the section $s_i$ in the $i$-th weak pullback is already constructed. Then we can construct $s_{i+1}$ in the following way. Let $a\in P_{i+1}$ belong to the image of the colimit map $\lambda_i:P_i\rightarrow P_{i+1}$ and let us choose a $b\in P_i$ such that $\lambda_i (b) = a.$ Then we put $s_{i+1}(a)= w_i(s_i(b)),$ where $w_i:W_i\rightarrow W_{i+1}$ is the corresponding colimit map. If $a$ does not belong to $im(\lambda_i)$ then we put $s_{i+1}(a)=q_{i+1}(a).$ The sections $s_i$ obviously induce a section $$\mbox{colim}\hspace{0.3mm} P_i \rightarrow \mbox{colim}\hspace{0.3mm} W_i$$ of the canonical map $\mbox{colim}\hspace{0.3mm} W_i \rightarrow \mbox{colim}\hspace{0.3mm} P_i$, which completes the proof. $\blacksquare$

By a similar diagram-chase method one can easily prove the following lemma.
\[mono\] Suppose that in a commutative diagram of coequalisers [diagram: two coequaliser rows joined by vertical maps $\psi$ and $\zeta$, inducing $\phi$ on the coequalisers] both right commutative squares are weak pullbacks and $\psi$ and $\zeta$ are monomorphisms. Then the colimiting map $\phi$ is a monomorphism. $\blacksquare$

The following lemma is obvious.

If a commutative square is weakly cartesian and one of the limiting maps is a monomorphism then the square is cartesian. $\blacksquare$

Analytic functors on $Glob$ preserve connected weak limits.

**Proof.** Let $A$ be an analytic functor on $Glob$ and let $C$ be a weak connected limit of a diagram of globular sets $F:\Lambda \rightarrow Glob.$ Then there is a retraction $$r:C\rightarrow \lim_{\Lambda}F.$$ Hence, we have a retraction $$A(r):A(C)\rightarrow A(\lim_{\Lambda}F) \simeq \lim_{\Lambda}A(F)$$ which proves the lemma. $\blacksquare$

\[mixture\] Let $\phi: A \rightarrow B$ be a natural transformation in $Glob_n$ such that $tr_{n-1}\phi$ is cartesian and $(\phi)_n:(A)_n \rightarrow (B)_n$ is weakly cartesian in $Set$.
Then $\phi$ is weakly cartesian.

**Proof.** Let $f:X\rightarrow Y$ be a morphism of globular sets and let $P$ be a pullback of $\phi$ and $B(f)$. Then we can assume that $tr_{n-1}P = tr_{n-1}A$. Let $\psi: (P)_n \rightarrow (A(X))_n$ be a section of the canonical retraction $(r)_n:(A(X))_n\rightarrow (P)_n$, which exists due to the weak cartesianness of $(\phi)_n$. We have to prove that $\psi$ respects the source and target operators. Indeed, consider the map $\alpha= s_{n-1}(\psi):(P)_n\rightarrow (A(X))_{n-1}$. Then we have $$s_{n-1}(p_A)_n = s_{n-1}(p_A)_n((r)_n(\psi)) = s_{n-1}((Af)_n(\psi))= (Af)_{n-1}(\alpha).$$ Analogously $$s_{n-1}(p_B)_n = (\phi)_{n-1}(\alpha) ,$$ where $p_A, p_B$ are the canonical projections from the pullback $P$. Since $(Af)_{n-1}$ and $(\phi)_{n-1}$ are also projections of a pullback, we have by its universal property that $\alpha$ must coincide with $s_{n-1}:(P)_n\rightarrow (A(X))_{n-1}$. So $\psi$ commutes with the source operator. Analogously it commutes with the target operator. $\blacksquare$

\[map1\] Suppose $\phi: a \rightarrow b$ is a weakly cartesian transformation between two strongly analytic functors in $Set$; then it is cartesian.

**Proof.**
By [@CJ] and a theorem of Joyal [@J; @W] we can assume that $a$ and $b$ are both given by free symmetric collections $A[n] = \alpha[n]\times \Sigma_n$, $B[n]= \beta[n]\times \Sigma_n$, and that $\phi$ is given by equivariant maps of symmetric collections $$\phi[n]:\alpha[n]\times \Sigma_n \rightarrow \beta[n]\times \Sigma_n \ , n \ge 0 \ .$$ The map $\phi[n]$ is determined in its turn by a map of nonsymmetric collections $$\psi[n]:\alpha[n] \rightarrow \beta[n]\times \Sigma_n.$$ Then the natural transformation $\phi$ is the coproduct over $n$ of the composites $$(\alpha[n]\times \Sigma_n)\times_{_{\Sigma_n}} X^n \simeq \alpha[n]\times X^n \stackrel{\psi[n]\times 1}{-\!\!\!\longrightarrow} \beta[n]\times \Sigma_n \times X^n \stackrel{1\times k}{\longrightarrow} \beta[n]\times X^n \simeq (\beta[n]\times \Sigma_n)\times_{_{\Sigma_n}} X^n ,$$ where $k$ is the action of $\Sigma_n$ on $X^n$. Then for the unique map $X\rightarrow 1$ we have the corresponding commutative naturality diagram [diagram: the naturality squares of $\phi$ over $X\rightarrow 1$]. In this diagram both the left and the right squares are obviously pullbacks; hence, so is the big square. This is enough to imply that the transformation $\phi$ is cartesian.
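The distinction between weakly cartesian and cartesian squares used throughout these proofs is concrete in $Set$: a commuting square is weakly cartesian iff the canonical comparison map onto the honest pullback is surjective, and cartesian iff it is bijective. A toy checker, assuming finite sets with maps given as Python dicts (all names here are illustrative, not from the text):

```python
def pullback(A, B, f, g):
    """The honest pullback in Set: pairs of elements agreeing in the target."""
    return {(a, b) for a in A for b in B if f[a] == g[b]}

def comparison(W, p, q):
    """Values of the canonical map W -> A x B, w |-> (p(w), q(w))."""
    return [(p[w], q[w]) for w in W]

def is_weakly_cartesian(W, p, q, A, B, f, g):
    # weakly cartesian = comparison map is surjective onto the pullback
    return set(comparison(W, p, q)) == pullback(A, B, f, g)

def is_cartesian(W, p, q, A, B, f, g):
    # cartesian = comparison map is bijective onto the pullback
    img = comparison(W, p, q)
    return len(img) == len(set(img)) and set(img) == pullback(A, B, f, g)

# A square whose apex has a redundant element is weakly cartesian
# (every compatible pair is hit) but not cartesian (not hit uniquely).
A, B = {1, 2}, {'x', 'y'}
f = {1: '*', 2: '*'}; g = {'x': '*', 'y': '*'}   # both collapse to a point
W = [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y'), (2, 'y')]
p = {i: W[i][0] for i in range(len(W))}
q = {i: W[i][1] for i in range(len(W))}
assert is_weakly_cartesian(range(len(W)), p, q, A, B, f, g)
assert not is_cartesian(range(len(W)), p, q, A, B, f, g)
```

The surjection in `is_weakly_cartesian` is exactly the existence of a (not necessarily unique) section of the retraction onto the pullback that the lemmas above manipulate.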
$\blacksquare$

\[mon2\] Let $A$ be an analytic functor on $Glob$ and let $f:X\rightarrow Y$ be a map of globular sets such that for a fixed $n\ge 0$ the map $(f)_n$ is a monomorphism. Then $(A(f))_n$ is a monomorphism.

**Proof.** Since $A$ is strongly analytic it is sufficient to prove the lemma for the case $A=D$. Then it is obvious from the construction of $D$ given in [@BatN]. $\blacksquare$

Computads and slices of operads.
================================

Let $$k_n: A_n({\mbox{$\cal F$}}_{n}) \rightarrow {\mbox{$\cal F$}}_{n}$$ be the natural transformation which is given on a computad ${\mbox{$\cal C$}}$ by the structure map of the algebra ${\mbox{$\cal F$}}_n({\mbox{$\cal C$}})$.

Suppose that for an $n$-operad $A$ all the slices ${\mbox{$\cal P$}}_k(A), \ 0\le k \le n, $ are strongly regular theories. Then

- $k_n$ is a cartesian natural transformation;

- ${\mbox{$\cal F$}}_n$ preserves connected limits.

**Proof.** We will prove the theorem by induction. If $n=0$ the proposition is obvious because $0$-operads are just monoids and $0$-computads are sets. We assume, therefore, that the natural transformation $$tr_{n-1}k_n = k_{n-1}: A_{n-1}{\mbox{$\cal F$}}_{n-1} \rightarrow {\mbox{$\cal F$}}_{n-1}$$ is cartesian and that ${\mbox{$\cal F$}}_{n-1}$ preserves connected limits.
Now we can use Kelly’s method to construct the left adjoint to the restriction functor $$Alg_n^{(n-1)} \rightarrow Alg^{(n-1)}_{I_A}.$$ First of all, observe that for the operad $I_A$ the natural transformation $$\kappa:I_A (-) \rightarrow (-)$$ is cartesian on the category of $(n-1)$-terminal $I_A$-algebras, because $tr_{n-1}\kappa $ is the constant map $$1:A_{n-1}(1)\rightarrow 1$$ and $\kappa$ is an identity in dimension $n$. Hence, $A(\kappa)$ is cartesian. By Lemma \[wcart\] the resulting colimit map $$\pi_1:A_n M_0 \rightarrow M_1$$ is weakly cartesian in dimension $n$ and, therefore, by Lemma \[mixture\] $\pi_1$ is weakly cartesian, because $tr_{n-1}\pi_1 = 1 .$ Analogously, in Kelly’s construction all the $\pi_r$ are weakly cartesian transformations and all the $M_r$ preserve connected limits. The last sequential colimit of Kelly’s construction $$\mbox{colim}\hspace{0.5mm} \pi_r:A_n M_{\infty}\rightarrow M_{\infty}$$ is weakly cartesian by Lemmas \[mixture\] and \[sc\]. This map is obviously the map $$j : A_n({\mbox{$\cal P$}}_{n}) \longrightarrow {\mbox{$\cal P$}}_{n}$$ given on $X$ by the structure morphism of the algebra ${\mbox{$\cal P$}}_{n}(X)$. Since ${\mbox{$\cal P$}}_{n}$ is a strongly regular theory, the functor $(A_n(P_{n}))_n$ is strongly analytic as well. Hence, by Lemma \[map1\] $j$ is even cartesian. Now let $$\widehat{(-)}: Alg_{I_A}\rightarrow Alg^{(n-1)}_{I_A}$$ be the functor which assigns to an $I_A$-algebra $X$ the $(n-1)$-terminal $I_A$-algebra $\widehat{X}$ with $(\widehat{X})_n = (X)_n$. We obviously have a natural morphism of $I_A$-algebras $X\rightarrow \widehat{X}$.
For a computad ${\mbox{$\cal C$}}$ we therefore have a coequaliser diagram [diagram: a map between two coequaliser diagrams]. In this diagram the two right vertical morphisms are monomorphisms in dimension $n$ by Lemma \[mon2\] and, therefore, the colimiting map is a monomorphism in dimension $n$ by Lemma \[mono\]. In addition, the left square is a weak pullback by Lemma \[wcart\]. What we have here is a map of the first stages of the Kelly machine for $V{\mbox{$\cal C$}}$ and $\widehat{V{\mbox{$\cal C$}}}$ respectively. Continuing this process, we obtain as the output of the Kelly machine in dimension $n$ a weak pullback [diagram: a commutative square] with vertical morphisms being monomorphisms. So it is a pullback. By a similar argument, the natural transformation $$({\mbox{$\cal F$}}_n{\mbox{$\cal C$}})_n \longrightarrow (A_n({\mbox{$\cal P$}}_{n}\widehat{V{\mbox{$\cal C$}}}))_n$$ is cartesian. For a computad morphism $f:{\mbox{$\cal C$}}\rightarrow {\mbox{$\cal C$}}'$ we now have the following commutative cube.
[diagram: a commutative cube] In this diagram the front and rear vertical squares are pullbacks. The bottom horizontal square is a pullback because $j_n$ is cartesian. Hence, the top horizontal square is a pullback in dimension $n$. It is also a pullback after truncation, by our inductive assumption. So we have proved that $k_n$ is cartesian. Finally, we have to prove that ${\mbox{$\cal F$}}_n$ preserves connected limits. To do this it is sufficient to prove the result in dimension $n$. Let ${\mbox{$\cal C$}}$ be a connected limit of computads ${\mbox{$\cal C$}}_{\lambda}$ and let $c_{\lambda}:{\mbox{$\cal C$}}\rightarrow {\mbox{$\cal C$}}_{\lambda}$ be the canonical projection. So we have a cartesian square [diagram: a cartesian square]. But $( {\mbox{$\cal P$}}_{n}\widehat{V{\mbox{$\cal C$}}})_n$ is naturally isomorphic to $( {\mbox{$\cal P$}}_{n}\lim(\widehat{V{\mbox{$\cal C$}}_{\lambda}}))_n $ because $V$ obviously preserves limits in dimension $n$.
So, after passing to the limit we have a pullback [diagram: a pullback square] where the bottom arrow is an isomorphism because ${\mbox{$\cal P$}}_{n}$ preserves connected limits. So the top arrow is an isomorphism as well, and this completes the proof of the theorem. $\blacksquare$

Suppose that for an operad $A$ the slices ${\mbox{$\cal P$}}_{k}(A), 0\le k\le n-1,$ are strongly regular theories. Then the category of $n$-computads of $A$ is a presheaf topos.

**Proof.** The proof generalizes Example 3.6 from [@CJ]. We use induction on $n$. If $n=0$ the statement is true by definition. Suppose we know that the category $Comp_{n-1}$ is a presheaf topos. Consider the functor $$T_{n-1}: Comp_{n-1}\longrightarrow Set \ ,$$ which assigns to a computad ${\mbox{$\cal C$}}$ the set of parallel pairs of $(n-1)$-cells from $W_{n-1}{\mbox{$\cal F$}}_{n-1}{\mbox{$\cal C$}}$. Then we have an equivalence of categories $$Comp_{n} \sim Set\downarrow T_{n-1}.$$ Now we prove that $T_{n-1}$ preserves connected limits. Notice that $T_{n-1}$ is isomorphic to the following composite $$Comp_{n-1}\stackrel{\scriptscriptstyle {\cal{F}}_{n-1}}{-\!\!\!\longrightarrow} Alg_{n-1} \stackrel{Alg_{n-1}(A_{n-1}S^{n-1},-)}{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!- \!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow}Set$$ where $S^{n-1}$ is the $(n-1)$-globular set which has two elements $-$ and $+$ in every dimension and $$s(-) = s(+) = - \ , \ t(-) = t(+) = + .$$ By our assumption, ${\mbox{$\cal F$}}_{n-1}$ preserves pullbacks (indeed, wide pullbacks), so $T_{n-1}$ does as well. According to the results of [@CJ] this is sufficient for $Set\downarrow T_{n-1}$ to be a presheaf topos.
$\blacksquare$

The following categories of computads are presheaf toposes:

- the category of Street $2$-computads (Schanuel, Carboni-Johnstone [@CJ]);

- the category of Gray-computads [@MT] and the category of $3$-computads for Gray-categories;

- the category of $k$-computads for weak $n$-categories for all $0\le k\le n$;

- the category of $k$-computads for $P$-magmas [@BatP].

**Proof.** See the examples in section \[slice\]. $\blacksquare$

The following theorem extends the example of Makkai-Zawadowski.

Let $A$ be an operad such that its slices ${\mbox{$\cal P$}}_k(A),\ 1\le k\le n-2,$ are normalised in the sense that ${\mbox{$\cal P$}}_k(A)[0]=1$. Then the category of $n$-computads is a presheaf topos if and only if all the slices ${\mbox{$\cal P$}}_k(A),\ 0\le k \le n-1,$ are strongly regular theories.

**Proof.** We only need to prove the ‘only if’ part of the theorem. For this we will show that if there exists a ${\mbox{$\cal P$}}_k(A)$ which is not strongly regular then the category of $k$-computads is not a presheaf topos; this implies that the category of $n$-computads is not a presheaf topos either. So without loss of generality we can assume that the category of $(n-1)$-computads is a presheaf topos but ${\mbox{$\cal P$}}_{n-1}(A)$ is not strongly regular; in particular, it does not preserve connected limits.
Let ${\mbox{\large $\vartheta$}}_{k}, \ 0\le k \le n-2, $ be the $k$-computad defined by induction $${\mbox{\large $\vartheta$}}_{k}= (O_{k-1},id,{\mbox{\large $\vartheta$}}_{k-1})$$ and ${\mbox{\large $\vartheta$}}_0 = 1$, where $O_{k-1}$ is a $(k-1)$-terminal $k$-globular set with an empty set of cells of dimension $k$. For this definition to make sense, we have to prove that $${\mbox{$\cal F$}}_{k}{\mbox{\large $\vartheta$}}_{k} = 1.$$ If $k=0$ this follows from $A_0(1)= 1$. Suppose we have proved it up to dimension $k-1$. Then by applying the Kelly machine we see that the calculation of ${\mbox{$\cal F$}}_{k}{\mbox{\large $\vartheta$}}_{k}$ amounts to the calculation of a free $A_k$-algebra on the $I_{A_k}$-algebra $O_{k-1}$. So the algebra ${\mbox{$\cal F$}}_{k}{\mbox{\large $\vartheta$}}_{k}$ is isomorphic to ${\mbox{$\cal P$}}_k(A)(\emptyset)=1$. Let us consider the full subcategory of $Comp_{n-1}$ consisting of computads ${\mbox{$\cal C$}}$ with $tr_{n-2}{\mbox{$\cal C$}}= {\mbox{\large $\vartheta$}}_{n-2}.$ Obviously, this subcategory is isomorphic to the category of sets. By the above argument, the restriction of ${\mbox{$\cal F$}}_{n-1}$ to this subcategory is isomorphic to ${\mbox{$\cal P$}}_{n-1}(A)$ and, hence, is not connected limit preserving. So the functor $T_{n-1}$ is not connected limit preserving either, and hence $Comp_{n} \sim Set\downarrow T_{n-1}$ cannot be a presheaf topos, by a theorem from [@CJ] again. $\blacksquare$

The category of Street $n$-computads for $n\ge 3$ is not a presheaf topos.

[99]{} Batanin M.A., Monoidal globular categories as a natural environment for the theory of weak $n$-categories, [*Adv. Math.*]{} [**136**]{} (1998), pp. 39-103.
Batanin M.A., The Eckmann-Hilton argument, higher operads and $E_n$-spaces, [*preprint*]{}, 52 pp, http://au.arxiv.org/abs/math.CT/0207281, 2002.

Batanin M.A., Computads for finitary monads on globular sets, [*Contemp. Math.*]{} AMS, 230 (1998), 37-58.

Batanin M.A., On the Penon method of weakening algebraic structures, [*Journal of Pure and Appl. Algebra*]{} [**172**]{} (2002), 1-23.

Batanin M.A., Street R., The universal property of the multitude of trees, [*Journal of Pure and Appl. Algebra*]{} [**154**]{} (2000), 3-13.

Carboni A., Johnstone P., Connected limits, familial representability and Artin glueing, [*Mathematical Structures in Computer Science*]{} [**5**]{} (1995), 441-459.

Crans S., A tensor product for Gray-categories, [*Theory Appl. Categ.*]{}, [**5**]{} (1999), 12-69.

Gordon R., Power A.J., Street R., Coherence for Tricategories, [*Memoirs of the AMS,*]{} [**117**]{}, n.558, 1995.

Joyal A., Foncteurs analytiques et espèces de structures, [*Lecture Notes in Math.*]{}, [**1234**]{} (1991), 126-159.

Kelly G.M., A unified treatment of the transfinite construction for free algebras, free monoids, colimits, associated sheaves, and so on, [*Bull. of the Australian Math. Soc.*]{}, [**22**]{} (1980), 1-85.

Makkai M., Zawadowski M., 3-computads do not form a presheaf category, [*personal letter to M. Batanin*]{}, September 2001.

McIntyre M., Trimble T., Surface diagrams for Gray-categories, (submitted), 1997.

Street R., Limits indexed by category-valued $2$-functors, [*J. Pure and Appl. Algebra*]{}, [**8**]{} (1976), 149-181.

Street R., The petit topos of globular sets, [*Journal of Pure and Appl. Algebra,*]{} [**154**]{} (2000), 299-315.

Weber M., Symmetric Operads for Globular Sets, [*PhD thesis*]{}, Macquarie University, 2001.

[^1]: The author holds the Scott Russell Johnson Fellowship in the Centre of Australian Category Theory at Macquarie University.
--- abstract: 'We study the drag force experienced by an object slowly moving at constant velocity through a 2D granular material consisting of bidisperse disks. The drag force is dominated by force chain structures in the bulk of the system, and thus shows strong fluctuations. We consider the effect of three important control parameters for the system: the packing fraction, the drag velocity and the size of the tracer particle. We find that the mean drag force increases as a power-law (exponent of 1.5) in the reduced packing fraction, $(\gamma - \gamma_c)/\gamma_c$, as $\gamma$ passes through a critical packing fraction, $\gamma_c$. By comparison, the mean drag grows slowly (essentially logarithmically) with the drag velocity, showing a weak rate-dependence. We also find that the mean drag force depends nonlinearly on the diameter, $a$, of the tracer particle when $a$ is comparable to the surrounding particles’ size. However, the system nevertheless exhibits strong statistical invariance in the sense that many physical quantities collapse onto a single curve under appropriate scaling: force distributions $P(f)$ collapse when scaled by the mean force, the power spectra $P(\omega)$ collapse when scaled by the drag velocity, and the avalanche size and duration distributions collapse when scaled by the mean avalanche size and duration. We also show that the system can be understood using simple failure models, which reproduce many experimental observations. These observations include: a power law variation of the spectrum with frequency, characterized by an exponent $\alpha=-2$; exponential distributions for both the avalanche size and duration; and an exponential fall-off at large forces for the force distributions. These experimental data and simulations indicate that fluctuations in the drag force seem to be associated with the formation and breaking of force chains in the system.
Moreover, our simulations suggest that the logarithmic increase of the mean drag force with rate can be accounted for if slow relaxation of the force chain networks is included.' address: 'Department of Physics and Center for Nonlinear and Complex Systems, Duke University, Durham NC, 27708-0305, USA' author: - 'Junfei Geng and R. P. Behringer' title: Slow Drag in 2D Granular Media ---

introduction
============

Granular materials are of great interest for their rich phenomenology and important applications [@reviews]. When subject to external stresses, a dense granular system forms inhomogeneous force chain networks in which only a fraction of the grains carry most of the force [@chains]. The spatial scale of these force chains can extend over many grain diameters, and the chain lengths may be comparable to the system size. The separation between microscopic and macroscopic scales poses a theoretical challenge if one attempts to describe a granular system using a continuum approach. Recently, experimental work by several research groups [@fluct_expts; @mueth_98; @miller_96; @howell_99; @hartley_03; @albert_00] has suggested the importance of strong stress fluctuations in granular systems. The fluctuations, as characterized by the standard deviation or $rms$ of the stress, can often be from one to several times the mean stress. However, questions involving the dynamics, nature, and length/time scales associated with these fluctuations are still poorly understood. An improved understanding of these questions could provide insight into a number of practical applications and such phenomena as earthquakes and avalanches. Another motivation concerns exploring jamming [@liu_98; @cates_98] in granular materials. Specifically, jammed states in granular systems may be reached when the density (packing fraction) of the system is high enough.
In this regard, slow drag experiments, the subject of this paper, provide a useful way to understand the nature of stress fluctuations and slow dynamics in granular materials. We have used a similar experimental approach to probe the thermodynamic temperature in granular systems, as reported elsewhere [@geng_03]. In molecular fluids, the drag force on a particle arises from viscous interactions, i.e. from collisional interactions of the particle and surrounding molecules that involve momentum transfer. This drag force is linearly proportional to the object’s velocity through the fluid when the velocity is not very large. In dense granular media, the origin of the drag force differs in several respects. First, frictional interactions exist between a drag particle and surrounding grains. Second, but related, is the existence of force chains. These relatively long-range inhomogeneous structures can provide an elastic (rigid in the limit of infinitely stiff particles) resistance to a moving particle. In Fig. \[fig:setup\]c, we show such force chain structures obtained using photoelastic techniques [@howell_99; @geng_01]. These force chains are typically inhomogeneous and anisotropic in nature, and constantly form and break when an object moves through the granular media, leading to strong fluctuations in the drag force. In the experiments presented here, we consider the drag force experienced by a tracer particle moving through a 2D granular material consisting of bidisperse disks. In our experiment, the size of the tracer particle is comparable to the surrounding grains, which allows us to explore fluctuations at the grain scale. The experimental results presented here are described well by simple failure models. A number of experimental and theoretical results provide important background to the present studies. Experiments that are relevant here include the “carbon paper” studies of Mueth et al. [@mueth_98], who measured the static forces of a material (e.g. 
glass beads) at the boundary of a container, and showed that the distribution of forces, $f$, is exponential for large $f$. Sheared granular systems, both in 2D [@howell_99; @hartley_03] and 3D [@miller_96], show strong force/stress fluctuations. In addition, the 2D experiments by Howell et al. [@howell_99] showed a well defined strengthening/softening transition as the packing fraction of the system passed a critical packing fraction $\gamma_c$. The mean stress in such a system varies as a power-law in the reduced packing fraction, $$r=\frac{\gamma-\gamma_c}{\gamma_c},$$ with an exponent between 2 and 4, depending on the particle type. Later experiments on similar 2D systems by Hartley et al. [@hartley_03] showed that the mean stress increased logarithmically with the shearing rate, which may be related to collective slow relaxation of the force chain network. 3D experiments by Miller et al. [@miller_96] identified rate-independent power spectra, $P(\omega)$, for the stress time series which fell off as $P \sim \omega^{-2}$ at high spectral frequency $\omega$. Experiments on 3D drag by Albert et al. [@albert_99; @albert_00; @albert_01] relate most closely to the present experiments. These studies yielded the drag force experienced by a rod as it was dragged through granular materials such as glass beads. Depending on the rod insertion depth and the size ratio between the rod and the grain, three types of drag force time series were observed: a periodic regime where the signal resembles an ideal sawtooth pattern, a random regime, and a stepped regime with sawtooth-like steps. These authors focused their work on the periodic and stepped regimes, characterized by stick-slip fluctuations due to successive formation and collapse of jammed states. A particularly interesting finding of these studies was that the mean drag force on the rod was independent of the drag velocity. 
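The $\omega^{-2}$ fall-off reported for the stress spectra has a simple reading: an ideal stick-slip (sawtooth) force signal has Fourier coefficients decaying as $1/n$, so its power spectrum decays as $n^{-2}$. A quick numerical sketch of this point, using a synthetic sawtooth rather than experimental data (NumPy assumed):

```python
import numpy as np

# Ideal stick-slip loading: the force ramps up linearly and drops at failure.
n_periods, period = 16, 256
t = np.arange(n_periods * period)
force = (t % period) / period                 # sawtooth time series

# Power spectrum; the sawtooth's energy sits at multiples of n_periods.
power = np.abs(np.fft.rfft(force)) ** 2
harmonics = n_periods * np.arange(1, 11)      # first ten sawtooth harmonics

# Log-log slope of the spectrum over those harmonics.
slope = np.polyfit(np.log(harmonics), np.log(power[harmonics]), 1)[0]
# slope comes out close to the reported exponent alpha = -2
```

The same scaling survives if the failure thresholds are randomized, which is one reason the simple failure models discussed later reproduce the $\alpha=-2$ spectra.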
Several theoretical works [@q-model; @kahng_01] have provided a context for understanding the stress distributions and stress fluctuations in granular materials. The q-model of Coppersmith et al. [@q-model] predicts a force distribution for static systems $P(F)\propto F^{N-1}\exp(-F/F_0)$, where $N$ is the system dimension. This model only considers the vertical force transmitted through a regularly packed lattice. Vertical forces on a grain in one layer are balanced by transmitting fractions, $q$ and $(1-q)$, to the two supporting grains in the next layer (assuming a 2D system), where $q$ is a random number uniformly distributed in $0\le q\le 1$. We note that for exponential force distributions, the mean is of the order of the width of the distribution. Other lattice models [@other_lattice] and calculations by Radjai using contact dynamics [@radjai_96] also predict exponential force distributions for large forces. Recently, Kahng et al. [@kahng_01] have used a stochastic failure model to understand the 3D drag experiments of Albert et al. [@albert_99; @albert_00; @albert_01]. These authors used simple springs with random thresholds to model the jamming and reorganization of grains. Among other results, the model reproduces the experimentally observed periodic sawtooth fluctuations in the drag force. We will use this simple failure model, with modifications, later in this paper to understand the experiments described here. The organization of the remainder of this paper is as follows. In Section II, we describe the experimental setup and procedures. In Section III, we report experimental results. In Section IV, we describe models and simulations. Finally, we draw conclusions in Section V.

Experimental setup and procedures
=================================

The experiments were carried out in an apparatus which is, in spirit, similar to the one in Ref. [@albert_99], except that the one used here is two-dimensional in character, whereas the one used by Albert et al.
was three-dimensional. We show a cross-sectional view of the apparatus in Fig. \[fig:setup\]a. The bottom plate was driven by the center shaft, both of which are supported by ball bearings mounted on a stable metal table (not shown). A stepper motor ran at a low frequency to drive the bottom plate. The top plate did not rotate and had no contact with the rotating bottom plate or the particles. The granular medium consisted of a single layer of bidisperse disks with diameters 0.744 and 0.876 cm, where the thickness of both types of disk was 0.660 cm. Fig. \[fig:setup\]b shows an actual image from the experiment in which the two types of disks can be identified. The disks were placed on the bottom plate and confined between two concentric ring structures. The inner ring radius was 10.5 cm and the outer ring radius was 25.4 cm. When the bottom plate was rotating, the disks moved with it as a rigid body, due to friction. This frictional force with the substrate was relatively weak compared to the forces between particles associated with force chains. The centrifugal force experienced by the disks was negligible due to the slow rotation speed. Note that this apparatus is not to be confused with a Couette shearing apparatus, where either the inner wheel or the outer wheel is moving. In this apparatus, both inner and outer boundaries remained fixed and the driving was provided by the moving bottom plate. A digital force gauge (Model DPS-110 from Imada Inc., resolution 0.1 g), shown in the inset of Fig. \[fig:setup\]a, was mounted on one side of the top plate. The force sensor was connected with a tracer particle through a hole, located in the center of the inner and outer ring. The reading on the force gauge, which yielded the instantaneous tangential force, was recorded as a time series by a computer through its serial communication port, as in Fig. \[fig:force\_series\]. When the granular medium moved, force chains formed in the bulk of the system, as shown in Fig.
\[fig:setup\]c using photoelastic techniques [@howell_99; @geng_01]. The pins on the left side of the top plate stirred the particles. There are three important parameters that we explored in the system, i.e., the rotation rate $\omega$, the system packing fraction $\gamma$ (or density), and the tracer particle size $a$. We varied the rotation rate $\omega$ over two orders of magnitude, from $\omega=6.33\times 10^{-6}$ to $8.67\times 10^{-4}Hz$ (corresponding to $v=7.14 \times 10^{-6}$ to $9.78 \times 10^{-4}$ $m/s$), the packing fraction $\gamma$ from 0.561 to 0.761 (these values are global packing fractions since the system is not completely uniform), and the tracer particle diameter over the set $a=0.744$, $0.876$, $1.250$, $1.610$ and $1.930$ cm.

Experimental results
====================

In this section, we report the experimental results. We first consider the effect of rotation rate, and we then turn to the effect of changes in the packing fraction.

Changing the Medium Rotation Rate
---------------------------------

An initial series of experiments was carried out at a fixed packing fraction $\gamma=0.754$, which is above the critical packing fraction $\gamma_c$, discussed in more detail in the next section. Here, we varied the rotation rate over $6.33\times 10^{-6} \leq \omega \leq 8.67\times 10^{-4}Hz$ (corresponding to $7.14 \times 10^{-6} \leq v \leq 9.78 \times 10^{-4}$ $m/s$). (The velocity of the tracer is $v=\omega r$, where $r=17.95$ cm is the radial location of the tracer.) In Fig. \[fig:force\_series\], we show three sets of force time series, obtained with a tracer size $a=0.876$ cm, and rotation rates that spanned the full range of $\omega$'s, namely (a) $\omega=6.3\times 10^{-6} Hz$, (b) $\omega=5.0\times 10^{-4}Hz$, and (c) $\omega=8.7\times 10^{-4}Hz$. As one would expect, the force time series in Fig. \[fig:force\_series\] show strong fluctuations. Interestingly, an enlarged view of a small section of Fig.
\[fig:force\_series\]c, seems qualitatively similar to the slower run in Fig. \[fig:force\_series\]a, which suggests possible scaling behavior. We will return to this point below.

### Mean Drag Force and Force Distributions

In Fig. \[fig:f\_v\]a, we show the mean drag force, $<F>$, as a function of rotation rate, $\omega$, for tracer particles of five different diameters ($a=0.744$, $0.876$, $1.25$, $1.61$ and $1.93$ cm). For each of these tracer sizes, the mean drag force increased only slightly (by a factor less than 2) for a variation by more than two decades in $\omega$. To emphasize this slow increase, we plot the same data on log-lin scales in Fig. \[fig:f\_v\]b. The data can be fitted by a straight line, indicating a logarithmic variation of $<F>$ with $\omega$. This is consistent with the results of Hartley et al. [@hartley_03], who found that the total stress in a system of similar particles undergoing slow shearing also increases logarithmically with the shearing rate. We emphasize that this slow increase in the mean force differs significantly from the drag force in a fluid, where the mean force increases linearly with the drag velocity when the velocity is not too large. This is also in contrast to rate-independent stresses in Mohr-Coulomb friction models [@nedderman; @wood] for dense granular systems. It is consistent with several rate-dependent friction models [@rate-dep]. In Fig. \[fig:variance\_v\], we show the standard deviation of the drag force, $StdDev(F)$, as a function of the rotation rate, where $StdDev(F)=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(F_i-<F>)^2}$, $N$ is the number of measurements in the force time series, and $F_i$ is the $i$th measurement. We note that the standard deviation is of the same order of magnitude as its corresponding mean, and that it also increases roughly logarithmically with the rate.
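The logarithmic rate dependence can be quantified by an ordinary least-squares fit in log-lin coordinates. A minimal sketch with synthetic data (the coefficients and noise level are hypothetical, chosen only to mimic the trend; `numpy` assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
omega = np.logspace(-6, -3, 20)                 # rotation rates (Hz), hypothetical
# hypothetical mean forces following <F> = a + b*log10(omega), plus noise
F = 22.8 + 2.6 * np.log10(omega) + rng.normal(0.0, 0.1, omega.size)

# a straight line on log-lin axes: the slope b quantifies the logarithmic increase
b, a = np.polyfit(np.log10(omega), F, 1)
```

A positive slope $b$ that is small compared to the overall force level reproduces the "factor less than 2 over two decades" behavior described above.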
The slow increase of the mean drag force with rate appears to differ from experimental observations in some previous studies, including those by Wieghard and by Albert et al. [@wieghard_75; @albert_99]. In particular, Wieghard [@wieghard_75] measured the drag force experienced by vertical rods dipped into a rotating bed of fine dry sand. In this case, the drag force had a weak dependence on the velocity: first decreasing then increasing with increasing velocity. In the experiments by Albert et al. [@albert_99], the mean drag force on a cylindrical rod was found to be independent of the drag velocity. In the case of Wieghard's experiments, the explanation for the difference is relatively straightforward. The velocity range used in Wieghard's experiments is very different from those used in our measurements and in those of Albert et al. Wieghard investigated velocities ranging from about 0.2 m/s to 2 m/s; the minimum of the drag force appeared between 0.5 m/s and 1 m/s, depending on the rod insertion depth. Wieghard explained the variation of drag with speed in the following way. The normal pressure and the frictional forces along the slip surface provided resistance. At lower speed, the inertial force of the sand flowing around the body was small and negligible. When the velocity increased, there was a reduction in drag because, presumably, more contacts were slipping, and kinetic friction is smaller than static friction. At larger speeds, friction became less dependent on the velocity; however, when the velocity was increased, an additional inertial term led to an increase in the drag force. The velocities used in the Albert et al. experiments and in the current experiments (of the order of 1 mm/s) are comparable to each other and are much slower than those of Wieghard. To a first order approximation, the present data are consistent with Albert's data, i.e. they both show that the mean force is roughly independent of the velocity.
However, we do see a slow, logarithmic increase in the mean force that differs from the observation of Albert et al. The explanation for this difference is not known, but it is interesting to speculate on the cause. Of course, there is the obvious difference in dimensionality. However, another difference between the two experiments is that the present particles were softer (a lower Young's modulus) than those used by Albert et al. In the present experiments, the particles deformed elastically, whereas in the experiments of Albert et al. an external spring was deformed. The real issues include differences in the elastic time scales vs. characteristic times for frictional events (e.g., the creation and destruction of force chains) and the amount of elastic deformation of particles. In this regard, we note the work by Campbell [@campbell]. Recent experiments by Hartley et al. [@hartley_03], using the same type of particles as those of the present experiments, showed a qualitatively similar relation between the mean force and the rate, albeit in a Couette system. These experiments also showed that under static shear stresses, there was a logarithmically slow relaxation of the force network. Later in this work, we will use a modified failure model inspired by this observation to reproduce the slow increase in the mean drag force. In Fig. \[fig:dist\_f\_v\], we show drag force distributions for different rotation rates. The left panel of Fig. \[fig:dist\_f\_v\] gives force distributions for a tracer particle of diameter $a=0.744$ cm, and the right panel gives data for $a=1.93$ cm. From Fig. \[fig:dist\_f\_v\]a, c, we note that, irrespective of the particle size, the force distributions broaden and shift towards larger forces as the rotation velocity increases. Interestingly, these force distributions collapse onto a single curve when scaled by the corresponding mean force, as shown in Fig. \[fig:dist\_f\_v\]b, d.
Thus, the mean force is one of the key control parameters for this system. These data indicate a roughly exponential fall-off for large forces, as seen in Fig. \[fig:dist\_f\_v\]b, d, which shows the scaled distributions on semi-log scales. As the tracer size increases, one noticeable change in the force distributions is that the probability of very small forces becomes smaller. An intuitive explanation is that a larger tracer particle is more likely to be in contact with some strong force chains at any time, thus reducing the probability of a very small force. This argument must be modified for tracers that are much larger than the background particles. As the tracer particle diameter becomes very large, there are multiple contacts, some of which involve strong force chains, and we expect that the distribution for $F/<F>$ will no longer depend on the tracer diameter.

### Power Spectra and Correlations

The power spectra, $P(\omega)$, resulting from such force time series provide a useful quantitative measure of the relevant time scales for force fluctuations. (Note that the mean force has been removed in calculating the spectra.) In Fig. \[fig:power\_spec\_v\]a, we show $P(\omega)$ vs. the frequency, $\omega$, on log-log scales. At high frequency, the spectra fall off as $P(\omega) \propto 1/\omega^\alpha$, with $\alpha \simeq 2$. At low frequency, the spectra vary more weakly, and are almost independent of the frequency. The $1/\omega^2$ behavior at high frequency can be explained by assuming a series of random jumps occurring on time scales at least as fast as a crossover time $\sim 1/\omega^*$. This time corresponds roughly to the time for the tracer particle to travel a few disk diameters. We will come back to this time scale below in more detail. The power spectrum at low frequency is presumably explained by the fact that there are no strong correlations at very long time scales in the force time series.
A $1/\omega^2$ behavior occurs in many other contexts, e.g. for frictional fluctuations [@demirel_96] and stick-slip motions [@rozman_96]. These spectra also show interesting rate invariance. In Fig. \[fig:power\_spec\_v\]b, we rescale the power spectra data of Fig. \[fig:power\_spec\_v\]a by dividing the $\omega$-axis by the corresponding rotation rate, $\omega_0$, and multiplying $P$ by $\omega_0$. This corresponds to rescaling time by $1/\omega_0$, or alternatively to replacing time by angular displacement. Fig. \[fig:power\_spec\_v\]b shows an excellent collapse of all the data for the scaled power vs. the scaled frequency, and implies rate invariance in the fluctuating component of the stresses. Such rate-invariance in stress fluctuations has also been observed by Miller et al. [@miller_96] and Albert et al. [@albert_01]. An argument for this rate invariance is provided in Ref. [@behringer_01], which suggests that the system spends much of its time in states close to static equilibrium, so that $\omega_0$ sets the time scale to move between states. We can better understand the role of $\omega^{*}$ by calculating the correlations resulting from these force time series. In Fig. \[fig:correlation\]a, we show correlation functions, $C(t)$, for time series at different rotation rates. (Here $C(\Delta t)=<F(t)F(t+\Delta t)>$, where the brackets denote an average over time, and we abbreviate $C(\Delta t)$ as $C(t)$ when no confusion can arise.) These correlation functions generally drop quickly (exponentially) to zero over a time scale $t_c$, and then fluctuate around zero, indicating that the signals are uncorrelated beyond that time. If we rescale the data of Fig. \[fig:correlation\]a by multiplying the $t$-axis by the corresponding velocities, all correlation functions collapse to a single curve, as shown in Fig. \[fig:correlation\]b. The collapsed curve defines a characteristic length scale, $\Delta x_c$, which is comparable to one disk diameter.
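A sketch of how $C(\Delta t)$ can be estimated from a uniformly sampled, mean-removed trace. The AR(1) test signal below is synthetic, used only because its correlation time is known in advance; `numpy` assumed:

```python
import numpy as np

def autocorrelation(f, max_lag):
    """C(dt) = <F(t) F(t+dt)> for the mean-removed signal,
    normalized so that C(0) = 1."""
    f = np.asarray(f, dtype=float)
    f = f - f.mean()
    c = np.array([np.mean(f[: len(f) - k] * f[k:]) for k in range(max_lag)])
    return c / c[0]

# synthetic trace with a known correlation time (hypothetical data):
# AR(1) process x[i] = phi*x[i-1] + noise has C(k) ~ phi**k
rng = np.random.default_rng(0)
x = np.zeros(50000)
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + rng.normal()
c = autocorrelation(x, max_lag=10)
```

Fitting the initial decay of `c` to an exponential yields the correlation time $t_c$ (here, $-1/\ln 0.9 \approx 9.5$ samples), which is the quantity that collapses onto a single length scale after rescaling by the drive velocity.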
Intuitively, this can be explained by the fact that force chains contacting the tracer particle tend to form and then fail when the tracer particle moves by a few grain diameters, in agreement with the characteristic length scale revealed in Fig. \[fig:correlation\]b. We note here that the correlation data and the power spectra data are a Fourier Transform pair according to the Wiener-Khinchin Theorem [@nr_92]. Thus, the $1/\omega^2$ power spectrum at high frequency can also be derived from the correlation data at small time scales. Using the fact (inset of Fig. \[fig:correlation\]b) that the correlation functions decay exponentially at early times as $C(t)=A_0\exp(-|t|/t_c)$, the corresponding power spectrum can be obtained by performing a Fourier transform: $$\begin{aligned} P(\omega)&=&\int_{-\infty}^{\infty}C(t)\exp(-i\omega t)dt\\ &=&\frac{2A_0t_c}{1+(\omega t_c)^2}\\ &\propto&\omega^{-2}, \hspace{2mm} \mbox{if} \hspace{2mm} \omega \gg 1/t_c.\end{aligned}$$ Thus, for large frequency ($\omega \gg 1/t_c$), we expect the power spectrum to decay as $1/\omega^2$.

### Avalanches and the Force Chain Force Constant

If we define an avalanche event to be a monotonic decrease in the force time series, we can investigate the stress release process in the system more quantitatively (similar results are found for the stress build-up process). This approach is similar in spirit to the approach of self-organized criticality (SOC) [@frette_96], and it is interesting to ask whether any sign of SOC is present in this system. We define the size of an avalanche to be the magnitude of the drop of the force and the duration to be the time it takes for the avalanche event to take place, as illustrated in Fig. \[fig:ava\_def\]. With such definitions, we can calculate the probability distributions for both avalanche sizes and avalanche durations. We show such distributions (properly rescaled) in Fig. \[fig:ava\_size\_dist\] for force time series obtained at different velocities.
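The avalanche definition above (a maximal monotonic decrease; size = total force drop, duration = elapsed time) is straightforward to implement on a sampled trace. A minimal sketch with a made-up series:

```python
def avalanches(series, dt=1.0):
    """Extract (size, duration) for every maximal monotonically
    decreasing run in a force time series sampled at interval dt."""
    events = []
    i = 0
    while i < len(series) - 1:
        if series[i + 1] < series[i]:
            j = i
            while j + 1 < len(series) and series[j + 1] < series[j]:
                j += 1
            # size = force drop over the run, duration = run length * dt
            events.append((series[i] - series[j], (j - i) * dt))
            i = j
        else:
            i += 1
    return events

trace = [1.0, 3.0, 2.5, 2.0, 2.2, 1.0, 1.5]
ev = avalanches(trace)   # two decreasing runs: 3.0->2.0 and 2.2->1.0
```

Histogramming the first and second components of the returned pairs gives the size and duration distributions discussed in the text.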
It is possible to collapse all the distributions of avalanche size by dividing the horizontal coordinate for each set of data by the corresponding mean avalanche size (and therefore necessarily multiplying the vertical coordinate by the mean avalanche size). The avalanche duration distributions are similarly rescaled by the corresponding mean avalanche duration of each data set. In Fig. \[fig:ava\_size\_dist\], we show both data sets on log-lin scales, which emphasizes the roughly exponential nature of the distributions. The flat tails at larger values of the horizontal coordinates may be due to insufficient statistics. These data suggest that there is a large probability of finding small avalanche events in the system, while the probability of finding a large avalanche event becomes exponentially small. Note that these distributions do not show any indication of power laws, as one would expect for a self-similar process and SOC. It is interesting to ask how the mean avalanche size and duration change with $\omega$. We show, in Fig. \[fig:mean\_size\_dura\_v\]a, data for the mean avalanche size, $\overline{\Delta F}$, and duration, $\overline{\Delta t}$, as functions of the rotation rate. The mean avalanche size increases with $\omega$ and the mean avalanche duration decreases with $\omega$. Both the mean size and the mean duration vary as power laws with $\omega$. Particularly interesting is the fact that the ratio of the mean avalanche size to duration, Fig. \[fig:mean\_size\_dura\_v\]b, varies essentially linearly with $\omega$. The linear relationship between $\overline{\Delta F}/\overline{\Delta t}$ and $\omega$ (or the medium velocity $v$) suggests that there is an effective 'spring constant' for the force chains, that can be defined as $\overline{\Delta F}/(v\overline{\Delta t})$. We develop this point further in the next few paragraphs.
An obvious question is whether a large avalanche event (in terms of its size) is in general associated with a longer duration, or perhaps vice-versa. This question is addressed in Fig. \[fig:ava\_2D\_dist\] by calculating the 2D probability distributions for avalanche sizes and durations. These distributions are given in Fig. \[fig:ava\_2D\_dist\]a-c for different drag velocities, using a greyscale representation. We see that these distributions are always concentrated around certain directions with positive slopes, which suggests that, in general, a larger avalanche event lasts longer. We also note that the slope of the distribution orientation increases with increasing drag velocity. Based on the scalings of Fig. \[fig:ava\_size\_dist\], if we rescale the vertical and horizontal axes in Fig. \[fig:ava\_2D\_dist\] by the mean avalanche size and mean avalanche duration, respectively, we expect that the resulting distributions for different velocities would be peaked around the same orientation. Indeed, we have verified that this is the case. Since the 2D distributions for avalanche size and duration, Fig. \[fig:ava\_2D\_dist\], tend to be oriented around a certain direction, it is useful to consider an alternative approach to characterize these events. Namely, we define the avalanche rate to be the ratio of the avalanche size and the corresponding duration, i.e., $Rate=\frac{Size}{Duration}=\frac{\Delta F}{\Delta t}$. We show the distributions of rates for different medium velocities in Fig. \[fig:force\_chain\_const\]a. From this figure, we see first that each distribution is peaked, which is consistent with our claim that events have a most probable direction in Fig. \[fig:ava\_2D\_dist\], albeit with some spreading around that direction. Secondly, this figure shows that when the rotation rate increases, the position of the peak shifts to the right. We extract the peak positions and plot them as a function of the medium velocity, Fig. \[fig:force\_chain\_const\]b.
This figure shows that the peak position increases roughly linearly with the medium velocity. If we denote the slope of a least-squares linear fit to these data as $k_{eff}$, then: $$k_{eff}=\frac{\Delta F}{\Delta t}\frac{1}{v}=\frac{\Delta F}{\Delta x}.$$ Thus, $k_{eff}$ resembles the force constant of a simple spring. Indeed, Fig. \[fig:setup\]c shows that the resisting forces are mainly carried through chain-like structures, and one might imagine that each of these force chains acts like a spring. The collective force constant of these force chains is then rather well defined, as suggested by the quantity, $k_{eff}$, extracted from Fig. \[fig:force\_chain\_const\]b. One has to keep in mind that since Fig. \[fig:force\_chain\_const\]b is obtained only from peak positions, the actual effective force constant at a given instant can vary around the $k_{eff}$ extracted here. A similar observation has been made in Ref. [@kahng_01] by Kahng et al. concerning their 3D drag experiment (see Fig. 2 in Ref. [@albert_00]). However, the force constant revealed in those experiments reflects only the force constant of the external spring. That is, since it is much softer than the effective spring constant of the grains, the force registered on the force sensor is mainly due to the compression of the external spring. By contrast, in our experiments, the effective force constant gives a measurement of the actual strength of the force chains in the granular system. Specifically, the force constant of the external spring in our apparatus is much stronger than that associated with the particles. The above analysis supports the idea that force chains may be modeled by springs, as proposed in the model by Kahng et al. [@kahng_01]. In Section IV below, we modify their model to explain features of the data for the current experiments. In the remainder of this section, we explore several other features of the experimental results.
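The extraction of $k_{eff}$ described above amounts to a least-squares line through peak avalanche rate vs. drive velocity. A sketch with invented numbers standing in for the measured peak positions (`numpy` assumed):

```python
import numpy as np

# hypothetical peak avalanche rates (force/time) at several drive speeds (m/s),
# constructed to lie near peak_rate = k_eff * v with k_eff ~ 3000
v = np.array([1e-5, 5e-5, 1e-4, 5e-4, 1e-3])
peak_rate = np.array([0.031, 0.149, 0.302, 1.498, 3.001])

# k_eff = (dF/dt)/v = dF/dx: the slope of rate vs. velocity
k_eff, intercept = np.polyfit(v, peak_rate, 1)
```

The fitted slope has units of force per length, i.e. a spring constant, which is what motivates interpreting the collapsed force-chain response as an effective spring.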
Changing the Packing Fraction
-----------------------------

In this section, we describe experimental data and analysis associated with changing the packing fraction of the system. For this set of experiments, we fixed the rotation rate at $\omega_0=5.0\times 10^{-4}Hz$ and the tracer size at $a=1.25$ cm.

### Mean Drag Force and Force Distributions

When we change the packing fraction, $\gamma$, we observe a softening/strengthening transition similar to the one reported in Ref. [@howell_99]. Specifically, when $\gamma$ is below a critical value, $\gamma_c$, the system is so loosely packed that it cannot sustain force chains. In the regime $\gamma < \gamma_c$, when the grains make contact with the tracer particle, they are almost immediately pushed into open space, and no long-range force chains form. By contrast, when the packing fraction is above the critical value, $\gamma \geq \gamma_c$, there are always some force chains in the bulk of the system, such as those shown in Fig. \[fig:setup\]c. In Fig. \[fig:force\_series\_gamma\], we show three sets of force time series data obtained at different $\gamma$'s. For the data at $\gamma=0.561$, which is below $\gamma_c=0.645$, the forces are close to zero, with a small amount of activity corresponding to those events when the tracer particle makes contact with grains. When $\gamma=0.653$, which is slightly above $\gamma_c$, we already see more activity, and the average force signal increases above the base line. When $\gamma$ is increased further, say to $\gamma=0.754$, the force signal becomes much more active and the scale of fluctuations is significantly larger. Fig. \[fig:f\_gamma\]a shows the mean drag force as a function of the global packing fraction $\gamma$. We identify two different regimes in this figure.
For smaller $\gamma$'s, the mean force can be fitted by a linear function of $\gamma$: $F=a\gamma+b$, where $a$ and $b$ are constants, while for larger $\gamma$'s, the mean force can be fitted by a power-law, which parallels the results of Howell et al. [@behringer_01]: $F=F_c+d(\gamma-\gamma_c)^\beta$, where $d$ and $\beta$ are constants. We define $\gamma_c$ as the crossover value from the linear to the non-linear regime. In Fig. \[fig:f\_gamma\]b, we show the mean force as a function of the reduced packing fraction, $r=\frac{\gamma-\gamma_c}{\gamma_c}$, for $\gamma \geq \gamma_c$ on log-log scales to emphasize the power-law character in the nonlinear regime. In that regime, the exponent of the power law is $\beta=1.53$. In Fig. \[fig:dist\_f\_gamma\]a, we show drag force distributions for different packing fractions. As the packing fraction is increased, the distributions widen and the means become larger, consistent with the data of Fig. \[fig:f\_gamma\]. Again, if we rescale the force distributions by the corresponding mean force, we obtain an approximate collapse of all curves. Thus, the mean force is also the appropriate scaling factor for the amplitude of the drag force fluctuations. Thus far, we have considered the mean properties and distributions of the drag forces for different rotation rates and packing fractions. We now combine these results and examine how the control parameters, $\omega$ and $\gamma$, affect the drag force. Fig. \[fig:dist\_f\_v\_gamma\] shows the combined drag force distributions for various rotation rates and packing fractions. The solid symbols are data for different $\omega$'s, and the open symbols are data for different $\gamma$'s. All the distributions are rescaled by their corresponding mean drag forces. Again, we see that all rescaled curves have nearly the same form.
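The exponent $\beta$ can be recovered as the slope of $\log(F-F_c)$ vs. $\log r$. A sketch with noise-free synthetic data (all constants here are illustrative placeholders, not the experimental fit; `numpy` assumed):

```python
import numpy as np

gamma_c, F_c, d, beta_true = 0.645, 2.5, 175.0, 1.53   # illustrative values
gamma = np.linspace(0.66, 0.76, 15)
r = (gamma - gamma_c) / gamma_c        # reduced packing fraction
F = F_c + d * r ** beta_true           # F = F_c + d*r^beta, noise-free

# on log-log axes the power law is a straight line whose slope is beta
beta_fit, log_d = np.polyfit(np.log(r), np.log(F - F_c), 1)
```

In practice $F_c$ and $\gamma_c$ are not known a priori; one common approach is to scan candidate values of $\gamma_c$ and keep the one that maximizes the linearity of the log-log plot.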
This statistical invariance in the force distributions is striking, since these data are obtained over a wide range of rotation rates (more than two decades) and packing fractions. This again confirms the key scaling role of the mean force. We note too that these distributions decay roughly exponentially for large forces, in the spirit of the q-model [@q-model]. For a given tracer particle, changing the rotation rate or changing $\gamma$ both affect the mean drag force, although the former is only a weak effect. In Fig. \[fig:f\_v\_gamma\]a, we combine the data for mean drag forces from Figs. \[fig:f\_v\] and \[fig:f\_gamma\] in a single plot, where the top axis is the rotation rate, $\omega$, and the bottom axis is the reduced packing fraction $r=(\gamma-\gamma_c)/\gamma_c$. When $\gamma$ is fixed, the mean force (solid circles) increases slowly with $\omega$, where this slow increase is adequately described by a logarithm. When $\omega$ is fixed, the mean force (solid squares) increases rapidly with $\gamma$, and this increase is described by a power-law. If we assume that $\bar{F}$ can be written in a product form as $\bar{F}=f_1(a)f_2(\omega)f_3(r)$, for our given tracer particle size, we find that a good description of the data is given by: $$\bar{F}=\frac{1}{14.51}(22.802+2.588\log\omega)(2.502+174.91 r^{1.529}).$$ Fig. \[fig:f\_v\_gamma\]b shows the mean drag force $\bar{F}$ in a 3D perspective plot. From this figure, we see that an increase of the rotation rate, $\omega$, leads to an increase of the mean drag force, qualitatively resembling what occurs due to an increase in the packing fraction, $\gamma$, but on a much weaker scale. Similar effects on the stress due to changes in the shear rate and packing fraction were also observed in a 2D granular Couette system [@hartley_03]. We also examine how the diameter of the tracer particle, $a$, affects the mean drag force. In Fig.
\[fig:f\_a\], we show the mean drag force as a function of the tracer diameter for different rotation rates at a given packing fraction $\gamma=0.754$. From these data, we see that the increase in the mean force with tracer particle size is faster than linear. It is interesting to contrast these results with what one would expect for a particle, typically much larger than a molecule, that is moving through a viscous fluid. According to Stokes's law [@pathria_96], the drag force is proportional to the diameter of the tracer particle, the coefficient of viscosity of the fluid, and the relative velocity of the fluid and the tracer. It is also interesting to compare our results to the experiments by Albert et al. [@albert_01] on drag through a granular material. As noted, these authors observed rate independent forces. They also found a linear dependence of the drag force on the diameter of the drag rod. However, it is perhaps not surprising that in the present experiments the diameter dependence of the drag force is nonlinear, since the tracer particle size is comparable to the size of the surrounding grains (the maximum size ratio is 2.6), unlike the situation in the experiments of Albert et al.

### Rescaling of Power Spectra and Avalanches

In Fig. \[fig:power\_spec\_gamma\]a, we show power spectra of force time series for different packing fractions. In this case, variations of the power spectra with $\gamma$ are qualitatively similar to those due to changes in the rotation rate shown in Fig. \[fig:power\_spec\_v\]a, although the magnitude of the changes with $\gamma$ is much greater. It is interesting to rescale these spectra to see if they will collapse onto a common curve.
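A natural normalization follows from Parseval's theorem. In the discrete convention of a numerical FFT it reads $\sum_k |F_k|^2 = N\sum_n |f_n|^2$, so the integral of the spectrum is fixed by the mean-square amplitude of the signal. A quick numeric check on a random, mean-removed signal (`numpy` assumed; the signal is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=1024)
f -= f.mean()                       # remove the mean, as done for the spectra

spec = np.abs(np.fft.fft(f)) ** 2   # periodogram |F_k|^2
msa = np.sum(f ** 2)                # (unnormalized) mean-square amplitude

# discrete Parseval: sum_k |F_k|^2 = N * sum_n f_n^2, so this ratio is 1
ratio = spec.sum() / (f.size * msa)
```

Dividing each spectrum by its signal's mean-square amplitude therefore forces all curves to enclose the same area, which is the rescaling used for the collapse.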
In this regard, we note from Parseval's Theorem [@nr_92] that the integral of the power spectral density over frequency is equal to the mean square amplitude of the signal, i.e., $$\frac{1}{2\pi}\int_{-\infty}^{\infty}P(\omega)d\omega=\frac{1}{2\pi}\int_{-\infty}^{\infty}|F(\omega)|^2d\omega= \int_{-\infty}^{\infty}|f(t)|^2dt,$$ where $F(\omega)$ and $f(t)$ are a Fourier pair. Hence, the integral of the power spectrum, which is proportional to the mean square amplitude of the force signal, $<f^2>=\int_{-\infty}^{\infty}|f(t)|^2dt$, can be used as an appropriate scale factor for the spectra in Fig. \[fig:power\_spec\_gamma\]a. Indeed, when these spectra are normalized by the corresponding $<f^2>$, we obtain a good collapse of the data, as shown in Fig. \[fig:power\_spec\_gamma\]b. Additionally, we show the scaling factor, $<f^2>$, vs. the reduced packing fraction, $r$, in Fig. \[fig:f2\_reduced\_gamma\]. These data can also be fitted to a power law, and the exponent is almost twice as large as the exponent associated with the power-law for the mean force, Fig. \[fig:f\_gamma\]. Before turning to the model, we note that the avalanche data calculated from force time series for different packing fractions are similar to those for different rotation rates. We have verified that the distributions of both avalanche size and duration decay exponentially, and can be rescaled by the respective mean avalanche size and duration to obtain a good collapse of the data.

Model and simulations
=====================

In this section, we turn to a stochastic failure model, based on one originally proposed by Kahng et al. [@kahng_01] to understand the experimental data of Albert et al. [@albert_99; @albert_00]. We modify this model appropriately to account for several features that are unique to the present 2D granular system.
Specifically, we make two modifications to the original model:\
First, we allow the band of thresholds to be wide enough so as to generate random force patterns, and we use exponentially distributed thresholds to produce more realistic force distributions;\
Second, we introduce a time-dependent threshold to explain the slow (logarithmic) increase of the mean drag force with the rate.\
We also note that since the particles are only one layer deep in the 2D experiments, we do not need any depth dependence. In the remainder of this section, we first briefly introduce the basic model. We then make modifications to the model and perform simulations to compare with the present experimental data.

The Original Spring Model
-------------------------

The original model was constructed to simulate the drag force experienced by a vertical cylinder inserted to a given depth in a granular bed [@kahng_01]. In this model, the grains move with constant speed $v$ in the x-direction, and the tracer particle is simply represented by a block, as shown in Fig. \[fig:model\_cartoon\]a. The tracer particle interacts with grains that are assumed to be supported by force chains. The particle-tracer interactions are modeled as linear springs with a force constant $k_0$, where there are $n$ such springs. (The assumption of a single spring constant is in part justified for the present data by the analysis of an effective force chain force constant, $k_{eff}$, in the experimental data, as in Fig. \[fig:force\_chain\_const\].) Necessarily, the spring constant, $k_{eff}$, refers to the collective mean response, instead of a force constant for an individual force chain. As time advances, each spring is compressed by an amount $\Delta x$, which is determined by the velocity $v$ and by $\Delta t$, the time interval over which compression has occurred, i.e. $$f=f_0+k_0\Delta x =f_0+k_0v\Delta t,$$ where $f_0$ is a small initial force proportional to the local pressure in the system.
This is illustrated in Fig. \[fig:model\_cartoon\]b. At $t=0$, a spring makes contact with the tracer particle, corresponding to the formation of a force chain. The spring is then compressed as time advances. If the spring (force chain) is compressed too far, i.e., the force $f$ exceeds a threshold, $g$, the spring fails, and the force on the spring is relaxed to $f_0$. In addition, the threshold $g$ is updated to a new value chosen at random from an appropriate distribution. In the original model, $g$ was uniformly distributed over an interval \[$g_0$, $g_1$\]. Over time, the process of spring compression (force chain formation) and failure continues. At any given time, the drag force is the sum of the forces from all $n$ springs. The original model [@kahng_01] also assumes that the effective force chain springs are much stronger than the external spring associated with the machine that is pushing the tracer. In such a case, the drag force, which is typified by stick-slip dynamics, is a function of the strength of the external spring. Kahng et al. focused on the stick-slip regime, since this corresponded to what was observed in the 3D drag experiments by Albert et al. However, in the present experiments, the effective spring constant of the drive is significantly larger than that of the particles. Consequently, we do not observe stick-slip behavior, but rather random force fluctuations. We must take into account this different feature of our experiments, and we now turn to appropriate modifications of the model.

Modification I: Wide Threshold Bands and Exponentially Distributed Thresholds
-----------------------------------------------------------------------------

We begin by considering the effect of the width of the threshold band \[$g_0$, $g_1$\]. As one would expect, this width qualitatively affects the drag force patterns. When the threshold band is narrow, as in Fig.
\[fig:force\_series\_gamma\]a, for $[g_0, g_1]=[0.49, 0.51]$, the force time series exhibits a regular sawtooth pattern. This is because all the springs fail almost at the same time, resulting in a regular pattern of buildup and release. When the threshold band is wider, the force pattern becomes more random (e.g., $[g_0, g_1]=[0.1, 0.9]$). This more closely resembles what occurs in the present experiments. However, if the threshold $g$ is uniformly distributed between \[$g_0$, $g_1$\], the resulting force distributions are symmetric with respect to the mean drag force, as shown in Fig. \[fig:force\_series\_gamma\]b for a 10-spring system. The symmetry of this distribution differs from those of the experiment, and simply reflects the symmetry of the failure distribution. The avalanche size distribution data in Fig. \[fig:ava\_size\_dist\] suggest that the probability of finding a large event becomes exponentially small. Thus, it is reasonable to assume that the distribution of $g$’s is likewise exponential. We expect that most of the time, the force chains break at small forces, and only in rare events do the force chains survive to reach a large threshold. Using this assumption, we obtain a force time series such as that shown in Fig. \[fig:force\_series\_gamma\]c. We show the resulting force distributions (for 10 springs) in (d). In contrast to Fig. \[fig:force\_series\_gamma\]b, these new force distributions obtained with exponentially distributed thresholds are significantly closer in appearance to the experimental data, as in Fig. \[fig:dist\_f\_gamma\]. Note that the mean force in the model is found by summing over $n$ independent variables, $x_i$, where $x_i$ is the compression of spring $i$. The mean value of any one of these is then $\bar {x_i} = (1/2)\bar {g}$, where $\bar{g}$ is the mean determined from the distribution of $g$’s. 
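The asymmetry argument can be checked without running the full dynamics: sampling a spring at a random time amounts to picking a loading cycle with probability proportional to its duration (hence to its threshold $g$), then a uniform moment within the linear ramp. A sketch, with illustrative parameters of our own choosing, comparing the skewness of the resulting single-spring force distribution for uniform versus exponential thresholds:

```python
import random

def observed_force(sample_g_biased, f0=0.01, samples=100_000, seed=1):
    """Single-spring force seen at a random time: draw a loading cycle with
    probability proportional to its duration (i.e., to its threshold g),
    then a uniformly random moment within the linear ramp f0 -> g."""
    rng = random.Random(seed)
    return [f0 + rng.random() * (sample_g_biased(rng) - f0)
            for _ in range(samples)]

def skewness(x):
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 3 for v in x) / n / s2 ** 1.5

# Length-biased threshold samplers (density proportional to g itself):
# uniform thresholds on [0.1, 0.9] via the inverse-CDF square-root trick;
# exponential thresholds (mean 0.5): length-biasing gives a Gamma(2) law,
# i.e., the sum of two independent exponentials.
unif_biased = lambda r: (0.01 + 0.80 * r.random()) ** 0.5
expo_biased = lambda r: r.expovariate(2.0) + r.expovariate(2.0)
```

With exponential thresholds the sampled force distribution acquires a pronounced right tail (positive skew), whereas the wide uniform band yields a much more nearly symmetric distribution, consistent with the comparison in the text.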
As $n$ grows, we expect that the distribution of total force $F$ will approach a Gaussian with a mean value $n \bar{x}_i$ and a width $\sqrt{n} \sigma$, where $\sigma^2$ is the variance of the $x_i$. Indeed, the statistical properties of the model follow from the fact that the force is a sum over $n$ uncorrelated random variables where the maximum of each variable is drawn from the appropriate distribution of $g$’s. Apart from the force distributions, for other aspects of the simulated data (power spectra, distributions of avalanche size/duration, and force chain force constants), uniformly distributed thresholds do not lead to significantly different results from exponentially distributed thresholds, as long as the threshold band is wide enough. Below, we will focus on the simulated data derived from exponentially distributed thresholds. In Fig. \[fig:model\_power\_spec\], we show power spectra and their rescaled form for different velocities calculated from the model. These data are in remarkable qualitative agreement with the experimental data shown in Fig. \[fig:power\_spec\_v\]. In Fig. \[fig:model\_ava\_size\_dist\], we show in (a) the distributions of avalanche sizes derived from the model simulations and in (b) the rescaled distributions of avalanche durations derived from the model simulations. Both distributions of avalanche size and duration are roughly exponential for large arguments, as are the experimental data, Fig. \[fig:ava\_size\_dist\]. Note, however, that the size distributions in this figure are not rescaled while those in Fig. \[fig:ava\_size\_dist\]a are. Similarly, in Fig. \[fig:model\_force\_chain\_const\], we show avalanche rate distributions at different velocities in (a) and the derived effective force chain force constant in (b). This figure compares well with the experimental data shown in Fig. \[fig:force\_chain\_const\]. 
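The $\sqrt{n}$ narrowing of the summed force can be illustrated with independent draws; the sketch below uses exponentially distributed single-spring contributions as an illustrative stand-in for the time-averaged spring force (the distribution and its mean are our own assumptions, not fitted values):

```python
import math
import random

def rel_width(n, samples=20_000, gbar=0.5, seed=2):
    """Relative width (std/mean) of the total force when each of n springs
    contributes an independent, exponentially distributed force with mean
    gbar. For exponential contributions the exact answer is 1/sqrt(n)."""
    rng = random.Random(seed)
    sums = [sum(rng.expovariate(1 / gbar) for _ in range(n))
            for _ in range(samples)]
    m = sum(sums) / samples
    var = sum((s - m) ** 2 for s in sums) / samples
    return math.sqrt(var) / m
```

The mean of the sum grows linearly with $n$ while the relative fluctuations shrink as $1/\sqrt{n}$, in line with the central-limit argument above.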
The effective force chain force constant from the simulation data is $k_{eff}=30.7$, which is of the same order of magnitude as $nk_0$, where $n=10$ (the number of springs) and $k_0=1$ (the individual force constant of a spring). Modification II: Decaying Thresholds ------------------------------------ The model so far has been able to reproduce a number of experimental observations. However, if we calculate the mean drag force, $\langle F\rangle$, as a function of the medium velocity, $v$, we find that $\langle F\rangle$ is independent of $v$, as shown in Fig. \[fig:model\_f\_v\_no\_decay\]. Fig. \[fig:model\_f\_v\_no\_decay\]a shows force distributions for several different velocities, and they all fall on the same curve, with almost the same mean and variance. Fig. \[fig:model\_f\_v\_no\_decay\]b is a direct plot of mean drag force as a function of velocity, which shows a rate-independent result. This differs from the experimental finding that the mean drag force increases logarithmically with the velocity. The fact that the model is rate-independent is not surprising. The instantaneous force state is found by summing over the $n$ springs. The state of each spring does not depend on the velocity of the block, but only on the displacement of the block since it was last reset to $f_0$. In such a displacement-controlled system, there can be no velocity dependence. One possible way to account for the rate-dependence is to recognize that there is failure of some contacts due to creep, and we explore that possibility here. In this regard, we note recent work by Hartley et al. (Fig. 2 in [@hartley_03]) involving similar particles to those used here. These authors reported logarithmically slow relaxation of the force chain network in their 2D granular Couette system. Specifically, in these experiments, 2D photoelastic grains were sheared steadily so as to establish a strong force chain network. 
The shearing was abruptly stopped and the particle-scale forces in a section of the Couette annulus were monitored thereafter. The force chains relaxed (became weaker) over many hours, with the total stress in the system decaying logarithmically slowly, presumably due to the collective rearrangements of the grains and failure under creep at contacts that were near to failure. Such failures became progressively more difficult over time because, presumably, the contacts near failure became less numerous, and also perhaps due to geometric constraints on successive rearrangements. To make a connection with the model, we note that one interpretation of the Hartley et al. experiments was that the force chains become logarithmically weaker over time, which means that the threshold of each spring should decrease with time. This is illustrated in Fig. \[fig:decay\_g\]. For two processes with different velocities ($v_1 > v_2$), if the originally chosen thresholds for a spring are $g$ in each case, by the time a spring reaches its failure point, this threshold has become smaller. Since $v_2 < v_1$, by the time failure actually occurs, the threshold for the slow process ($v_2$) is smaller than that of the fast process ($v_1$). The longer one waits, the smaller the threshold. Hence, we assume the threshold, $g$, is time-dependent and decreases logarithmically with a time constant $t_0$: $$g(t)=1-\frac{\log t}{\log t_0},$$ where $t_0$ is a large value (about $10^5$ times the time step) that sets the slow relaxation time scale/amplitude. With such a decaying threshold, $g(t)$, we recalculate the drag force distributions and mean drag force for the model. In Fig. \[fig:model\_dist\_f\_v\_decay\], we show the drag force distributions for different velocities in (a), and their rescaled form in (b). Comparison of Fig. \[fig:model\_dist\_f\_v\_decay\] with the experimental data in Fig. \[fig:dist\_f\_v\] shows very good agreement. Fig. 
\[fig:model\_f\_v\_decay\]a shows the mean drag force from the simulation, which now has a slow increase with velocity. Fig. \[fig:model\_f\_v\_decay\]b shows the same data on a log-lin plot. These results can be fitted by a straight line, indicating a logarithmically slow increase now built into the model. This figure compares well with the experimental data in Fig. \[fig:f\_v\]. Additionally, this modification to the model does not qualitatively change the features reported in previous sections. In summary, the key point of the model is its assumption that the force chains are modeled as “springs” with failure thresholds chosen from a distribution. Thus, the fluctuations and mean properties of the drag force are closely associated with the force chain formation and failure. This understanding is useful in particular because it underscores the important role of the force chains in granular systems. The elastic nature of the model is also interesting, given the current debate over how forces are transmitted in granular systems [@geng_03b]. Another interesting observation from the experiments is the seeming contradiction between the rate-dependence in the mean properties (e.g., mean drag force vs. velocity, mean avalanche size vs. velocity, etc.) and the rate-independence of the fluctuations (e.g., rate-independent power spectra, collapse of the avalanche size distributions, etc.) in the data. However, this may be understood by noting that the mean behavior (or the DC part of the signal) is rate-dependent, while fluctuations (or the AC part of the signal) are rate-independent. This is also consistent with the failure model we have discussed; i.e., once the level of the mean behavior is set, the fluctuating components are subsequently set by the mean behavior. 
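A sketch of how an ageing threshold produces rate dependence. Here we apply the logarithmic decay multiplicatively to each freshly drawn threshold, $g_{\rm eff}(t) = g\,(1-\log t/\log t_0)$ with $t$ the age of the contact; this multiplicative form, and all parameter values, are our own illustrative assumptions rather than the paper's exact implementation:

```python
import math
import random

def mean_drag(v, n=10, k0=1.0, f0=0.01, dt=1.0, t0=1e5,
              steps=100_000, seed=3):
    """Spring model with ageing thresholds: a contact of age t fails at
    g_eff = g * (1 - log(t)/log(t0)), so slowly driven contacts age longer
    before failing and therefore carry less force on average."""
    rng = random.Random(seed)
    log_t0 = math.log(t0)
    f = [f0] * n
    g = [rng.expovariate(2.0) for _ in range(n)]   # exponential, mean 0.5
    age = [1.0] * n
    total = 0.0
    for _ in range(steps):
        for i in range(n):
            f[i] += k0 * v * dt
            age[i] += dt
            g_eff = g[i] * (1 - math.log(age[i]) / log_t0)
            if f[i] > g_eff:                       # creep-weakened failure
                f[i] = f0
                g[i] = rng.expovariate(2.0)
                age[i] = 1.0
        total += sum(f)
    return total / steps
```

With this modification the mean drag grows slowly with the driving velocity, as in Fig. \[fig:model\_f\_v\_decay\], because faster driving reaches the threshold before much ageing has occurred.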
Conclusions =========== To conclude, through experiments and simple failure models, we have characterized the drag force experienced by an object moving slowly through a 2D granular material consisting of bidisperse disks. The drag force is dominated by force chain structures in the bulk of the system. The formation and failure of the force chains leads to strong fluctuations. We have considered the effect of three control parameters: the medium velocity, the packing fraction and the tracer particle size. Experimentally, we find that the mean drag force grows slowly (logarithmically) with the drag velocity, increases rapidly (power-law) with the packing fraction above a critical value, and varies nonlinearly with the size of the tracer particle. The system exhibits strong statistical invariance in the sense that many physical quantities collapse into a single curve under appropriate scaling: force distributions P($f$) collapse when scaled by the mean force, power spectra P($\omega$) collapse when scaled by the drag velocity, and avalanche size and duration distributions collapse when scaled by the mean values of these quantities. We also show that the system can be understood using a simple failure model, which reproduces many experimental observations including: a power law with exponent $\alpha=-2$ for the high-frequency portion of the power spectrum, exponential distributions for the avalanche size and duration, and an exponential fall-off at large forces for the force distributions. The logarithmic increase of the mean force with the drag velocity can also be accounted for if slow relaxation of the material is included. A number of questions remain. One of these is the nonlinear dependence of the drag force on the particle diameter. Heuristically, one might expect that the drag force would grow linearly in proportion to the number of force chains contacting the tracer, and that this would lead to a linear variation of the drag force with diameter. 
In this regard, the fact that the tracers used here were only somewhat larger than the grains is likely to be important. Obviously, the presence of weak rate dependence in the mean force is of interest, and its origin is still not clear. The relative elasticity of the particles (vs. the driving machinery) may be important in this regard, and future investigations with harder particles would be of interest. The frictional character of the drag force in the dense regime is clear in these experiments. It would be of interest to see what occurs as the packing fraction is reduced below $\gamma_c$. In the present experiments, the particles experience friction with the base, so that it is not possible to investigate the gas-like regime. We appreciate helpful interactions with R. Hartley and J. Matthews. The work was supported by the US National Science Foundation under Grants DMR-0137119, DMS-0204677, and DMS-0244492, and by NASA under Grant NAG3-2372. For a broad perspective see Focus Issue on Granular Materials, [*Chaos*]{} [**9**]{}, 509–696 (1999); [*Physics of Dry Granular Media*]{}, H. J. Herrmann, J.-P. Hovi, and S. Luding, eds. NATO ASI Series, Kluwer, 1997; [*Powders and Grains 97*]{}, R. P. Behringer and J. T. Jenkins, eds. Balkema, 1997; H. M. Jaeger, S. R. Nagel, and R. P. Behringer, Rev. Mod. Phys. [**68**]{}, 1259 (1996); P.-G. de Gennes, Rev. Mod. Phys. [**71**]{}, 374 (1999). P. Dantu, Géotechnique, [**18**]{}, 50 (1968); A. Drescher and G. De Josselin De Jong, J. Mech. Phys. Solids, [**20**]{}, 337 (1972); T. Travers et al., J. Phys. A [**19**]{}, L1033 (1986). G. W. Baxter and R. P. Behringer, Eur. J. Mech. B [**10**]{}, 181 (1991); C. H. Liu, and S. R. Nagel, Phys. Rev. Lett. [**68**]{}, 2301 (1992); G. W. Baxter, R. Leone and R. P. Behringer, Europhys. Lett. [**21**]{}, 569 (1993); A. Ngadi and J. Rajchenbach, Phys. Rev. Lett. [**80**]{}, 273 (1998); D. M. Mueth, H. M. Jaeger, and S. R. Nagel, Phys. Rev. E [**57**]{}, 3164 (1998). R. M. 
Nedderman, [*Statics and Kinematics of Granular Materials*]{}, Cambridge Univ. Press, Cambridge (1992). D. M. Wood, [*Soil Behaviour and Critical State Soil Mechanics*]{}, Cambridge University Press, Cambridge, England (1990). B. Miller, C. O’Hern and R.P. Behringer, Phys. Rev. Lett. [**77**]{}, 3110 (1996). Daniel Howell, R.P. Behringer and Christian Veje, Phys. Rev. Lett. [**82**]{}, 5241 (1999). R. Hartley and R.P. Behringer, Nature [**421**]{}, 928 (2003). C. S. Campbell, J. Fluid Mech. [**465**]{}, 261 (2002). I. Albert, P. Tegzes, B. Kahng, R. Albert, J.G. Sample, M. Pfeifer, A.L. Barabasi, T. Vicsek and P. Schiffer, Phys. Rev. Lett. [**84**]{}, 5122 (2000). R. Albert, M.A. Pfeifer, A.L. Barabasi and P. Schiffer, Phys. Rev. Lett. [**82**]{}, 205 (1999). I. Albert, P. Tegzes, R. Albert, J.G. Sample, A.L. Barabasi, T. Vicsek, B. Kahng, and P. Schiffer, Phys. Rev. E [**64**]{}, 031307 (2001). C.-h. Liu, S. R. Nagel, D. A. Schecter, S. N. Coppersmith, S. Majumdar, O. Narayan and T. A. Witten, Science [**269**]{}, 513 (1995); S. N. Coppersmith, C.-h. Liu, S. Majumdar, O. Narayan and T. A. Witten, Phys. Rev. E [**53**]{}, 4673 (1996). J. Geng and R. P. Behringer, Diffusion, mobility, and temperature in a stirred dense granular material, to be published (2003). B. Kahng, I. Albert, P. Schiffer and A.L. Barabasi, Phys. Rev. E [**64**]{}, 051303 (2001). A. L. Demirel and S. Granick, Phys. Rev. Lett. [**77**]{}, 4330 (1996). M.G. Rozman, M. Urbakh and J. Klafter, Phys. Rev. Lett. [**77**]{}, 683 (1996). A. J. Liu and S. R. Nagel, Jamming is not just cool anymore, [*Nature*]{} [**396**]{}, 21–22 (1998). M. E. Cates, J. P. Wittmer, J. -P. Bouchaud and P. Claudin, Phys. Rev. Lett. [**81**]{}, 1841 (1998). G. Reydellet and E. Clément, Phys. Rev. Lett. [**86**]{}, 3308 (2001). J. Geng, D. Howell, E. Longhi, R.P. Behringer, G. Reydellet, L. Vanel, E. Clément and S. Luding, Phys. Rev. Lett. [**87**]{}, 035506 (2001). J-P. Bouchaud, P. 
Claudin and D. Levine, M. Otto, Euro. Phys. J. [**E4**]{}, 451 (2001). J.E.S. Socolar, D.G. Schaeffer, P. Claudin, Euro. Phys. J. [**E7**]{}, 353 (2002). D. A. Head, A. V. Tkachenko, T. A. Witten, Euro. Phys. J. [**E6**]{}, 99 (2001). J. Duran, [*Sands, Powders, and Grains: An Introduction to the Physics of Granular Materials*]{} (Springer-Verlag, New York, 1999). V. Frette, K. Christensen, J. Feder, T. Jossang and P. Meakin, Nature [**379**]{}, 49 (1996). C.-h. Liu, S. R. Nagel, D. A. Schecter, S. N. Coppersmith, S. Majumdar, O. Narayan and T. A. Witten, Science [**269**]{}, 513 (1995); Phys. Rev. E [**53**]{}, 4673 (1996). Christophe Eloy and Eric Clément, J. Phys. I (France) [**7**]{}, 1541 (1997); Mario Nicodemi, Phys. Rev. Lett. [**80**]{}, 1340 (1998); Joshua E. S. Socolar, Phys. Rev. E [**57**]{}, 3204 (1998). F. Radjai, M. Jean, J.-J. Moreau, and S. Roux, Phys. Rev. Lett. [**77**]{}, 274 (1996). See for example, A. Ruina, J. Geophys. Res. [ **88**]{}, 10359 (1983); J. R. Rice and A. L. Ruina, J. Appl Mech. [ **50**]{}, 343 (1983); E. Rabinowicz, Proc. Phys. Soc. London [**71**]{}, 668 (1958); F. Heslot, T. Bauberger, B. Perrin, B. Caroli and C. Caroli, Phys. Rev. E [**49**]{}, 4973 (1994). K. Wieghardt, Annu. Rev. Fluid Mech. [**7**]{}, 89 (1975). R.P. Behringer, E. Clément, J. Geng, D. Howell, L. Kondic, G. Metcalfe, C. O’Hern, G. Reydellet, S. Tennakoon, L. Vanel, and C. Veje, Lecture Notes in Physics, vol. 567, pp. 351–391 (2001). R. B. Heywood, [*Designing by Photoelasticity*]{}. Chapman and Hall Ltd., London, (1952). W. H. Press, B. P. Flannery, S. A. Teukolsky, W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed., Cambridge University Press (1992). R.K. Pathria, Statistical Mechanics, 2nd ed., p. 464, Butterworth-Heinemann, Oxford, 1996. J.M. Ziman, Models of Disorder, Cambridge University Press, Cambridge (1979). J. Geng, E. Longhi, R.P. Behringer and D. Howell, Phys. Rev. E [**64**]{}, 060301(R) (2001). M.E. 
Cates, J.P. Wittmer, J.-P. Bouchaud, and P. Claudin, Phys. Rev. Lett. [**81**]{}, 1841 (1998). J. Geng, R.P. Behringer, G. Reydellet, and E. Clément, Physica D [**182**]{}, 274 (2003).
--- abstract: 'We present evidence for critical fluctuations in the strain rate of weakly vibrated granular flows. Strikingly, the critical point arises at [*finite*]{} values of the mean strain rate and vibration strength, far away from the yielding critical point at zero flow rate. We show that the global rheology, as well as the amplitude and correlation time of the fluctuations, are consistent with a mean-field, Landau-like description, where strain rate and stress act as conjugate variables. We introduce a general model which captures the observed phenomenology, and argue that this type of critical behavior generically arises when self-fluidization competes with friction.' author: - Geert Wortel - Olivier Dauchot - Martin van Hecke title: Criticality in vibrated frictional flows at finite strain rate --- Fluctuations play an essential role in flows of disordered media [@Hebraud:1998vo; @GDR; @behringer; @gutfraind; @lydericnature; @foamcouette; @kiri; @reddy; @CR; @dijksman; @wortelpre2; @schall; @clement; @ruiz; @pouliquen; @kamrin; @Bruno]. In the simplest scenario, such fluctuations are rate-independent, as in thermal systems or strongly vibrated granular flows [@clement; @ruiz; @jia]. New phenomena, such as nonlocal rheology, arise when fluctuations are generated by the flow itself, as observed for emulsions [@lydericnature], foams [@foamcouette], and granular matter [@kiri; @reddy; @CR; @dijksman; @kamrin; @pouliquen; @Bruno]. Granular media are particularly susceptible to flow-generated and externally provided fluctuations. First, the particles are so hard that tiny motions cause large fluctuations in the contact forces  [@umbanhowar; @CR]. Second, sliding friction is nearly rate independent, allowing subtle self-fluidization effects to qualitatively modify the slope of the flow curve, e.g., from neutral to negative [@heinrich; @hatano; @dijksman]. 
Indeed, self-fluidization is particularly spectacular for granular media: for example, the finite yield threshold, a hallmark of static granular media, completely vanishes in the presence of flow [*anywhere*]{} in the granulate [@kiri; @pouliquen; @kamrin; @Bruno; @dijksman]. ![(color online) Fluctuations and mean flow, characterized by $S:=\log(I)$, where $I$ is the inertial number [@noteI]. (a-b) Flow rate fluctuations $S(t)$ detected in stress controlled experiments. (a) Strong and slow fluctuations close to the critical point ($(T,\Gamma)=(0.84,0.71)$). (b) Small fluctuations away from the critical point (top: $(T,\Gamma)=(0.85,0.71)$, bottom: $(T,\Gamma)=(0.79,0.92)$). (c) Flow curves measuring stress $T$ as function of flow rate $S$ for a range of $\Gamma$ around $\Gamma_c$. The marginal flow curve at $\Gamma=\Gamma_c$ and corresponding critical point $(T_c,S_c)$ are indicated.](flow_curves_prl_abc.eps){width="\columnwidth"} \[rawflowcurves\] What is the precise role of fluctuations for granular flows? Can local fluctuations organize into strong and slow collective fluctuations? How can we model the mutual coupling between fluctuations and flow? To answer these questions, we probe the fluctuations and flow of weakly vibrated granular media sheared in a split-bottom cell [@kiri; @dijksman; @splibo]. First, by controlling the driving torque $T$ at finite shaking strength $\Gamma$, and measuring the time-resolved global flow rate $S(t)$, we reveal that fluctuations in $S$ become increasingly large and slow when $(T,\Gamma) \rightarrow (T_c,\Gamma_c^+)$ (Fig. 1a-b). Second, we show that rheological curves $T(S)$, obtained at fixed $S$, and their variation as a function of $\Gamma$ (Fig. 1c), can be captured in a mean-field type expansion around $(T_c,\Gamma_c)$. Together, these experiments demonstrate the existence of a finite flow-rate (FFR) critical point. 
While strong fluctuations have been studied near the zero flow-rate yielding point [@behringer], we stress that critical fluctuations at FFR critical points have not been reported before, presumably because they remain hidden in absence of an external source of vibrations. Finally, we introduce a general model that combines a microscopic frictional rheology with fluctuations of the microscopic stresses. This model successfully describes the experimentally observed $\Gamma$-dependent rheology and the emergence of the FFR critical point, naturally capturing the intricate coupling between stress, flow rate, and fluctuations. Our results suggest that the FFR critical point is robust, and that similar critical behavior may arise in other frictional or nearly rate-independent systems, leading to potentially hazardous fluctuations in previously overlooked flow regimes. [*Setup and phenomenology:*]{} The vertically vibrated split-bottom cell has been described in detail previously [@splibo; @dijksman; @wortelpre2; @splibonote]. In this system we drive granular flow by rotation of a disk and probe the driving torque $\tau$ and rotation rate $\Omega$ by a rheometer (Anton Paar DSR 301), which can be employed in rate or stress controlled modes. We vertically vibrate the system as $A \sin(2\pi f t)$, with $f=63$ Hz, and control the dimensionless vibration strength $\Gamma=A(2\pi f)^2/g$, where $g$ is the gravitational acceleration. The rheometer and vibrating flow cell are coupled through a flexure, and to accurately probe the disk rotation we use an optical angular encoder (Heidenhain ERO 2500) directly coupled to the disk. We express our results in dimensionless units $T:=\tau/\tau_y$, where $\tau_y$ is the dynamic yield torque in the absence of external vibrations, and $S:= \log(I)$, where $I$ is the inertial number defined for pressures and strain rates at half depth [@noteI]. 
[*Critical Fluctuations:*]{} We first perform experiments at constant torque $T$ and vibration amplitude $\Gamma>\Gamma_c$ and determine the magnitude and correlation time of the fluctuations in flow rate via the instantaneous angular position $\theta(t)$ of the bottom disk. We extract the rotation rate $\omega(t):=\partial_t \theta(t)$, after carefully checking that $\theta(t)$ is probed at sufficiently high temporal resolution. We then compute the averaged flow rate $\Omega = \left< \omega\right>$, the amplitude of its fluctuations $\sigma_{\omega}^2 = \left < \delta\omega^2 \right > $ and the temporal correlations $R(\tau) = \left < \delta\omega(t+\tau) \delta\omega(t)\right > / \sigma_{\omega}^2$, where $\delta\omega = \omega - \Omega$ and $\left < \cdot \right >$ denote temporal averages. The correlation time $\tau_c$ is extracted by fitting the autocorrelation to an exponential (and is consistent with the time obtained by integrating the correlation function). Fig. \[flucts\]a-b display the resulting dimensionless fluctuation amplitude $\sigma_{\omega}/\Omega$ and dimensionless correlation time $\tau_c \Omega $ as a function of the relative torque $T^*(\Gamma)=\left(T-T_i(\Gamma)\right)/T_i(\Gamma)$, where $T_i(\Gamma)$ is the inflection point in the flow curves, for different values of $\Gamma>\Gamma_c$. There is a sharp contrast between the fluctuations at either side of the peaks, which we interpret as signaling two qualitatively different flow regimes: a vibration dominated creep regime (strong but short-time correlated fluctuations) and a fast inertial flow regime (small fluctuations with time scale $\approx \Omega^{-1}$). Crucially, there is a sharp transition between these regimes: both the fluctuation amplitude $\sigma_{\omega}/\Omega$ and correlation time $\tau_c \Omega $ exhibit a sharp maximum at $T^*\simeq 0$, which rapidly grows when $\Gamma^*=(\Gamma-\Gamma_c)/\Gamma_c \rightarrow 0^+$. 
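The correlation-time estimate can be sketched numerically; instead of the full exponential fit used in the text, this toy version simply locates the lag where the normalized autocorrelation $R(\tau)$ first drops below $1/e$, and tests it on a synthetic AR(1) series whose correlation time is known ($\tau_c = -1/\ln a$). The signal model and parameters are illustrative, not experimental data:

```python
import math
import random

def correlation_time(x, dt=1.0):
    """Correlation time of a series: the lag at which the normalized
    autocorrelation R(tau) first drops below 1/e (a cheap stand-in for
    fitting the autocorrelation to an exponential)."""
    n = len(x)
    m = sum(x) / n
    d = [v - m for v in x]
    var = sum(v * v for v in d) / n
    for lag in range(1, n // 2):
        r = sum(d[i] * d[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if r < math.exp(-1):
            return lag * dt
    return float("nan")

# Synthetic AR(1) test signal with known correlation time -1/ln(a) ~ 19.5
rng = random.Random(4)
a, x, v = 0.95, [], 0.0
for _ in range(20_000):
    v = a * v + rng.gauss(0.0, 1.0)
    x.append(v)
```

Applied to the measured $\omega(t)$, such an estimator recovers the growth of $\tau_c$ near the critical point, provided the series is much longer than $\tau_c$ itself.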
To check the robustness of our measurements, we have also determined the fluctuation magnitude and correlation by considering the rotating disk as a massive random walker with drift. We thus characterize the mean square angular displacement $\Delta\theta(\tau)^2=\left <(\theta(t+\tau) - \theta (t))^2\right>$. The amplitude of the flow rate fluctuations $\tilde\sigma_{\omega}$ and the correlation time $\tilde\tau_c$ are then extracted from the asymptotics: for $\tau/\tilde\tau_c \ll 1$, we observe ballistic dynamics with $\Delta\theta(\tau)^2 \sim \tilde\sigma_{\omega}^2 \tau^2$, while for $\tau/\tilde\tau_c \gg 1$ the dynamics is diffusive, with $\Delta\theta(\tau)^2 \sim 2 \tilde\sigma_{\omega}^2 \tilde\tau_c \tau$. These two independent protocols yield consistent results, as shown in the insets of Fig. \[flucts\]a-b. The peak in the flow rate fluctuations diverges in a manner consistent with a power-law scaling $\sim {\Gamma^*}^{-\tilde\gamma}$, with $\tilde\gamma \approx 0.5$ (Fig. \[flucts\]c). The correlation times are too noisy to be reliably fitted to a power law ${\Gamma^*}^{-\mu}$, but if any, $\mu\in[0.5, 1]$ (Fig. \[flucts\]d). Together, these signals provide strong evidence for critical behavior. ![(color online) Critical fluctuations. (a) Fluctuation magnitude $\sigma_{\omega}/\Omega$ as function of $T^*$ for $\Gamma$ from 0.65 to 0.94 — red (peaked) curves have $\Gamma$ close to $\Gamma_c$. Inset: $\tilde\sigma_{\omega}$ and $\sigma_{\omega}$ are essentially equal. (b) Correlation time $\tau_c\Omega$ as function of $T^*$. Inset: $\tilde\tau_c$ scales linearly with $\tau_c$. 
(c)-(d) Evidence for critical scaling of $\sigma_{\omega}^{*}$ and $\tau_c^{*}$, respectively, with $\Gamma^*$ in two datasets taken several weeks apart.[]{data-label="flucts"}](prl_fig3_newspacing.eps){width="\columnwidth"} *Scaling of the flow curves:* The critical behavior reported above suggests that the torque and flow rate should be related via a Landau-type expansion in the critical regime: $$T= a (S-S_i)^3 + b (S-S_i)+T_i~, \label{fit}$$ where $(T_i(\Gamma),S_i(\Gamma))$ are the inflection points of the flow curves. In order to probe this relation, we perform rate controlled experiments, in which we can also access the negative slope regime. The flow curves, shown in Fig. \[rawflowcurves\]c, are indeed reminiscent of a third-order polynomial. Fitting the data accordingly, we extract $(S_i,T_i)$, $a$ and $b$, and the local maximum $(S_{+},T_{+})$ and minimum $(S_{-},T_{-})$ as a function of $\Gamma$. As shown in Fig. \[flowcurves-fit\]a, the flow curves can be rescaled onto two distinct branches, below and above $\Gamma_c$, over a substantial range. As expected, the cubic coefficient $a$ remains essentially constant ($a \simeq 2$, not shown here). The coefficient $b$, which sets the slope at the inflection point, akin to an inverse susceptibility $\chi^{-1}$, crosses zero at $\Gamma=\Gamma_c$ and increases linearly with $\Gamma^*$. The location of the extrema $(S_{\pm},T_{\pm})$ in the $(S,\Gamma)$ and $(T,\Gamma)$ planes, displayed in Figs. \[flowcurves-fit\]c-d, together with the location of the inflection point $(S_i,T_i)$, determine the so-called spinodal lines, which are the stability limits of the fast and slow flow phases. The region of “coexistence” corresponds here to the set of parameters for which the flow curves have a negative slope. The width of this region $\Delta = S_+ - S_-$ scales like $|\Gamma^*|^{\beta}$, with $\beta = 0.5$ for $\Gamma^*<0$, in agreement with Eq. (\[fit\]) and the linear dependence of $b$ on $\Gamma^*$. 
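Fitting Eq. (\[fit\]) reduces to an ordinary cubic fit: writing $T = c_3 S^3 + c_2 S^2 + c_1 S + c_0$, the inflection point is $S_i = -c_2/(3c_3)$, the cubic coefficient is $a = c_3$, and $b$ is the slope at $S_i$, namely $b = c_1 - c_2^2/(3c_3)$. A sketch on synthetic flow-curve data (the coefficient values below are illustrative, not the experimental fit results):

```python
import numpy as np

def landau_fit(S, T):
    """Fit T(S) to a cubic and return (a, b, S_i, T_i) in the Landau form
    T = a (S - S_i)^3 + b (S - S_i) + T_i, with (S_i, T_i) the inflection
    point and b the slope there (the inverse susceptibility)."""
    c3, c2, c1, c0 = np.polyfit(S, T, 3)
    S_i = -c2 / (3 * c3)                      # inflection: T'' = 0
    T_i = ((c3 * S_i + c2) * S_i + c1) * S_i + c0
    b = c1 - c2 ** 2 / (3 * c3)               # dT/dS at the inflection
    return c3, b, S_i, T_i

# Synthetic flow curve with a = 2, b = -0.1 (negative slope: Gamma < Gamma_c)
S = np.linspace(-1.0, 1.0, 41)
T = 2 * (S - 0.1) ** 3 - 0.1 * (S - 0.1) + 0.8
```

For $b<0$ the extrema follow from $T'(S)=0$ as $S_\pm = S_i \pm \sqrt{-b/(3a)}$, which is the origin of the $\Delta \sim |\Gamma^*|^{1/2}$ scaling when $b$ varies linearly with $\Gamma^*$.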
![(color online) (a) Flow curve data collapsed onto master curves. (b) Inverse susceptibility $\chi^{-1}$ vs. $\Gamma$. Inset : log-log plot of $\Delta = S_+ - S_-$ as a function of $-\Gamma^*$. (c-d) Location of the extrema $(S_{\pm},T_{\pm})$ and the inflection point $(S_i,T_i)$ in the planes $(\Gamma, S)$ and $(\Gamma, T)$. The vertical dashed line indicates $\Gamma_c$.[]{data-label="flowcurves-fit"}](prl_fig2.eps){width="\columnwidth"} [*FFR Critical Point:*]{} Our data for both stress-controlled and strain-rate controlled experiments provide strong evidence for the existence of a critical point at finite flow rate, characterized by the following scaling relations: $$\begin{aligned} \Delta \sim& {\Gamma^*}^{\beta}; \quad \beta &\simeq 0.5 \\ \chi \sim& {\Gamma^*}^{-\gamma}; \quad \gamma &\simeq 1 \\ \sigma_\omega/\Omega \sim& {\Gamma^*}^{-\tilde\gamma}; \quad \tilde\gamma &\simeq 0.5 \\ \tau_c \Omega \sim& {\Gamma^*}^{-\mu}; \quad \mu & \in [0.5, 1] .\end{aligned}$$ We note that in stress control experiments, $\Gamma_c=0.65\pm 0.01$, as determined from both the zero slope inflection point of the flow curves, and the diverging fluctuations, while in strain rate controlled experiments, $\Gamma_c = 0.46 \pm 0.01$, as determined from the zero slope inflection point of the flow curves. We believe this difference to be due to the complex combination of large intrinsic fluctuations, finite size effects, and the non-perfect feedback loop of the rheometer in rate controlled experiments. The fact that the critical behavior of $\chi$, obtained for averaged quantities in strain rate controlled experiments, and that of $\sigma_{\omega}^2$, obtained from the fluctuations in stress controlled experiments, coincide is a strong indication of the relevance of our analysis. Both the value of the exponents and the quality of the description of the flow curves by Eq. (\[fit\]) suggest that a mean field description should capture the essence of the observed phenomenology. 
[*Flow Model:*]{} We finally introduce a general fluctuation-frictional (FF) model that captures the observed rheology. We combine a frictional local rheology with fluctuations that are induced by both vibrations and flow, and show that the average rheology of this model exhibits all the experimentally observed hallmarks, including the FFR critical point. First, we introduce an *agitation strength* $A$, which is a function of both flow-induced and vibration-induced agitations [@footnoteIS]: $$A=A_g(\Gamma,I)~, \label{agi}$$ where $A_g=0$ only when both $\Gamma$ and $I$ are zero. Second, we postulate that the local stresses $T_m$ are fluctuating around their mean $T$, and that the microscopic stress distribution $P(T_m)=\bar{P}((T_m\!-\!T)/A)$, where $\bar{P}(x)$ is a given normalized distribution centered at $x=0$ — note that $A$ sets the width of $P(T_m)$. Third, we determine the global flow rate $I$ as the mean of the microscopic flow rates $I_m$, where $I_m$ and $T_m$ are related by simple frictional Herschel-Bulkley rheology with a finite yield stress — in particular, $I_m=0$ when $|T_m|<1$. Combining these ingredients, we find: $$I(A,T)=\!\int_{-\infty}^{\infty} \frac{dT_m}{A} ~ ~ \bar{P}\left(\frac{T_m-T}{A}\right) ~ I_m(T_m) ~. \label{Ieqmain}$$ For a prescribed set of functions $A_g$, $\bar{P}$ and $I_m(T_m)$, Eqs. \[agi\]-\[Ieqmain\] completely set the flow curves $T(I,\Gamma)$. We start with a definite choice of $A_g$, $\bar{P}$ and $I_m$ [@detailnote] and then show that our conclusions are insensitive to this choice; details are provided in the Supp. Mat. [@sup]. The FF model exhibits the experimentally observed singularity of the stress $T(I,A)$ at the origin: when $A=0$, Eq. (\[Ieqmain\]) implies that $T_m=T$, $I_m=I$, so that the macro rheology is identical to the microscopic rheology and $T(0^+,0)=1$. In contrast, when $A>0$ but $I=0$, Eq. 
(\[Ieqmain\]) implies that the stress distribution must be symmetric around zero — therefore $T=0$, and in particular $T(0,0^+)=0$. The model thus captures the discontinuous vanishing of the yield stress when $\Gamma$ becomes finite. We now show that the FF model captures all qualitative features of the rheology of weakly vibrated flows. The solutions to this model can be understood by considering the variation of $\Gamma$ and $T$ in the $(I,A)$-plane (Fig. \[model\]). The flow curves $T(I,\Gamma)$ can be determined graphically via the intersections of the contour curves of $T$ and $\Gamma$, by fixing $\Gamma$, varying $T$, and determining the corresponding value(s) of $I$. For $T>1$ there is only one intersection, corresponding to rapid flows, and in the remainder we focus on $T\le 1$. [*(i)*]{} For large $\Gamma$, there is only one intersection (black dot), leading to monotonic flow curves $T(I)$. [*(ii)*]{} For small $\Gamma$, there are three intersections (crosses), corresponding to non-monotonic flow curves. [*(iii)*]{} In between these two regimes is the critical $\Gamma_c$ curve (red), for which the three intersection points merge (red dot). [*(iv)*]{} Finally, for $\Gamma=0$, there are precisely two intersections, corresponding to the only flow curve that has finite $T$ and negative slope at $I=0$. We stress that the scenario that emerges captures the essence of the experimental flow curves shown in Fig. 1a, without having to make any assumptions about the behavior of $A$ near the FFR critical point. Clearly, the essence of this scenario does not depend on the details of the agitation function $A_g$, distribution $\bar{P}$ and local rheology $I_m(T_m)$. The only condition is that the $\Gamma=0$ curve is steeper than the $T=1$ contour at the origin, such that there are two intersections between the $\Gamma=0$ and $T=1$ curves — for other examples of flow curves, including cases where this condition is violated, see the Supp. Mat. [@sup].
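The same construction can be carried out numerically. For the exponential kernel and the $\alpha=1$ Herschel-Bulkley law of [@detailnote], the integral in Eq. (\[Ieqmain\]) has a simple closed form, and inverting it at fixed $\Gamma$ reproduces regimes (i) and (ii): monotonic flow curves at large $\Gamma$ and non-monotonic ones at small $\Gamma$. The sketch below is our own illustration, not the authors' code; we write the saturating agitation function with a decaying exponential, $A_g=\Gamma+(1-e^{-I/I_0})$, so that $A$ grows and saturates with $I$:

```python
import math

def I_of(A, T):
    """Closed form of Eq. (Ieqmain) for P(x) = exp(-|x|)/2 and the
    alpha = 1 Herschel-Bulkley law, written for T >= 0."""
    if T <= 1.0:
        return 0.5 * A * (math.exp(-(1.0 - T) / A) - math.exp(-(1.0 + T) / A))
    return (T - 1.0) + 0.5 * A * (math.exp(-(T - 1.0) / A) - math.exp(-(T + 1.0) / A))

def flow_curve_T(Gamma, I, I0=0.25):
    """Stress T(I, Gamma): fix A via the agitation function
    A_g = Gamma + (1 - exp(-I/I0)), then invert I_of in T by bisection
    (I_of is monotonically increasing in T at fixed A)."""
    A = Gamma + (1.0 - math.exp(-I / I0))
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if I_of(A, mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Large Gamma: monotonic flow curve; small Gamma: non-monotonic.
T_high = [flow_curve_T(0.9, i) for i in (0.01, 0.1, 0.5, 1.0)]
T_low = [flow_curve_T(0.05, i) for i in (0.001, 0.01, 0.05, 0.2)]
```

At $\Gamma=0.9$ the sampled stresses increase with $I$, while at $\Gamma=0.05$ the stress first rises steeply and then dips, the fingerprint of the non-monotonic regime.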
*Discussion:* We have uncovered a dynamical critical point in agitated frictional flows. We stress again that the concomitant large fluctuations arise at finite flow rates, away from the yielding point where strong fluctuations have been seen before [@behringer], which makes this an experimentally accessible yet nontrivial critical point that deserves further investigation. We argued that this criticality emerges from the interplay of external vibration and self-fluidization. Both act as sources of agitation, but influence the rheology very differently: while external agitations set the yield stress to zero and impose a positive slope in the limit of zero flow rate, flow-induced fluctuations cause a negative slope in the flow curves, at least in the absence of externally provided fluctuations. The FFR critical behavior emerges from the competition between these two effects. From a more theoretical point of view, the model we have introduced here is purely phenomenological and essentially mean field. An alternative, more complete strategy would be to write down a dynamical equation for the microscopic stress distribution, as introduced in the Hébraud-Lequeux [@Hebraud:1998vo] and related fluidity models [@Bocquet:2009kt; @Mansard:2011cw], which at present do not capture the critical scaling of the flow curves nor the diverging fluctuations. In these models the self-fluidization is captured by a diffusion term for the local stresses, the amplitude of which is linearly related to the amount of stress exceeding the local yield stress. Adding any finite amount of external noise via a constant term in the diffusion amplitude, we expect the dynamical yield stress to vanish as observed here. Then, following [@Mansard:2011cw], one could work out the relation between model parameters and flow rate, and hopefully obtain the observed non-monotonic rheology. Stress fluctuations could also be taken into account following the very recent work by Agoritsas et al. [@Agoritsas:2015wj].
It is an open question whether such models can exhibit a negative slope at zero flow rate in the absence of external vibrations. *Acknowledgments –* We thank J. Dijksman, J. Mesman, H. Eerkens, E. Agoritsas, E. Bertin and K. Martens for discussions, technical support, and early experiments, and FOM/NWO for funding. [99]{} P. Hébraud and F. Lequeux, Phys. Rev. Lett. [**81**]{}, 2934 (1998). D. Howell, R. P. Behringer and C. Veje, Phys. Rev. Lett. [**82**]{}, 5241 (1999). GDR MiDi Collaboration, Eur. Phys. J. E [**14**]{}, 341 (2004). O. Pouliquen and R. Gutfraind, Phys. Rev. E [**53**]{}, 552 (1996). J. Goyon, A. Colin, G. Ovarlez, A. Ajdari, and L. Bocquet, Nature **454**, 84 (2008). G. Katgert, B. P. Tighe, M. E. Möbius and M. van Hecke, EPL [**90**]{}, 54002 (2010). K. A. Reddy, Y. Forterre and O. Pouliquen, Phys. Rev. Lett. [**106**]{}, 108301 (2011). M. van Hecke, Comptes Rendus Physique [**16**]{}, 37 (2015). K. Nichol, A. Zanin, R. Bastien, E. Wandersman and M. van Hecke, Phys. Rev. Lett. [**104**]{}, 078302 (2010); K. Nichol and M. van Hecke, Phys. Rev. E **85**, 061309 (2012). J. Dijksman, G. Wortel, L. van Dellen, O. Dauchot, and M. van Hecke, Phys. Rev. Lett. **107**, 108303 (2011); G. Wortel, J. Dijksman, and M. van Hecke, Phys. Rev. E **89**, 012202 (2014). K. Kamrin and G. Koval, Phys. Rev. Lett. **108**, 178301 (2012); D. Henann and K. Kamrin, PNAS **110**, 6730 (2013); D. Henann and K. Kamrin, Phys. Rev. Lett. **113**, 178001 (2014). M. Bouzid, M. Trulsson, P. Claudin, E. Clement and B. Andreotti, Phys. Rev. Lett. [**111**]{}, 238301 (2013). Y. Forterre and O. Pouliquen, Annu. Rev. Fluid Mech. **40**, 1 (2008). G. Wortel and M. van Hecke, Phys. Rev. E **92**, 040201(R) (2015). P. Schall and M. van Hecke, Ann. Rev. Fl. Mech. [**42**]{}, 67 (2010). G. Caballero-Robledo and E. Clément, Eur. Phys. J. E Soft Matter **30**, 395 (2009). J. Javier Brey, M. J. Ruiz-Montero, and F. Moreno, Phys. Rev. E **63**, 061305 (2001). X. Jia, T. Brunet and J.
Laurent, Phys. Rev. E [**84**]{}, 020301 (2011). P. Umbanhowar and M. van Hecke, Phys. Rev. E [**72**]{}, 030301 (2005). H. Jaeger, C.-H. Liu, S. Nagel, and T. Witten, Europhys. Lett. **11**, 619 (1990). O. Kuwano, T. Hatano and R. Ando, Geophys. Res. Lett. **40**, 1295 (2013). D. Fenistein, J.-W. van der Meent, and M. van Hecke, Phys. Rev. Lett. **92**, 094301 (2004); D. Fenistein, J.-W. van der Meent, and M. van Hecke, Phys. Rev. Lett. **96**, 118001 (2006); J. Dijksman and M. van Hecke, Soft Matter **6**, 2901 (2010). Split-bottom flow geometries produce smooth, robust, and well controlled granular flows. Our cell consists of an acrylic cylindrical container (inner radius 7.0 cm), at the bottom of which a rotating disk of radius $r_s=4$ cm drives the flow. To ensure no-slip boundary conditions, the top surface of the disk and the cylinder bottom are made rough. The cell is filled to a height $H$ of 24 mm with black soda-lime glass beads with diameters ranging from 1 to 1.3 mm. All experiments are carried out under ambient temperature, pressure and relative humidity, and our experiments reproduce well over the course of several years — more details in [@dijksman; @wortelpre2]. We use the standard definition of $I:=\dot{\gamma}d/\sqrt{P/\rho}$ [@GDR], which, based on our estimates of typical values of $\dot{\gamma}$ throughout the flowing zone, amounts to $I=(0.3 s) \Omega$ [@splibo]. We use $I$ rather than $S$ in the model, as we want to consider the absence of flow at $I=0$. We take $A_g(\Gamma, I)=\Gamma+(1-\exp(-4 I))$, which captures that both vibrations and flow induce fluctuations, which saturate for large flow rates, $\bar{P}(x)=1/2\exp(-|x|)$, and $T_m$ follows Herschel-Bulkley rheology: $T_m=\mbox{sign}(I_m) \left[1+|I_m|\right]$, with $I_m=0$ when $|T_m|<1$. See Supplemental Material at -insert link - L. Bocquet, A. Colin, and A. Ajdari, Phys. Rev. Lett. [**103**]{}, 036001 (2009). V. Mansard, A. Colin, P. Chauduri, and L.
Bocquet, Soft Matter [**7**]{}, 5524 (2011). E. Agoritsas, E. M. Bertin, K. Martens, and J.-L. Barrat, Eur. Phys. J. E [**38**]{}, 71 (2015).   [*Supplemental Material* ]{} Here we provide a detailed discussion of the FF model that relates fluctuations, flow and stress. The aim of this model is to explore how the main experimental findings for slow, i.e., non-inertial, weakly vibrated granular flows can be captured by introducing a scalar agitation strength $A$. We note that the experimentally observed increase of $T(I)$ for large values of $I$ is due to inertial effects that our model does not necessarily capture. The central idea of the FF model is illustrated in Fig. \[idea\]. We assume that in the presence of agitations of magnitude $A$, a macroscopically applied stress $T$ leads to a distribution of microscopic stresses $P(T_m)$. We assume $P(T_m)$ to be symmetric, peaked around $T_m=T$, and with a width proportional to $A$ — in the limit of vanishing agitation strength, $P(T_m)$ approaches a $\delta$-function (Fig. \[idea\]). For definiteness, we assume here that $P$ has exponential tails, although this is not essential for our qualitative picture: $$P(T_m)=(2A)^{-1} \exp(-|T_m-T|/A)~. \label{SIP}$$ We furthermore assume that a local rheology relates a distribution of flow rates $I_m$ to $P(T_m)$, and that the global flow rate $I$ is the mean of $I_m$: $$I(A,T)=\!\int_{-\infty}^{\infty} dT_m ~{ P}\left(T_m\right) ~ I_m(T_m) ~. \label{SIint}$$ For the local rheology we take a Herschel-Bulkley form with unit yield stress, i.e., $$T_m=\mbox{sign}(I_m) \left[1+|I_m|^{\alpha}\right]~, \label{SIHB}$$ where for simplicity we set $\alpha=1$. This captures the essential feature of a microscopic flow threshold, as $I_m=0$ in the jammed region $|T_m|<1$. An important consequence of Eqs. (\[SIP\])-(\[SIHB\]) is the singular difference in global flow rheology between $A=0$ and $A\ne 0$.
When $A=0$, the global flow follows the local rheology, and there will only be flow when $|T|>1$. However, when $A\ne0$, $P(T_m)$ has a finite weight outside the jammed region, and $I$ will be finite for any nonzero value of $T$. In other words, $\lim_{A\rightarrow 0} (\lim_{I \rightarrow 0} T) = 0$, whereas $\lim_{I\rightarrow 0} (\lim_{A \rightarrow 0} T) = 1$. What sets the agitation strength? Clearly, $A=0$ when both $\Gamma$ and $I$ are zero, and $A$ should be non-zero when either $I$ or $\Gamma$ is finite. This strongly suggests that $A_g(\Gamma,I)$ is linear in both $\Gamma$ and $I$ for small values of these arguments. Below, we show the results for two functional forms of $A$. The simplest linear agitation function reads: $$A_g(\Gamma,I)= \kappa \Gamma + \lambda I~, \label{lina}$$ where we can set $\kappa=1$ by overall scaling of $A$, and where we will show $\lambda$ to be an important parameter. In the main text, we have taken a slightly more complex form for $A_g(\Gamma, I)$: $$A_g(\Gamma,I)= \Gamma+(1-\exp(-I/I_0)), \mbox{ with } I_0=0.25~, \label{expa}$$ where the exponential form is motivated by the observation that the flow-induced fluctuations likely saturate at large flow rates. [*Qualitative Properties:*]{} Even without solving the coupled equations (\[SIP\]-\[expa\]), most qualitative properties of their solutions can be shown in a straightforward manner to match our experimental findings. The first two properties follow from the singular difference in global flow rheology between $A=0$ and $A\ne 0$ discussed above, and capture the singular vanishing of the yield stress when $\Gamma$ becomes finite:\ [*(1)*]{} When $\Gamma \ne 0$ and $T \ne 0$, there is flow.\ [*(2)*]{} When $\Gamma=0$, the system is jammed for $|T|<1$.\ A corollary of the first property is that when $\Gamma \ne 0$, the stress smoothly goes to zero when $I\rightarrow 0$, implying that the flow curve $T(I)$ has a positive slope for small $I$.
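The order of the two limits can be checked directly on Eqs. (\[SIP\])-(\[SIHB\]). For $T\le 1$ the integral (\[SIint\]) evaluates to $I=(A/2)\big(e^{-(1-T)/A}-e^{-(1+T)/A}\big)$; the sketch below (ours, not part of the paper) inverts this by bisection, clamped to $T\in[0,1]$, and exhibits both limits: at fixed $A>0$ the stress vanishes smoothly as $I\to 0$, whereas at fixed $I>0$ the stress pins at the yield value $T=1$ as $A\to 0$:

```python
import math

def T_of(A, I):
    """Invert I = (A/2)(exp(-(1-T)/A) - exp(-(1+T)/A)) for T in [0, 1]
    by bisection; when I exceeds the flow rate attainable at T = 1,
    the result saturates at the yield value T = 1."""
    def I_of(T):
        return 0.5 * A * (math.exp(-(1.0 - T) / A) - math.exp(-(1.0 + T) / A))
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if I_of(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, `T_of(0.2, 1e-9)` is essentially zero (inner limit $I\to 0$ at finite agitation), while `T_of(1e-3, 1e-3)` is pinned near $1$ (inner limit $A\to 0$ at finite flow rate).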
The slope of the flow curve for $\Gamma=0$ requires a more subtle analysis, involving the rheological curves. These rheological curves $T(I)$ for fixed $\Gamma$ can be obtained graphically, by considering both $T$ and $\Gamma$ as a function of $I$ and $A$. To do so, we fix $T$ and vary $A$ to obtain $I$ using Eq. (\[SIint\]), which yields contours of fixed $T$, and use Eq. (\[lina\]) or (\[expa\]) to plot $A(\Gamma,I)$ at fixed $\Gamma$, yielding $\Gamma$-contours. In Fig. \[SIZO\]a we show these curves, for the agitation function given by Eq. (\[expa\]). Fixing $\Gamma$, the rheology, i.e., $I(T)$, follows from intersections of the pertaining $\Gamma$-contours and $T$-contours, where we note that there may be multiple solutions. Fig. \[SIZO\]b-c schematically illustrate that it is the relative slope of the $\Gamma$-contours and $T$-contours which determines whether the flow curve $T(I)$ has a positive or negative slope. In particular, the question of whether the flow curve for $\Gamma=0$ has a negative slope for small $I$ can be answered by inspecting the slopes of the $\Gamma=0$ curve and the $T=1$ curve which meet at the origin (Fig. \[SIZO\]a). Expanding Eq. (\[SIint\]) for small $A$ yields that $A \approx 2 I$ along the $T=1$ contour. Hence, as long as $\partial_I A|_{I=0}>2$ (evaluated along the $\Gamma=0$ contour), the $\Gamma=0$ flow curve will have a negative slope. This is manifestly true for the agitation function given by Eq. (\[expa\]), and is also true for the agitation function given by Eq. (\[lina\]), provided that $\lambda >2$. In Figs. \[l3\] and \[l15\] we show examples of the contour plots and corresponding rheological curves for $\lambda=3$ and $\lambda=1.5$. The former clearly exhibits a $\Gamma=0$ flow curve with negative slope, and a corresponding FFR critical point: details of the agitation function are not important for the overall scenario.
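For the linear agitation function (\[lina\]) at $\Gamma=0$ this criterion can be made fully explicit: with $A=\lambda I$ and the exponentially small negative-tail term dropped, the $T\le 1$ branch of Eq. (\[SIint\]) gives $e^{-(1-T)/(\lambda I)}=2/\lambda$, i.e. $T=1-\lambda\ln(\lambda/2)\,I$, a solution with negative slope precisely when $\lambda>2$. A quick numerical check of this expansion (our illustration, not the authors' code):

```python
import math

def residual(lam, I):
    """Residual of I = (A/2) exp(-(1-T)/A) with A = lam * I at the
    candidate solution T = 1 - lam * ln(lam/2) * I (Gamma = 0, T <= 1;
    the exp(-(1+T)/A) tail is exponentially small and omitted)."""
    A = lam * I
    T = 1.0 - lam * math.log(lam / 2.0) * I
    return 0.5 * A * math.exp(-(1.0 - T) / A) - I

def small_I_slope(lam):
    """dT/dI of the Gamma = 0 flow curve at small I: negative iff lam > 2."""
    return -lam * math.log(lam / 2.0)
```

The residual vanishes identically, and `small_I_slope` changes sign exactly at $\lambda=2$, in agreement with the condition $\partial_I A|_{I=0}>2$.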
However, when $\lambda$ is too low, the scenario changes qualitatively: even though the origin is still singular, there is no longer a finite yield stress for $\Gamma=0$, no negative slope, and no FFR critical point (Fig. \[l15\]).
--- abstract: 'This paper is devoted to Nash equilibrium for games in capacities. Such games, with payoff expressed by the Choquet integral, were considered in [@KZ], where existence of Nash equilibrium was proved. We also consider games in capacities, but with expected payoff expressed by the Sugeno integral. We prove existence of Nash equilibrium using categorical methods and abstract convexity theory.' author: - Taras Radul title: Nash equilibrium with Sugeno payoff --- Institute of Mathematics, Casimirus the Great University, Bydgoszcz, Poland; Department of Mechanics and Mathematics, Lviv National University, Universytetska st., 1, 79000 Lviv, Ukraine. e-mail: tarasradul@yahoo.co.uk **Key words and phrases:** Nash equilibrium, game in capacities, Sugeno integral Introduction ============ The classical Nash equilibrium theory is based on fixed point theory and was developed within the framework of linear convexity. The mixed strategies of a player are probability (additive) measures on a set of pure strategies. But interest in Nash equilibria in more general frameworks has been growing rapidly in recent decades. There are also results about Nash equilibrium for non-linear convexities. For instance, Briec and Horvath proved in [@Ch] the existence of a Nash equilibrium point for $B$-convexity and MaxPlus convexity. Let us remark that MaxPlus convexity is related to idempotent (Maslov) measures in the same sense as linear convexity is related to probability measures. We can use additive measures only when we know precisely the probabilities of all events considered in a game. However, this is not the case in many modern economic models. The decision theory under uncertainty considers a model in which probabilities of states are either unknown or imprecisely specified. Gilboa [@Gil] and Schmeidler [@Sch] axiomatized expectations expressed by Choquet integrals attached to non-additive measures called capacities, as a formal approach to decision-making under uncertainty.
Dow and Werlang [@DW] generalized this approach to a two-player game in which the belief of each player about the choice of strategy by the other player is a capacity. This result was extended to games with an arbitrary finite number of players [@EK]. Kozhan and Zarichnyi introduced in [@KZ] a formal mathematical generalization of Dow and Werlang’s concept of Nash equilibrium, for a game where players are allowed not only to form non-additive beliefs about the opponents’ decisions but also to play mixed non-additive strategies. The authors call such a game a game in capacities. The expected payoff function was defined there using a Choquet integral. Kozhan and Zarichnyi proved their existence theorem using a linear convexity on the space of capacities which is preserved by the Choquet integral. The problem of the existence of Nash equilibrium for other functors was stated in [@KZ]. An alternative to the so-called Choquet expected utility model is qualitative decision theory. The corresponding expected utility is expressed by the Sugeno integral. See for example the papers [@DP], [@DP1], [@CH1], [@CH] and others. The Sugeno integral chooses a median value of utilities, which is a qualitative counterpart of the averaging operation performed by the Choquet integral. Following [@KZ], we introduce in this paper the general mathematical concept of Nash equilibrium for a game in capacities. However, motivated by the qualitative approach, we consider an expected payoff function defined by the Sugeno integral. To prove an existence theorem in this concrete case, we consider a more general framework which unifies all the situations mentioned above and gives us a method to prove theorems about the existence of Nash equilibrium in different contexts. We use categorical methods and abstract convexity theory. The notion of convexity considered in this paper is considerably broader than the classical one; specifically, it is not restricted to the context of linear spaces.
Such convexities appeared in the process of studying different structures like partially ordered sets, semilattices, lattices, superextensions, etc. We base our approach on the notion of topological convexity from [@vV], where general convexity theory is covered from the axioms to applications in different areas. In particular, a Kakutani fixed point theorem for abstract convexity is proved there. The above-mentioned constructions of the spaces of probability measures, idempotent measures and capacities are functorial and can be completed to monads (see [@RZ], [@Z] and [@NZ] for more details). A convexity structure on each ${\mathbb F}$-algebra, for any monad ${\mathbb F}$ in the category of compact Hausdorff spaces and continuous maps, was introduced in [@R1]. In particular, topological properties of monads with binary convexities were investigated. In this paper we prove a counterpart of the Nash theorem for an abstract convexity. In particular, we consider binary convexities. We use these results to obtain a Nash theorem for algebras of any L-monad with binary convexity. Since the capacity monad is an L-monad with binary convexity [@R2], we obtain the corresponding result for capacities as a corollary. Games in capacities =================== By ${\mathsf{Comp}}$ we denote the category of compact Hausdorff spaces (compacta) and continuous maps. For each compactum $X$ we denote by $C(X)$ the Banach space of all continuous functions on $X$ with the usual $\sup$-norm. In what follows, all spaces and maps are assumed to be in ${\mathsf{Comp}}$ except for ${\mathbb R}$ and maps in sets $C(X)$ with $X$ compact Hausdorff. We need the definition of a capacity on a compactum $X$. We follow the terminology of [@NZ]. A function $c$ which assigns to each closed subset $A$ of $X$ a real number $c(A)\in [0,1]$ is called an [*upper-semicontinuous capacity*]{} on $X$ if the following three properties hold for all closed subsets $F$ and $G$ of $X$: 1\. $c(X)=1$, $c(\emptyset)=0$, 2\.
if $F\subset G$, then $c(F)\le c(G)$, 3\. if $c(F)<a$, then there exists an open set $O\supset F$ such that $c(B)<a$ for each compactum $B\subset O$. We extend a capacity $c$ to all open subsets $U\subset X$ by the formula $c(U)=\sup\{c(K)\mid K$ is a closed subset of $X$ such that $K\subset U\}$. It was proved in [@NZ] that the space $MX$ of all upper-semicontinuous capacities on a compactum $X$ is a compactum as well, if a topology on $MX$ is defined by a subbase that consists of all sets of the form $O_-(F,a)=\{c\in MX\mid c(F)<a\}$, where $F$ is a closed subset of $X$, $a\in [0,1]$, and $O_+(U,a)=\{c\in MX\mid c(U)>a\}$, where $U$ is an open subset of $X$, $a\in [0,1]$. Since all capacities we consider here are upper-semicontinuous, in the following we call elements of $MX$ simply capacities. A tensor product for capacities, which is a continuous map $\otimes:MX_1\times\dots\times MX_n\to M(X_1\times\dots\times X_n)$, is considered in [@KZ]. Note that, although the space of capacities contains the space of probability measures, the tensor product of capacities does not extend the tensor product of probability measures. Due to Zhou [@Zh] we can identify the set $MX$ with some set of functionals defined on the space $C(X)$ using the Choquet integral. We consider for each $\mu\in MX$ its value on a function $f\in C(X)$ defined by the formula $$\mu(f)=\int fd\mu=\int_0^\infty\mu\{x\in X|f(x)\ge t\}dt+\int^0_{-\infty}(\mu\{x\in X|f(x)\ge t\}-1)dt$$ Let us recall the definition of Nash equilibrium. We consider an $n$-players game $f:X=\prod_{i=1}^n X_i\to{\mathbb R}^n$ with compact Hausdorff spaces of strategies $X_i$. The coordinate function $f_i:X\to {\mathbb R}$ is called the payoff function of the $i$-th player. For $x\in X$ and $t_i\in X_i$ we use the notation $(x;t_i)=(x_1,\dots,x_{i-1},t_i,x_{i+1},\dots,x_n)$. A point $x\in X$ is called a Nash equilibrium point if for each $i\in\{1,\dots,n\}$ and for each $t_i\in X_i$ we have $f_i(x;t_i)\le f_i(x)$.
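Returning to the Choquet integral defined above: on a finite set and for a non-negative function it reduces to a telescoping sum over level sets. The following self-contained sketch (a finite illustration of ours; the paper of course works with capacities on compacta, and the dictionary representation is an assumption of this snippet) makes the definition concrete:

```python
from itertools import combinations

def choquet_integral(f, capacity):
    """Discrete Choquet integral of a non-negative f: X -> R with respect
    to a capacity given as {frozenset: value}: the sum, over the
    increasingly ordered values, of (f_(i) - f_(i-1)) * capacity({f >= f_(i)})."""
    states = sorted(f, key=f.get)          # order states by their f-value
    remaining = set(f)                     # current level set {f >= f_(i)}
    total, prev = 0.0, 0.0
    for x in states:
        total += (f[x] - prev) * capacity[frozenset(remaining)]
        prev = f[x]
        remaining.remove(x)
    return total

# A capacity distorting the uniform measure on X = {a, b, c}: c(S) = (|S|/3)^2.
X = ['a', 'b', 'c']
capacity = {frozenset(S): (len(S) / 3) ** 2
            for r in range(len(X) + 1) for S in combinations(X, r)}
value = choquet_integral({'a': 1.0, 'b': 2.0, 'c': 3.0}, capacity)
```

For this convex distortion the Choquet value $1+\tfrac{4}{9}+\tfrac{1}{9}=\tfrac{14}{9}$ lies below the additive expectation $2$, the usual expression of ambiguity aversion.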
Kozhan and Zarichnyi proved in [@KZ] existence of Nash equilibrium for the game in capacities $ef:\prod_{i=1}^n MX_i\to{\mathbb R}^n$ with expected payoff functions defined by $$ef_i(\mu_1,\dots,\mu_n)=\int_{X_1\times\dots\times X_n}f_id(\mu_1\otimes\dots\otimes\mu_n)$$ Let us remark that the Choquet functional representation of capacities preserves the natural linear convexity structure on $MX$ which was used in the proof of existence of Nash equilibrium [@KZ]. However, this representation does not preserve the capacity monad structure. (We will introduce the notion of a monad in Section 4.) Another functional representation of capacities, using the Sugeno integral, was introduced in [@R2] (see also [@NR] for a similar result). This representation preserves the capacity monad structure. Let us describe this representation. Fix any increasing homeomorphism $\psi:(0,1)\to{\mathbb R}$. We put additionally $\psi(0)=-\infty$, $\psi(1)=+\infty$ and assume $-\infty<t<+\infty$ for each $t\in{\mathbb R}$. We consider for each $\mu\in MX$ its value on a function $f\in C(X)$ defined by the formula $$\mu(f)=\int_X^{Sug} fd\mu=\max\{t\in{\mathbb R}\mid \mu(f^{-1}([t,+\infty)))\ge\psi^{-1}(t)\}$$ Let us remark that we use some modification of the Sugeno integral. The original Sugeno integral [@Su] “ignores” function values outside the interval $[0,1]$, and we introduce a “correction” homeomorphism $\psi$ to avoid this problem. Now, following [@KZ], we consider a game in capacities $sf:\prod_{i=1}^n MX_i\to{\mathbb R}^n$, but motivated by [@DP], we consider Sugeno expected payoff functions defined by $$sf_i(\mu_1,\dots,\mu_n)=\int^{Sug}_{X_1\times\dots\times X_n}f_id(\mu_1\otimes\dots\otimes\mu_n)$$ The main goal of this paper is to prove existence of Nash equilibrium for such a game. Since the Sugeno integral does not preserve the linear convexity on $MX$, we cannot use the methods from [@KZ]. We will use another natural convexity structure which has the binarity property (has Helly number 2).
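For comparison with the Choquet case, it may help to compute the classical discrete Sugeno integral on $[0,1]$, namely $\max_x \min\big(f(x),\mu(\{f\ge f(x)\})\big)$; if $\psi$ were (a rescaling of) the identity on $[0,1]$, the $\psi$-corrected definition above would reduce to this median-like expression, the correction only serving to handle values outside $[0,1]$. Again a finite-set sketch of ours, not the paper's compact setting:

```python
from itertools import combinations

def sugeno_integral(f, capacity):
    """Classical discrete Sugeno integral of f: X -> [0, 1]:
    max over states x of min(f(x), capacity({y : f(y) >= f(x)}))."""
    best = 0.0
    for x in f:
        level_set = frozenset(y for y in f if f[y] >= f[x])
        best = max(best, min(f[x], capacity[level_set]))
    return best

# A capacity distorting the uniform measure on X = {a, b, c}: c(S) = (|S|/3)^2.
X = ['a', 'b', 'c']
capacity = {frozenset(S): (len(S) / 3) ** 2
            for r in range(len(X) + 1) for S in combinations(X, r)}
value = sugeno_integral({'a': 0.2, 'b': 0.5, 'c': 0.9}, capacity)
```

Here the maximum of the pairwise minima, $\max(0.2,\tfrac{4}{9},\tfrac{1}{9})=\tfrac{4}{9}$, is attained at the middle value: the Sugeno integral picks out a median-type value rather than an average.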
We will obtain some general results for such convexities which could be useful for investigating the existence of Nash equilibrium for diverse constructions. Finally, we will obtain the result for capacities as a corollary of these general results. Binary convexities ================== A family ${\mathcal C}$ of closed subsets of a compactum $X$ is called a [*convexity*]{} on $X$ if ${\mathcal C}$ is stable for intersection and contains $X$ and the empty set. Elements of ${\mathcal C}$ are called ${\mathcal C}$-convex (or simply convex). Although we follow the general concept of abstract convexity from [@vV], our definition is different. We consider only closed convex sets. Such a structure is called a closure structure in [@vV]. The whole family of convex sets in the sense of [@vV] can be obtained by taking unions of up-directed families. In what follows, we assume that each convexity contains all singletons. A convexity ${\mathcal C}$ on $X$ is called $T_2$ if for each distinct $x_1$, $x_2\in X$ there exist $S_1$, $S_2\in{\mathcal C}$ such that $S_1\cup S_2=X$, $x_1\notin S_2$ and $x_2\notin S_1$. Let us remark that if a convexity ${\mathcal C}$ on a compactum $X$ is $T_2$, then ${\mathcal C}$ is a subbase for closed sets. A convexity ${\mathcal C}$ on $X$ is called $T_4$ (normal) if for each disjoint $C_1$, $C_2\in {\mathcal C}$ there exist $S_1$, $S_2\in{\mathcal C}$ such that $S_1\cup S_2=X$, $C_1\cap S_2=\emptyset$ and $C_2\cap S_1=\emptyset$. Let $(X,{\mathcal C})$, $(Y,{\mathcal D})$ be two compacta with convexity structures. A continuous map $f:X\to Y$ is called a [*CP-map*]{} (convexity preserving map) if $f^{-1}(D)\in{\mathcal C}$ for each $D\in{\mathcal D}$; $f$ is called a [*CC-map*]{} (convex-to-convex map) if $f(C)\in{\mathcal D}$ for each $C\in{\mathcal C}$. By a multimap (set-valued map) of a set $X$ into a set $Y$ we mean a map $F:X\to 2^Y$. We use the notation $F:X\multimap Y$.
If $X$ and $Y$ are topological spaces, then a multimap $F:X\multimap Y$ is called upper semi-continuous (USC) provided that for each open set $O\subset Y$ the set $\{x\in X\mid F(x)\subset O\}$ is open in $X$. It is well-known that a multimap is USC iff its graph is closed in $X\times Y$. Let $F:X\multimap X$ be a multimap. We say that a point $x\in X$ is a fixed point of $F$ if $x\in F(x)$. The following counterpart of the Kakutani theorem for abstract convexity is a partial case of Theorem 3 from [@W] (it can also be obtained by combining Theorem 6.15, Ch. IV and Theorem 4.10, Ch. III from [@vV]). \[KA\] Let ${\mathcal C}$ be a normal convexity on a compactum $X$ such that all convex sets are connected and $F:X\multimap X$ is a USC multimap with values in ${\mathcal C}$. Then $F$ has a fixed point. Let ${\mathcal C}$ be a family of subsets of a compactum $X$. We say that ${\mathcal C}$ is [*linked*]{} if the intersection of any two of its elements is non-empty. A convexity ${\mathcal C}$ is called [*binary*]{} if the intersection of every linked subsystem of ${\mathcal C}$ is non-empty. \[BC\] Let ${\mathcal C}$ be a $T_2$ binary convexity on a continuum $X$. Then ${\mathcal C}$ is normal and all convex sets are connected. The first assertion of the lemma is proved in Lemma 3.1 of [@RZ]. Let us prove the second one. Consider any $A\in{\mathcal C}$. A retraction $h_A:X\to A$ was defined in [@MV] by the formula $h_A(x)=\cap\{C\in{\mathcal C}\mid x\in C$ and $C\cap A\ne\emptyset\}$. Hence $A$, being a retract of the continuum $X$, is connected, and the lemma is proved. Now we can reformulate Theorem \[KA\] for binary convexities. \[KB\] Let ${\mathcal C}$ be a $T_2$ binary convexity on a continuum $X$ and $F:X\multimap X$ be a USC multimap with values in ${\mathcal C}$. Then $F$ has a fixed point. Now, let ${\mathcal C}_i$ be a convexity on $X_i$.
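Before proceeding, a concrete model of a binary convexity to keep in mind: the closed subintervals of $[0,1]$, together with $\emptyset$, form a $T_2$ binary convexity, and binarity here is exactly Helly's theorem in dimension one — a linked (pairwise intersecting) family of intervals has a common point, namely any point of $[\max_i a_i, \min_i b_i]$. A sketch of this check (our illustration; the function names are ours):

```python
def linked(intervals):
    """True if every two intervals (a, b) in the family intersect."""
    return all(max(a1, a2) <= min(b1, b2)
               for i, (a1, b1) in enumerate(intervals)
               for (a2, b2) in intervals[i + 1:])

def common_point(intervals):
    """Helly number 2 for intervals: pairwise intersection already forces
    a global common point, any point of [max a_i, min b_i]."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return lo if lo <= hi else None

family = [(0.0, 0.5), (0.3, 0.8), (0.4, 0.6), (0.2, 0.7)]
```

Here `family` is linked, so `common_point` returns a point of the total intersection; for a non-linked family it returns `None`. For convexities that are not binary (e.g. Euclidean convexity in the plane), pairwise intersection no longer suffices, which is why binarity is a genuinely restrictive hypothesis.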
We say that the function $f_i:X\to{\mathbb R}$ is quasi concave by $i$-th coordinate if we have $(f_i^x)^{-1}([t;+\infty))\in{\mathcal C}_i$ for each $t\in{\mathbb R}$ and $x\in X$, where $f_i^x:X_i\to{\mathbb R}$ is the function defined as follows: $f_i^x(t_i)=f_i(x;t_i)$ for $t_i\in X_i$. \[NN\] Let $f:X=\prod_{i=1}^n X_i\to{\mathbb R}^n$ be a game with a normal convexity ${\mathcal C}_i$ defined on each compactum $X_i$ such that all convex sets are connected, the function $f$ is continuous and the function $f_i:X\to{\mathbb R}$ is quasi concave by $i$-th coordinate for each $i\in\{1,\dots,n\}$. Then there exists a Nash equilibrium point. Fix any $x\in X$. For each $i\in\{1,\dots,n\}$ consider the set $M_i^x\subset X_i$ defined as follows: $M_i^x=\{t\in X_i\mid f_i^x(t)=\max_{s\in X_i}f_i^x(s)\}$. We have that $M_i^x$ is a closed subset of $X_i$. Since the function $f_i:X\to{\mathbb R}$ is quasi concave by $i$-th coordinate, we have that $M_i^x\in{\mathcal C}_i$. Define a multimap $F:X\multimap X$ by the formula $F(x)=\prod_{i=1}^n M_i^x$ for $x\in X$. Let us show that $F$ is USC. Consider any point $(x,y)\in X\times X$ such that $y\notin F(x)$. Then there exists $i\in\{1,\dots,n\}$ such that $f_i^x(y_i)<\max_{s\in X_i}f_i^x(s)$. Hence we can choose $t_i\in X_i$ such that $f_i(x;y_i)<f_i(x;t_i)$. Since $f_i$ is continuous, there exist a neighborhood $O_x$ of $x$ in $X$ and a neighborhood $O_{y_i}$ of $y_i$ in $X_i$ such that for each $x'\in O_x$ and $y_i'\in O_{y_i}$ we have $f_i(x';y_i')<f_i(x';t_i)$. Put $O_y=({\mathrm{pr}}_i)^{-1}(O_{y_i})$. Then for each $(x',y')\in O_x\times O_y$ we have $y'\notin F(x')$. Thus the graph of $F$ is closed in $X\times X$, hence $F$ is upper semicontinuous. We consider on $X$ the family ${\mathcal C}=\{\prod_{i=1}^n C_i\mid C_i\in{\mathcal C}_i\}$. It is easy to see that ${\mathcal C}$ forms a normal convexity on the compactum $X$ such that all convex sets are connected.
Then by Theorem \[KA\], $F$ has a fixed point, which is a Nash equilibrium point. Now, the following corollary follows from the previous theorem and Lemma \[BC\]. \[NB\] Let $f:X=\prod_{i=1}^n X_i\to{\mathbb R}^n$ be a game such that there is defined a $T_2$ binary convexity ${\mathcal C}_i$ on each continuum $X_i$, the function $f$ is continuous and the function $f_i:X\to{\mathbb R}$ is quasi concave by $i$-th coordinate for each $i\in\{1,\dots,n\}$. Then there exists a Nash equilibrium point. L-monads and their algebras ========================= We apply Corollary \[NB\] to study games defined on algebras of binary L-monads. We recall some categorical notions (see [@Mc] and [@TZ] for more details). We define them only for the category ${\mathsf{Comp}}$. Let $F:{\mathsf{Comp}}\to{\mathsf{Comp}}$ be a covariant functor. A functor $F$ is called continuous if it preserves the limits of inverse systems. In what follows, all functors are assumed to preserve monomorphisms, epimorphisms, and the weight of infinite compacta. We also assume that our functors are continuous. For a functor $F$ which preserves monomorphisms and an embedding $i:A\to X$ we shall identify the space $FA$ and the subspace $F(i)(FA)\subset FX$. A [*monad*]{} ${\mathbb T}=(T,\eta,\mu)$ in the category ${\mathsf{Comp}}$ consists of an endofunctor $T:{{\mathsf{Comp}}}\to{{\mathsf{Comp}}}$ and natural transformations $\eta:{\mathrm{Id}}_{{\mathsf{Comp}}}\to T$ (unity), $\mu:T^2\to T$ (multiplication) satisfying the relations $\mu\circ T\eta=\mu\circ\eta T=$[**1**]{}$_T$ and $\mu\circ\mu T=\mu\circ T\mu$. (By ${\mathrm{Id}}_{{\mathsf{Comp}}}$ we denote the identity functor on the category ${{\mathsf{Comp}}}$ and $T^2$ is the superposition $T\circ T$ of $T$.) Let ${\mathbb T}=(T,\eta,\mu)$ be a monad in the category ${{\mathsf{Comp}}}$. The pair $(X,\xi)$, where $\xi:TX\to X$ is a map, is called a ${\mathbb T}$-[*algebra*]{} if $\xi\circ\eta X=id_X$ and $\xi\circ\mu X=\xi\circ T\xi$.
Let $(X,\xi)$, $(Y,\xi')$ be two ${\mathbb T}$-algebras. A map $f:X\to Y$ is called a morphism of ${\mathbb T}$-algebras if $\xi'\circ Tf=f\circ\xi$. Let $(X,\xi)$ be an ${\mathbb F}$-algebra for a monad ${\mathbb F}=(F,\eta,\mu)$ and let $A$ be a closed subset of $X$. Denote by $f_A$ the quotient map $f_A:X\to X/A$ (the classes of equivalence are the one-point sets $\{x\}$ for $x\in X\setminus A$ and the set $A$) and put $a=f_A(A)$. Denote $A^+=(Ff_A)^{-1}(\eta(X/A)(a))$. Define the ${\mathbb F}$-[*convex hull*]{} $C_{\mathbb F}(A)$ of $A$ by $C_{\mathbb F}(A)=\xi(A^+)$. Put additionally $C_{\mathbb F}(\emptyset)=\emptyset$. We define the family ${\mathcal C}_{\mathbb F}(X,\xi)=\{A\subset X|A $ is closed and $C_{\mathbb F}(A)=A\}$. Elements of the family ${\mathcal C}_{\mathbb F}(X,\xi)$ are called ${\mathbb F}$-[*convex*]{}. It was shown in [@R1] that the family ${\mathcal C}_{\mathbb F}(X,\xi)$ forms a convexity on $X$; moreover, each morphism of ${\mathbb F}$-algebras is a $CP$-map. Let us remark that one-point sets are always ${\mathbb F}$-convex. We do not know whether the convexities we have introduced are $T_2$ in general. We consider in this section a class of monads generating convexities which have this property. The class of $L$-monads was introduced in [@R1] and it contains many well-known monads in ${\mathsf{Comp}}$ such as the superextension, hyperspace, probability measure, capacity and idempotent measure monads. For $\phi\in C(X)$ by $\max\phi$ ($\min\phi$) we denote $\max_{x\in X}\phi(x)$ ($\min_{x\in X}\phi(x)$) and $\pi_\phi$ or $\pi(\phi)$ denote the corresponding projection $\pi_\phi:\prod_{\psi\in C(X)}[\min\psi,\max\psi]\to[\min\phi,\max\phi]$.
It was shown in [@R3] that for each L-monad ${\mathbb F}=(F,\eta,\mu)$ we can consider $FX$ as a subset of the product $\prod_{\phi\in C(X)}[\min\phi,\max\phi]$; moreover, we have $\pi_\phi\circ \eta X=\phi$, $\pi_\phi\circ \mu X=\pi(\pi_\phi)$ for all $\phi\in C(X)$ and $\pi_\psi\circ Ff=\pi_{\psi\circ f}$ for all $\psi\in C(Y)$, $f:X\to Y$. These properties of $L$-monads can be taken as their definition [@R3]. We say that an L-monad ${\mathbb F}=(F,\eta,\mu)$ weakly preserves preimages if for each map $f:X\to Y$ and each closed subset $A\subset Y$ we have $\pi_\phi(\nu)\in[\min\phi(f^{-1}(A)),$ $\max\phi(f^{-1}(A))]$ for each $\nu\in (Ff)^{-1}(FA)$ and $\phi\in C(X)$ [@R1]. It was shown in [@R1] that for each L-monad ${\mathbb F}$ which weakly preserves preimages the convexity ${\mathcal C}_{\mathbb F}(FX,\mu X)$ is $T_2$. \[CC\] Let $(X,\xi)$ be an ${\mathbb F}$-algebra for an $L$-monad ${\mathbb F}=(F,\eta,\mu)$ which weakly preserves preimages. Then the map $\xi:FX\to X$ is a CC-map for the convexities ${\mathcal C}_{\mathbb F}(FX,\mu X)$ and ${\mathcal C}_{\mathbb F}(X,\xi)$ respectively. Consider any $B\in {\mathcal C}_{\mathbb F}(FX,\mu X)$. We must show that $\xi(B)\in{\mathcal C}_{\mathbb F}(X,\xi)$. Denote by $\chi:X\to X/\xi(B)$ the quotient map and put $b=\chi(\xi(B))$. Consider any ${\mathcal A}\in FX$ such that $F\chi({\mathcal A})=\eta(X/\xi(B))(b)$. We must show that $\xi({\mathcal A})\in\xi(B)$. Consider the quotient map $\chi_1:FX\to FX/B$ and put $b_1=\chi_1(B)$. There exists a (unique) continuous map $\xi':FX/B\to X/\xi(B)$ such that $\xi'(b_1)=b$ and $\xi'\circ \chi_1=\chi\circ \xi$. Put ${\mathcal D}=F(\eta X)({\mathcal A})$. We have $F\xi({\mathcal D})={\mathcal A}$, hence $F\xi'\circ F\chi_1({\mathcal D})=F\chi\circ F\xi({\mathcal D})=F\chi({\mathcal A})=\eta(X/\xi(B))(b)$. Since $F$ weakly preserves preimages, we have $F\chi_1({\mathcal D})=\eta(FX/B)(b_1)$. Since $B\in {\mathcal C}_{\mathbb F}(FX,\mu X)$, we have $\mu X({\mathcal D})\in B$.
Hence $\xi({\mathcal A})=\xi\circ F\xi({\mathcal D})=\xi\circ \mu X({\mathcal D})\in\xi(B)$. The lemma is proved. We call a monad ${\mathbb F}$ binary if ${\mathcal C}_{\mathbb F}(X,\xi)$ is binary for each ${\mathbb F}$-algebra $(X,\xi)$. \[BT\] Let ${\mathbb F}=(F,\eta,\mu)$ be a binary L-monad which weakly preserves preimages. Then for each ${\mathbb F}$-algebra $(X,\xi)$ the convexity ${\mathcal C}_{\mathbb F}(X,\xi)$ is $T_2$. Consider any two distinct points $x$, $y\in X$. Since $\xi$ is a morphism of the ${\mathbb F}$-algebras $(FX,\mu X)$ and $(X,\xi)$, it is a CP-map and we have $\xi^{-1}(x)$, $\xi^{-1}(y)\in {\mathcal C}_{\mathbb F}(FX,\mu X)$. Since ${\mathcal C}_{\mathbb F}(FX,\mu X)$ is $T_2$ and binary, it is normal by Lemma \[BC\]. Hence we can choose $L_1$, $L_2\in {\mathcal C}_{\mathbb F}(FX,\mu X)$ such that $L_1\cup L_2=FX$ and $L_1\cap\xi^{-1}(x)=\emptyset$, $L_2\cap\xi^{-1}(y)=\emptyset$. Then we have $\xi(L_1)$, $\xi(L_2)\in{\mathcal C}_{\mathbb F}(X,\xi)$ by Lemma \[CC\], $\xi(L_1)\cup\xi(L_2)=X$, $x\notin \xi(L_1)$ and $y\notin \xi(L_2)$. The lemma is proved. Consider any L-monad ${\mathbb F}=(F,\eta,\mu)$. It is easy to check that for each segment $[a,b]\subset{\mathbb R}$ the pair $([a,b],\xi_{[a,b]})$ is an ${\mathbb F}$-algebra, where $\xi_{[a,b]}=\pi_{{\mathrm{id}}_{[a,b]}}$. Consider a game $f:X=\prod_{i=1}^n X_i\to{\mathbb R}^n$ where for each compactum $X_i$ there exists a map $\xi_i:FX_i\to X_i$ such that the pair $(X_i,\xi_i)$ is an ${\mathbb F}$-algebra. We say that the function $f_i:X\to{\mathbb R}$ is a morphism of ${\mathbb F}$-algebras in the $i$-th coordinate if for each $x\in X$ the function $f_i^x:X_i\to{\mathbb R}$ is a morphism of the ${\mathbb F}$-algebras $(X_i,\xi_i)$ and $([\min f_i^x,\max f_i^x],\xi_{[\min f_i^x,\max f_i^x]})$. \[NA\] Let ${\mathbb F}=(F,\eta,\mu)$ be a binary L-monad which weakly preserves preimages.
Let $f:X=\prod_{i=1}^n X_i\to{\mathbb R}^n$ be a game such that an ${\mathbb F}$-algebra structure $\xi_i:FX_i\to X_i$ is defined on each continuum $X_i$, the function $f$ is continuous and the function $f_i:X\to{\mathbb R}$ is a morphism of ${\mathbb F}$-algebras in the $i$-th coordinate for each $i\in\{1,\dots,n\}$. Then there exists a Nash equilibrium point. Since for each $x\in X$ the function $f_i^x:X_i\to{\mathbb R}$ is a morphism of ${\mathbb F}$-algebras, it is a CP-map, hence quasi-concave. Now, our theorem follows from Lemma \[BT\] and Corollary \[NB\]. Pure and mixed strategies ========================= Let ${\mathbb F}=(F,\eta,\mu)$ be a binary L-monad which weakly preserves preimages. In this section we consider Nash equilibria for the free algebras $(FX,\mu X)$. Points of a compactum $X$ are called pure strategies and points of $FX$ are called mixed strategies. This approach is a natural generalization of the model from [@KZ], where spaces of capacities $MX$ were considered. We consider a game $u:X=\prod_{i=1}^n X_i\to{\mathbb R}^n$ with compact Hausdorff spaces of pure strategies $X_1,\dots,X_n$ and continuous payoff functions $u_i:\prod_{j=1}^n X_j\to{\mathbb R}$. It is well known how to construct the tensor product of two (or any finite number of) probability measures. This operation was generalized in [@TZ] to an arbitrary monad in the category ${\mathsf{Comp}}$. More precisely, for all compacta $X_1,\dots,X_n$ a continuous map $\otimes:\prod_{i=1}^n F X_i\to F(\prod_{i=1}^n X_i)$ was constructed there which is natural in each argument and satisfies $F(p_i)\circ\otimes= {\mathrm{pr}}_i$ for each $i$, where $p_i:\prod_{j=1}^nX_j\to X_i$ and ${\mathrm{pr}}_i:\prod_{j=1}^n FX_j\to FX_i$ are the natural projections. We define the payoff functions $eu_i:FX_1\times\dots\times FX_n\to{\mathbb R}$ by the formula $eu_i=\pi_{u_i}\circ\otimes$. Evidently, $eu_i$ is continuous. Consider any $t\in{\mathbb R}$ and $\nu\in FX_1\times\dots\times FX_n$.
Then we have $(eu_i^\nu)^{-1}[t;+\infty)=\{\mu_i\in FX_i\mid eu_i(\nu;\mu_i)\ge t\}=l^{-1}(eu_i^{-1}[t;+\infty)\cap(\{\nu_1\}\times\dots\times FX_i\times\dots\times\{\nu_n\}))$, where $l:FX_i\to\prod_{j=1}^n FX_j$ is the embedding defined by $l(\mu_i)=(\nu;\mu_i)$ for $\mu_i\in FX_i$. A structure of ${\mathbb F}$-algebra on the product $\prod_{j=1}^n FX_j$ of the ${\mathbb F}$-algebras $(FX_i,\mu X_i)$ is given by the map $\xi:F(\prod_{i=1}^n FX_i)\to\prod_{i=1}^n FX_i$ defined by the formula $\xi=(\mu X_i\circ F({\mathrm{pr}}_i))_{i=1}^n$. It is easy to check that a product of convex subsets of the $FX_i$ is convex in $\prod_{i=1}^n FX_i$. Since ${\mathbb F}$ weakly preserves preimages, $eu_i^{-1}[t;+\infty)$ is convex in $\prod_{i=1}^n FX_i$. It is easy to see that $l$ is a CP-map, hence the map $eu_i$ is quasi-concave in the $i$-th coordinate. Hence, using Corollary \[NB\], we obtain the following theorem. The game with payoff functions $eu_i$ has a Nash equilibrium point provided each $FX_i$ is connected. Now, consider a game in capacities with Sugeno payoff functions introduced in the beginning of the paper. The assignment $X\mapsto MX$ extends to the capacity functor $M$ in the category of compacta if the map $Mf:MX\to MY$ for a continuous map of compacta $f:X \to Y$ is defined by the formula $Mf(c)(F)=c(f^{-1}(F))$, where $c\in MX$ and $F$ is a closed subset of $Y$. This functor was completed to the monad ${\mathbb M}=(M,\eta,\mu)$ [@NZ], where the components of the natural transformations are defined as follows: $\eta X(x)(F)=1$ if $x\in F$ and $\eta X(x)(F)=0$ if $x\notin F$; $\mu X({\mathcal C})(F)=\sup\{t\in[0,1]\mid {\mathcal C}(\{c\in MX\mid c(F)\ge t\})\ge t\}$, where $x\in X$, $F$ is a closed subset of $X$ and ${\mathcal C}\in M^2(X)$.
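For the reader's convenience we recall the Sugeno integral used below (a standard definition, stated here for $[0,1]$-valued functions; the identification with the formula for $\mu X$ is our own remark, not a claim from the cited sources):

```latex
\[
  \int_X^{Sug}\varphi\,d\nu
  \;=\;\sup_{t\in[0,1]}\min\bigl(t,\ \nu(\{x\in X\mid\varphi(x)\ge t\})\bigr),
  \qquad \varphi:X\to[0,1],\ \nu\in MX.
\]
% Since t \mapsto \nu(\{\varphi \ge t\}) is nonincreasing, this supremum
% equals \sup\{t\in[0,1] \mid \nu(\{\varphi\ge t\})\ge t\}; hence the
% multiplication \mu X of the capacity monad above is exactly the Sugeno
% integral of the map c \mapsto c(F) with respect to {\mathcal C}.
```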
Since the capacity monad ${\mathbb M}$ is a binary L-monad which weakly preserves preimages, with $\pi_\varphi(\nu)=\int_X^{Sug} \varphi\,d\nu$ for any $\nu\in MX$ and $\varphi\in C(X)$ [@R2], we obtain the following as a consequence. \[NC\] A game in capacities $sf:\prod_{i=1}^n MX_i\to{\mathbb R}^n$ with Sugeno payoff functions has a Nash equilibrium point. W.Briec, Ch.Horvath, [*Nash points, Ku Fan inequality and equilibria of abstract economies in Max-Plus and ${\mathbb B}$-convexity*]{}, J. Math. Anal. Appl. [**341**]{} (2008), 188–199. A.Chateauneuf, M.Grabisch, A.Rico, [*Modeling attitudes toward uncertainty through the use of the Sugeno integral*]{}, Journal of Mathematical Economics [**44**]{} (2008) 1084–1099. J.Dow, S.Werlang, [*Nash equilibrium under Knightian uncertainty: breaking down backward induction*]{}, J. Econ. Theory [**64**]{} (1994) 205–224. D.Dubois, H.Prade, R.Sabbadin, [*Qualitative decision theory with Sugeno integrals*]{}, arXiv:1301.7372. D.Dubois, J.-L.Marichal, H.Prade, M.Roubens, R.Sabbadin, [*The use of the discrete Sugeno integral in decision making: a survey*]{}, Internat. J. Uncertainty, Fuzziness Knowledge-Based Systems [**9**]{} (5) (2001) 539–561. J.Eichberger, D.Kelsey, [*Non-additive beliefs and strategic equilibria*]{}, Games Econ. Behav. [**30**]{} (2000) 183–215. I.Gilboa, [*Expected utility with purely subjective non-additive probabilities*]{}, J. of Mathematical Economics [**16**]{} (1987) 65–88. R.Kozhan, M.Zarichnyi, [*Nash equilibria for games in capacities*]{}, Econ. Theory [**35**]{} (2008) 321–331. S.MacLane, [*Categories for the Working Mathematician*]{}, Springer Verlag, 1976. J.van Mill, M.van de Vel, [*Convexity preserving mappings in subbase convexity theory*]{}, Proc. Kon. Ned. Acad. Wet. [**81**]{} (1978) 76–90. O.R.Nykyforchyn, [*The Sugeno integral and functional representation of the monad of lattice-valued capacities*]{}, Topology [**48**]{} (2009) 137–148.
O.R.Nykyforchyn, M.M.Zarichnyi, [*Capacity functor in the category of compacta*]{}, Mat. Sb. [**199**]{} (2008) 3–26. T.Radul, [*Convexities generated by L-monads*]{}, Applied Categorical Structures [**19**]{} (2011) 729–739. T.Radul, [*A functional representation of capacity monad*]{}, Topology [**48**]{} (2009) 100–104. T.Radul, [*On strongly Lawson and I-Lawson monads*]{}, Boletin de Matematicas [**6**]{} (1999) 69–76. T.N.Radul, M.M.Zarichnyi, [*Monads in the category of compacta*]{}, Uspekhi Mat. Nauk [**50**]{} (1995) 83–108. A.Rico, M.Grabisch, Ch.Labreuche, A.Chateauneuf, [*Preference modeling on totally ordered sets by the Sugeno integral*]{}, Discrete Applied Mathematics [**147**]{} (2005) 113–124. D.Schmeidler, [*Subjective probability and expected utility without additivity*]{}, Econometrica [**57**]{} (1989) 571–587. M.Sugeno, [*Fuzzy measures and fuzzy integrals: a survey*]{}, in: Fuzzy Automata and Decision Processes (M.M.Gupta, G.N.Saridis and B.R.Gaines, eds.), North-Holland, Amsterdam, 1977, 89–102. A.Teleiko, M.Zarichnyi, [*Categorical Topology of Compact Hausdorff Spaces*]{}, VNTL Publishers, Lviv, 1999. M.van de Vel, [*Theory of convex structures*]{}, North-Holland, 1993. A.Wieczorek, [*The Kakutani property and the fixed point property of topological spaces with abstract convexity*]{}, J. Math. Anal. Appl. [**168**]{} (1992), 483–499. M.Zarichnyi, [*Spaces and mappings of idempotent measures*]{}, Izv. Ross. Akad. Nauk Ser. Mat. [**74**]{} (2010), 45–64. L.Zhou, [*Integral representation of continuous comonotonically additive functionals*]{}, Trans. Am. Math. Soc. [**350**]{} (1998), 1811–1822.
Introduction {#sec:introduction} ============ A considerable part of the statistical physics community is interested in financial market mechanisms and related problems [@bouchaud; @stanley]. One major challenge in this area is to give a detailed picture of the emergence of group quantities from market microstructure: for instance, how prices and their fluctuations are related to the balance of buyers and sellers. The major difference between this approach and that of traditional financial theories is that the emphasis is now put on comparing hypotheses and their implications to real market data. In this work, we investigate simple models of auctions, with various settings, and obtain explicit expressions for distributions that are intrinsic properties of sellers. As such, we are not able to provide any comparison with measurable data, and in fact, given the simplicity of our models, it is unlikely that any convincing similarity could be spotted. Instead we present our work as a first step in auction modelling from a physicist’s point of view, a problem that has not attracted the attention it deserves up to now. Much of economic activity is based on mini auctions or tenders in which a potential buyer offers a particular amount of money for a product or a potential seller offers to sell the product at a particular price. The recipient of this offer then compares it with the offers of competitor buyers or sellers to determine the best deal and consequently with whom to trade. Sellers who overprice their goods or buyers who are not prepared to spend sufficient money seldom trade and risk going out of business. There have been numerous works on auctions; a useful summary can be found in Ref. [@ohara] and its references, while Ref. [@cohen] still provides a very interesting review of the subject.
Much of this work has concentrated on modelling the generation of an equilibrium price, determined by some extremization procedure, either by profit maximization, by risk minimization, by considering inventory constraints, or by considering the price of transactions [@ohara]. These models are mainly concerned with the dynamics of price formation. In contrast, here we want to consider sellers competing to attract buyers, reducing their behaviour to a trial-and-error process. We want to model the learning of sellers that are repeatedly competing against each other. In practice, our models are not specially devised to reproduce financial markets but rather to tackle the more general problem of competing sellers acting inductively [@conlisk]. In an attempt to model this type of process we introduce simple models in which two or more players repeatedly bid against one another. Each player has a probability distribution from which they draw their bids at random. When a player is unsuccessful he discards that bid and replaces it with another bid selected at random. In practice, we do not associate the bid proposed by a player with a market price, but rather with the profit made by a player over a fair market price. As such, we let bids be in the range $(0,1)$, with 0 for no profit and 1 for maximum profit. This implies that a bid is a simultaneous proxy for both the profit of a player and his risk. In the next section we introduce the two player system and solve it analytically in two particular cases: when both players have the same set of bids at the beginning, and in the long time limit. We argue that, except for very specific situations, the system converges towards a symmetric situation for the players. In Sec. \[sec:the d player game\] we solve two different $d$ player versions of the same game and in Sec. \[sec:market makers\], we extend the model to mimic market makers. We investigate in Sec.
\[sec:price volatility as a measure of risk\] the effect of price volatility on the bid distribution. In Sec. \[sec:implement a market structure\], we let players be heterogeneous by implementing a market structure, and solve exactly one simple situation. Our results are summarized in the last section, where we also discuss improvements to make the models more realistic. In this work, we restrict our attention to the random picking of new bids from a uniform distribution. This minimalist adaptation process assumes that players do not have a very efficient record of past bids. But this is in line with the idea of players trying to make the maximum profit, while minimizing risk. If players only try to minimize their exposure, they keep track of the winning bids and no room is left for profit. By always picking bids from a uniform distribution, players keep trying to improve their profit. We will discuss extending the models to incorporate more general adaptation processes, but we leave a general analysis of this problem for future work. We should mention that we use the term auction to refer to the competition between buyers or sellers, but this does not compare with the usual definitions of auctions in the economics literature. Our auctions are concerned with the dynamics of intermediaries, trying to make a profit from the competitive sale of a commodity or a service. This is completely different from auctions as defined in finance, where participants take part in several rounds of bids before a sale takes place. The Two player game {#sec:the two player game} =================== Imagine two players who each have an infinite set of numbers described by a probability distribution. At each time step the two players draw a number at random from their respective distributions.
They compare numbers; the player with the smallest number wins and does nothing, the player who loses replaces his losing number in the probability distribution with another number chosen at random from a uniform distribution. We will call the players $P$ and $Q$ and their corresponding probability distributions at time $t$, $P(x,t)$ and $Q(x,t)$. The probability distributions obey the non-linear coupled integro-differential equations $$\begin{aligned} \nonumber \frac{\partial P (x,t)}{\partial t} &=& - P (x,t) \int_0^x Q (y,t) dy \\ &+& \int_0^1 P (y,t) \int_0^{y} Q (z,t) dz dy \label{eq:evolution p,2 players}\end{aligned}$$ and $$\begin{aligned} \nonumber \frac{\partial Q (x,t)}{\partial t} &=& - Q (x,t) \int_0^x P (y,t) dy \\ &+& \int_0^1 Q (y,t) \int_0^{y} P (z,t) dz dy. \label{eq:evolution q,2 players}\end{aligned}$$ The first term on the right hand side in Eq. (\[eq:evolution p,2 players\]) corresponds to the destruction of numbers in $P(x,t)$ when player $P$ draws a number larger than that drawn by player $Q$. The second term on the right hand side corresponds to the creation of new numbers in $P(x,t)$ after $P$ has lost. Eq. (\[eq:evolution q,2 players\]) has similar terms. Providing that the initial distributions $P(x,0)$ and $Q(x,0)$ are normalised then we have $$\int_0^1 P (x,t) dx = \int_0^1 Q (x,t) dx = 1$$ for all time. We will find it useful to define the probability that $Q$ will win at time $t$, $\alpha (t)$, by $$\alpha (t) = \int_0^1 P (y,t) \int_0^y Q (z,t) dz dy \label{eq:definition of alpha}$$ and similarly the probability that $P$ will win at time $t$ by $\beta (t) = 1- \alpha (t)$. $\alpha (t)$ and $\beta (t)$ are the second terms on the right hand sides of Eqs (\[eq:evolution p,2 players\]) and (\[eq:evolution q,2 players\]) respectively. We can solve (\[eq:evolution p,2 players\]) and (\[eq:evolution q,2 players\]) completely if $P(x,0) = Q(x,0)$. 
Then we have $P(x,t) = Q(x,t)$ for all time and $$\frac{\partial P (x,t)}{\partial t} = - P (x,t) \int_0^x P (y,t) dy + \frac{1}{2}. \label{eq:evolution p,same initial conditions}$$ Introducing the cumulative probability distribution $$F (x,t) = \int_0^x P (y,t) dy,$$ we can rewrite (\[eq:evolution p,same initial conditions\]) in terms of $F(x,t)$ as $$\frac{\partial F (x,t)}{\partial t} = - \frac{F^2 (x,t)}{2} + \frac{x}{2}.$$ This is easily solved to give $$F (x,t) = \sqrt{x} \left( \frac{F (x,0) + \sqrt{x} + (F (x,0) - \sqrt{x}) e^{-\sqrt{x} t}}{F (x,0) + \sqrt{x} - (F (x,0) - \sqrt{x}) e^{-\sqrt{x} t}}\right). \label{eq:f for the two player simple game}$$ Consequently, for all initial conditions $P(x,0) = Q(x,0)$, the long time state is stationary with $F(x,\infty ) = \sqrt{x}$, or $P(x,\infty ) = 1/(2 \sqrt{x})$. We cannot solve (\[eq:evolution p,2 players\]) and (\[eq:evolution q,2 players\]) for general initial conditions, except in the long time stationary limit. This can be done by setting the derivatives on the left hand side of (\[eq:evolution p,2 players\]) and (\[eq:evolution q,2 players\]) to zero and dropping the time dependence. This reveals $$P (x) = \alpha x^{\alpha - 1} \qquad \hbox{and} \qquad Q (x) = (1- \alpha ) x^{-\alpha}$$ where $$\alpha = \lim_{t\rightarrow \infty} \alpha (t). \label{eq:long time limit of alpha}$$ Consequently, for all initial conditions the stationary state is a one parameter family of power laws with the exponents equal to the negative of the probability that a player will win in the long time limit. This probability is itself determined by the initial conditions. From a symmetry principle, one expects the stable states to be extrema of a characteristic function of both $P (x)$ and $Q (x)$. It seems reasonable to expect that $\alpha$ and $1-\alpha$ will characterize these distributions, respectively. 
Forming simple functions from these two expressions gives three characteristic values for $\alpha$, namely, $\alpha = 0$, 1/2 and 1. We have performed a number of simulations to confirm that $\alpha = 1/2$ gives the stable solution for most initial conditions. In fact, with very specific initial conditions, the system also converges towards $\alpha = 0$ or $\alpha =1$, starting with these values as initial conditions, for instance. In practice, one would not expect any of these peculiar conditions to be realised. If the model can mimic a real situation, $\alpha = 1/2$ is the only value that one should encounter. In other words, for most initial conditions, the system is driven towards a state where the two players are identical. Of particular interest is the stationary distribution of the prices, which is equal to $$Z (x) = P (x) \int_x^1 Q (y) dy + Q (x) \int_x^1 P (y) dy$$ or $$Z (x) = \alpha x^{\alpha - 1} + (1- \alpha) x^{-\alpha} - 1.$$ In the most common situation, that is, when $\alpha = 1/2$, $Z (x) = x^{-1/2} -1$. The moments of the price distribution are equal to $$\begin{aligned} \label{eq:definition of the moments} M_n &\equiv& \int_0^1 dx\, x^n Z (x)\\ &=& \frac{(2n+1)\,\alpha (1-\alpha)}{(\alpha +n)(1-\alpha +n) (n+1)}.\end{aligned}$$ In particular, the average price $M_1$ is given by $$M_1 = \frac{3\alpha (1-\alpha)}{2(\alpha + 1) (2-\alpha)}.$$ It achieves its maximum value for $\alpha = 1/2$. Hence, the adaptive process, even if very simple, is efficient because, of all solutions, the system selects the one that gives the sellers the maximum profit. Note that the basic adaptation process can be improved. The model can be generalised so that when a player loses the new number received is drawn from a probability distribution $\omega (x)$ rather than from the uniform distribution.
In this case, in the long time limit, we have $$\begin{array}{cc} P (x) = \alpha \omega (x) \left( \int_0^x \omega (y) dy \right)^{\alpha - 1}\\ Q (x) = (1 -\alpha) \omega (x) \left( \int_0^x \omega (y) dy \right)^{-\alpha}\\ \end{array}$$ where $\alpha$ is given by (\[eq:long time limit of alpha\]). Again, $\alpha = 1/2$ gives the stable solution. The previous equations clearly show that the power laws found for the player distributions are not characteristic of competition, as any other replacement distribution would give a different result. This conclusion, namely, that the power law distributions are not robust, will be reached on several occasions in this work. The $d$ player game {#sec:the d player game} =============================== We can obtain similar solutions for the $d$ player game where only the player with the highest number changes his distribution. In particular, when all the players have the same starting conditions they all have the same probability distribution at time $t$. This obeys the non-linear integro-differential equation $$\frac{\partial P (x,t)}{\partial t} = - P (x,t) \left( \int_0^x P (y,t) dy \right)^{d-1} + \frac{1}{d}.$$ In the long time limit this evolves to the distribution $$P (x) = \frac{1}{d} x^{\frac{1}{d}-1}.$$ When the initial conditions are unequal the probability distribution of player $i$ ($i$ = 1,..., $d$) evolves to $$P_i (x) = ( 1 - \nabla_i ) x^{-\nabla_i} \label{eq: general d, distribution}$$ where $\nabla_i$ is the probability that player $i$ wins in the long time limit. Since exactly one player loses at each round, the losing probabilities $1-\nabla_i$ sum to one, so that $$\sum_{i=1}^d \nabla_i = d-1. \label{eq:sumtoone of the exponents}$$ The same symmetry argument as in the previous section can be put forward here, and it suggests that $\nabla_i = 1-1/d$ is the stable solution for most initial conditions. The function we extremize is formed by the product of all the prefactors in Eq. (\[eq: general d, distribution\]).
This function appears naturally as a scale factor in the evolution equation, because all probability distributions are multiplied by one another. For instance, in Eq. (\[eq:evolution p,2 players\]), we have to multiply $P$ and $Q$ in both terms on the right hand side. We checked numerically that $\nabla_i = 1-1/d$ characterizes the most common stationary state. A more rigorous argument can be put forward by considering the price distribution $Z(x)$, defined as the probability that the selling price is equal to $x$. This distribution is given by $$Z (x) = \left( \sum_{i=1}^d \frac{1 - \nabla_i}{x^{\nabla_i}-x} \right) \left( \prod_{i=1}^d (1 - x^{1-\nabla_i}) \right).$$ The first moment of this distribution is the average price and is a function of the set of exponents $\lbrace \nabla_i; i=1,..,d\rbrace$. By looking at the extrema of $Z (x)$ with respect to these exponents, with the added condition Eq. (\[eq:sumtoone of the exponents\]), one obtains that $\nabla_i = 1-1/d$ for all $i$ corresponds to a maximum of $Z (x)$. By extension, it is also a maximum of its first moment, meaning that the system reaches a stationary state where the players maximize their profit. In particular, $$Z (x) = \left( \frac{1}{x^{1/d}} - 1 \right)^{d-1}$$ when all players are identical. The moments $M_n (d)$ of this distribution, defined in Eq. (\[eq:definition of the moments\]), are given by $$M_n (d) = \frac{d! (nd)!}{((n+1)d)!}.$$ The average price, equal to $M_1 (d)$, is a decreasing function of the number of players. As this price compares with a profit, we can associate it to a measure of the spread between ask and bid prices. In this case, we conclude that the spread is a decreasing function of the number of players. This is a well-known fact that has been observed empirically.
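The stationary law of this game is easy to probe numerically. The finite-pool Monte-Carlo below is our own minimal sketch (pool sizes and round counts are arbitrary choices standing in for the infinite sets of numbers in the text): each player holds a pool of bids, and at every round the player who drew the highest number replaces that bid by a fresh uniform one. The empirical cumulative distribution of a pool should approach $F(x)=x^{1/d}$.

```python
import random

def simulate(d=3, n_bids=1500, rounds=400000, seed=7):
    """d players each hold a pool of bids; every round each player draws one
    bid from his pool and only the player with the HIGHEST draw (the loser)
    replaces that bid by a fresh uniform number."""
    rng = random.Random(seed)
    pools = [[rng.random() for _ in range(n_bids)] for _ in range(d)]
    for _ in range(rounds):
        picks = [rng.randrange(n_bids) for _ in range(d)]
        draws = [pools[k][picks[k]] for k in range(d)]
        loser = max(range(d), key=draws.__getitem__)
        pools[loser][picks[loser]] = rng.random()
    return pools[0]  # by symmetry any player's pool will do

pool = simulate(d=3)
for x in (0.125, 0.5):
    F_emp = sum(b <= x for b in pool) / len(pool)
    # empirical CDF should be close to the predicted x**(1/d)
    print(x, round(F_emp, 2), round(x ** (1 / 3), 2))
```

With `d=2` the same code reproduces the two player result $F(x)=\sqrt{x}$ of the previous section.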
As in the two player game, the previous model can be generalised so that when a player loses the new number received is drawn from a probability distribution $\omega (x)$ rather than from a uniform distribution. In this case, $$P_i (x) = (1-\nabla_i) \omega (x) \left( \int_0^x \omega (y) dy \right)^{-\nabla_i}$$ in the stationary limit, which provides a slight improvement to the adaptation process. In the previous model, only one player updates his distribution at each round of bidding. However, for auctions where agents are quoting prices to a buyer, the asset will only be sold by the agent proposing the lowest price. We consider the $d$ player game where all players that have not proposed the lowest price discard their proposal and draw a new price from a flat distribution. The players’ distributions follow coupled differential equations, and for the particular case of all players starting from the same probability distribution, they all keep the same probability distribution during the whole game. This probability distribution obeys $$\begin{aligned} \nonumber \frac{\partial P (x,t)}{\partial t} &=& - P (x,t) \left( 1 - \left( 1 - \int_0^x P (y,t) dy \right)^{d-1} \right)\\ &+& \frac{d-1}{d} \label{eq:d player distribution one winner}\end{aligned}$$ where the first term on the right hand side shows that a player proposing $x$ is discarding this value as soon as another player proposes a lower bid. The second term means that a player is always changing his bid unless he wins. We have set the winning probability of all players equal to $1/d$, because of the symmetry of the model. Note that from now on, we always consider symmetric initial conditions because, as we showed, this is the most common stationary state. In the long time limit, Eq.
(\[eq:d player distribution one winner\]) is equivalent to the following equation $$f (x)^d - f(x) d + (1-x)(d-1) = 0 \label{eq:f(x)}$$ where we defined $$f (x) \equiv \int_x^1 P (y) dy.$$ It is easy to check that for $d=2$, the solution of Sec. \[sec:the two player game\] is recovered. By definition, $0\le f(x)\le 1$, so that $f (x)^d$ can be neglected compared to $f (x)$ when $d$ is large. This can be verified graphically, by using Eq. (\[eq:f(x)\]) to express $x$ as a function of $f$. We can then plot $x$ as a function of $f$ and exchange the axes to obtain graphically $f$ as a function of $x$. Fig. \[fig:f(x)\] presents $f (x)$ for $d=2$, 3, 4 and 10, from bottom to top. It is easy to appreciate that $\lim_{d\rightarrow\infty} f (x) = 1-x$. In this case, $P (x)$ is a flat distribution. Note also that $df/dx = - P (x)$. Differentiating Eq. (\[eq:f(x)\]) with respect to $x$, we can express $f$ as a function of $df/dx$ and from Eq. (\[eq:f(x)\]), $x$ as a function of $df/dx$. Finally, this allows us to draw the graph of $P (x)$. In Fig. \[fig:P(x)\], we show $P (x)$ for $d=2$, 3, 4 and 10, from top to bottom on the left of the figure. Note that we can show that $P(x) \ge 1-1/d$, and that it achieves this minimum value at $x=1$. The curves in Fig. \[fig:P(x)\] do not all cross at the same point. As can be appreciated in Fig. \[fig:P(x)\], $P (x)$ is a power-law from $x=0$ up to a critical value $x_c$, where it saturates and becomes a flat distribution. For $P (x) > 1$, the expression for $x (P)$ can be expanded in powers of $1/P$, giving $$x (P) = \frac{1}{2d} \frac{1}{P^2}+ O \left(\frac{1}{P^3}\right).$$ Hence, $P (x) \simeq (2dx)^{-1/2}$ for $x< x_c$, which for $d=2$ coincides with the exact solution $P(x)=1/(2\sqrt{x})$. An upper bound for $x_c$ can be found by setting $P (x_c) = 1$, which gives $$x_c \le 1 - \frac{d^{(d-2)/(d-1)}}{d-1} + \frac{d^{-d/(d-1)}}{d-1}.$$ In the inset of Fig. \[fig:P(x)\], we compare the analytical solution to numerical simulations of the auction for $d=2$ and $d=10$ for $x> 0.1$.
The agreement is good. We conclude that in a strongly competitive environment, where only one player can win, the behaviour of the bid distribution is similar to the one obtained when only two players are competing, at least where the low bid values are concerned. Of course, the lowest bids are of particular interest because deals are usually made at these values. As in the previous models, it is interesting to consider the distribution of prices, $Z (x)$, corresponding to the probability that the deal will be concluded at a price $x$. From Eq. (\[eq:f(x)\]), $Z (x)$ is equal to $$Z (x) = d ( P (x) -1 ) +1.$$ The determination of $Z (x)$ is dependent on the determination of $P (x)$ but, fortunately, even if we do not have any explicit solution for $P (x)$, it is still possible to obtain an expression for $M_1 (d)$, the first moment of $Z (x)$. This first moment, which corresponds to the average price, is calculated by changing $x$ to $f$ in Eq. (\[eq:definition of the moments\]) using Eq. (\[eq:f(x)\]). $M_1 (d)$ is equal to $$M_1 (d) = \frac{d}{2(d+1)}.$$ This result is counterintuitive, as it predicts that when the competition is stronger, with more players, the average price increases. In fact, it shows that when there are a lot of players around, the probability of winning is very small. Hence, players keep on trying to improve and they do not keep any memory of past bids. They are adapting too fast for the game. This could have been anticipated from Fig. \[fig:P(x)\], where one sees that the distribution flattens as $d$ increases. Comparing this result with the similar result from the previous $d$-player model, one sees that real life corresponds rather to a game where only the worst player adapts than to a game where everyone tries to be the best. Market makers {#sec:market makers} ============= The previous models of auctions are interesting mechanisms to generate an ask price.
A buyer solicits several sellers, compares their prices and takes the lowest one available. Conversely, the generation of a bid price is characterized by a seller considering several buyers. He selects the one offering the highest price. All the results obtained for ask prices $x$ in the previous sections can be transposed to bid prices by changing $x$ to $1-x$. Most financial exchanges use market makers to add liquidity to the market [@hull]. A market maker is a person who will quote both an ask and a bid price whenever asked to do so. The bid price is the price he is prepared to pay for the asset and the ask price is the price he is prepared to sell the asset at. When solicited, market makers have to give both ask and bid prices because they do not know whether the trader wants to buy or sell the asset. The existence of market makers allows traders to place buy and sell orders whenever they want, without having to wait for somebody else to match their order. This is known as non-synchronous trading. To cover themselves against the risks of possessing unwanted stocks, the ask price proposed by market makers is higher than the bid price. The difference, or *spread*, is their risk insurance and their margin for profit. Usually, the exchange regulatory body sets upper limits for spreads. There are of course several market makers on any exchange and they try to quote the lowest ask and highest bid prices, to attract as many traders as possible. However, they cannot afford to be excessively exposed to market risks and have to maintain a minimum spread. As a more realistic model with direct application to market mechanisms, we consider a mixed auction, where players are market makers. At each time step, the players are required to give a bid and an ask price. Hence, each player has two probability distributions at their disposal.
To avoid arbitrage opportunities, each player has to evaluate what the others are likely to propose, such that all ask prices are higher than all bid prices. In practice, the only reference for a market maker is the history of the prices. We assume that a player never proposes an ask price that is lower than a previous winning bid price, thinking that this bid price is likely to be proposed again. Similarly, no market maker will ever propose a bid price that is higher than a previous winning ask price. The model works as follows, restricting our attention to a two player game. As we only consider similar initial conditions for both players, we assume that they are using the same distributions. The two players $P$ and $Q$ draw at each time step an ask and a bid price from the same probability distributions $R_a (x,t)$ and $R_b (x,t)$, respectively. The subscript $a$ refers to ask prices and $b$ to bid prices. These prices have to be such that the ask prices are both larger than the highest bid price proposed in the last $h$ time steps, $M_b$, and the bid prices are both smaller than the lowest ask price proposed in the last $h$ time steps, $m_a$. $h$ represents the size of the history of the system. We set $M_b = 0$ and $m_a = 1$ at the beginning of the simulations. In case a player selects a bid price higher than $m_a$, he draws a new bid price from a uniform distribution between 0 and $m_a$. Similarly, a player selecting an ask price lower than $M_b$ draws a new ask price from a uniform distribution between $M_b$ and 1. When the trader is a buyer, the market maker with the lowest ask price gets the deal, while for a seller, the market maker with the highest bid price gets the deal. We call $p$ the probability that the trader is a seller. The market maker that does not get the deal discards his proposal and draws a new one, a new ask price if the trader wanted to buy, a new bid price otherwise.
New ask prices are drawn from a uniform distribution between $M_b$ and 1, while new bid prices are drawn from a uniform distribution between 0 and $m_a$. We pay no attention to the spread requirement. As at each time step $M_b$ or $m_a$ can be updated, but not both of them simultaneously, we always have $M_b < m_a$. If the trader is a seller, $M_b$ either does not change or increases to a value lower than $m_a$. For a buyer, $m_a$ does not change or decreases to a value higher than $M_b$. Hence, in the limit $h\rightarrow \infty$, $M_b$ and $m_a$ converge to the same value $M$ and stay fixed. We call this value $M$ the market price. It changes from one simulation to the next, with an average value of $\overline{M} = p$ over several simulations. When the history $h$ is relaxed to a finite value, the market price converges towards $M=p$, for every value of $h$, and oscillates around this value: the larger $h$, the smaller the oscillations. The probability distributions of the bid and ask prices follow differential equations that depend on $M_b$ and $m_a$. For ask prices less than $M_b$, we have $$\frac{\partial R_a (x,t)}{\partial t} = - (1-p) R_a (x,t) \label{eq:disparition of R_a}$$ and $R_b (x,t)$ follows a similar equation for $x > m_a$. The $1-p$ factor gives the probability that an ask price is required. The previous equation shows that $R_a (x)$, the stationary limit of $R_a (x,t)$, is zero for $x < M_b$. Similarly, $R_b (x)$ is zero for $x > m_a$. In reality, both $M_b$ and $m_a$ are functions of time for finite $h$ and the distributions are non-zero on a small interval around $p$. However, as already mentioned, $M_b$ and $m_a$ oscillate around a fixed value in the stationary state, so that for the sake of simplicity, we assume that $M_b = m_a = p$, independent of time. The effect of boundary fluctuations is addressed in the next section.
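The convergence of $M_b$ and $m_a$ towards a common market price can be checked with a short Monte Carlo sketch. The code below is our own illustration, not the simulation used for the figures: for simplicity, every price is drawn uniformly in the currently allowed window rather than from the evolving distributions $R_a$ and $R_b$, only the price relevant to the trader's type is drawn at each step, and the history is infinite, so that $M_b$ can only increase and $m_a$ can only decrease.

```python
import random

def simulate_market(p, steps, rng):
    """One run of the two market maker game with infinite history h.
    M_b only ever increases, m_a only ever decreases, and M_b < m_a
    is preserved because winning bids are drawn below m_a and winning
    asks above M_b."""
    M_b, m_a = 0.0, 1.0
    for _ in range(steps):
        if rng.random() < p:     # trader is a seller: the highest bid wins
            bid = max(rng.uniform(0.0, m_a), rng.uniform(0.0, m_a))
            if bid > M_b:
                M_b = bid
        else:                    # trader is a buyer: the lowest ask wins
            ask = min(rng.uniform(M_b, 1.0), rng.uniform(M_b, 1.0))
            if ask < m_a:
                m_a = ask
        assert M_b < m_a         # the two boundaries never cross
    return M_b, m_a

rng = random.Random(1)
finals = [simulate_market(0.5, 20000, rng) for _ in range(50)]
gaps = [m_a - M_b for M_b, m_a in finals]
mids = [0.5 * (M_b + m_a) for M_b, m_a in finals]
```

In each run the two boundaries squeeze onto a run-dependent common value $M$; for $p=1/2$ the process is symmetric under $x\to 1-x$, so the average of $M$ over runs sits near $1/2$, consistent with $\overline{M}=p$.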
Within this framework, the distribution of ask prices for prices higher than $M_b$ is the solution to $$R_a (x) \int_{p}^x R_a (y) dy = \frac{1}{1-p} \int_{p}^1 R_a (y) \int_{p}^y R_a (z) dz dy.$$ Note that we have dropped the time dependence as we only consider the stationary limit. This equation is similar to Eq. (\[eq:evolution p,2 players\]) in the stationary limit, with $p$ as the lower limit instead of 0 and a symmetric condition on the two players. As $R_a (x) = 0$ for $x\le p$, it is not necessary to change the lower limits, but we do so for clarity. The $1/(1-p)$ factor is necessary to allow a proper normalisation of the distribution. It arises because the new ask prices are chosen in $(p,1)$, not in $(0,1)$. Introducing $$F_{p} (x,t) = \int_{p}^x R_a (y,t) dy,$$ we obtain that $F_{p} (x) = \sqrt{(x-p)/(1-p)}$ and $R_a (x) = 1/(2\sqrt{(x-p)(1-p)})$. A similar calculation gives $R_b (x) = 1/(2\sqrt{p(p - x)})$. The solution for the auction of Sec. \[sec:the two player game\] is obtained for $p=0$, as expected. In our market maker model, the relative volume of buy and sell orders is controlled by the probability $p$. As explained, the market price $M$ settles close to $M = p$ when the history $h$ is finite. One could wonder about this aspect of the model: when the number of sell orders increases, the model predicts a price increase, while it is well known that an increase in the supply makes the price go down. We should however stress that $M$ does not represent, in itself, a market price, but that we use this name to simplify the explanations in this section. As stressed in the introduction, $M$ is a measure of the profit made by a market maker whenever a sale is agreed. For $p$ close to 1, market makers very rarely conclude sales, so that they have to make a large profit from each possible sale. They can make a smaller profit from purchases, because the occasions to conclude such deals are more abundant.
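The closed forms above are easy to check numerically. The substitution $x = p + (1-p)t^2$ removes the inverse-square-root singularity of $R_a$ at $x=p$ (the integrand becomes identically 1); the same substitution gives the average ask price $p + (1-p)/3$, a small consequence of the formula that we derive ourselves here, reducing to $1/3$ at $p=0$.

```python
import math

def check_Ra(p, n=200000):
    """Midpoint-rule integration of R_a(x) = 1/(2*sqrt((x-p)(1-p)))
    over (p, 1), using x = p + (1-p)*t**2 so that dx = 2(1-p)t dt and
    the singular factor cancels exactly."""
    norm, mean = 0.0, 0.0
    dt = 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        x = p + (1.0 - p) * t * t
        jac = 2.0 * (1.0 - p) * t                       # dx/dt
        Ra = 1.0 / (2.0 * math.sqrt((x - p) * (1.0 - p)))
        norm += Ra * jac * dt                           # Ra*jac == 1 exactly
        mean += x * Ra * jac * dt
    return norm, mean

norm, mean = check_Ra(0.3)
```

For $p=0.3$ the normalisation comes out as 1 and the mean as $0.3 + 0.7/3 \approx 0.533$, confirming both the stationary distribution and $F_p(1)=1$.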
A more complex model would not only consider $M$ as a profit but as the price of the asset itself. In this case, it should incorporate the fact that market makers are not only competing to increase their number of deals. They have to balance the number of buy and sell orders if they do not want to artificially sustain the price by accepting all sell orders, for instance. As soon as they try to sell, the price will fall quickly, with nobody to match their sale. Hence, the last model has the shortcoming that it does not address the dynamics of matching the orders. As our main interest is in the profit made over the market price, this is not really an issue here. Interesting models where buy and sell orders are matched can be found in [@bak96; @eliezer98]. The previous model can easily be generalised to $d$ market makers and in this case, the results of the previous section can be adapted as we did for the two player auction. To give an idea of a realistic value for $d$, the number of market makers per security varies from a minimum of 2 to a maximum of 68 on the Nasdaq [@wahal97], while George and Longstaff witnessed around 300 market makers among 400 S&P 100 index option traders [@george]. As in the previous sections, the model can also be generalised to cope with prices chosen from a distribution $\omega (x)$ instead of a uniform distribution. Price volatility as a measure of risk {#sec:price volatility as a measure of risk} ===================================== Up to this point, the only uncertainty facing the players has been the decision of the other players. In reality, a major source of uncertainty can be found in price fluctuations. This corresponds in our framework to variations in the minimum profit necessary to hedge against market fluctuations. We consider a simple auction model where two players $P$ and $Q$ propose ask prices drawn from the range $(M , 1)$, with $M$ chosen from a uniform distribution in $(0, \Delta )$ at each time step.
As in the model of Sec. \[sec:the two player game\], $P$ and $Q$ are given probability distributions, $P (x, t)$ and $Q (x,t)$ respectively, to choose their bids. Whenever a chosen bid is less than $M$, it is discarded and another bid chosen at random from the range $(M,1)$ is proposed. The player with the lowest bid gets the deal. As we consider similar initial conditions for both players, the probability distribution $P (x, t)$ follows $$\frac{\partial P (x,t)}{\partial t} = - P (x,t)$$ for $x\le M$ and $$\frac{\partial P (x,t)}{\partial t} = - P (x,t) \int_0^x P (y,t) dy + \frac{\alpha}{1-M}$$ for $x\ge M$. We have defined $\alpha$ as the probability that $Q$ wins, as in Eq. (\[eq:definition of alpha\]). Considering stationary solutions and from our choice for the dynamics of $M$, $P (x)$ is the solution to $$P (x) \left( 1- \frac{x}{\Delta} + \frac{x}{\Delta} \int_0^x P (y) dy \right) = -\frac{\alpha}{\Delta} \ln (1 - x) \label{eq:solution for 0-x-delta}$$ for $0 \le x\le \Delta$ and $$P (x) \int_0^x P (y) dy = - \frac{\alpha}{\Delta} \ln (1 - \Delta) \label{eq:solution for delta-x-1}$$ for $\Delta \le x\le 1$. The exact solution to the second equation is $$P (x) = N_0 \left( \frac{P^2 (\Delta) \alpha \ln (1 - \Delta)}{\alpha \ln (1-\Delta) + 2 P^2 (\Delta) \Delta (\Delta -x)}\right)^{1/2}$$ for $\Delta \le x\le 1$, where $N_0$ is a normalisation coefficient. The distribution is a power-law in this range, with the same exponent as in the two player auction of Sec. \[sec:the two player game\]. We could not solve the first equation, but its numerical solution can be compared with direct simulations of the model. The results are presented in Fig. \[fig:P (x)-price fluctuations\] and, as can be seen, Eq. (\[eq:solution for 0-x-delta\]) compares very well with the model. The previous analysis leads us to the conclusion that the distributions obtained in the framework of simple auction models are robust outside the range of price fluctuations.
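A direct simulation of this model can be sketched as follows. As in the simulations reported in the paper, each player's bid distribution is represented by a finite pool of candidate prices; the pool size, number of rounds, seed, and the choice to redraw both losing and below-floor prices uniformly on $(M,1)$ (one reading consistent with the gain term $\alpha/(1-M)$ above) are our own assumptions, not the paper's exact settings.

```python
import random

def floor_auction(delta=0.05, pool_size=2000, rounds=200000, seed=2):
    """Two-player lowest-bid auction with a fluctuating floor M ~ U(0, delta).
    Each player's distribution is a pool of prices; drawn prices below the
    floor are discarded and redrawn on (M, 1), and the loser of each round
    replaces the price he proposed."""
    rng = random.Random(seed)
    pools = [[rng.random() for _ in range(pool_size)] for _ in range(2)]
    for _ in range(rounds):
        M = rng.uniform(0.0, delta)          # this round's minimum profit
        idx = []
        for pool in pools:
            i = rng.randrange(pool_size)
            if pool[i] < M:                  # below the floor: redraw on (M, 1)
                pool[i] = rng.uniform(M, 1.0)
            idx.append(i)
        # the lowest bid gets the deal; the loser redraws his proposed price
        loser = 0 if pools[0][idx[0]] > pools[1][idx[1]] else 1
        pools[loser][idx[loser]] = rng.uniform(M, 1.0)
    return pools

pools = floor_auction()
prices = [x for pool in pools for x in pool]
mean_price = sum(prices) / len(prices)
low_fraction = sum(1 for x in prices if x < 0.5) / len(prices)
```

As expected from the decreasing stationary solution, the pools concentrate at low prices: well over half of the stationary mass sits below $x=1/2$, and the mean pool price stays well below the uniform value $1/2$.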
However, agents should be reluctant to propose prices inside this range, in agreement with usual pricing models that take the volatility as a proxy for investment risk [@hull]. The simple model presented here can of course be easily generalized to deal with more realistic price fluctuations. Even if we think that the present conclusion should remain applicable for most situations, it is however important to stress that price fluctuations are known to be non-Gaussian [@bouchaud]. Large fluctuations are not so rare, and they have a major impact on the value of $M$. Hence, in periods of quiescence, the conclusion of this section should apply, while for more agitated markets, what we call the range $M$ of the fluctuations could force agents to propose prices well inside the range of fluctuations. Implementing a market structure {#sec:implement a market structure} =============================== In the previous models, sellers are competing in an abstract infinite dimensional space, where every trader is identical, apart possibly from initial conditions. However, many trades rely on a strict market structure, where buyers are interacting with only a restricted set of sellers. To investigate the effect of space on the sellers’ price distributions, we consider a variation of the $d$ player game of Sec. \[sec:the d player game\] where players are nodes of a network, competing with their first neighbours. It should be obvious that an important quantity in such a framework is the connectivity distribution, that is, the number of sellers you are competing with. But who you are competing with is also of major importance. For a regular network with $d-1$ neighbours for each site, the $d$ player game is recovered. More interesting is the case where a site has $k$ neighbours with a probability $c_k$. As for the $d$ player game of Sec.
\[sec:the d player game\], two extreme situations can be considered: either a player is happy unless he gets no deal, or a player is happy only if he gets all possible deals. Based on our conclusions of Sec. \[sec:the d player game\], we only consider the former. There is one customer and one player at each node of the network. At each time step, every player proposes a price drawn at random from his personal bid distribution. A customer at one node buys at the cheapest price among the price proposed by the player at his own node and the prices proposed by the players at the neighbouring nodes. Hence, a player located at a site with $k$ neighbours can get from 0 to $k+1$ customers in every round of bidding. As long as a player gets at least one customer, he does nothing, while a player with no customer discards the price he proposed and draws a new price at random from a uniform distribution. The bid distribution of a player $P$ with $k$ first neighbours, $Q_i$, with $i = 1$, ..., $k$, evolves according to $$\frac{\partial P (x,t)}{\partial t} = - P (x,t) \prod_{i=1}^k \int_0^x Q_i (y,t) dy + 1 - \alpha_P$$ where $\alpha_P$ is the probability that $P$ wins and $Q_i (x,t)$ the bid distributions of the neighbours of $P$. Of course, these distributions follow similar evolution equations involving their own neighbours. With the chosen updating rule, winning is synonymous with getting at least one deal. We are not able to solve the previous set of equations in the general situation, but it is tempting to assume that $Q_i (x) = (1-\alpha_i) x^{-\alpha_i}$ in the stationary state, by analogy with the previous models. In this case, it is easy to show that the condition imposed on the exponent at one site is that the sum of this exponent and the exponents of all neighbouring sites is equal to the number of neighbours.
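The exponent condition can be made concrete with exact rational arithmetic. The little computation below is our own check: it solves the condition for a regular network (giving $\alpha=(d-1)/d$, consistent with the exponent $1/2$ of the two player auction at $d=2$) and for a star with $k$ leaves, where the condition forces $\alpha_Q=1$ and $\alpha_P=0$, so that the pure power-law ansatz $(1-\alpha)x^{-\alpha}$ degenerates.

```python
from fractions import Fraction

def regular_alpha(d):
    """Regular network, d-1 neighbours per site, all exponents equal:
    alpha + (d-1)*alpha = d-1  =>  alpha = (d-1)/d."""
    return Fraction(d - 1, d)

def star_alphas(k):
    """Star network: centre P with k leaves Q (k > 1). The conditions read
       a_P + k*a_Q = k   (centre, k neighbours)
       a_P + a_Q   = 1   (each leaf, 1 neighbour)
    and subtracting gives (k-1)*a_Q = k-1."""
    a_Q = Fraction(k - 1, k - 1)   # = 1 for any k > 1
    a_P = 1 - a_Q
    assert a_P + k * a_Q == k and a_P + a_Q == 1
    return a_P, a_Q

a_P, a_Q = star_alphas(4)   # a_Q = 1 makes (1-a_Q) x^(-a_Q) vanish identically
```

The degenerate star solution ($\alpha_Q=1$ lies outside $(0,1)$ and makes $Q$ vanish) already signals that the ansatz cannot hold on inhomogeneous networks.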
However, this solution is not compatible with the condition $\alpha_P \in (0,1)$, at least for some special situations, hinting that it works only for special cases, when all players have the same number of neighbours for instance. To show that the previous assumption does not capture the complete picture, we consider the stationary limit of a very simple network of $k+1$ nodes. $k$ of these nodes have only one link pointing to the central $k+1^{\hbox{\small th}}$ node. This corresponds to one central seller $P$ trying to compete with $k$ local sellers, $Q_i$, with $i=1$, ..., $k$. By symmetry, all local sellers should have the same distribution $Q_i (x) \equiv Q (x)$. One could think of the central node as a supermarket and all the neighbouring nodes as small differentiated shops. As the small shops do not sell similar goods, they do not compete with each other, while the supermarket is competing on all goods. Taking only one price for all goods is justified by the fact that the prices of the different goods are correlated, all being sold by the supermarket. Alternatively, some particular geographical situation could make going to other shops unattractive, such as restricted parking, while the supermarket provides easy access. In this case, we obtain in the stationary limit that $$P (x) \left( \int_0^x Q (y) dy \right)^k = 1 - \alpha_P$$ and $$Q (x) \int_0^x P (y) dy = 1 - \alpha_Q.$$ Introducing $G (x) = \int_0^x Q (y) dy$, we can show that $$(\alpha_Q - 1) G (x)^k \frac{\partial^2 G (x)}{\partial x^2} = (1 - \alpha_P) \left( \frac{\partial G (x)}{\partial x}\right)^2.$$ The previous equation can be solved to obtain $$Ax + B = \int dG\ \exp \left( \frac{(1 - \alpha_P)G^{1-k}}{(1 - \alpha_Q)(1-k)} \right)$$ where $A$ and $B$ are integration constants.
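The stationary equations above can also be probed by direct simulation. The sketch below is our own minimal version: each seller keeps a pool of candidate prices standing in for his bid distribution, and the win/lose rules follow from the network (a side seller loses his only customer exactly when the centre is cheaper; the centre loses only when he is the worst of all $k+1$ sellers). The pool size, number of rounds and seed are arbitrary choices, much smaller than the $10^4$ prices and $10^7$ rounds used for the figures.

```python
import random

def star_game(k=4, pool_size=1000, rounds=100000, seed=3):
    """One central seller (index 0) competing with k side sellers who do
    not compete with each other. Losers replace the price they proposed
    by a fresh uniform draw."""
    rng = random.Random(seed)
    pools = [[rng.random() for _ in range(pool_size)] for _ in range(k + 1)]
    losses = [0] * (k + 1)
    for _ in range(rounds):
        idx = [rng.randrange(pool_size) for _ in range(k + 1)]
        price = [pools[j][idx[j]] for j in range(k + 1)]
        for j in range(1, k + 1):
            if price[j] > price[0]:      # side seller j got no customer
                losses[j] += 1
                pools[j][idx[j]] = rng.random()
        if price[0] > max(price[1:]):    # centre worse than every side seller
            losses[0] += 1
            pools[0][idx[0]] = rng.random()
    return [l / rounds for l in losses]

rates = star_game()
central_loss = rates[0]
side_loss = sum(rates[1:]) / len(rates[1:])
```

Even at this small scale, the central seller loses far less often than the side sellers, in line with the values $1-\alpha_P \approx 0.03$ and $1-\alpha_Q \approx 0.7$ reported for the larger simulation.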
Using $G (0) = 0$ and $G (1) = 1$, we obtain the implicit solution $$1 - x = \frac{\Theta (G^{1-k}) }{\Theta (\infty)}$$ where we defined $$\Theta (v) = \int_1^v u^{\frac{k}{1-k}} \exp \left(\frac{(1 - \alpha_P) u}{(1 - \alpha_Q) (1-k)}\right) du.$$ This shows that for $x$ close to 1, $Q(x)$ is uniform, while $P (x) \sim x^{-k}$. For $x$ close to 0, the leading term for $Q (x)$ is $1/x$, with important logarithmic corrections, and it can be written $$Q (x) \sim \frac{1}{x \left(\ln (x/\gamma ) \right)^{k/(k-1)}}$$ for some function $\gamma$ that depends on $\alpha_P$, $\alpha_Q$ and $k$. In this limit, $P (x) \sim x Q(x)$. In Fig. \[fig:P (x)-price for one central\], we present the results of a simulation where one central seller competes with 4 side sellers, who do not compete with each other. Each seller had $10^4$ different prices at their disposal and they played $10^7$ rounds of bidding. For the particular simulation presented, $1 - \alpha_P \approx 0.03$ and $1 - \alpha_Q \approx 0.7$. As mentioned earlier, the different probabilities do not have to sum up to 1, as they do not refer to exclusive events. As can be appreciated in Fig. \[fig:P (x)-price for one central\], for $x$ close to 1, $Q(x)$ becomes uniform, while $P (x)$ can arguably be said to converge towards a power law. We could not check numerically the value of the exponent of this power law because it extends over less than a decade. In Fig. \[fig:P (x)-price for one central\], it is apparent that the distribution for the side players is uniform from 0.1 to 1, which signals poor adaptation. This impression is confirmed by the fact that they do not win very often. In simple terms, the central player benefits from having to compete with several players. This is to be expected, as the central player wins if he is not the worst player out of $k$, while the side players have to be the best of two not to lose. We arrive at the interesting conclusion that, as in Sec.
\[sec:the d player game\], it pays not to change prices too often. In fact, new prices being chosen from $(0,1)$, they are unlikely to be competitive. Assuming that we can extend this conclusion to more general networks, we expect to find that sites with a larger connectivity, surrounded by sites with smaller connectivity, are winning more often. This conclusion has some echoes in real life, where supermarkets benefit from attracting a wider range of customers than small shops. Conclusions {#sec:conclusions} =========== We have introduced simple models for tendering processes in an attempt to model the dynamics of intermediaries trying to make a profit from the competitive sale of a commodity or a service. Starting with a simple 2-player game, we extended our model to a $d$-player game, considered the problem of market making, investigated the effect of price fluctuations and implemented a strict market structure. In all these cases, the bid distribution of the players has been our main concern and we showed that it can strongly depend on the system details. Nevertheless, two generic features could be seen in all these models. First, unless there is a strict market structure to differentiate the players, they tend to become identical in the long time limit. This stationary state corresponds to a maximum profit state for the whole system. Hence, cooperation has appeared in a system made up of selfish individuals, a property reminiscent of the Minority Game [@challet97]. Second, we showed that players generally benefit from waiting longer before updating their beliefs. Players updating their bid distribution too quickly are unable to distinguish between a trend and a fluctuation. This was also observed in an evolutionary variant of the Minority Game, where it was shown that players prefer to keep on playing with only one strategy [@johnson98-2; @dhulst99].
We can compare this feature to models of growing boundaries, where noise can be drastically reduced by only allowing sites that have been selected a given number of times to grow [@barabasi]. Similarly, we can improve the adaptation process by updating bids only if they have lost a given number of times. Considering each variant of the model separately, we can refine our conclusions. For the two-player game, the stationary bid distribution is a power law and this result can be extended to a $d$-player game. However, this result depends on the type of adaptation process chosen, and we showed that it would not be the solution for a simple generalisation. By considering the average price generated by the model, which corresponds to a player’s expected profit, we showed that a model with only the worst player adapting compares better with reality than a model in which all the non-winning players adapt. This suggests that in real life, unless you are the worst, you still make a profit from business. Considering a simple generalisation to mimic market making, the previous results have been extended. The major difference is that the reference price, corresponding to 0 in the first models, is no longer fixed. This led us to consider the effect of price uncertainty on the bid distribution. We showed that outside the range of the fluctuations, the preliminary result obtained in the simple models can be extended. So, in a quiescent market, one expects our result to apply, while for unsettled markets, such as those of emerging countries, fluctuations are so important that they should strongly affect the bid distribution of the participants. Finally, putting the players on a network allowed us to generate heterogeneous players. We showed that a player’s bid distribution is a function of his neighbours’ distributions.
From a simple example, we concluded that players with a high connectivity connected with players of low connectivity are optimal, in the sense that they should get most of the deals. J.-P. Bouchaud and M. Potters, [*Theory of financial risks: from statistical physics to risk management*]{} (Cambridge University Press, Cambridge, 2000). R. Mantegna and H. E. Stanley, [*An introduction to econophysics*]{} (Cambridge University Press, Cambridge, 1999). M. O’Hara, [*Market microstructure theory*]{} (Blackwell Publishers, 1995). K. J. Cohen, S. F. Maier, R. A. Schwartz, D. K. Whitcomb, [*Market makers and the market spread: a review of recent literature*]{}, Journal of Financial and Quantitative Analysis [**14**]{}, 813 (1979). J. Conlisk, [*Why bounded rationality?*]{}, J. of Economic Literature [**34**]{}, 669 (1996). J. C. Hull, [*Options, Futures and Other Derivatives*]{}, 4th edition (Prentice-Hall, London, 2000), p. 158. P. Bak, M. Paczuski and M. Shubik, [*Price variations in a stock market with many agents*]{}, Physica A [**246**]{}, 430 (1997), cond-mat/9609144. D. Eliezer and I. I. Kogan, [*Scaling laws for the market microstructure of the interdealer broker markets*]{}, cond-mat/9808240. S. Wahal, [*Entry, Exit, Market Makers and the Bid-Ask Spread*]{}, The Review of Financial Studies [**10**]{}, 871 (1997). T. J. George and F. A. Longstaff, [*Bid-ask spreads and trading activity in the S&P 100 index options market*]{}, J. of Financial and Quantitative Analysis [**28**]{}, 381 (1993). D. Challet and Y.-C. Zhang, [*Emergence of cooperation and organization in an evolutionary game*]{}, Physica A [**246**]{}, 407 (1997) (see also adap-org/9708006). N. F. Johnson, P. M. Hui, R. Jonson and T. S. Lo, [*Self-organised segregation within an evolving population*]{}, Phys. Rev. Lett. [**82**]{}, 3360 (1999) (see also cond-mat/9810142). R. D’Hulst and G. J.
Rodgers, [*The Hamming distance in the Minority Game*]{}, Physica A [**270**]{}, 514 (1999) (see also adap-org/9902001). A.-L. Barabási and H. E. Stanley, [*Fractal concepts in surface growth*]{}, pp. 79-80 (Cambridge University Press, Cambridge, 1995).
--- author: - 'Annamária <span style="font-variant:small-caps;">Kiss</span>[^1] and Yoshio <span style="font-variant:small-caps;">Kuramoto</span>[^2]' title: 'Scalar order: possible candidate for order parameters in skutterudites ' --- Rare-earth filled skutterudites have been attracting considerable attention from both experimental and theoretical sides because of their intriguing behaviors. Among them, the Pr-based compound PrFe$_4$P$_{12}$ shows a phase transition at $T_0=6.5$K, which can be seen as a sharp anomaly in the magnetic susceptibility[@aoki]. In the ordered phase, staggered dipoles are induced by a magnetic field[@hao], which suggests that the order parameter does not break the time-reversal symmetry. PrRu$_{4}$P$_{12}$ has a metal-insulator phase transition at $T_{\rm MI}=65$K, and its crystalline electric field (CEF) states show a drastic change below $T_{\rm MI}$[@iwasa4], which seems to be described by an antiferro-type order of hexadecapole moments with $\Gamma_{1g}$ symmetry[@takimoto]. SmRu$_{4}$P$_{12}$ also has a metal-insulator phase transition at $T_{\rm MI}=16.5$K. The nature of the order parameter in phase II is not clear until now. An octupolar order with the $\Gamma_{5u}$ symmetry, which breaks the time-reversal symmetry, has been proposed for this phase[@yoshizawa]. In fact, a nonzero internal field was observed below $T_{\rm MI}$ by a recent $\mu$SR experiment[@hachitani]. In this paper we propose that both PrFe$_4$P$_{12}$ and PrRu$_4$P$_{12}$ have a scalar-type order parameter with the $\Gamma_{1g}$ symmetry. We show by a phenomenological analysis that the scalar order model explains the main properties of PrFe$_{4}$P$_{12}$ including the NMR results: (i) the absence of field induced dipoles perpendicular to the magnetic field, (ii) the isotropic magnetic susceptibility in the ordered phase, (iii) the field angle dependence of the transition temperature, and (iv) the splitting pattern of the $^{31}$P NMR spectra.
As a first step, we concentrate on behaviors at low magnetic fields. For SmRu$_{4}$P$_{12}$, in view of its nearly isotropic behavior in the ordered phase, we propose another candidate: an octupole order of the $T_{xyz}$ type, which transforms as a pseudo-scalar with the $\Gamma_{1u}$ symmetry. We propose how the order parameter in SmRu$_4$P$_{12}$ can be identified by NMR on a single crystal. Up to the present, the order parameter in PrFe$_4$P$_{12}$ has widely been considered to be an antiferro-quadrupolar (AFQ) order of $\Gamma_3$ moments. However, this AFQ model fails to account for the isotropic susceptibility in the ordered phase, for example. Furthermore, with static $\Gamma_3$ quadrupoles it is difficult to explain why the field-induced staggered dipoles are always parallel to the field direction, as indicated by neutron diffraction[@hao] and NMR [@kikuchi1; @kikuchi2]. A scalar-type order can be of two different kinds: one ($\Gamma_{1u}$) breaks, while the other ($\Gamma_{1g}$) does not break the time-reversal symmetry. On the other hand, both of them preserve the cubic symmetry even in the ordered phase. Therefore, the Landau-type expansion of the free energy ${\cal F}$ contains cubic invariants composed of the magnetic field components. Around a second-order phase transition we expand ${\cal F}$ up to fourth order in the order parameter and magnetic field as $$\begin{aligned} {\cal F}(\psi_{\bf Q},H) = {\cal F}_0(H)+ \frac 12 a_s [T-T_c(H)]\psi_{\bf Q}^2+ \frac 14 b_s \psi_{\bf Q}^4, \label{eq:free3}\end{aligned}$$ where $\psi_{\bf Q}$ is the staggered component of the scalar order parameter with ordering vector ${\bf Q}=[1,0,0]$, and ${\bf H}=H(h_x,h_y,h_z)$ is the external magnetic field.
The transition temperature in a magnetic field has the field dependence $$\begin{aligned} T_c(H) = T_0 +\frac 12 t_2 H^2 +\frac 14(t_{4} + t_{4a} h_{4}) H^4, \label{Tc}\end{aligned}$$ where $t_2, t_4, t_{4a}$ are expansion coefficients, and $h_{4}=h_x^4+h_y^4+h_z^4-3/5$. This invariant $h_4$ is common to the cubic ($O_h$) and the tetrahedral ($T_h$) point group symmetries. Therefore, our present treatment is valid for both cases. The first part ${\cal F}_0(H)$ has a field dependence similar to eq.(\[Tc\]). The anisotropy in eq.(\[Tc\]) is independent of the microscopic details of the scalar order. Using $h_4[100]=2/5, h_4[110]=-1/10, h_4[111]=-4/15$ we obtain the ratio $$\begin{aligned} \frac{T_c [001]-T_c [111]} {T_c [110]-T_c [111]} =4. \label{eq:ratio}\end{aligned}$$ This relation should hold as long as the magnetic field is weak enough when the scalar order emerges. Figure \[fig:1\] shows the transition temperature in PrFe$_{4}$P$_{12}$ measured as a function of field angle[@sakakibara], which is defined by $(h_x,h_y,h_z) = ( \cos\phi \sin\theta ,\sin\phi \sin\theta ,\cos\theta ) $. The anisotropy of eq.(\[Tc\]) with $t_{4a}<0$ provides an excellent fit to the observed $T_c$. This result strongly suggests scalar order in this compound. ![$\theta$-dependence of the transition temperature with $\phi=\pi/4$ choosing coefficients $t_2, t_4$ and $t_{4a}$ to fit $T_c$ for fields along (001) and (111). Boxes represent the measured result at $H=2.7$T[@sakakibara].[]{data-label="fig:1"}](fig1.eps) Equation (\[eq:free3\]) also shows that the magnetization along the three principal axes should have the anisotropy ratio $$\begin{aligned} (M_{100}-M_{111})/(M_{110}-M_{111})=4, \label{anisotropic-M}\end{aligned}$$ or, equivalently, $(M_{111}-M_{100})/(M_{110}-M_{100})=4/3$. The ratio given by eq.(\[anisotropic-M\]) holds both in the paramagnetic and the ordered phase as long as the magnetic field is weak enough.
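The ratio of eq.(\[eq:ratio\]) follows from the values of $h_4$ alone: in $T_c(H)$ every term except $t_{4a}h_4H^4/4$ is isotropic, so differences of $T_c$ between field directions are proportional to differences of $h_4$. This can be verified with exact rational arithmetic (a check of the arithmetic, nothing more):

```python
from fractions import Fraction

def h4(direction):
    """h_4 = hx^4 + hy^4 + hz^4 - 3/5 for a normalised field direction;
    since hx^2 = cx^2 / |c|^2 is rational for integer components cx,
    the result is an exact Fraction."""
    n2 = sum(c * c for c in direction)
    return sum(Fraction(c * c, n2) ** 2 for c in direction) - Fraction(3, 5)

h100, h110, h111 = h4((1, 0, 0)), h4((1, 1, 0)), h4((1, 1, 1))
# Differences of T_c between directions are proportional to differences of h4:
ratio = (h100 - h111) / (h110 - h111)
```

One recovers $h_4[100]=2/5$, $h_4[110]=-1/10$, $h_4[111]=-4/15$ and the universal ratio 4, independently of the coefficients $t_2$, $t_4$ and $t_{4a}$.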
Because of the coupling with $\psi_{\bf Q}$, however, the weight of the anisotropic part should change at the transition temperature. It is possible that the anisotropy is reversed while keeping the ratio given by eq.(\[anisotropic-M\]). Experimentally, the reversed anisotropy is indeed observed at $T=0.3$K [@sakakibara]. We now discuss the anisotropy of the staggered magnetization $m_{\bf Q}$ with the order parameter $\Gamma_{1g}$. The change of the free energy due to $m_{\bf Q}$ is given by $$\begin{aligned} &{\cal F}(\psi_{\bf Q},m_{\bf Q},H)-{\cal F}(\psi_{\bf Q},H) \nonumber\\ &=\frac 12 a_m m_{\bf Q}^2 + \psi_{\bf Q}m_{\bf -Q} \left[ c_{1}H +(c_3 +c_{3a}h_{4})H^3\right], \label{eq:free1}\end{aligned}$$ where ${\cal F}(\psi_{\bf Q},H)$ is given by eq.(\[eq:free3\]). In the presence of $\psi_{\bf Q}$ with symmetry $\Gamma_{1g}$, $m_{\bf Q}$ is induced by an external magnetic field. Terms with coefficient $c_i$ in expression (\[eq:free1\]) come from the invariant $\Gamma_{1g}({\bf Q})\otimes \Gamma_{4u}({\bf -Q})\otimes \Gamma_{4u}({\bf 0})$, which requires that the induced dipoles are always parallel or anti-parallel to the field direction. From the condition $\partial {\cal F}/\partial m_{\bf Q}=0$ we obtain $$\begin{aligned} m_{\bf Q}=-\frac{1}{a_m}\left[ c_1 H+(c_3 +c_{3a }h_{4})H^3 \right] \psi_{\bf Q},\label{eq:indm}\end{aligned}$$ which shows explicitly that $m_{\bf Q}$ develops in a magnetic field when the order parameter $\psi_{\bf Q}$ is non-zero. To linear order in the magnetic field, there is no anisotropy in the induced staggered dipoles. At third order, the anisotropy ratio is again given by $$\begin{aligned} (m_{\bf Q}[100]-m_{\bf Q}[111])/(m_{\bf Q}[110]-m_{\bf Q}[111])=4. \label{anisotropic-m_Q}\end{aligned}$$ At present, there is no experimental information on the above ratio, since neutron scattering has been done only for $m_{\bf Q}[100]$ and $m_{\bf Q}[110]$ [@hao].
The same anisotropy ratio given by eqs.(\[eq:ratio\]), (\[anisotropic-M\]) and (\[anisotropic-m\_Q\]) comes from the property of $h_4$ alone, and is a universal feature of the scalar order. Let us explain how the difference between PrFe$_4$P$_{12}$ and PrRu$_4$P$_{12}$ can be interpreted in a phenomenological framework. The most obvious difference appears in the magnetic susceptibility; PrFe$_4$P$_{12}$ shows a sharp peak at the transition [@aoki], while no conspicuous anomaly is seen in PrRu$_4$P$_{12}$. In considering the magnetic susceptibility around the zero-field phase transition temperature $T_0$, it is more convenient to use the Gibbs potential, which is obtained from the free energy ${\cal F}$ by a Legendre transformation. Namely, using the homogeneous dipole moment ${\bf M}={\bf m}_{\bf 0}$, we obtain $$\begin{aligned} &{\cal G}(\psi_{\bf Q},M)={\cal F}(\psi_{\bf Q},H)+{\bf M}\cdot {\bf H} \nonumber\\ &= {\cal F}(\psi_{\bf Q},H=0)+ \frac 12 a_f (T-T_F)M^2+ \frac 12 \lambda \psi_{\bf Q}^2 M^2,\end{aligned}$$ where ${\cal F}(\psi_{\bf Q},H=0)$ is given by eq.(\[eq:free3\]) with $H=0$, and the last term represents the coupling between the scalar order and the magnetization. The term with $a_f$ becomes important if the ferromagnetic instability is close to the scalar phase transition: $T_F \lesssim T_0$, as in the case of PrFe$_4$P$_{12}$. Indeed, a positive Curie-Weiss temperature $T_F\approx 3.5$K has been found in experiments[@aoki]. In contrast, PrRu$_4$P$_{12}$ seems far from the ferromagnetic instability. In this case, $M$ does not have a significant amplitude, and the coupling term with $\lambda$ is less significant.
Around the transition temperature $T_0$, the magnetic susceptibility $\chi_+$ for $T>T_0$ and $\chi_-$ for $T<T_0$ can be expressed as $$\begin{aligned} & \chi_{+}^{-1} = a_f (T-T_F), \\ & \chi_-^{-1} = (a_f -\lambda a_s /b_s)T- a_f T_F+\lambda a_s T_0/b_s, \label{chi_-}\end{aligned}$$ where both $\chi_{\pm}$ are isotropic and follow the Curie-Weiss law. If the coupling with the magnetic moment is strong enough, we obtain $ \lambda a_s /b_s > a_f $, which leads to a peak in the susceptibility at $T_0$. Figure \[fig:CW-fit\] shows a comparison between theory and experiment for PrFe$_4$P$_{12}$. The agreement is excellent with the choice $\lambda a_s /(a_f b_s)=2.8$. Consistent with the sharp peak in the susceptibility, the phase boundary is suppressed appreciably by a magnetic field in PrFe$_4$P$_{12}$. In the opposite limit of negligible $\lambda$, $\chi_-^{-1}$ given by eq.(\[chi\_-\]) reduces to $\chi_+^{-1}$. Hence there is no change at the scalar phase transition. This limit seems to explain the situation in PrRu$_4$P$_{12}$. Accordingly, the transition temperature is hardly affected by a magnetic field [@sekine]. We proceed to analyze the $^{31}$P NMR spectra in skutterudites in terms of the scalar order. We discuss mainly the case of PrFe$_4$P$_{12}$, but also touch on SmRu$_4$P$_{12}$, where the pseudo-scalar $\Gamma_{1u}$ is a candidate for the order parameter. Around each Pr ion at position (0,0,0), there are six P positions ${\bf r}_{1(2)}=(0,v, \pm u)$, ${\bf r}_{3(4)}=(\pm u,0,v)$ and ${\bf r}_{5(6)}=(v, \pm u,0)$, which are crystallographically equivalent. In a finite magnetic field, these positions are no longer equivalent, and a splitting of the NMR lines takes place. The splitting is of purely magnetic origin, since $^{31}$P nuclei, having $I=1/2$, carry no quadrupole moment. Therefore, the interactions are only between the P nuclear spin $\bf I$ and the dipole $\bf J$ and octupole ${\bf T^\beta, T}_{xyz}$ moments of a Pr ion.
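The shape of the susceptibility anomaly follows directly from these expressions. In the minimal numerical sketch below, only the ratio $\lambda a_s/(a_f b_s)=2.8$ is taken from the fit and $T_F=3.5$K from the text; $a_f$ and $T_0$ are placeholder numbers used purely for illustration. The sketch confirms that the two branches join continuously at $T_0$ and that $\chi^{-1}$ grows on cooling below $T_0$, i.e. $\chi$ itself peaks at the transition:

```python
a_f, T_F = 1.0, 3.5     # a_f is a placeholder scale; T_F ~ 3.5 K from the text
T0 = 6.5                # transition temperature: placeholder value
lam_as_bs = 2.8 * a_f   # lambda*a_s/b_s, from the fitted ratio 2.8

def chi_plus_inv(T):    # inverse susceptibility above T0
    return a_f * (T - T_F)

def chi_minus_inv(T):   # inverse susceptibility below T0, eq. (chi_-)
    return (a_f - lam_as_bs) * T - a_f * T_F + lam_as_bs * T0

# The two branches are continuous at T0 ...
print(chi_plus_inv(T0), chi_minus_inv(T0))  # equal up to floating-point error
# ... and since lam_as_bs > a_f, chi^-1 increases on cooling below T0,
# so the susceptibility itself has a peak at the transition.
print(chi_minus_inv(T0 - 1.0) > chi_minus_inv(T0))  # -> True
```

With $\lambda\to 0$ the two branches coincide for all $T$, which is the PrRu$_4$P$_{12}$ limit described above.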
Taking a representative pair at ${\bf r}_{3}$ and ${\bf r}_{4}$, we write down the hyperfine interaction characterized by energies $e_{k,l}^{(d)}$ for the dipoles and $e_{k,l}^{(o)}$ for the octupoles, with $k,l$ distinguishing independent components [@sakai]. The invariant form of the interaction contains terms such as $I_x J_z$ and $I_y T_{xyz}$, since there is only a mirror symmetry with respect to the $xz$-plane for the pair. The hyperfine interaction is given by $$\begin{aligned} && H_{\rm hf}(3,4) = I_x[e_{1,1}^{(d)} J_x\pm e_{1,2}^{(d)}J_z+e_{1,1}^{(o)}{T}^{\beta}_x\pm e_{1,2}^{(o)}{ T}^{\beta}_z]\nonumber\\ &&+I_y[e_{2,1}^{(d)}J_y+e_{2,1}^{(o)}{T}^{\beta}_y\pm e_{2,2}^{(o)}T_{xyz}]\nonumber\\ &&+I_z[e_{3,1}^{(d)}J_z\pm e_{3,2}^{(d)}J_x+e_{3,1}^{(o)}{T}^{\beta}_z\pm e_{3,2}^{(o)}{T}^{\beta}_x]\,,\label{eq:int1}\end{aligned}$$ where the negative sign corresponds to the P ion at position ${\bf r}_{4}$. The interaction for the other pairs can be obtained from eq.(\[eq:int1\]) by applying the proper rotation operations. Let us first discuss the case of PrFe$_4$P$_{12}$. In $T_h$ symmetry, the dipole moment ${\bf J}$ corresponds to $\Gamma_{4u}^{(1)}$ and the octupole moment ${\bf T}^{\beta}$ to $\Gamma_{4u}^{(2)}$, but they are mixed due to the lower symmetry. Therefore, both are induced by the external magnetic field. In the disordered phase ($\psi_{\bf Q}=0$), the homogeneous moments are induced as ${\bf m}_{\bf 0} (= {\bf J}_{\bf 0}) =M{\bf H}/|{\bf H}|$ and ${\bf T}^{\beta}_{\bf 0}=T{\bf H}/|{\bf H}|$, where $M$ and $T$ are the magnitudes at a given temperature and magnetic field. They cause a splitting of the P NMR line depending on the field direction. Since the external magnetic field is much larger than the hyperfine field, we assume $(I_x,I_y,I_z)\propto (h_x,h_y,h_z)$. In the case of UFe$_4$P$_{12}$, the classical approximation of the dipolar field shows good qualitative agreement with the measured NMR spectra [@tokunaga].
In PrFe$_4$P$_{12}$, however, we have checked that this approximation leads to a large deviation from the measured results [@kikuchi1]. Hence we fix the parameters in eq.(\[eq:int1\]) in a phenomenological manner. We define the hyperfine field $h_{\rm hf}$ so that $H_{\rm hf} = \gamma_n I h_{\rm hf}=I(f-f_0)$, where $\gamma_n$ is the nuclear gyromagnetic ratio of $^{31}$P, $f$ is the resonance frequency and $f_0$ is the zero shift. We write $h_{\rm hf} (1, 2)\equiv g_1$, $h_{\rm hf} (3,4)\equiv g_2$ and $h_{\rm hf} (5,6)\equiv g_3$ for ${\bf H}\parallel (001)$, and $h_{\rm hf} (1,3,5)\equiv k_1$ for ${\bf H}\parallel (111)$. We obtain $$\begin{aligned} & h_{\rm hf} (1, 2) = g_3h_x^2 + g_2h_y^2 + g_1h_z^2 \pm g_4 h_yh_z, \nonumber\\ & h_{\rm hf} (3, 4) = g_1h_x^2 + g_3h_y^2 + g_2h_z^2 \pm g_4 h_zh_x, \nonumber\\ & h_{\rm hf} (5, 6) = g_2h_x^2 + g_1h_y^2 + g_3h_z^2 \pm g_4 h_xh_y, \end{aligned}$$ where $g_4=3k_1-g_1-g_2-g_3$, and the parameters $g_i \ (1\le i\le 4)$ are linear combinations of $e_{k,l}^{(\alpha)} \ (\alpha=d,o) $ times $M$ or $T$. We determine the four parameters $g_i $ by fitting to the three observed lines for ${\bf H}\parallel (001)$ and the three degenerate ones from P1, P3, P5 for ${\bf H}\parallel (111)$. Figure \[fig:2\] shows our results for the fitting in the disordered phase. The values of the parameters $g_i$ are summarized in Table \[tab:1\]. The result for ${\bf H}\parallel (110)$ is a consequence of the form of the hyperfine interaction given by eq.(\[eq:int1\]), and is independent of the microscopic details of the model. The spectrum computed for ${\bf H}\parallel (110)$ is found to be in reasonable agreement with the experimental results. Since eq.(\[eq:int1\]) includes only the nearest-neighbor interaction between the Pr and P ions, the slight deviation between theory and experiment is ascribed to the effects of more distant Pr-P pairs. Let us now consider the case of $\psi_{\bf Q}\neq 0$ with the symmetry $\Gamma_{1g}$.
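The internal consistency of these definitions can be verified directly. With the Table \[tab:1\] values of $g_1$, $g_2$, $g_3$ and $k_1$, the sketch below checks that the quadratic forms above reduce to $g_1$, $g_2$, $g_3$ for ${\bf H}\parallel (001)$, and that the relation $g_4=3k_1-g_1-g_2-g_3$ makes the upper-sign branch of all three pairs (which we take here to correspond to P1, P3, P5) collapse onto the single value $k_1$ for ${\bf H}\parallel (111)$:

```python
g1, g2, g3, k1 = 5.2, 10.7, 0.2, 6.3      # Table 1 values, in mT
g4 = 3 * k1 - g1 - g2 - g3                 # relation quoted in the text

def h_hf(h, sign):
    """Hyperfine fields of the three P pairs for a unit field h = (hx, hy, hz);
    sign = +1/-1 selects the two members of each pair."""
    hx, hy, hz = h
    return (g3*hx**2 + g2*hy**2 + g1*hz**2 + sign*g4*hy*hz,   # pair (1,2)
            g1*hx**2 + g3*hy**2 + g2*hz**2 + sign*g4*hz*hx,   # pair (3,4)
            g2*hx**2 + g1*hy**2 + g3*hz**2 + sign*g4*hx*hy)   # pair (5,6)

# H || (001): cross terms vanish and the three pairs give g1, g2, g3.
print(h_hf((0.0, 0.0, 1.0), +1))          # -> (5.2, 10.7, 0.2)

# H || (111): the upper-sign branch of all three pairs collapses to k1,
# reproducing the three degenerate P1, P3, P5 lines used in the fit.
s = 3 ** -0.5
print(h_hf((s, s, s), +1))                # -> (6.3, 6.3, 6.3) up to rounding
```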
We restrict our discussion to the case of low magnetic fields, and expand the quantities to linear order in the magnetic field. Thus we write the staggered dipoles and octupoles induced by the magnetic field as ${\bf m}_{\bf Q} = K_{1,d}\psi_{\bf Q}{\bf H}$ and ${\bf T}^{\beta}_{\bf Q} =K_{1,o}\psi_{\bf Q}{\bf H}$. As a result, the Pr positions $(0,0,0)$ and $(1/2,1/2,1/2)$ become inequivalent, and an extra splitting of the NMR lines develops below $T_c$. In a magnetic field along (001), for example, the extra splittings of the three main lines are described by $$\begin{aligned} &\Delta h_{\rm hf}(1,2) = \psi_{\bf Q} \left(K_{1,d} e_{1,1}^{(d)}+K_{1,o} e_{1,1}^{(o)}\right)H \equiv a_1 H,\nonumber\\ &\Delta h_{\rm hf}(3,4) = \psi_{\bf Q} \left(K_{1,d} e_{3,1}^{(d)}+K_{1,o} e_{3,1}^{(o)}\right)H\equiv a_2 H,\nonumber\\ &\Delta h_{\rm hf}(5,6) = \psi_{\bf Q} \left(K_{1,d} e_{2,1}^{(d)}+K_{1,o} e_{2,1}^{(o)}\right)H\equiv a_3 H.\nonumber %\label{eq:paral2}\end{aligned}$$ We determine the magnitudes of the parameters $a_i \ (i=1,2,3$) so as to reproduce the corresponding experimental results [@note1]. Figure \[fig:3\] shows the result of the fitting together with the experimental results. It is obvious that the experimental results deviate from the linear behavior for $H\gtrsim 1$T. The different field dependence of the splittings $\Delta h_{\rm hf}(1,2)(\approx \Delta h_{\rm hf}(5,6))$ and $\Delta h_{\rm hf}(3,4)$ is intriguing. It seems hard to understand this difference without taking induced staggered octupoles into account. The analysis including non-linear effects of the magnetic field is rather involved, and will be presented in a separate publication.
  $g_1$                  $g_2$                  $g_3$                  $k_1$
  ---------------------- ---------------------- ---------------------- ----------------------
  $5.2$mT                $10.7$mT               $0.2$mT                $6.3$mT
  $a_1=a_3$              $a_2$                  $c_1$                  $c_2$
  $1.2 \times 10^{-2}$   $0.8 \times 10^{-2}$   $1.14 \times 10^{-2}$  $0.99 \times 10^{-2}$
  $d_1$                  $d_2$                  $d_3$                  $d_4$
  $1.0 \times 10^{-2}$   $1.2 \times 10^{-2}$   $1.1 \times 10^{-2}$   $0.9 \times 10^{-2}$

  : Choice of the parameters describing the measured NMR results.[]{data-label="tab:1"}

For ${\bf H}\parallel (111)$, we define the parameters $c_1$ for $\Delta h_{\rm hf}(1,3,5)$ and $c_2$ for $\Delta h_{\rm hf}(2,4,6)$ in a way analogous to the $a_i$. The simple splitting of the lines $(1,3,5)$ and $(2,4,6)$ in the ordered phase can be explained with scalar order, but not with quadrupolar order. The splittings $\Delta h_{\rm hf}(1,3,5)$ and $\Delta h_{\rm hf}(2,4,6)$ have almost identical field dependence, with a tiny deviation at low fields [@kikuchi1]. We fix the parameter $c_1$ to reproduce the experimental result; the parameter $c_2$ is then determined as $c_2=(2a_1+2a_2+2a_3-3c_1)/3$. For ${\bf H}\parallel (110)$, we define $d_1$ for $\Delta h_{\rm hf}(1,2)$, $d_2$ for $\Delta h_{\rm hf}(3,4)$, $d_3$ for $\Delta h_{\rm hf}(5)$, and $d_4$ for $\Delta h_{\rm hf}(6)$. It turns out that all $d_i$ are fixed by the parameters $a_j$ and $c_1$ as $d_1 =(a_2 + a_3)/2, \ d_2 =(a_1 + a_3)/2, \ d_3 =(3c_1-a_3)/2$ and $d_4=(2a_1+2a_2+a_3-3c_1)/2$. Namely, the experimental results along (110) should be reproduced without further adjustable parameters, provided eq.(\[eq:int1\]) applies to the actual system. The experimental values, marked with superscript $e$, are $ c_2^{e}=1.0\times 10^{-2}, \ d_1^{e}\sim d_3^{e} = 0.85\times 10^{-2}, \ d_2^{e}=1.2 \times 10^{-2}, \ d_4^{e}=1.1 \times 10^{-2}$, which show reasonable agreement with the corresponding theoretical values shown in Table \[tab:1\].
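Since $c_2$ and all $d_i$ are fixed by the $a_j$ and $c_1$, the Table \[tab:1\] entries can be recomputed directly from the quoted relations. The sketch below uses the Table \[tab:1\] values of $a_1=a_3$, $a_2$ and $c_1$:

```python
a1, a2, a3 = 1.2e-2, 0.8e-2, 1.2e-2   # Table 1 (a1 = a3)
c1 = 1.14e-2                          # Table 1

# Parameter relations quoted in the text for H || (111) and H || (110):
c2 = (2*a1 + 2*a2 + 2*a3 - 3*c1) / 3
d1 = (a2 + a3) / 2
d2 = (a1 + a3) / 2
d3 = (3*c1 - a3) / 2
d4 = (2*a1 + 2*a2 + a3 - 3*c1) / 2

for name, val in [("c2", c2), ("d1", d1), ("d2", d2), ("d3", d3), ("d4", d4)]:
    print(name, round(val, 4))
# -> c2 0.0099, d1 0.01, d2 0.012, d3 0.0111, d4 0.0089
```

Rounded to two significant figures, these reproduce the table entries $0.99$, $1.0$, $1.2$, $1.1$ and $0.9\times 10^{-2}$, and lie close to the experimental values $c_2^{e}$, $d_i^{e}$ quoted above.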
We expect a qualitatively similar pattern for the NMR spectra in the ordered phase of PrRu$_4$P$_{12}$, since its order parameter should also be a scalar. We now briefly discuss the expected splitting pattern of the NMR spectra in the case of $T_{xyz}$ octupolar order. When the magnetic field is applied along $(001)$, the hyperfine interaction for the different pairs is given by $$\begin{aligned} & H_{\rm hf}(1,2) = e_{1,1}^{(d)} I_z J_z, \nonumber\\ & H_{\rm hf}(3,4) = e_{3,1}^{(d)} I_z J_z, \nonumber\\ & H_{\rm hf}(5,6) = e_{2,1}^{(d)} I_z J_z\pm e_{2,2}^{(o)}I_z T_{xyz}. \label{eq:paral4}\end{aligned}$$ The $T_{xyz}$ octupolar order causes a splitting of the lines $(5,6)$. With the octupole order parameter $\psi_{\bf Q}$ we obtain $\Delta h_{\rm hf}(5)= \Delta h_{\rm hf}(6)\propto e_{2,2}^{(o)}\psi_{\bf Q}$. The splitting occurs also for the octupole ordering vector ${\bf q}=0$, because $h_{\rm hf}(5)-h_{\rm hf}(6)\propto e_{2,2}^{(o)}\psi_{\bf 0}\ne 0$ in this case. Then four NMR lines appear in the ordered phases with ${\bf q}={\bf Q}$ as well as with ${\bf q}=0$. In the case of a field along $(111)$, however, a doubling of each line occurs with ${\bf q}={\bf Q}$, while no extra splitting is expected with ordering vector ${\bf q}=0$. The former splitting pattern is similar to the case of PrFe$_4$P$_{12}$. In this paper we have considered the characteristic features of scalar orders with symmetries $\Gamma_{1g}$ and $\Gamma_{1u}$ in weak magnetic fields. We have found a universal anisotropy ratio in weak magnetic fields by a phenomenological Landau-type analysis. We conclude that the scalar order scenario with symmetry $\Gamma_{1g}$ can explain the known properties of PrFe$_4$P$_{12}$ consistently. We have shown that the splitting pattern of the $^{31}$P NMR spectra in the disordered and ordered phases can also be described within this framework. We have predicted the $^{31}$P NMR spectra in the case of $T_{xyz}$ octupole order for fields along $(001)$ and $(111)$.
This octupole moment transforms as a pseudo-scalar with the $\Gamma_{1u}$ symmetry and can be a good candidate for the order parameter in SmRu$_4$P$_{12}$ below the metal-insulator transition. Acknowledgment {#acknowledgment .unnumbered} ============== The authors are grateful to K. Iwasa, J. Kikuchi, M. Takigawa, D. Kikuchi, T. Tayama, and T. Sakakibara for showing their experimental results prior to publication, and O. Sakai for inspiring discussions. [99]{} Y. Aoki, T. Namiki, T. D. Matsuda, K. Abe, H. Sugawara, H. Sato, Phys. Rev. B [**65**]{} (2002) 064446. L. Hao, K. Iwasa, M. Nakajima, D. Kawana, K. Kuwahara, M. Koghi, H. Sugawara, T. D. Matsuda, Y. Aoki and H. Sato, Acta Physica Polonica B [**34**]{} (2003) 1113. K. Iwasa, L. Hao, K. Kuwahara, M. Koghi, S. R. Saha, H. Sugawara, Y. Aoki, H. Sato, T. Tayama, T. Sakakibara, Phys. Rev. B [**72**]{} (2005) 024414. T. Takimoto, J. Phys. Soc. Japan [**75**]{} (2006) 034714. M. Yoshizawa, Y. Nakanishi, M. Oikawa, C. Sekine, I. Shirotani, S. R. Saha, H. Sugawara, H. Sato, J. Phys. Soc. Japan [**74**]{} (2005) 2141. K. Hachitani, H. Fukazawa, Y. Kohori, I. Watanabe, C. Sekine, I. Shirotani, Phys. Rev. B [**73**]{} (2006) 052408. J. Kikuchi, M. Takigawa, H. Sugawara, H. Sato, to be published. J. Kikuchi, M. Takigawa, H. Sugawara, H. Sato, Physica B [**359-361**]{} (2005) 877. H. Sato, T. Sakakibara, T. Tayama, H. Sugawara, H. Sato, to be published. C. Sekine, T. Uchiumi, I. Shirotani, T. Yagi, Phys. Rev. Lett. [**79**]{} (1997) 3218. O. Sakai, R. Shiina, H. Shiba, P. Thalmeier, J. Phys. Soc. Japan [**66**]{} (1997) 3005. Y. Tokunaga, T. D. Matsuda, H. Sakai, H. Kato, S. Kambe, R. E. Walstedt, Y. Haga, Y. Onuki, H. Yasuoka, Phys. Rev B [**71**]{} (2005) 045124. For the fitting in linear order in $H$ we use the second lowest point of the measured spectra instead of the first one because of its ambiguity due to the finite linewidth and small value of the magnetic field. 
[^1]: E-mail address: amk@cmpt.phys.tohoku.ac.jp [^2]: E-mail address: kuramoto@cmpt.phys.tohoku.ac.jp
--- abstract: 'In this article we analyze the isotropic oscillator system on the two-dimensional sphere in the spherical systems of coordinates. The expansion coefficients for transitions between three spherical bases of the oscillator are calculated. It is shown that these coefficients are expressed through the Clebsch-Gordan coefficients for the SU(2) group analytically continued to real values of their arguments.' --- [*Laboratory of Theoretical Physics,\ Joint Institute for Nuclear Research,*]{}\ [*Dubna, Moscow Region 141980, Russia*]{} Introduction ============ The present article is devoted to the oscillator system on the two-dimensional sphere $s_1^2+s_2^2+s_3^2=R^2$, which is also known as the Higgs oscillator [@HIG], $$\begin{aligned} \label{I1} V = \frac{\alpha^2 R^2}{2}\, \frac{s_1^2+s_2^2}{s_3^2},\end{aligned}$$ where $s_i$ are the Cartesian coordinates in the ambient Euclidean space and $R$ is the radius of the sphere. Like its “flat”-space partner [@POG], this is a superintegrable system, sharing the same characteristic properties: accidental degeneracy of the energy spectrum [@HIG], separation of variables in more than one coordinate system [@GRO1; @KMP], and a nontrivial realization of hidden symmetries [@DASK] (see also [@ZHED]). The aim of this paper is to describe the solutions of the Schrödinger equation for the potential (1) in three spherical systems of coordinates and to calculate the coefficients of the interbasis expansions between the corresponding wave functions.
Quantum motion on the two-dimensional sphere ============================================ The Schrödinger equation on the two-dimensional sphere has the following form: $$\begin{aligned} {H} \Psi = \left[- \frac{1}{2} \Delta_{LB} + V\right] \Psi = E \Psi\end{aligned}$$ where $\Delta_{LB}$ is the Laplace–Beltrami operator $$\Delta_{LB} = \frac{1}{R^2} (L^2_1 + L^2_2 + L^2_3)$$ and $L_i$ are the generators of the Lie algebra $o(3)$ $$\begin{aligned} L_i = -\epsilon_{ikj}s_k\frac{\partial}{\partial s_j},\quad [L_i, L_k]=\epsilon_{ikj} L_j,\quad i,k,j = 1,2,3\end{aligned}$$ For $V=0$ the separated eigenfunctions of the Laplace-Beltrami operator satisfy $$\begin{aligned} \label{SEP1} \Delta_{LB} \Psi = - \frac{l(l+1)}{R^2}\Psi, \quad I \Psi = k \Psi, \quad \Psi_{lk}(\alpha, \beta) = \psi_{lk}(\alpha) \psi_{lk}(\beta)\end{aligned}$$ where $I$ is a second-order operator in the enveloping algebra of o(3) $$\begin{aligned} I = a_{ik} L_i L_k, \qquad a_{ik} = a_{ki}\end{aligned}$$ The matrix $a_{ik}$ can be diagonalized to give [@WIN] $$\begin{aligned} I(a_1, a_2, a_3) = a_1 L_1^2 + a_2 L_2^2 + a_3 L_3^2\end{aligned}$$ When all three eigenvalues $a_i$ are different, the separable coordinates in (\[SEP1\]) are elliptic [@PW]. If two of the eigenvalues $a_i$ are equal, e.g. $a_1=a_2\not=a_3$, $a_1\not=a_2=a_3$, or $a_1=a_3\not=a_2$, we can transform the operator $I$ into one of the operators $I(0,0,1) = L_3^2$, $I(0,1,0)=L_2^2$, or $I(1,0,0) = L_1^2$. Thus, the corresponding separable coordinates on $S_2$ are the three types of spherical coordinates $$\begin{aligned} \label{COOR1} \begin{array}{llll} s_1 &=\, R\sin\theta\cos\varphi &=\, R\cos\theta' &=\, R\sin\theta''\sin\varphi'', \\ s_2 &=\, R\sin\theta\sin\varphi &=\, R\sin\theta'\cos\varphi' &=\, R\cos\theta'', \\ s_3 &=\, R\cos\theta &=\, R\sin\theta'\sin\varphi' &=\, R\sin\theta''\cos\varphi'' \end{array}\end{aligned}$$ where $\varphi \in [0, 2\pi),$ $\theta \in (0, \pi)$.
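As a quick consistency check of eq.(\[COOR1\]), the primed angles defined through $s_1 = R\cos\theta'$ and $s_2 = R\sin\theta'\cos\varphi'$ must reproduce the remaining relation $s_3 = R\sin\theta'\sin\varphi'$ when expressed through the unprimed angles. A short symbolic verification (with $R=1$):

```python
import sympy as sp

th, ph = sp.symbols('theta varphi', positive=True)

# Unprimed parametrization of eq. (COOR1), with R = 1:
s1, s2, s3 = sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)

# The point lies on the unit sphere:
assert sp.simplify(s1**2 + s2**2 + s3**2 - 1) == 0

# Primed angles, defined through s1 = cos(theta'), s2 = sin(theta')cos(varphi'):
sin_thp = sp.sqrt(1 - s1**2)
sin_php = sp.sqrt(1 - (s2 / sin_thp)**2)

# The remaining relation s3 = sin(theta')sin(varphi') then holds up to sign:
check = sp.simplify((sin_thp * sin_php)**2 - s3**2)
print(check)  # -> 0
```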
The eigenfunctions of the three sets of operators $ \{\Delta_{LB}, L_i\}$ are the usual spherical functions on $S_2$: $$\begin{aligned} \Delta_{LB}\, Y_{lm_i}(\theta, \varphi) = - \frac{l(l+1)}{R^2}\, Y_{lm_i}(\theta, \varphi) \quad L_i^2 Y_{lm_i}(\theta, \varphi) = m_i^2 Y_{lm_i}(\theta, \varphi)\end{aligned}$$ Geometrically, the spherical coordinates (\[COOR1\]) are connected with each other by a rotation, which may be expressed through the Euler angles $(\alpha,\beta,\gamma)$ in accordance with the relations [@V] $$\begin{aligned} \cos\theta' &=& \cos\theta \cos\beta + \sin\theta\sin\beta\cos(\varphi-\alpha) \\[2mm] \cot(\varphi'+ \gamma) &=& \cot(\varphi-\alpha)\cos\beta -\frac{\cot\theta\sin\beta}{\sin(\varphi-\alpha)}\end{aligned}$$ Correspondingly, the spherical functions $Y_{lm}(\theta,\varphi)$ transform according to the formulae [@V] $$\begin{aligned} Y_{l,m'}(\theta',\varphi')&=&\sum_{m=-l}^{l}D_{mm'}^l (0,\frac{\pi}{2},\frac{\pi}{2}) Y_{l,m}(\theta,\varphi), \\[2mm] Y_{l,m''}(\theta'',\varphi'')&=&\sum_{m=-l}^{l}D_{mm''}^l (\frac{\pi}{2},\frac{\pi}{2},0) Y_{l,m}(\theta,\varphi), \\[2mm] Y_{l,m''}(\theta'',\varphi'')&=&\sum_{m'=-l}^{l}D_{m'm''}^l (0,\frac{\pi}{2},\frac{\pi}{2}) Y_{l,m'}(\theta',\varphi'),\end{aligned}$$ where $D_{m_1, m_2}^l (\alpha,\beta,\gamma)$ are the Wigner $D$-functions.
Higgs oscillator on the two-dimensional sphere ============================================== Solution to the Schrödinger equation ------------------------------------ [*3.1*]{} The oscillator potential (1) in the spherical coordinates $(\theta,\varphi)$ is $$\begin{aligned} \label{P1} V = \frac{\alpha^2 R^2}{2}\, \frac{s_1^2+s_2^2}{s_3^2} = \frac{\alpha^2 R^2}{2}\tan^2\theta\end{aligned}$$ and the Schrödinger equation (2) has the following form: $$\begin{aligned} \label{SCH1} \frac{1}{\sin\theta}\frac{\partial}{\partial \theta} \sin\theta\frac{\partial\Psi}{\partial \theta} +\frac{1}{\sin^2\theta}\frac{\partial^2\Psi}{\partial\varphi^2} +2 R^2\left[E- \frac{\alpha^2 R^2}{2}\tan^2\theta \right]\Psi = 0\end{aligned}$$ Choosing the wave function according to $$\begin{aligned} \label{WF1} \Psi(\theta, \varphi) = \frac{Z(\theta)}{\sqrt{\sin\theta}} \, \frac{e^{i m \varphi}}{\sqrt{2\pi}}, \qquad m \in {\mbox{\bf Z}},\end{aligned}$$ after separation of variables in equation (\[SCH1\]) we arrive at a Pöschl–Teller-type equation: $$\begin{aligned} \label{PT} \frac{d^2 Z}{d\theta^2}+\left[\varepsilon -\frac{m^2-\frac14}{\sin^2\theta} -\frac{\nu^2-\frac14}{\cos^2\theta} \right] Z = 0\end{aligned}$$ where $\nu =\sqrt{\alpha^2 R^4+\frac{1}{4}}$ and $\varepsilon = 2 R^2E+\alpha^2 R^4+\frac14$. The solution of the above equation, orthonormalized on the interval $\theta\in [0,\pi/2]$, is $$\begin{aligned} \label{} Z(\theta) \equiv Z_{n_rm} (\theta) &=& \sqrt{\frac{2(2n_r+|m|+\nu+1)(n_r)!\Gamma(n_r+|m|+\nu+1)} {(n_r+|m|)!\Gamma(n_r+\nu+1)}} \nonumber \\[2mm] &\cdot& (\sin\theta)^{|m|+\frac12} (\cos\theta)^{\nu+\frac{1}{2}} P_{n_r}^{(|m|, \nu)}(\cos 2\theta)\end{aligned}$$ where $P_{n}^{(\alpha, \beta)}(x)$ are the Jacobi polynomials [@BE] and the energy $E$ takes the values $$\begin{aligned} \label{EN} E_n=\frac{1}{2R^2}\left[(n+1)(n+2) + (2\nu-1)(n+1)\right]\end{aligned}$$ where $n_r$ is a “radial” quantum number and $n = 2n_r+|m|$ is the principal quantum number.
The degree of degeneracy of the level $E_n$ is $n+1$, the same as for the flat two-dimensional oscillator: for fixed $n=2n_r+|m|$, the quantum number $m$ runs over the $n+1$ values of the same parity as $n$. Note also that in the contraction limit $R\rightarrow \infty$ we have $\nu\sim\alpha R^2$, and formula (\[EN\]) reproduces the energy spectrum of the two-dimensional circular oscillator [@FLUG]. [*3.2*]{} In the second spherical coordinates $(\theta', \varphi')$ the potential (1) has the form $$\begin{aligned} \label{P2} V = \frac{\alpha^2 R^2}{2}\left(\frac{1} {\sin^2\theta' \sin^2\varphi'}-1\right)\end{aligned}$$ After the substitution $$\begin{aligned} \label{WF2} \Psi(\theta', \varphi') = \frac{1}{\sqrt{\sin\theta'}} S(\theta')S(\varphi')\end{aligned}$$ we arrive at the system of differential equations $$\begin{aligned} \label{SCH2} \frac{d^2 S}{d\theta^{'2}}+ \left[\varepsilon -\frac{A^2-\frac{1}{4}}{\sin^2\theta'}\right] S = 0 \qquad \frac{d^2 S}{d\varphi^{'2}} + \left[A^2 - \frac{\nu^2-\frac{1}{4}}{\sin^2\varphi'}\right]S = 0\end{aligned}$$ where $A$ is the separation constant. Solving equations (\[SCH2\]) we obtain $$\begin{aligned} \label{} A = n_1+\nu+\frac{1}{2}, \quad \varepsilon = \left(n_2+A+\frac{1}{2}\right)^2 = (n_1+n_2+\nu+1)^2 = (n+\nu+1)^2\end{aligned}$$ where $n_1, n_2 \in {\bf N}$ and the principal quantum number is $n=n_1+n_2$, so that the energy spectrum is given by equation (\[EN\]). The orthonormalized eigenfunctions $\Psi(\theta', \varphi')$ can be written as $$\begin{aligned} \label{SOL2} \Psi_{n_1 n_2}(\theta', \varphi') = \frac{1}{\sqrt{\sin\theta'}} S_{n_2}^A (\theta') S_{n_1}^\nu(\varphi')\end{aligned}$$ where $$\begin{aligned} \label{SOL1} S_n^a(\varphi) &=& \frac{\Gamma(a+1)\Gamma(n+a+\frac12)}{\Gamma(n+a+1)} \sqrt{\frac{(n+a+1/2)n!}{\pi \Gamma(n+2a+1)}} \, (\sin\varphi )^{{1\over 2}+a} C^{a+\frac12}_{n}(\cos\varphi)\end{aligned}$$ and $C_n^{\lambda}$ are the Gegenbauer polynomials [@BE].
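The equivalence of the two quantizations can be verified symbolically: with $\varepsilon = 2R^2E + \nu^2$ (using $\nu^2 = \alpha^2R^4 + \frac14$), the condition $\varepsilon = (n+\nu+1)^2$ reproduces the spectrum (\[EN\]), and the contraction limit gives back the flat oscillator:

```python
import sympy as sp

n, nu, R, alpha = sp.symbols('n nu R alpha', positive=True)

# Eq. (EN): E_n = [(n+1)(n+2) + (2 nu - 1)(n+1)] / (2 R^2)
E_n = ((n + 1)*(n + 2) + (2*nu - 1)*(n + 1)) / (2 * R**2)

# With eps = 2 R^2 E + nu^2 (since nu^2 = alpha^2 R^4 + 1/4), the two
# quantization conditions must agree: eps = (n + nu + 1)^2.
eps = 2 * R**2 * E_n + nu**2
assert sp.simplify(eps - (n + nu + 1)**2) == 0

# Contraction limit: with nu ~ alpha R^2, E_n -> alpha (n + 1) as R -> oo,
# the spectrum of the flat two-dimensional circular oscillator (hbar = 1).
E_flat = sp.limit(E_n.subs(nu, alpha * R**2), R, sp.oo)
print(E_flat)  # -> alpha*(n + 1)
```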
Finally, note that the operator characterizing the separated solutions in this coordinate system is $$\begin{aligned} \label{} J_1\Psi_{n_1 n_2} = \left( \frac{\partial^2}{\partial \varphi'^2} -\frac{\nu^2-\frac{1}{4}}{\sin^2\varphi'}\right) \Psi_{n_1 n_2} = \left[L_1^2- (s_2^2+s_3^2) \frac{\nu^2-\frac{1}{4}}{s_3^2}\right] \Psi_{n_1 n_2} = - A^2 \Psi_{n_1 n_2}\end{aligned}$$ [*3.3*]{} For the potential (1) in the coordinate system $(\theta'', \varphi'')$ we have $$\begin{aligned} \label{P3} V = \frac{\alpha^2 R^2}{2}\left(\frac{1} {\sin^2\theta'' \cos^2\varphi''}-1\right)\end{aligned}$$ The orthonormalized solution to the Schrödinger equation (2) has the following form: $$\begin{aligned} \label{WF3} \Psi_{l_1 l_2}(\theta'', \varphi'') = \frac{1}{\sqrt{\sin\theta''}} S_{l_1}^\nu (\varphi''+ \frac{\pi}{2}) \, S_{l_2}^B (\theta'')\end{aligned}$$ where $l_1, l_2 \in {\bf N}$, the principal quantum number is $n=l_1+l_2$, and the constant $B=l_1+\nu+\frac12$. For the energy spectrum we again arrive at expression (\[EN\]), and the wave function $S_n^a(\theta)$ is given by formula (\[SOL1\]).
The additional operator describing this separation is $$\begin{aligned} \label{} J_2\Psi_{l_1 l_2}= \left(\frac{\partial^2}{\partial \varphi''^2} - \frac{\nu^2-\frac14}{\cos^2\varphi''}\right) \Psi_{l_1 l_2}= \left[L_2^2- (s_1^2+s_3^2)\frac{\nu^2-\frac14}{s_3^2} \right] \Psi_{l_1 l_2} = - B^2 \Psi_{l_1 l_2}\end{aligned}$$ Algebra ------- If we take the constants of motion in the form $$\begin{aligned} {\tilde J_3} = L_3, \qquad {\tilde J_1} = L_1^2 - \alpha^2 R^4 \, \frac{s_2^2}{s_3^2}, \quad {\tilde J_2} = L_2^2 - \alpha^2 R^4 \, \frac{s_1^2}{s_3^2},\end{aligned}$$ we have the Hamiltonian $$\begin{aligned} H = - \frac{1}{2 R^2}\bigg[{\tilde J_1}+ {\tilde J_2}+ {\tilde J_3^2}\bigg],\end{aligned}$$ and the commutation relations $$\begin{aligned} [\tilde J_1, \tilde J_2] &=& \{L_1,\{L_2,L_3\}\}+2 \alpha^2 R^4\left( \frac{s_2^2-s_1^2}{s_3^2}+\frac{2 s_1 s_2}{s_3^2}L_3 \right) \nonumber \\[2mm] [\tilde J_1, \tilde J_3] &=& -\{L_1,L_2\}-2 \alpha^2 R^4 \frac{s_1s_2}{s_3^2} \nonumber \\[2mm] [\tilde J_2, \tilde J_3] &=& \{L_1,L_2\}+2 \alpha^2 R^4 \frac{s_1s_2}{s_3^2}\end{aligned}$$ where $\{,\}$ is the anticommutator. To close this algebra, we use the redefined operators $$\begin{aligned} S_1 = {\tilde J_3}, \qquad S_2 = \tilde J_1-\tilde J_2, \qquad S_3 = [S_1,S_2]\end{aligned}$$ and derive the following relations: $$\begin{aligned} S_3&=&2\{L_1 , L_2\} + 4\alpha^2 R^4 \frac{s_1s_2}{s_3^2}, \\[2mm] [S_3, S_1]&=& 4 S_2, \quad [S_3, S_2] = \frac{4 H S_1}{R^2} + 8 S_1^3 +4\left(4\alpha^2 R^4-1\right) S_1.\end{aligned}$$ Thus, the operators $S_1, S_2, S_3$ generate a nonlinear algebra, the so-called cubic or Higgs algebra.
Interbasis expansions ===================== Let us now consider the interbasis expansion between two spherical wave functions $$\begin{aligned} \label{EXP1} \Psi_{n_1,n_2}(\theta',\varphi')= \sum_{m = - n}^{n} W_{n_1 n_2}^{m} \Psi_{n, m}(\theta,\varphi)\end{aligned}$$ To calculate the expansion coefficients $W_{n_1 n_2}^{m}$ explicitly, it is sufficient to use the orthogonality of the wave functions in one of the variables on the right-hand side of (\[EXP1\]), and to fix the remaining variable, which does not participate in the integration, at the most convenient point. We rewrite the left-hand side of (\[EXP1\]) in the spherical coordinates $(\theta, \varphi)$ according to the formulae $$\begin{aligned} \cos\theta' = \sin\theta\cos\varphi, \quad \cos\varphi'= \frac{\sin\theta\sin\varphi} {\sqrt{1-\sin^2\theta\cos^2\varphi}}.\end{aligned}$$ Then, by substituting $\theta = \frac{\pi}{2}$ and taking into account that $$\begin{aligned} C_{n}^{\lambda}(1) = \frac{\Gamma(2\lambda+n)} {n!\Gamma(2\lambda)}\end{aligned}$$ we obtain an equation depending only on the variable $\varphi$.
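The special value $C_{n}^{\lambda}(1)$ used here can be verified symbolically; the sketch below checks it for several degrees $n$ and (rational) values of $\lambda$:

```python
import sympy as sp

# Special value used above: C_n^lambda(1) = Gamma(2*lambda + n)/(n! Gamma(2*lambda)),
# checked for n = 0..5 and a few rational values of lambda.
for n in range(6):
    for lam in (sp.Rational(1, 2), sp.Integer(1), sp.Rational(7, 3)):
        lhs = sp.gegenbauer(n, lam, 1)
        rhs = sp.gamma(2*lam + n) / (sp.factorial(n) * sp.gamma(2*lam))
        assert abs(float(lhs) - float(rhs)) < 1e-9 * float(rhs)
print("C_n^lambda(1) identity verified for n = 0..5")
```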
Thus, using the orthogonality relation for the functions $e^{im\varphi}$ with respect to the quantum number $m$, we arrive at the following integral representation for the coefficients $W_{n_1,n_2}^{m}$: $$W_{n_1 n_2}^{m}(\nu) = \frac{(-1)^{\frac{n-|m|}{2}}}{2^{\nu+1}\pi} \, \sqrt{\frac{(n_2)!\left(n_1+\nu+\frac12\right)\Gamma(n_1+2\nu+1) (\frac{n+m}{2})!(\frac{n-m}{2})!} {(n_1)!\Gamma(n+n_1+2\nu+2) \Gamma(\frac{n-m}{2}+\nu+1)\Gamma(\frac{n+m}{2}+\nu+1)}}$$ $$\begin{aligned} \label{EXP11} \cdot \, \frac{\Gamma(n_1+\nu+\frac32)\Gamma(n+\nu+1)} {\Gamma(n+\nu+\frac32)} \,\, I_{n_1 n_2 m}^{\nu}\end{aligned}$$ where $$\begin{aligned} \label{INT1} I_{n_1 n_2 m}^{\nu} = \frac{1}{\sqrt{2\pi}} {\int_{0}^{2\pi} (\sin\varphi)^{n_1} C_{n_2}^{n_1+\nu+1}(\cos \varphi) e^{-i m \varphi} d\varphi}.\end{aligned}$$ To calculate the integral $I_{n_1 n_2 m}^{\nu}$ it is sufficient to write the Gegenbauer polynomial $C_{n_2}^{n_1+\nu+1}(\cos \varphi)$ and $(\sin\varphi)^{n_1}$ as series in exponentials. After integration we obtain $$\begin{aligned} {\int_{0}^{2\pi} (\sin\varphi)^{k} C_{n}^{\lambda}(\cos \varphi) e^{-i m \varphi} d\varphi}&=& \frac{(-1)^{\frac{n-m}{2}} 2^{\lambda-k+\frac12} \pi\Gamma(\lambda+n +\frac12) k!} {n!\Gamma(\lambda+\frac12)\Gamma\left(\frac{n+k-m}{2}+1\right) \Gamma\left(\frac{k-n+m}{2}+1\right)} \nonumber \\[3mm] &\cdot&{_3F_2}\left\{\left.\matrix{ -n,\quad\,\,\,-\frac{n+k-m}{2},\quad\,\,\lambda\cr \quad\,\,\,\,\,\,\,\,\cr -\lambda-n+1,\,\frac{k-n+m}{2}+1\cr}\right| 1\right\}\end{aligned}$$ Substituting (\[INT1\]) into (\[EXP11\]) gives the interbasis coefficients in the closed form $$\begin{aligned} \label{COEF1} W_{n_1 n_2}^{m}(\nu) &=& (-1)^{\frac{|m|-m-n_1}{2}} \sqrt{\frac{2 (n_1+\nu+\frac{1}{2}) (n_1)!
\Gamma(n_1+2\nu+1)} {(n_2)!\Gamma(n+n_1+2\nu+2)\Gamma(\frac{n-m}{2}+\nu+1) \Gamma(\frac{n+m}{2}+\nu+1)}} \nonumber \\[3mm] &\cdot& \sqrt{\frac{(\frac{n+m}{2})!}{(\frac{n-m}{2})!}} \frac{\Gamma(n+\nu+1)}{\Gamma(\frac{n_1-n_2+m}{2}+1)}\,\,\, {_3F_2}\left\{\left.\matrix{ -n_2,\,\,\,-\frac{n-m}{2},\,\,\,n_1+\nu+1\cr \quad\,\,\,\,\,\,\,\,\cr -n-\nu,\,\frac{n_1-n_2+m}{2}+1\cr}\right| 1\right\}\end{aligned}$$ The interbasis coefficients $W_{n_1 n_2}^{m}(\nu)$ can also be expressed in terms of the Clebsch–Gordan coefficients of the $SU(2)$ group, analytically continued to real values of their arguments. Using the formula for the Clebsch–Gordan coefficients $C_{a, \alpha; b, \beta}^{c, \gamma}$ [@V] $$\begin{aligned} \label{CG1} C_{a, \alpha; b, \beta}^{c,\gamma}= \delta_{\gamma,\alpha+\beta} \sqrt{\frac{(a+\alpha)!(b-\beta)!(c+\gamma)!(c-\gamma)!(2c+1)} {(a+b-c)!(a+b+c+1)!(a-\alpha)!(b+\beta)!}} \nonumber \\[3mm] \frac{\sqrt{(a-b+c)!(c-a+b)!}}{(-b+c+\alpha)!(-a+c-\beta)!}\,\,\, {_3F_2}\left\{\left.\matrix{ -a-b+c,\,\,\,-a+\alpha,\,\,\,-b-\beta\cr -a+c-\beta+1,\,-b+c+\alpha+1\cr}\right| 1\right\}\end{aligned}$$ and the following transformation property of the terminating hypergeometric function $_3F_2$ $$\begin{aligned} {_3F_2}\left\{\left.\matrix{ a,\,\,\,b,\,\,\,c\cr d,\quad e\cr}\right| 1\right\}=\frac{\Gamma(d) \Gamma(d-a-b)} {\Gamma(d-a)\Gamma(d-b)}\,\,\, {_3F_2}\left\{\left.\matrix{ a,\,\,\,b,\,\,\,e-c\cr a+b-d+1,\,e\cr}\right| 1\right\}\end{aligned}$$ we can rewrite formula (\[CG1\]) in the form $$\begin{aligned} \label{CG2} C_{a,\alpha; b, \beta}^{c, \gamma}=\delta_{\gamma,\alpha+\beta} \sqrt{\frac{(2c+1)(b+c-a)!(b-\beta)!(c+\gamma)!(c-\gamma)!
} {(a+b-c)!(a-b+c)!(a+b+c+1)!(a+\alpha)!(a-\alpha)!(b+\beta)!}} \nonumber \\[2mm] \frac{(2a)!(c-b+\alpha)!} {(c-b+\alpha)!(c-a-\beta)!}\,\,\, {_3F_2}\left\{\left.\matrix{ -a-b+c,\,\,\,-a+\alpha,\,\,\,b-a+c+1\cr -2a,\,c-a-\beta+1\cr}\right| 1\right\}\end{aligned}$$ By comparing equations (\[CG2\]) and (\[COEF1\]) we finally obtain $$\begin{aligned} \label{COEF2} W_{n_1 n_2}^m(\nu) = (-1)^{\frac{|m|-m-n_1}{2}} \, C_{\frac{n+\nu}{2}, \, \frac{\nu+m}{2}; \, \frac{n+\nu}{2}, \, \frac{\nu-m}{2}}^{n_1+\nu, \, \nu}\end{aligned}$$ The inverse expansion of (\[EXP1\]), namely $$\begin{aligned} \label{EXP2} \Psi_{n m}(\theta, \varphi) = \sum_{n_1 = 0}^{n} {\tilde W_{n m}^{n_1}(\nu)} \Psi_{n_1 n_2}(\theta', \varphi')\end{aligned}$$ immediately follows from the orthogonality property of the $SU(2)$ Clebsch–Gordan coefficients. Thus, the interbasis coefficients in expansion (\[EXP2\]) are given by $$\begin{aligned} \label{COEF3} {\tilde W_{n m}^{n_1}(\nu)} = (-1)^{\frac{|m|-m+n_1}{2}} \, C_{\frac{n+\nu}{2}, \, \frac{\nu+m}{2}; \, \frac{n+\nu}{2}, \, \frac{\nu-m}{2}}^{n_1+\nu, \, \nu}\end{aligned}$$ and may be expressed in terms of the $_3F_2$ function through (\[COEF1\]). Using the same method we can calculate the coefficients of the interbasis expansion between the wave functions (\[WF3\]) and (\[WF1\]). We have $$\begin{aligned} \label{EXP3} \Psi_{l_1 l_2}(\theta'', \varphi'') = \sum_{m = 0}^{n} (-1)^{n+\frac{m}{2}} W_{l_1 l_2}^{m}(\nu) \Psi_{n m}(\theta, \varphi)\end{aligned}$$ where the coefficients $W_{l_1 l_2}^{m}(\nu)$ are given by formulae (\[COEF1\]) or (\[COEF2\]) with the replacement $n_i\rightarrow l_i$.
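For ordinary (integer) spins, where the analytically continued coefficients reduce to standard $SU(2)$ values, representation (\[CG1\]) can be spot-checked numerically. The sketch below implements eq.(\[CG1\]) literally (restricted to argument combinations where all factorials in this particular representation are nonnegative) and compares it with sympy's Clebsch–Gordan coefficients:

```python
from fractions import Fraction
from math import factorial, sqrt
from sympy.physics.quantum.cg import CG

def f3_2(a, b, c, d, e):
    """Terminating 3F2(a,b,c; d,e; 1), with 'a' a nonpositive integer (exact)."""
    s, term = Fraction(0), Fraction(1)
    for k in range(-a + 1):
        s += term
        term = term * (a + k) * (b + k) * (c + k)
        term /= (d + k) * (e + k) * (k + 1)
    return s

def cg_formula(a, al, b, be, c, ga):
    """Eq. (CG1), for integer spins where all factorial arguments are >= 0."""
    if ga != al + be:
        return 0.0
    pref = sqrt(factorial(a + al) * factorial(b - be) * factorial(c + ga)
                * factorial(c - ga) * (2 * c + 1)
                / (factorial(a + b - c) * factorial(a + b + c + 1)
                   * factorial(a - al) * factorial(b + be)))
    pref *= sqrt(factorial(a - b + c) * factorial(c - a + b)) \
        / (factorial(-b + c + al) * factorial(-a + c - be))
    return pref * float(f3_2(-(a + b - c), -(a - al), -(b + be),
                             -a + c - be + 1, -b + c + al + 1))

# Spot checks against sympy's Clebsch-Gordan coefficients:
for a, al, b, be, c in [(1, 1, 1, 0, 1), (1, 0, 1, 0, 2),
                        (1, 1, 1, -1, 0), (2, 2, 1, 0, 2)]:
    exact = float(CG(a, al, b, be, c, al + be).doit())
    assert abs(cg_formula(a, al, b, be, c, al + be) - exact) < 1e-12
print("eq. (CG1) reproduces the standard SU(2) Clebsch-Gordan values")
```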
The last interbasis expansion, between the two spherical wave functions (\[WF3\]) and (\[WF2\]), can be constructed by using equations (\[EXP3\]) and (\[EXP2\]): $$\begin{aligned} \label{EXP4} \Psi_{l_1 l_2}(\theta'', \varphi'') = \sum_{n_1 = 0}^{n} U_{l_1 l_2}^{n_1}(\nu) \Psi_{n_1 n_2}(\theta', \varphi'),\end{aligned}$$ where $$\begin{aligned} U_{l_1 l_2}^{n_1}(\nu)= (-1)^{l_2+\frac{l_1+n_1}{2}} \sum_{m = - n}^{n} (-1)^{\frac{m}{2}} C_{\frac{n+\nu}{2}, \, \frac{\nu+m}{2}; \, \frac{n+\nu}{2}, \, \frac{\nu-m}{2}}^{l_1+\nu, \, \nu} C_{\frac{n+\nu}{2}, \, \frac{\nu+m}{2}; \, \frac{n+\nu}{2}, \, \frac{\nu-m}{2}}^{n_1+\nu, \, \nu}\end{aligned}$$ Finally, note that a direct calculation of the coefficients in expansion (\[EXP4\]) leads to the hypergeometric function $_4F_3$ of unit argument. Acknowledgment(s) {#acknowledgments .unnumbered} ================= The authors thank V.M.Ter-Antonyan and L.G.Mardoyan for interesting discussions. One of the authors (G. P.) is thankful to the Organizers of the II International Workshop “Lie Theory and its Application in Physics” for financial support and very kind hospitality. [\*\*]{} P.W.Higgs. Dynamical Symmetries in a Spherical Geometry. [*J.Phys*]{} [**A12**]{}, 309, 1979. L.G.Mardoyan, G.S.Pogosyan, A.N.Sissakian and V.M.Ter-Antonyan. Elliptic Basis for a Circular Oscillator. [*Nuovo Cimento*]{}, [**B 88**]{}, (1985), 43; C.Grosche, G.S.Pogosyan, A.N.Sissakian. Path Integral Discussion for Smorodinsky - Winternitz Potentials: II. The Two - and Three Dimensional Sphere. [*Fortschritte der Physik*]{}, [**43**]{}, 523, 1995. E.G.Kalnins, W.Miller Jr. and G.S.Pogosyan. Superintegrability and associated polynomial solutions. Euclidean space and sphere in two-dimensions. [*J.Math.Phys.*]{} [**37**]{}, 6439, 1996 D.Bonatos, C.Daskaloyannis and K.Kokkotas. Deformed Oscillator Algebras for Two-Dimensional Quantum Superintegrable Systems; [*Phys. Rev.*]{} [**A 50**]{}, (1994), 3700.
Ya.A.Granovsky, A.S.Zhedanov and I.M.Lutzenko. Quadratic algebras and dynamics in curved spaces. I. Oscillator. [*Teor. Mat. Fiz.*]{} [**91**]{}, 207-216, 1992; Quadratic algebras and dynamics in curved spaces. II. The Kepler problem. [*Teor. Mat. Fiz.*]{} [**91**]{}, 396-410, 1992. P.Winternitz, I.Lukac, and Ya.A.Smorodinskii. Quantum numbers in the little groups of the Poincaré group. [*Sov. J. Nucl. Phys.*]{} [**7**]{}, 139-145, (1968). J.Patera and P.Winternitz. A New Basis for the Representation of the Rotation Group. Lamé and Heun Polynomials. [*J.Math.Phys.*]{} [**14**]{} (1973) 1130. S.Flügge. [*Practical Quantum Mechanics*]{}, V1, Springer-Verlag, Berlin–Heidelberg–New York, 1971. D.A.Varshalovich, A.N. Moskalev, and V.K. Khersonskii, [*Quantum Theory of Angular Momentum*]{} (World Scientific, Singapore, 1988). A.Erdélyi, W.Magnus, F.Oberhettinger, and F.Tricomi, [*Higher Transcendental Functions*]{} (McGraw-Hill, New York, 1953), Vols. I and II.
--- abstract: 'We study the Rabi model composed of three qubits coupled to a harmonic oscillator without involving the rotating-wave approximation. We show that the ground state of the three-qubit Rabi model can be analytically treated by using the transformation method, and the transformed ground state agrees well with the exact numerical simulation over a wide range of qubit-oscillator coupling strengths for different detunings. We use the pairwise entanglement to characterize the ground state entanglement between any two qubits and show that it has an approximately quadratic dependence on the qubit-oscillator coupling strength. Interestingly, we find that there is no qubit-qubit entanglement for the ground state if the qubit-oscillator coupling strength is large enough.' author: - 'Li-Tuo Shen' - 'Zhen-Biao Yang' - 'Rong-Xin Chen' title: Ground state of three qubits coupled to a harmonic oscillator with ultrastrong coupling --- Introduction ============ Recent experimental progress related to qubit-oscillator systems in the ultrastrong coupling regime has been reported in different light-matter interaction systems [@PRB-78-180502-2008; @PRB-79-201303-2009; @Nature-458-178-2009; @Nature-6-772-2010; @PRL-105-237001-2010; @PRL-105-196402-2010; @PRL-106-196405-2011; @Science-335-1323-2012; @PRL-108-163601-2012; @PRB-86-045408-2012], where the coupling strength between a single qubit and a single oscillator reaches a significant fraction of the oscillator and qubit frequencies. 
In this ultrastrong coupling regime, the ubiquitous Jaynes-Cummings model [@IEEE-51-89-1963] under the rotating-wave approximation (RWA) is expected to break down, leading to a wealth of unexplored physics and giving rise to fascinating quantum phenomena [@NJP-13-073002-2011; @PRL-109-193602-2012; @PRA-87-013826-2013; @PRA-59-4589-1999; @PRA-62-033807-2000; @PRB-72-195410-2005; @PRA-74-033811-2006; @PRA-77-053808-2008; @PRA-82-022119-2010; @PRL-107-190402-2011; @PRL-108-180401-2012; @PRA-87-022124-2013; @PRA-86-014303-2012], for example the superradiance transition [@PRA-87-013826-2013], vacuum Rabi splitting [@PRB-78-180502-2008; @NJP-13-073002-2011], photon blockade [@PRL-109-193602-2012], the Bloch-Siegert shift [@PRL-105-237001-2010], and plasmonic effects [@PRB-86-045408-2012]. Since the Hamiltonian of a qubit-oscillator system contains counter-rotating wave terms that make the computational subspace unclosed, a fully analytical solution for the ground state of this Hamiltonian in the ultrastrong coupling limit has still not been found. Although the spectrum and eigenfunctions of the Rabi model beyond the RWA are known from numerical diagonalization in a truncated finite-dimensional Hilbert space [@JPA-29-4035-1996; @EPL-96-14003-2011], an analytical solution to the qubit-oscillator system beyond the RWA is necessary for clearly capturing the fundamental physics. Such an analytical treatment has the potential to be extended to more complicated models for the implementation of quantum information processing (QIP) [@PRA-81-042311-2010]. Therefore, various mathematical approaches have been proposed to analytically obtain the ground state properties of the single-qubit Rabi model in the ultrastrong coupling regime [@RPB-40-11326-1989; @PRB-42-6704-1990; @PRL-99-173601-2007; @EPL-86-54003-2009; @PRA-80-033846-2009; @PRL-105-263603-2010; @PRA-82-025802-2010; @PRL-107-100401-2011; @EPJD-66-1-2012; @PRA-86-015803-2012; @PRA-85-043815-2012; @PRA-86-023822-2012]. 
For example, the generalized-RWA method [@PRL-99-173601-2007; @EPL-86-54003-2009] functions well when the qubit frequency is smaller than the oscillator frequency, the variational treatment [@PRA-82-025802-2010; @RPB-40-11326-1989; @PRB-42-6704-1990] reasonably captures the properties of the ground state in the single-qubit Rabi system but is very hard to generalize to multi-qubit Rabi systems, and the transformation method is a perturbation expansion that has been successfully applied to the single-qubit Rabi system [@PRA-86-015803-2012; @EPJB-38-559-2004; @PRB-75-054302-2007; @EPJD-59-473-2010]. Recently, the Tavis-Cummings model beyond the RWA has been extended to the multi-qubit case by using an adiabatic approximation method when the qubit frequency is far larger than the oscillator frequency [@PRA-85-043815-2012], and the ground state of the nearly resonant Rabi model of two qubits coupled to a harmonic oscillator has been analytically treated by using both the variational and the transformation methods [@arXiv-1303-3367v2-2013]. The Rabi model of three and more qubits coupled to a common harmonic oscillator in the ultrastrong coupling regime has more potential applications in QIP [@PRL-107-190402-2011; @PRL-108-180401-2012] than the single-qubit Rabi model does, such as protected quantum computation [@PRL-107-190402-2011], which is expected to be very promising with the circuit QED architecture. However, the ground states of the three- and more-qubit Rabi models in the ultrastrong coupling regime have not been extensively studied. Recently, Braak has generalized the method based on the $Z_{2}$ symmetry [@PRL-107-100401-2011] to the three-qubit Dicke model [@arXiv-1304-2529v1-2013] to analytically determine the system’s spectrum, which depends on a composite transcendental function defined through its power series. However, this method cannot be extended to determine the concrete form of the ground state. Different from Ref. 
[@arXiv-1304-2529v1-2013], we focus here on the analytic ground state of the three-qubit Rabi model in the ultrastrong coupling regime obtained by the transformation method. By mapping the three-qubit Rabi model into a solvable Jaynes-Cummings-like model, we show that the ground state energy and the ground state of this three-qubit Rabi model can be approximately determined by an analytic expression based on the transformation method, which agrees well with the exact numerical simulation in the ultrastrong coupling regime under different detunings. The ground state entanglement between any two qubits is characterized by using the pairwise entanglement and has a quadratic dependence on the qubit-oscillator coupling strength, which can be approximately determined within a wide range of parameters. Interestingly, after reaching its maximum value the ground state entanglement decreases quickly to zero and never increases again once the qubit-oscillator coupling strength is large enough. Transformed ground state ======================== The Hamiltonian of three identical qubits coupled to a harmonic oscillator without the rotating-wave approximation is ($\hbar=1$) $$\begin{aligned} \label{e1} H&=&\frac{1}{2}w_{a}(J_{+}+J_{-})+w_{c}a^{\dagger}a+g(a^{\dagger}+a)J_{z},\end{aligned}$$ where $a$ and $a^{\dagger}$ are respectively the annihilation and creation operators of the harmonic oscillator with frequency $w_{c}$. $J_{l}$ $\{ l=\pm,z \}$ describes the collective atomic operator of a spin-$\frac{3}{2}$ system, satisfying the angular momentum commutation relations $[J_{z},J_{\pm}]=\pm J_{\pm}$ and $[J_{+},J_{-}]=2J_{z}$. Physically, the spin-$\frac{3}{2}$ system is nontrivial and the states are entangled in terms of individual qubit configurations. $w_{a}$ is the transition frequency of each qubit. $g$ represents the collective qubit-oscillator coupling strength. 
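The "exact numerical simulation" referred to throughout can be reproduced by diagonalizing the Hamiltonian (\[e1\]) in a truncated Fock space. A minimal NumPy sketch (the resonant choice $w_a=w_c=1$ and the cutoff $N_{tr}=40$ are illustrative assumptions, not parameters fixed by the text):

```python
import numpy as np

w_a = w_c = 1.0   # illustrative resonant parameters (hbar = 1)
N_tr = 40         # photon-number cutoff of the truncated Fock space

# Spin-3/2 collective operators in the basis m = 3/2, 1/2, -1/2, -3/2
Jp = np.diag([np.sqrt(3.0), 2.0, np.sqrt(3.0)], k=1)   # J_+
Jm = Jp.T                                              # J_-
Jz = np.diag([1.5, 0.5, -0.5, -1.5])

# Truncated oscillator operators
a = np.diag(np.sqrt(np.arange(1.0, N_tr)), k=1)
ad = a.T
Iq, If = np.eye(4), np.eye(N_tr)

def ground_energy(g):
    """Lowest eigenvalue of H = (w_a/2)(J+ + J-) + w_c a^dag a + g (a^dag + a) J_z."""
    H = (0.5 * w_a * np.kron(Jp + Jm, If)
         + w_c * np.kron(Iq, ad @ a)
         + g * np.kron(Jz, ad + a))
    return np.linalg.eigvalsh(H)[0]
```

At $g=0$ the ground state is the lowest eigenstate of $w_a J_x$ with energy $-\frac{3}{2}w_a$, and the ground state energy decreases monotonically with $g$.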
The key point in this paper is to determine the ground state energy $E_{g}$ and the ground state $|\phi_{g}\rangle$ for the three-qubit Rabi system in the ultrastrong coupling regime, where $H|\phi_{g}\rangle=E_{g}|\phi_{g}\rangle$. To derive the analytic ground state, we define $|m\rangle_{a}$ to be an eigenvector of $J_{z}$, i.e., $J_{z}|m\rangle_{a}=m|m\rangle_{a}$ ($m=-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2}$). Besides, we will respectively use $|X\rangle_{f}$ and $|0\rangle_{f}$ to represent the coherent field state with the real amplitude $X$ and the vacuum field state. In what follows we extend the transformation method used in the single- and two-qubit Rabi models [@EPJD-59-473-2010; @arXiv-1303-3367v2-2013] to the three-qubit Rabi model. To transform the Hamiltonian $H$ into a mathematical form without the counter-rotating wave terms, we apply a unitary transformation to the Hamiltonian $H$: $$\begin{aligned} \label{e2} H^{'}&=&e^{S}He^{-S},\end{aligned}$$ with $$\begin{aligned} \label{e3} S&=&\chi(a^{\dagger}-a)J_{z},\end{aligned}$$ where $\chi$ is a variable to be determined. Then the transformed Hamiltonian $H^{'}$ is decomposed into three parts [@arXiv-1303-3367v2-2013]: $$\begin{aligned} \label{e4} H^{'}&=&H_{0}^{'}+H_{1}^{'}+H_{2}^{'},\end{aligned}$$ with $$\begin{aligned} \label{e5-e7} H_{0}^{'}&=&\eta w_{a} J_{x} -(2g\chi-w_{c}\chi^2)J_{z}^2+w_c a^{\dagger}a,\\ H_{1}^{'}&=&(g-w_{c}\chi)(a^{\dagger}+a)J_{z}+i\eta w_{a}\chi(a^{\dagger}-a)J_{y}, \\ H_{2}^{'}&=&w_{a}J_{x}\{ \cosh\big[\chi(a^{\dagger}-a)\big]-\eta \}\cr&& +iw_{a}J_{y}\bigg\{ \sinh\big[ \chi(a^{\dagger}-a)\big] -\eta\chi(a^{\dagger}-a) \bigg\},\end{aligned}$$ where $\eta=$$_{f}$$\langle0|\cosh[\chi(a^{\dagger}-a)]|0\rangle_{f}=\exp[-\chi^2/2]$. 
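Since $S=\chi(a^{\dagger}-a)J_{z}$ is anti-Hermitian, $e^{S}$ is unitary and the transformation (\[e2\]) preserves the spectrum of $H$ exactly; approximations only enter later, when $H_{2}^{'}$ is truncated. This can be confirmed numerically in a truncated space (the values $w_a=w_c=1$, $g=0.3$, $\chi=0.15$ and the cutoff are illustrative assumptions, not parameters from the text):

```python
import numpy as np
from scipy.linalg import expm

w_a = w_c = 1.0
g, chi = 0.3, 0.15        # illustrative values
N_tr = 30

Jp = np.diag([np.sqrt(3.0), 2.0, np.sqrt(3.0)], k=1)
Jz = np.diag([1.5, 0.5, -0.5, -1.5])
a = np.diag(np.sqrt(np.arange(1.0, N_tr)), k=1)
If = np.eye(N_tr)

H = (0.5 * w_a * np.kron(Jp + Jp.T, If)
     + w_c * np.kron(np.eye(4), a.T @ a)
     + g * np.kron(Jz, a.T + a))
S = chi * np.kron(Jz, a.T - a)      # anti-Hermitian generator of Eq. (e3)
Hp = expm(S) @ H @ expm(-S)         # transformed Hamiltonian H' = e^S H e^{-S}

spec_H = np.linalg.eigvalsh(H)
spec_Hp = np.linalg.eigvalsh((Hp + Hp.T) / 2)   # symmetrize tiny numerical noise
```

The two spectra agree to numerical precision, since a unitary similarity transformation cannot change eigenvalues.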
The terms $\cosh\big[\chi(a^{\dagger}-a)\big]$ and $\sinh\big[\chi(a^{\dagger}-a)\big]$ in $H_{2}^{'}$ have the dominant expansions: $$\begin{aligned} \label{e8-e9} \cosh[ \chi(a^{\dagger}-a) ]&=&\eta+O(\chi^2),\\ \sinh[\chi(a^{\dagger}-a) ]&=&\chi\eta(a^{\dagger}-a)+O(\chi^3),\end{aligned}$$ where $O(\chi^2)$ and $O(\chi^3)$ represent double- and multi-photon transition processes containing higher-order operators in $a^{\dagger}$ and $a$, which can be neglected as an approximation when $\chi$ is small. Note that the transformation method works well only if $\chi g/(w_{a} +w_{c}) \ll 1$, and it fails in the regime where $\chi > 1$. Thus, $H^{'}\simeq H_{0}^{'}+H_{1}^{'}$. By now, our approximation procedure is the same as that in Ref. [@arXiv-1303-3367v2-2013]. However, the main difference lies in the diagonalization of $H^{'}_{0}$ with the collective qubit vectors. The operator $H_{J}^{'}=\eta w_{a}J_{x}-(2g\chi-w_c\chi^2)J_{z}^{2}$, written in the qubit basis $\Gamma_{a}=\{ |-\frac{3}{2}\rangle_{a}, |-\frac{1}{2}\rangle_{a}, |\frac{1}{2}\rangle_{a}, |\frac{3}{2}\rangle_{a}\}$, represents a renormalized four-level qubit system. This is different from the two-qubit Rabi system [@arXiv-1303-3367v2-2013], for which the corresponding diagonalization of $H^{'}_{0}$ yields a renormalized three-level atomic system. 
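The constant $\eta$ is nothing but the vacuum expectation value of $\cosh[\chi(a^{\dagger}-a)]$, while the vacuum expectation of $\sinh[\chi(a^{\dagger}-a)]$ vanishes by parity. Both statements are easy to check in a truncated Fock space (the displacement $\chi=0.4$ and the cutoff are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

chi, N_tr = 0.4, 60   # illustrative displacement and photon cutoff
a = np.diag(np.sqrt(np.arange(1.0, N_tr)), k=1)
X = chi * (a.T - a)   # chi (a^dag - a), anti-Hermitian

cosh_X = (expm(X) + expm(-X)) / 2
sinh_X = (expm(X) - expm(-X)) / 2

eta_num = cosh_X[0, 0]   # <0| cosh[chi(a^dag - a)] |0>
```

`eta_num` agrees with $e^{-\chi^{2}/2}$ to numerical precision because $e^{\pm\chi(a^{\dagger}-a)}|0\rangle$ is a coherent state whose overlap with the vacuum is $e^{-\chi^{2}/2}$.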
By diagonalizing the operator $H_{J}^{'}$ in the qubit basis $\Gamma_{a}$, we obtain the renormalized qubit eigenvectors $|\varphi_{k}\rangle_a$ with eigenvalues $\lambda_{k}$ ($k=1,2,3,4$) as follows: $$\begin{aligned} \label{e10} \lambda_{1}&=&5A-\frac{1}{2}B-2\sqrt{4A^2+AB+\frac{1}{4}B^2},\cr |\varphi_{1}\rangle_a&=&\frac{1}{N_{1}}\bigg(-|-\frac{3}{2}\rangle_{a}+K_{1}|-\frac{1}{2}\rangle_{a}-K_{1}|\frac{1}{2}\rangle_{a}+|\frac{3}{2}\rangle_{a}\bigg),\cr \lambda_{2}&=&5A+\frac{1}{2}B-2\sqrt{4A^2-AB+\frac{1}{4}B^2},\cr |\varphi_{2}\rangle_a&=&\frac{1}{N_{2}}\bigg(|-\frac{3}{2}\rangle_{a}-K_{2}|-\frac{1}{2}\rangle_{a}-K_{2}|\frac{1}{2}\rangle_{a}+|\frac{3}{2}\rangle_{a}\bigg),\cr \lambda_{3}&=&5A-\frac{1}{2}B+2\sqrt{4A^2+AB+\frac{1}{4}B^2},\cr |\varphi_{3}\rangle_a&=&\frac{1}{N_{3}}\bigg(-|-\frac{3}{2}\rangle_{a}+K_{3}|-\frac{1}{2}\rangle_{a}-K_{3}|\frac{1}{2}\rangle_{a}+|\frac{3}{2}\rangle_{a}\bigg),\cr \lambda_{4}&=&5A+\frac{1}{2}B+2\sqrt{4A^2-AB+\frac{1}{4}B^2},\cr |\varphi_{4}\rangle_a&=&\frac{1}{N_{4}}\bigg(|-\frac{3}{2}\rangle_{a}-K_{4}|-\frac{1}{2}\rangle_{a}-K_{4}|\frac{1}{2}\rangle_{a}+|\frac{3}{2}\rangle_{a}\bigg),\cr&&\end{aligned}$$ and $$\begin{aligned} \label{e11} A&=&-\frac{1}{4}(2g\chi-w_c\chi^2),\cr B&=&\eta w_a,\cr K_{1}&=&\frac{1}{\sqrt{3}B}\bigg(8A+B+4\sqrt{4A^2+AB+\frac{1}{4}B^2}\bigg),\cr K_{2}&=&\frac{1}{\sqrt{3}B}\bigg(8A-B+4\sqrt{4A^2-AB+\frac{1}{4}B^2}\bigg),\cr K_{3}&=&\frac{1}{\sqrt{3}B}\bigg(8A+B-4\sqrt{4A^2+AB+\frac{1}{4}B^2}\bigg),\cr K_{4}&=&\frac{1}{\sqrt{3}B}\bigg(8A-B-4\sqrt{4A^2-AB+\frac{1}{4}B^2}\bigg),\end{aligned}$$ where $N_{k}=\sqrt{2+2K_{k}^2}$ $(k=1,2,3,4)$ is the normalization factor for the eigenvector $|\varphi_{k}\rangle_{a}$. ![(Color online) (a) Numerical solutions for the variable $\chi$ that make $C_{1}=0$. (b) The $C_{3}$ value as a function of $g$ for different qubit-oscillator detunings. (c) The ground state energy as a function of the coupling strength $g$. 
The solid line represents the transformed ground state energy $E_{g}^{'}$ and the dashed line represents the exact ground state energy $E_{g}$. (d) The fidelity $F$ of the ground state $|\phi_{g}^{'}\rangle$ obtained by the transformation method. []{data-label="Fig.1."}](Rfig1.eps){width="1\columnwidth"} For $\chi g\approx g\approx w_{a}$, the eigenvalues here are arranged in increasing order, as verified by numerical simulation, i.e., $\lambda_{1}<\lambda_{2}<\lambda_{3}<\lambda_{4}$. Therefore, $H^{'}$ can be expanded with the above renormalized eigenvectors: $$\begin{aligned} \label{e12} H^{'}&&= \sum_{k=1}^{4}\lambda_{k}|\varphi_{k}\rangle_{a}\langle\varphi_{k}|+ \bigg[(C_{1}a+C_{2}a^{\dagger})|\varphi_{1}\rangle_{a}\langle\varphi_{2}|\cr&& +(C_{3}a+C_{4}a^{\dagger})|\varphi_{1}\rangle_{a}\langle\varphi_{4}| +(C_{5}a+C_{6}a^{\dagger})|\varphi_{2}\rangle_{a}\langle\varphi_{3}|\cr&& +(C_{7}a+C_{8}a^{\dagger})|\varphi_{3}\rangle_{a}\langle\varphi_{4}|+H.c.\bigg]+w_c a^{\dagger}a,\end{aligned}$$ where $C_{x}(x=1,2,3,...,8)$ is a coefficient depending on the variable $\chi$. It is easy to see that $C_{1},C_{3},C_{5}$ and $C_{7}$ represent the coupling strengths of the corresponding counter-rotating wave terms with respect to the renormalized eigenvectors in Eq. (\[e12\]). Similar to the single- and two-qubit Rabi systems [@EPJD-59-473-2010; @arXiv-1303-3367v2-2013], the main task after transforming the Hamiltonian $H$ into $H^{'}$ is to eliminate the counter-rotating wave terms for the eigenvector with the lowest eigenenergy. The major obstacle here is to remove the two different coupling coefficients $C_{1}$ and $C_{3}$ of the counter-rotating wave terms for the eigenvector $|\varphi_{1}\rangle_{a}$ simultaneously. This is very different from the single-qubit [@EPJD-59-473-2010] and the two-qubit Rabi models [@arXiv-1303-3367v2-2013], which have just one counter-rotating wave term for the approximate ground state vector. 
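The eigenvalues in Eq. (\[e10\]) can be cross-checked by diagonalizing the $4\times4$ matrix $H_{J}^{'}=BJ_{x}+4AJ_{z}^{2}$ directly (note that $-(2g\chi-w_{c}\chi^{2})=4A$). A NumPy sketch with illustrative values $w_a=w_c=1$, $g=0.3$, $\chi=0.15$ (not parameters prescribed by the text):

```python
import numpy as np

w_a = w_c = 1.0
g, chi = 0.3, 0.15                              # illustrative values
A = -0.25 * (2 * g * chi - w_c * chi**2)
B = np.exp(-chi**2 / 2) * w_a                   # B = eta w_a

# Spin-3/2 matrices in the basis m = 3/2, 1/2, -1/2, -3/2
Jp = np.diag([np.sqrt(3.0), 2.0, np.sqrt(3.0)], k=1)
Jx = (Jp + Jp.T) / 2
Jz = np.diag([1.5, 0.5, -0.5, -1.5])

HJ = B * Jx + 4 * A * (Jz @ Jz)                 # H_J' = eta w_a J_x - (2 g chi - w_c chi^2) J_z^2

rp = 2 * np.sqrt(4 * A**2 + A * B + B**2 / 4)
rm = 2 * np.sqrt(4 * A**2 - A * B + B**2 / 4)
lams = np.array([5 * A - B / 2 - rp,            # lambda_1
                 5 * A + B / 2 - rm,            # lambda_2
                 5 * A - B / 2 + rp,            # lambda_3
                 5 * A + B / 2 + rm])           # lambda_4
```

`np.linalg.eigvalsh(HJ)` returns the spectrum in ascending order, matching `lams` and confirming the ordering $\lambda_{1}<\lambda_{2}<\lambda_{3}<\lambda_{4}$ for these parameters.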
Although it is not possible to simultaneously remove the coefficients $C_{1}$ and $C_{3}$ for all the values of $\chi$, we find that the conditions $C_{1}=0$ and $C_{3}\approx0$ can be both satisfied when $0\leq\chi\leq0.5$, meaning two counter-rotating wave terms for the approximate ground state vector $|\varphi_{1}\rangle_{a}$ can both be eliminated if the qubit-oscillator interaction is not too strong. The coefficients $C_{1}$ and $C_{3}$ have the following analytical forms: $$\begin{aligned} \label{e13-e14} C_{1}&=& (3+K_1K_2)(g-w_c\chi)\cr&&-\eta w_a\chi\bigg( \sqrt{3}K_{1}+2K_{1}K_{2}-\sqrt{3}K_{2}\bigg),\\ C_{3}&=& (3+K_1K_4)(g-w_c\chi)\cr&&-\eta w_a\chi\bigg( \sqrt{3}K_{1}+2K_{1}K_{4}-\sqrt{3}K_{4}\bigg).\end{aligned}$$ Therefore, $|\varphi_{1}\rangle_{a}|0\rangle_{f}$ is expected to be the approximate ground state vector if the conditions $C_{1}=0$ and $C_{3}\approx0$ are both satisfied, then the ground state $|\phi_{g}\rangle$ of this three-qubit Rabi system approximates the transformed ground state $|\phi_{g}^{'}\rangle$: $$\begin{aligned} \label{e15} |\phi_{g}^{'}\rangle&=&e^{-S}|\varphi_{1}\rangle_{a}|0\rangle_{f}\cr &=&\frac{1}{N_{1}}\bigg(-|-\frac{3}{2}\rangle_{a}|\frac{3}{2}\chi\rangle_{f}+K_{1}|-\frac{1}{2}\rangle_{a}|\frac{1}{2}\chi\rangle_{f}\cr&& -K_{1}|\frac{1}{2}\rangle_{a}|-\frac{1}{2}\chi\rangle_{f}+|\frac{3}{2}\rangle_{a}|-\frac{3}{2}\chi\rangle_{f}\bigg),\end{aligned}$$ and the transformed ground state energy $E_{g}^{'}$ is: $$\begin{aligned} \label{e16} E_{g}^{'}&\simeq&\lambda_{1}\cr &=&\frac{5}{4}w_{c}\chi^2-\frac{5}{2}\chi g-\frac{1}{2}w_{a}e^{-\frac{\chi^2}{2}}- \bigg[(2g\chi-w_c\chi^2)^2\cr&&-w_{a}(2g\chi -w_c\chi^2)e^{-\frac{\chi^2}{2}}+w_{a}^{2}e^{-\chi^2}\bigg]^{\frac{1}{2}}.\end{aligned}$$ According to the condition $C_{1}=0$, the numerical solution of $\chi$ is plotted as a function of the coupling strength $g$ for different qubit-oscillator detunings in Fig. 1(a). 
We find that $\chi$ is approximately proportional to $g$: $\chi\simeq\frac{g}{w_a+w_c}$. By substituting the result for $\chi$ from Fig. 1(a) into Eq. (14), we obtain the corresponding solution for $C_{3}$ in Fig. 1(b), which shows that the conditions $C_{1}=0$ and $C_{3}\approx0$ can both be satisfied when $0\leq g\leq0.5w_{a}$. This guarantees that the two counter-rotating wave terms in the eigenvector $|\varphi_{1}\rangle_{a}$ are both eliminated if the qubit-oscillator coupling is not too strong. In Fig. 1(c), we estimate the accuracy of the transformed ground state energy $E_{g}^{'}$ against the exact ground state energy $E_{g}$ under different qubit-oscillator detunings; $E_{g}^{'}$ has an approximately quadratic dependence on $g$: $$\begin{aligned} \label{e17} E_{g}^{'}&\simeq&-\frac{3}{2}w_a-\frac{3}{2w_c+3w_a}g^2.\end{aligned}$$ We see that the transformed ground state energy matches the exact numerical value almost perfectly within the ultrastrong coupling regime $g\leq 0.5w_{a}$. When $g=0.5w_{a}$, the errors for the transformed ground state energy at $w_{c}=0.8w_{a}$, $w_{c}=w_{a}$, and $w_{c}=1.2w_{a}$ are $0.49\%$, $0.19\%$, and $0.07\%$, respectively. In particular, for a positive detuning $w_{c}-w_{a}>0$ the transformed ground state energy fits the exact value much better over a wide range of $g$ than for negative qubit-oscillator detuning or exact qubit-oscillator resonance, and its error is only $0.9\%$ even at $g=0.8w_a$ for $w_{c}=1.2w_{a}$. This result is consistent with the behavior of $C_{3}$ in Fig. 1(b), in which $C_{3}$ grows much more slowly for $w_{c}=1.2w_{a}$ than for $w_{c}=0.8w_{a}$ or $w_{c}=w_{a}$ when the coupling strength satisfies $g>0.5w_{a}$. To examine the reliability of the transformed ground state $|\phi_{g}^{'}\rangle$, we use the fidelity $F$, defined as $F=\langle\phi_{g}^{'}|\phi_{g}\rangle$, as a measure of the accuracy of the transformation method. 
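The condition $C_{1}(\chi)=0$ behind Fig. 1(a) can be solved with any one-dimensional root finder. A sketch using SciPy's `brentq` (the resonant parameters $w_a=w_c=1$ and $g=0.3$ are illustrative assumptions, not values prescribed by the text):

```python
import numpy as np
from scipy.optimize import brentq

w_a = w_c = 1.0   # illustrative resonant parameters
g = 0.3

def C1(chi):
    """Coefficient C_1 of Eq. (e13) as a function of the variable chi."""
    A = -0.25 * (2 * g * chi - w_c * chi**2)
    B = np.exp(-chi**2 / 2) * w_a          # B = eta w_a
    K1 = (8 * A + B + 4 * np.sqrt(4 * A**2 + A * B + B**2 / 4)) / (np.sqrt(3) * B)
    K2 = (8 * A - B + 4 * np.sqrt(4 * A**2 - A * B + B**2 / 4)) / (np.sqrt(3) * B)
    return ((3 + K1 * K2) * (g - w_c * chi)
            - B * chi * (np.sqrt(3) * K1 + 2 * K1 * K2 - np.sqrt(3) * K2))

chi_star = brentq(C1, 1e-6, 0.5)   # C1 > 0 as chi -> 0 and C1 < 0 at chi = 0.5
```

Expanding $C_{1}$ to first order in $\chi$ (where $K_{1}\to\sqrt{3}$ and $K_{2}\to1/\sqrt{3}$) gives $C_{1}\approx4[g-(w_a+w_c)\chi]$, which is the origin of the approximately linear relation $\chi\simeq g/(w_a+w_c)$ seen in Fig. 1(a).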
From the result of Fig. 1(d), we find that the exact ground state of the three-qubit Rabi model can be approximately represented by $|\phi_{g}^{'}\rangle$ within the ultrastrong coupling regime $0\leq g\leq0.5w_a$. For example, we obtain a high fidelity $F>99\%$ for $g\leq0.5w_a$ under different qubit-oscillator detunings. Ground state entanglement ========================= ![(Color online) The pairwise entanglement $N_{\rho_a}$ as a function of the coupling strength $g$ under different qubit-oscillator detunings: (a) $w_{c}=0.8w_{a}$; (b) $w_{c}=w_{a}$; (c) $w_{c}=1.2w_{a}$. The red solid curves (blue dashed curves) correspond to the pairwise entanglement of the transformed (exact) ground state. The blue dashed curves vanish at (a) $g/w_{a}=1.22$, (b) $g/w_{a}=1.45$, and (c) $g/w_{a}=1.82$. The photon number cutoff we used here is $N_{tr}=30$. []{data-label="Fig.1."}](Rfig22.eps){width="1\columnwidth"} To investigate the ground state qubit-qubit entanglement of the present three-qubit Rabi model, we follow the prescription set out for symmetric Dicke states and consider the pairwise entanglement [@PRA-68-012101-2003; @EPJD-18-385-2002] between any two qubits. Taking the transformed ground state $|\phi_{g}^{'}\rangle$ in Eq. 
(\[e15\]), the reduced density matrix $\rho_{a}$ of any two qubits can be written as: $$\begin{aligned} \label{e18} \rho_{a}&=&\left(\begin{array}{cccc} \rho_{11} & 0 & 0 & \rho_{14} \\ 0 & \rho_{22} & \rho_{23} & 0 \\ 0 & \rho_{32} & \rho_{33} & 0 \\ \rho_{41} & 0 & 0 & \rho_{44} \end{array} \right),\end{aligned}$$ where $$\begin{aligned} \label{e19} \rho_{11}&=&\rho_{44}=\frac{N^2-2N+4\langle J_{z}^{2}\rangle}{4N(N-1)} =\frac{1}{6}+\frac{1}{3(1+K_{1}^{2})},\cr\cr \rho_{14}&=&\rho_{41}=\frac{\langle J_{+}^{2}\rangle}{N(N-1)} =\frac{\sqrt{3}K_{1}}{3(1+K_{1}^{2})}e^{-2\chi^2}, \cr\cr\rho_{22}&=&\rho_{23}=\rho_{32}=\rho_{33}=\frac{N^2-4\langle J_{z}^{2}\rangle}{4N(N-1)}=\frac{K_{1}^{2}}{3(1+K_{1}^{2})},\cr&&\end{aligned}$$ and the standard basis in $\rho_{a}$ is $\{$ $|e_{l}\rangle|e_{m}\rangle$, $|e_{l}\rangle|s_{m}\rangle,$ $|s_{l}\rangle|e_{m}\rangle,$ $|s_{l}\rangle|s_{m}\rangle$ $\}$, with $|e_{l}\rangle$ ($|e_{m}\rangle$) and $|s_{l}\rangle$ ($|s_{m}\rangle$) ($l,m=1,2,3;$ and $l \neq m$) denoting the excited and ground state of the $l$th ($m$th) qubit, respectively. Therefore, the pairwise entanglement $N_{\rho_{a}}$ can be expressed as: $$\begin{aligned} \label{e20} N_{\rho_{a}}&=&2\max\bigg\{ 0,|\rho_{23}|-\sqrt{\rho_{11}\rho_{44}}, |\rho_{14}|-\sqrt{\rho_{22}\rho_{33}}\bigg\}.\cr&&\end{aligned}$$ In the ultrastrong coupling regime $g\leq 0.5w_{a}$, we can numerically verify: $$\begin{aligned} \label{e21} N_{\rho_{a}}&=&\frac{2(\sqrt{3}K_{1}e^{-2\chi^2}-K_{1}^2)}{3(1+K_{1}^{2})}\simeq \frac{1}{4(w_{a}+w_{c})^2}g^2.\end{aligned}$$ Fig. 2 illustrates the pairwise entanglement $N_{\rho_{a}}$ obtained from the transformed and the exact ground states versus the coupling strength $g$ under different detunings. We see that the pairwise entanglement has a quadratic dependence on $g$ at small coupling strength, which is mathematically captured by the approximate power law between $N_{\rho_a}$ and $g$ in Eq. (\[e21\]). 
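The chain from the reduced density matrix elements (\[e19\]) to the pairwise entanglement (\[e20\]) is short enough to script. The sketch below uses the approximate relation $\chi\simeq g/(w_a+w_c)$ from Fig. 1(a) instead of solving $C_{1}=0$ exactly, with illustrative resonant parameters; it reproduces the quadratic small-$g$ growth:

```python
import numpy as np

w_a = w_c = 1.0   # illustrative resonant parameters

def pairwise_entanglement(g):
    chi = g / (w_a + w_c)   # approximate root of C_1 = 0 (Fig. 1a)
    A = -0.25 * (2 * g * chi - w_c * chi**2)
    B = np.exp(-chi**2 / 2) * w_a
    K1 = (8 * A + B + 4 * np.sqrt(4 * A**2 + A * B + B**2 / 4)) / (np.sqrt(3) * B)
    n = 3 * (1 + K1**2)
    rho11 = (3 + K1**2) / (2 * n)                     # = rho44
    rho14 = np.sqrt(3) * K1 * np.exp(-2 * chi**2) / n
    rho22 = K1**2 / n                                 # = rho23 = rho32 = rho33
    # Eq. (e20) with rho44 = rho11 and rho33 = rho22
    return 2 * max(0.0, abs(rho22) - rho11, abs(rho14) - rho22)
```

For small $g$ the active branch of the maximum is $|\rho_{14}|-\sqrt{\rho_{22}\rho_{33}}$, and doubling $g$ roughly quadruples $N_{\rho_a}$.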
If $g>0.5w_{a}$, discrepancies between the numerical results for the transformed and exact ground states become larger as the coupling strength increases further. The maximal pairwise entanglement between any two qubits is determined by the detuning $\Delta=w_c-w_a$ and increases with $\Delta$. Interestingly, the pairwise entanglement $N_{\rho_{a}}$ decreases quickly to zero after reaching its maximum, and remains at zero even when the coupling strength $g$ increases further, which means that there is no qubit-qubit entanglement in the ground state of such a model anymore if the coupling strength $g$ is large enough. For example, the pairwise entanglement decreases to zero at $g=1.5w_a$ in the exactly resonant case and never increases again even when $g$ increases further. This feature is distinct from the result of Ref. [@arXiv-1303-3367v2-2013] and can be explained as follows. When the field and one qubit are traced out, the first and fourth terms of Eq. (15) do not contribute to the entanglement of the other two qubits. In other words, the pairwise entanglement is contributed by the $W$-state components $|-\frac{1}{2}\rangle_{a}$ and $|\frac{1}{2}\rangle_{a}$. The coefficients of the terms involving $|-\frac{1}{2}\rangle_{a}$ and $|\frac{1}{2}\rangle_{a}$ quickly drop to zero when $g$ is large enough, resulting in the vanishing of pairwise entanglement. Conclusion ========== In summary, we have shown that the ground state of the three-qubit Rabi model in the ultrastrong coupling regime can be approximately treated by the transformation method. The transformed ground state agrees very well with the exact ground state for different detunings even when the coupling strength $g$ increases to $0.5w_a$. 
When $g=0.5w_a$, the error of the transformed ground state energy is only $0.19\%$ at $w_a=w_c$, and the fidelity of the transformed ground state remains higher than $99\%$ for $g\leq0.5w_{a}$ under different qubit-oscillator detunings. Finally, we use the pairwise entanglement to analytically examine the qubit-qubit entanglement, and the result shows that the ground state entanglement has an approximately quadratic dependence on the qubit-oscillator coupling. Interestingly, we find that there is no ground state entanglement if the qubit-oscillator coupling strength is large enough. Acknowledgement =============== This work is supported by the Major State Basic Research Development Program of China under Grant No. 2012CB921601, the National Natural Science Foundation of China (NSFC) under Grant No. 11247283, and funds from Fuzhou University under Grant No. 022513, Grant No. 022408, and Grant No. 600891. [99]{} A. A. Abdumalikov Jr, O. Astafiev, Y. Nakamura, Y. A. Pashkin, and J. S. Tsai, Phys. Rev. B 78, 180502(R) (2008). A. A. Anappara, S. DeLiberato, A. Tredicucci, C. Ciuti, G. Biasiol, L. Sorba, and F. Beltram, Phys. Rev. B 79, 201303(R) (2009). G. Günter, A. A. Anappara, J. Hees, *et al.*, Nature 458, 178 (2009). T. Niemczyk, F. Deppe, H. Huebl, *et al.*, Nature 6, 772 (2010). P. Forn-Díaz, J. Lisenfeld, D. Marcos, J. J. García-Ripoll, E. Solano, C. J. P. M. Harmans, and J. E. Mooij, Phys. Rev. Lett. 105, 237001 (2010). Y. Todorov, A. M. Andrews, R. Colombelli, *et al.*, Phys. Rev. Lett. 105, 196402 (2010). T. Schwartz, J. A. Hutchison, C. Genet, and T. W. Ebbesen, Phys. Rev. Lett. 106, 196405 (2011). G. Scalari, C. Maissen, D. Turčinková, *et al.* Science 335, 1323 (2012). A. Crespi, S. Longhi, and R. Osellame, Phys. Rev. Lett. 108, 163601 (2012). S. Hayashi, Y. Ishigaki, and M. Fujii, Phys. Rev. B 86, 045408 (2012). E. T. Jaynes and F. W. Cummings, Proc. IEEE 51, 89 (1963); S. B. Zheng and G. C. Guo, Phys. Rev. Lett. 85, 2392 (2000). X. Cao, J. Q. You, H. 
Zheng, and F. Nori, New J. Phys. 13, 073002 (2011). A. Ridolfo, M. Leib, S. Savasta, and M. J. Hartmann, Phys. Rev. Lett. 109, 193602 (2012). S. Ashhab, Phys. Rev. A 87, 013826 (2013). H. P. Zheng, F. C. Lin, Y. Z. Wang, and Y. Segawa, Phys. Rev. A 59, 4589 (1999). S. B. Zheng, X. W. Zhu, and M. Feng, Phys. Rev. A 62, 033807 (2000). E. K. Irish, J. Gea-Banacloche, I. Martin, and K. C. Schwab, Phys. Rev. B 72, 195410 (2005). C. Ciuti and I. Carusotto, Phys. Rev. A 74, 033811 (2006). D. Wang, T. Hansson, [Å]{}. Larson, H. O. Karlsson, and J. Larson, Phys. Rev. A 77, 053808 (2008). X. F. Cao, J. Q. You, H. Zheng, A. G. Kofman, and F. Nori, Phys. Rev. A 82, 022119 (2010). P. Nataf and C. Ciuti, Phys. Rev. Lett. 107, 190402 (2011). V. V. Albert, Phys. Rev. Lett. 108, 180401 (2012). F. Altintas and R. Eryigit, Phys. Rev. A 87, 022124 (2013). L. H. Du, X. F. Zhou, Z. W. Zhou, X. Zhou, and G. C. Guo, Phys. Rev. A 86, 014303 (2012). I. D. Feranchuk, L. I. Komarov, and A. P. Ulyanenkov, J. Phys. A: Math. Gen. 29, 4035 (1996). Q. H. Chen, T. Liu, Y. Y. Zhang, and K. L. Wang, Eur. Phys. Lett. 96, 14003 (2011). S. Ashhab and F. Nori, Phys. Rev. A 81, 042311 (2010). H. Chen, Y. M. Zhang, and X. Wu, Phys. Rev. B 40, 11326 (1989). J. Stolze and L. Müller, Phys. Rev. B 42, 6704 (1990). E. K. Irish, Phys. Rev. Lett. 99, 173601 (2007). T. Liu, K. L. Wang, and M. Feng, Eur. Phys. Lett. 86, 54003 (2009). D. Zueco, G. M. Reuther, S. Kohler, and P. Hänggi, Phys. Rev. A 80, 033846 (2009). J. Casanova, G. Romero, I. Lizuain, J. J. García-Ripoll, and E. Solano, Phys. Rev. Lett. 105, 263603 (2010). M. J. Hwang and M. S. Choi, Phys. Rev. A 82, 025802 (2010). D. Braak, Phys. Rev. Lett. 107, 100401 (2011). J. Song, Y. Xia, X. D. Sun, Y. Zhang, B. Liu, and H. S. Song, Eur. Phys. J. D 66, 1 (2012). L. X. Yu, S. Q. Zhu, Q. F. Liang, G. Chen, and S. T. Jia, Phys. Rev. A 86, 015803 (2012). S. Agarwal, S. M. H. Rafsanjani, and J. H. Eberly, Phys. Rev. A 85, 043815 (2012). Q. H. Chen, C. Wang, S. 
He, T. Liu, and K. L. Wang, Phys. Rev. A 86, 023822 (2012). H. Zheng, Eur. Phys. J. B 38, 559 (2004). Z. G. Lü and H. Zheng, Phys. Rev. B 75, 054302 (2007). C. J. Gan and H. Zheng, Eur. Phys. J. D 59, 473 (2010). K. M. C. Lee and C. K. Law, Phys. Rev. A 88, 015802 (2013). D. Braak, arXiv:1304.2529v1 (2013). X. G. Wang and B. C. Sanders, Phys. Rev. A 68, 012101 (2003). X. Wang and K. M[ø]{}lmer, Eur. Phys. J. D 18, 385 (2002).
--- abstract: 'The interplay between shear and bulk viscosities on the flow harmonics, $v_n$’s, at RHIC is investigated using the newly developed relativistic 2+1 hydrodynamical code v-USPhydro that includes bulk and shear viscosity effects both in the hydrodynamic evolution and also at freeze-out. While shear viscosity is known to attenuate the flow harmonics, we find that the inclusion of bulk viscosity decreases the shear viscosity-induced suppression of the flow harmonics bringing them closer to their values in ideal hydrodynamical calculations. Depending on the value of the bulk viscosity to entropy density ratio, $\zeta/s$, in the quark-gluon plasma, the bulk viscosity-driven suppression of shear viscosity effects on the flow harmonics may require a re-evaluation of the previous estimates of the shear viscosity to entropy density ratio, $\eta/s$, of the quark-gluon plasma previously extracted by comparing hydrodynamic calculations to heavy ion data.' author: - 'Jacquelyn Noronha-Hostler' - Jorge Noronha - Frédérique Grassi title: 'Bulk viscosity-driven suppression of shear viscosity effects on the flow harmonics at RHIC' --- Introduction ============ One of the main results stemming from heavy-ion collision experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) is the discovery that the Quark-Gluon Plasma (QGP) behaves as a nearly perfect fluid [@Gyulassy:2004zy] with $\eta/s$ as low as $\sim 0.2$ [@Heinz:2013th]. While there have been several model calculations that support such a small value for $\eta/s$ in the QGP [@Danielewicz:1984ww; @Kovtun:2004de; @Nakamura:2004sy; @Meyer:2007ic; @Xu:2007ns; @Xu:2007jv; @Hirano:2005wx; @Csernai:2006zz; @Hidaka:2008dr; @NoronhaHostler:2008ju; @NoronhaHostler:2012ug; @Ozvenchuk:2012kh], much less is known about the bulk viscosity to entropy density ratio, $\zeta/s$. 
In fact, though it is true that $\zeta/s$ vanishes at sufficiently large temperatures [@Arnold:2006fz], it is not clear at the moment how large this quantity can be [@Meyer:2007dy; @Karsch:2007jc] in the range of temperatures probed in heavy ion collisions, $T \sim 100-400$ MeV. This has led to the idea that the bulk viscosity may be extremely small and have negligible effects on observable quantities such as the azimuthal flow anisotropies (for phenomenological consequences of large bulk viscosity in heavy ion collisions see [@Torrieri:2007fb; @Rajagopal:2009yw; @Habich:2014tpa]). Most studies have used only shear viscous calculations and then fitted the calculated flow harmonics to experimental data to estimate the shear viscosity of the QGP [@etas]. A few exceptions have explored bulk viscosity [@Song:2009rh; @Bozek:2009dw; @Monnai:2009ad; @Denicol:2009am; @Denicol:2010tr; @Dusling:2011fd] and its effects on elliptic flow, but further work was needed to quantify the effects of bulk viscosity on the higher order flow harmonics. Recently, in [@Noronha-Hostler:2013gga] the effects solely from bulk viscosity on the flow harmonics at RHIC were investigated using event-by-event hydrodynamics, and it was found that bulk viscosity enhances the differential flow harmonics with respect to the ideal case, which is the opposite of the effect found in the case of shear viscosity [@etas]. In this paper we will explore the interplay between bulk and shear viscosities within the framework of relativistic hydrodynamical modeling using v-USPhydro [@Noronha-Hostler:2013gga], which is a boost invariant viscous hydrodynamical code that evolves event-by-event initial conditions using Smoothed Particle Hydrodynamics (SPH) [@originalSPH; @SPHothers; @Aguiar:2000hw] to solve the equations of motion. 
For some choices of the model parameters, we find that for $\sqrt{s}=200$ A GeV RHIC collisions bulk viscosity can almost entirely negate the effects of shear viscosity when they are of comparable size, for both the integrated and $p_T$ dependent flow harmonics. In fact, in this case for differential flow harmonics bulk viscosity effects dominate over the effects from shear. However, we find that there is a strong dependence on the model choice of bulk viscous corrections at freeze-out for the differential flow harmonics at intermediate $p_T> 1.5$ GeV (at low $p_T$ both methods converge and, thus, the integrated flow harmonics are much more robust with respect to model changes in the viscous contribution to the particle distributions). Finally, we find that bulk viscosity has a nontrivial effect on the shear stress tensor even when the chosen $\zeta/s$ is significantly smaller than the shear viscosity. This paper is organized as follows. In Section \[sec:model\] we cover the relativistic hydrodynamical model that we are using. In Section \[sec:eom\] we discuss the equations of motion for 2+1 relativistic hydrodynamics with bulk and shear viscosity using the SPH formalism, and in Section \[sec:bulkshear\] we show the transport coefficients used for this paper. In Section \[sec:ic\] the setup for our Glauber event-by-event initial conditions is discussed and in Section \[sec:freezeout\] we discuss the parameters and equations for the freeze-out with viscous corrections. In Section \[sec:visceffhydro\] we explore the effects of shear and bulk viscosities on the hydrodynamical evolution while in Section \[sec:results\] we discuss the results of our work for both the integrated $v_n$’s in Section \[sec:intvns\] and the differential $v_n$’s in Section \[eqn:difvns\]. We also show a comparison for the case when bulk and shear viscosities have the same magnitude in Section \[eqn:equal\]. Finally, in Section \[sec:conclu\] we discuss the consequences of our work.
Details about the equations and tests of the accuracy of v-USPhydro can be found in the Appendices. *Definitions*: We use a flat space-time metric in Milne coordinates defined as $g_{\mu \nu }=(1,-1,-1,-\tau ^{2})$ where $x^{\mu }=(\tau ,\mathbf{r},\eta )$ and $$\begin{aligned} \tau &=&\sqrt{t^{2}-z^{2}} \nonumber \\ \eta &=&\frac{1}{2}\ln \left( \frac{t+z}{t-z}\right)\end{aligned}$$ are the proper time and space-time rapidity, respectively. Furthermore, we assume boost invariance for the flow velocity so $u_{\mu }=\left( \sqrt{1+u_{x}^{2}+u_{y}^{2}},u_{x},u_{y},0\right) $ and also $u_\mu u^\mu=1$. Natural units are employed throughout this work, i.e., $\hbar=k_B=c=1$. Details of the Hydrodynamic Model {#sec:model} ================================= Equations of Motion for 2+1 relativistic hydrodynamics with bulk and shear viscosities {#sec:eom} -------------------------------------------------------------------------------------- In this paper we use a boost invariant setup with a vanishing baryon chemical potential, and the conservation of energy and momentum, $\nabla_\mu T^{\mu\nu}=0$, can be written as $$\frac{1}{\tau}\partial _{\mu }\left( \tau T^{\mu \nu }\right) +\Gamma _{\lambda \mu }^{\nu }T^{\lambda \mu }=0 \label{eqn:hydro}$$where the Christoffel symbol is $$\Gamma _{\lambda \mu }^{\nu }=\frac{1}{2}g^{\nu \sigma }\left( \partial _{\mu }g_{\sigma \lambda }+\partial _{\lambda }g_{\sigma \mu }-\partial _{\sigma }g_{\mu \lambda }\right) .$$The general expression for the energy-momentum tensor that includes both bulk and shear viscosity effects is $$T^{\mu \nu }=\varepsilon u^{\mu}u^{\nu }-\left( p+\Pi \right) \Delta ^{\mu \nu }+\pi^{\mu\nu},$$where $\Pi$ is the bulk viscous pressure, $\pi^{\mu\nu}$ is the shear stress tensor, and the spatial projection operator is $\Delta _{\mu \nu }=g_{\mu \nu }-u_{\mu }u_{\nu }$.
Above, we use the Landau definition for the local rest frame, $u_{\nu }T^{\mu \nu }=\varepsilon u^{\mu }$, and the remaining dynamical quantities are the energy density $\varepsilon$, the pressure $p$ (which can be written in terms of $\varepsilon$ via the equation of state) and the fluid 4-velocity $u^{\mu}$. The dissipative currents $\Pi$ and $\pi^{\mu\nu}$ obey relaxation-type differential equations and the full form of these equations, at least according to kinetic theory, can be found in [@Denicol:2012cn]. In this paper we do not consider all the terms found in [@Denicol:2012cn] due to the large uncertainty regarding the values of the many new transport coefficients involved (for a recent study involving the determination of these coefficients in certain limits see [@Denicol:2014vaa]). Rather, we use as in our previous work [@Noronha-Hostler:2013gga] the simplest equation for the bulk scalar (obtained originally via the memory function prescription [@Koide:2006ef] and used also in [@Denicol:2009am; @Denicol:2010tr]) $$\tau _{\Pi }\left( D\Pi +\Pi \theta \right) +\Pi +\zeta \theta =0, \label{eqn:hydro3}$$where $D=u^{\mu }\nabla_{\mu }$ is the comoving covariant derivative, $\theta \equiv \nabla_\mu u^\mu=\tau ^{-1}\partial _{\mu }\left( \tau u^{\mu }\right) $ is the fluid expansion rate, and $\tau _{\Pi}$ is the bulk relaxation time coefficient. For the description of the shear stress tensor we use the minimal Israel-Stewart description (compatible with conformal invariance) $$\tau_{\pi}\left(\Delta_{\mu\nu\alpha\beta}D\pi^{\alpha\beta}+\frac{4}{3}\pi_{\mu\nu}\theta\right)+\pi_{\mu\nu}=2\eta\sigma_{\mu\nu}$$ where we defined the tensor projector $\Delta_{\mu\nu\alpha\beta}=\frac{1}{2}\left[\Delta_{\mu\alpha}\Delta_{\nu\beta}+\Delta_{\mu\beta}\Delta_{\nu\alpha}-\frac{2}{3}\Delta_{\mu\nu}\Delta_{\alpha\beta}\right]$, the shear tensor $\sigma_{\mu\nu}=\Delta_{\mu\nu\alpha\beta}\nabla^\alpha u^\beta$, and $\tau_{\pi}$ is the shear relaxation time coefficient.
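To make the relaxation dynamics of Eq. (\[eqn:hydro3\]) concrete, the following minimal sketch integrates the bulk pressure in an idealized 0+1D Bjorken expansion, where $D \to d/d\tau$ and $\theta \to 1/\tau$. The constant $\zeta$ and $\tau_\Pi$ (and all numerical values) are illustrative assumptions, not the temperature-dependent coefficients used in this paper:

```python
# Minimal 0+1D Bjorken-flow sketch of the bulk relaxation equation
# tau_Pi*(D Pi + Pi*theta) + Pi + zeta*theta = 0, with D -> d/dtau and
# theta -> 1/tau.  Constant zeta and tau_Pi are illustrative only.

def evolve_bulk_pressure(tau0=1.0, tau_end=6.0, dtau=0.001,
                         zeta=0.1, tau_Pi=0.5, Pi0=0.0):
    """Explicit-Euler integration of the bulk pressure Pi(tau)."""
    tau, Pi = tau0, Pi0
    while tau < tau_end:
        theta = 1.0 / tau                        # Bjorken expansion rate
        dPi_dtau = -(Pi + zeta * theta) / tau_Pi - Pi * theta
        Pi += dPi_dtau * dtau
        tau += dtau
    return Pi

Pi_final = evolve_bulk_pressure()
```

In this toy setup the bulk pressure relaxes, on a timescale set by $\tau_\Pi$, toward a negative quasi-stationary value close to the Navier-Stokes limit $\Pi \approx -\zeta\theta$.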
Therefore, in this work we have 4 transport coefficients: $\zeta$, $\eta$ and their respective relaxation times $\tau_{\Pi}$ and $\tau_{\pi}$. We note that we included the term $\pi^{\mu\nu}\theta$ in the equations of motion for $\pi^{\mu\nu}$ to make it possible to check the accuracy of our code against the analytical and semi-analytical solutions found in Ref. [@Marrochio:2013wla] (shown in detail in Appendix \[sheartest\]). The fluid dynamical evolution is written using the Lagrangian approach within the Smoothed Particle Hydrodynamics (SPH) formalism. An in depth discussion of the SPH formalism and its relationship to the equations of motion can be found in [@Hama:2004rr; @Denicol:2009am; @Denicol:2010tr; @Noronha-Hostler:2013gga; @Noronha-Hostler:2013ria; @Andrade:2013poa].

Model choice for the transport coefficients and equation of state {#sec:bulkshear}
-----------------------------------------------------------------

The v-USPhydro code has the ability to run ideal, bulk, shear, and shear+bulk 2+1 hydrodynamics (a generalization of the code to include full 3+1 dynamics is under development). In this paper we consider the temperature dependent shear, bulk, and relaxation time coefficients shown in Fig. \[fig:transco\].

![(Color online) Upper panel - Temperature dependence of $\eta/s$ from Eq. (\[eqn:eta\]) (dashed blue line) and $\zeta/s$ (multiplied by a factor of 10 for clarity) obtained from Eq. (\[eqn:adszeta\]) (solid black line). Lower panel - The relaxation time coefficients $\tau_{\pi}$ from Eq. (\[eqn:taupi\]) for shear (dashed blue line) and $\tau_{\Pi}$ for bulk from Eq. (\[eqn:tauPI\]) (solid black line).[]{data-label="fig:transco"}](figs/trans.eps "fig:"){width="40.00000%"}\
![](figs/taupi.eps "fig:"){width="40.00000%"}\

For the temperature dependent shear viscosity we use the parametrization of [@Niemi:2012ry], which describes the low temperature region using the result from the extended mass spectrum hadronic model [@NoronhaHostler:2008ju] while at high temperatures $\eta/s$ is given by the lattice data of Ref. [@Nakamura:2004sy]. It reads $$\begin{aligned} \frac{\eta}{s}(T>T_{tr})&=&-0.289+0.288\left(\frac{T}{T_{tr}}\right)+0.0818\left(\frac{T}{T_{tr}}\right)^2\nonumber\\ \frac{\eta}{s}(T<T_{tr})&=&0.681-0.0594\left(\frac{T}{T_{tr}}\right)-0.544\left(\frac{T}{T_{tr}}\right)^2\label{eqn:eta}\end{aligned}$$ where $T_{tr}=180$ MeV and the shear relaxation time [@Denicol:2010xn; @Denicol:2011fa] is taken to be $$\label{eqn:taupi} \tau_{\pi}=5\eta/(\varepsilon+p).$$ We use the following bulk viscosity coefficient (inspired by Buchel’s formula [@Buchel:2007mf] for a strongly coupled plasma) $$\frac{\zeta }{s}=\frac{1}{8\pi }\left( \frac{1}{3}-c_{s}^{2}\right) , \label{eqn:adszeta}$$with the corresponding temperature dependent bulk relaxation time, $\tau _{\Pi }$ (see [@Huang:2010sa]) $$\tau _{\Pi }=9\,\frac{\zeta }{\varepsilon -3p}\,. \label{eqn:tauPI}$$Given the small value of $\zeta/s$ used here, we note that in Fig. \[fig:transco\] we actually plot $10\,\zeta/s$ in order to better illustrate its temperature dependence. Furthermore, we always ensure that $\tau _{\Pi }$ and $\tau_{\pi}$ are greater than 0.1 fm (the time step size of the numerical code) to avoid stability issues.
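As a quick reference, the parametrizations in Eqs. (\[eqn:eta\]) and (\[eqn:adszeta\]) can be evaluated with a few lines of Python. This is only a sketch: the branch assignment at exactly $T=T_{tr}$ is our own convention, and the speed of sound squared $c_s^2$ must be supplied externally by the equation of state:

```python
import math

def eta_over_s(T, T_tr=180.0):
    """Shear viscosity to entropy density ratio, Eq. (eqn:eta); T in MeV.
    Assigning T = T_tr to the high-temperature branch is our convention."""
    x = T / T_tr
    if T >= T_tr:  # lattice-inspired high-temperature branch
        return -0.289 + 0.288 * x + 0.0818 * x**2
    return 0.681 - 0.0594 * x - 0.544 * x**2  # hadronic branch

def zeta_over_s(cs2):
    """Bulk viscosity to entropy density ratio, Eq. (eqn:adszeta),
    given the speed of sound squared cs2 from the equation of state."""
    return (1.0 / 3.0 - cs2) / (8.0 * math.pi)
```

Note that $\zeta/s$ vanishes in the conformal limit $c_s^2 = 1/3$ and grows where the equation of state is softest, which is what makes it sizable near the transition region.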
Initial conditions {#sec:ic}
------------------

  Centrality     $N_{part}$
  ------------ ---------------
  $0-10\%$      $>$ 274.95
  $10-20\%$     195.98-274.95
  $20-30\%$     139.01-195.98
  $30-40\%$     96.99-139.01
  $40-50\%$     61.95-96.99
  $50-60\%$     37.04-61.95

  : Relationship between the number of participants, $N_{part}$, and the different centrality classes for Au+Au collisions at RHIC at $\sqrt{s}_{NN}=200$ GeV used in this paper.[]{data-label="tab:par"}

In this paper we only consider Monte Carlo Glauber simulations of Au+Au collisions at RHIC at $\sqrt{s}_{NN}=200$ GeV [@ic] as our initial conditions for the energy density. We begin the relativistic fluid-dynamical evolution at $\tau _{0}=1$ fm (for tests of this assumption see [@Noronha-Hostler:2013gga]). Our centrality classes are found by binning the results for $N_{part}$ over 15,000 events and they are in good agreement with other Monte Carlo Glauber simulations [@Adler:2003cb]. The relationship between $N_{part}$ and the centrality classes is shown in Tab. \[tab:par\]. Within each centrality class we calculate 150 hydrodynamical events on an event-by-event basis. As in our previous work using the v-USPhydro code [@Noronha-Hostler:2013gga], our initial energy density is $$\label{eqn:cglauber} \varepsilon (\mathbf{r})=c\;n_{coll}(\mathbf{r}),$$where $n_{coll}$ is the number density of binary collisions in the event, and the constant $c$ is fixed to obtain on average $123$ direct $\pi ^{+}$’s in central (averaged $0-5\%$) RHIC collisions (this number of direct pions, when added to the yield coming from particle decays, leads to the correct number of $\pi ^{+}$’s in this case). Also, in this paper particle decays and hadronic transport have not been taken into account. Furthermore, we assume that $\Pi$, $\pi^{\mu\nu}$, and the spatial components of $u^{\mu}$ vanish at $\tau_{0}$.
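The centrality selection of Table \[tab:par\] amounts to a simple threshold lookup on $N_{part}$. A hypothetical helper (the names and structure are ours for illustration, not part of v-USPhydro) might read:

```python
# Hypothetical helper reproducing the centrality lookup of Table tab:par:
# map the number of participants N_part of a Glauber event at
# sqrt(s_NN) = 200 GeV to its centrality class.

CENTRALITY_BINS = [  # (lower N_part bound, class label)
    (274.95, "0-10%"),
    (195.98, "10-20%"),
    (139.01, "20-30%"),
    (96.99, "30-40%"),
    (61.95, "40-50%"),
    (37.04, "50-60%"),
]

def centrality_class(n_part):
    """Return the centrality label for N_part, or None for events more
    peripheral than the 50-60% class."""
    for lower, label in CENTRALITY_BINS:
        if n_part > lower:
            return label
    return None
```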
Cooper-Frye Freeze-out {#sec:freezeout}
----------------------

Viscous corrections enter not only in the hydrodynamical equations of motion discussed in Section \[sec:eom\] but also in the distribution function for the Cooper-Frye freeze-out [@Cooper:1974mv]. We perform the freeze-out on an isothermal hypersurface with the freeze-out temperature $T_0=150$ MeV [@Noronha-Hostler:2013gga]. The distribution function for a given hadron is given by $$f_{p}=f_{0p}\left\{1+\left(1-af_{0p}\right)\left[\delta f^{Bulk}_{p}+\delta f^{Shear}_{p}\right]\right\}$$ where the ideal component of the distribution function, $f_{0p}$, is $$\label{eqn:CFideal} f_{0p}=\frac{1}{e^{(p^{\mu}u_{\mu})/T_0}+a}$$ with $a=1$ for fermions, $a=-1$ for bosons, and $a=0$ for classical Boltzmann statistics. The general form of the correction term for bulk viscosity, $\delta f^{Bulk}_{p}$, up to second order in powers of $\left(u^i\cdot p_i\right)$ is [@Noronha-Hostler:2013gga] $$\delta f^{Bulk}_{p}=\Pi\left[B_0+D_0\left(u^i\cdot p_i\right)+E_0\left(u^i\cdot p_i\right)^2\right]$$ where $B_0$, $D_0$, and $E_0$ depend on the particle type (mass, degeneracy) and freeze-out temperature. In this paper we consider both the coefficients determined from the Moments Method (MOM) in [@Denicol:2012cn; @Denicol:2012yr; @Noronha-Hostler:2013gga] and those derived by Monnai and Hirano (MH) in [@Monnai:2009ad]. MH implemented Grad’s 14-moment method for multi-particle species to compute the bulk viscous contribution to the distribution function. MOM is based on the novel procedure proposed in [@Denicol:2012cn] to derive viscous hydrodynamic equations from the Boltzmann equation, generalized to include the case involving different hadron species.
The exact coefficients for each method were determined in [@Noronha-Hostler:2013gga] for the case of pions with $T_0=150$ MeV and for MOM we obtain $$\begin{aligned} B_{0}^{(\pi)}&=& -65.85 \,\,fm^4\,, \nonumber \\ D_{0}^{(\pi)}&=& 171.27 \,\,fm^4/GeV\,, \nonumber \\ E_{0}^{(\pi)}&=& -63.05 \,\,fm^4/GeV^2\,,\end{aligned}$$ while for MH $$\begin{aligned} B_{0}^{(\pi)}&=& -0.69 \,\,fm^4\,, \nonumber \\ D_{0}^{(\pi)}&=& -38.96 \,\,fm^4/GeV\,, \nonumber \\ E_{0}^{(\pi)}&=& 49.69 \,\,fm^4/GeV^2\,.\end{aligned}$$ Finally, we take the “democratic" Ansatz for the correction term for shear viscosity, $\delta f^{Shear}_{p}$ $$\begin{aligned} \delta f^{Shear}_{p}&=&\frac{\pi^{\mu\nu}p_{\mu}p_{\nu}}{2\left(\varepsilon+P\right)T^2}\end{aligned}$$ (for a recent discussion on the validity of such an Ansatz in kinetic theory see [@Molnar:2014fva]). The final expression for the pion spectrum in the SPH formalism, including both shear and bulk viscosity effects, is worked out in Appendix \[detailsCF\] and we refer the reader to that section for the necessary details. 
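A minimal sketch of $\delta f^{Bulk}_p$ for pions, using the MOM and MH coefficients quoted above (the function name and the bookkeeping of $\Pi$ in units matching the $fm^4$ coefficients are our own illustrative choices):

```python
# Sketch of the quadratic bulk viscous correction to the pion
# distribution, delta_f_bulk = Pi*(B0 + D0*E + E0*E^2), with E the pion
# energy in the local rest frame (GeV) and Pi the bulk pressure in units
# matching the fm^4 coefficients.  Coefficient values are those quoted
# in the text for T_0 = 150 MeV.

MOM = {"B0": -65.85, "D0": 171.27, "E0": -63.05}  # Moments Method
MH = {"B0": -0.69, "D0": -38.96, "E0": 49.69}     # Monnai-Hirano

def delta_f_bulk(Pi, E, c):
    """Bulk viscous correction factor for a pion with LRF energy E."""
    return Pi * (c["B0"] + c["D0"] * E + c["E0"] * E**2)
```

Plugging in numbers, for the same negative bulk pressure the two coefficient sets can give corrections of opposite sign at intermediate energies, which gives a concrete feel for the freeze-out model sensitivity discussed above.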
Viscous Effects in the Hydrodynamical Evolution {#sec:visceffhydro}
===============================================

![(Color online) Energy density distribution for a random event in peripheral ($20-30\%$) collisions at RHIC for different times. a.) Initial time $\tau_0=1$ fm. Energy distribution at $\tau=6$ fm for b.) ideal hydrodynamics, c.) viscous hydrodynamics with only bulk viscosity, d.) viscous hydrodynamics with only shear viscosity, and e.) viscous hydrodynamics with both bulk and shear viscosity effects. The transport coefficients are the ones shown in Fig. \[fig:transco\].[]{data-label="tab:eden"}](figs/eden_ic.eps "fig:"){width="50.00000%"}
![](figs/eden_ideal.eps "fig:"){width="50.00000%"}
![](figs/eden_bulk.eps "fig:"){width="50.00000%"}
![](figs/eden_shear.eps "fig:"){width="50.00000%"}
![](figs/eden_bulkshear.eps "fig:"){width="50.00000%"}

In this section we describe the interplay between shear and bulk viscosity within the fluid evolution. To do so we first consider the effects of viscosity on the energy density over an interval of $\Delta\tau=5$ fm (we start the evolution at $\tau_0=1$ fm and plot the energy density profile at $\tau=6$ fm) for the random initial condition shown in Fig. \[tab:eden\] a.) for RHIC’s $20-30\%$ centrality class. We then include plots of the fluid evolution of the energy density at $\tau=6$ fm for b.) ideal hydrodynamics, c.) viscous hydrodynamics with only bulk viscosity, d.) viscous hydrodynamics with only shear viscosity effects, and e.) viscous hydrodynamics with both bulk and shear viscosity. One can clearly see in Fig.
\[tab:eden\] that there are qualitative differences between the ideal and viscous fluids. The ideal hydrodynamical evolution preserves most of the initial structure of the energy density even after $\Delta\tau=5$ fm. The bulk viscous evolution does not maintain as many peaks and valleys as the ideal case but still displays more structure than both the shear and shear+bulk hydro events. Moreover, it is flatter than all the other events (recall that bulk viscosity acts against radial expansion). For the energy density profile we see no visible difference between the bulk and the shear+bulk case, which indicates that the shear viscosity dominates the viscous corrections to the energy density throughout the hydrodynamical evolution. This is not surprising considering that our chosen $\zeta/s$ is relatively small in comparison to $\eta/s$ and the energy density is a relatively robust observable.

           $\langle \Pi\rangle$   $\sigma^2_{\Pi}$   $\langle \Pi\rangle_{early}$   $(\sigma^2_{\Pi})_{early}$   $\langle \Pi\rangle_{late}$   $(\sigma^2_{\Pi})_{late}$
  -------- ---------------------- ------------------ ------------------------------ ---------------------------- ----------------------------- ---------------------------
  0-10%    1.79%                  8.59%              1.14%                          -59.72%                      2.03%                         20.50%
  10-20%   2.48%                  8.95%              2.89%                          -52.37%                      2.19%                         20.59%
  20-30%   2.87%                  8.96%              4.07%                          -40.70%                      2.02%                         20.66%
  30-40%   3.49%                  9.15%              3.47%                          -36.96%                      2.15%                         19.97%
  40-50%   4.14%                  9.11%              3.52%                          -37.23%                      2.00%                         20.86%
  50-60%   4.98%                  9.23%              6.27%                          -22.55%                      2.28%                         19.73%

  : Percentage change of the mean values of the bulk pressure $\Pi$ and its corresponding variance $\sigma^2_{\Pi}$ averaged over all events for different centrality classes due to the presence of shear viscosity.
$\langle \Pi\rangle$ and $\sigma^2_{\Pi}$ take into account the parts of the fluid that have frozen out throughout the whole time evolution, $\langle \Pi\rangle_{early}$ and $(\sigma^2_{\Pi})_{early}$ are computed using only the parts of the fluid that have frozen out between $\tau_0=1$ fm and $\tau=2$ fm, and $\langle \Pi\rangle_{late}$ and $(\sigma^2_{\Pi})_{late}$ are computed using only the parts of the fluid that have frozen out in the last fm of the time evolution.[]{data-label="percchangebulk"} While there is little difference between the case involving only shear and shear+bulk within the energy density profile, it is interesting to see if an effect shows up in the nonzero components of the shear stress tensor $\pi^{\mu\nu}$ and, additionally, in the bulk pressure $\Pi$. To see how the inclusion of shear viscosity affects the bulk pressure, we first look at the mean (averaged over all the SPH particles) of the bulk pressure for each individual event, $\left(\Pi\right)_{ev}$, and its corresponding variance, and define $$\begin{aligned} (\Pi)_{ev}&=&100\,\frac{\left(\Pi_{sb}\right)_{ev}-\left(\Pi_{b}\right)_{ev}}{\left(\Pi_{b}\right)_{ev}}\nonumber\\ (\sigma^2_{\Pi})_{ev}&=&100\,\frac{\left(\sigma^2_{\Pi_{sb}}\right)_{ev}-\left(\sigma^2_{\Pi_{b}}\right)_{ev}}{\left(\sigma^2_{\Pi_{b}}\right)_{ev}},\end{aligned}$$\[eqn:perchange\] where $\left(\Pi_{sb}\right)_{ev}$ is the mean bulk pressure of a given event $ev$ with the corresponding variance $\left(\sigma^2_{\Pi_{sb}}\right)_{ev}$ when the equations of motion include both shear viscosity and bulk viscosity while $\left(\Pi_{b}\right)_{ev}$ is the mean bulk pressure of the same event $ev$ with the corresponding variance $\left(\sigma^2_{\Pi_{b}}\right)_{ev}$ when the equations of motion include only bulk viscosity.
We then average the percentage change over all the events within each individual centrality class, so that we obtain the mean percentage change of the bulk pressure $\langle\Pi\rangle$ and the mean percentage change of the variance of the bulk pressure $\langle\sigma^2_{\Pi}\rangle$ over all the events. In Table \[percchangebulk\], $\langle \Pi\rangle$ and $\sigma^2_{\Pi}$ take into account the parts of the fluid that have frozen out throughout the whole time evolution, $\langle \Pi\rangle_{early}$ and $(\sigma^2_{\Pi})_{early}$ are computed using only the parts of the fluid that have frozen out between $\tau_0=1$ fm and $\tau=2$ fm, and $\langle \Pi\rangle_{late}$ and $(\sigma^2_{\Pi})_{late}$ are computed using only the parts of the fluid that have frozen out in the last fm of the time evolution. In general, we see that when shear viscosity is added to our hydrodynamical evolution the changes in the bulk pressure are not large. The mean percentage change of the bulk pressure $\langle\Pi\rangle$ is small and only increases for more peripheral events. Also, the mean percentage change of the variance of the bulk pressure $\langle\sigma^2_{\Pi}\rangle$ is around $9\%$ across all centrality classes, which means that the shear viscosity slightly increases $\Pi$ and makes the distribution of $\Pi$ only $9\%$ wider on average. The mean percentage changes $\langle \Pi\rangle_{early}$ and $\langle \Pi\rangle_{late}$ are positive and $<10\%$, and one can see that the percentage change of the variance $(\sigma^2_{\Pi})$ decreases significantly at early times while at late times it increases by $\sim 20\%$ for all centrality classes due to the inclusion of shear. This shows that even though the mean bulk pressure is not that affected by the presence of shear, its distribution computed event by event becomes sharper around the mean at early times and gets broadened at late times.
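The event-by-event percentage-change observables of Eq. (\[eqn:perchange\]) can be sketched as follows, assuming each event is represented simply as a list of SPH-particle bulk pressures:

```python
# Sketch of the event-by-event percentage changes of Eq. (eqn:perchange):
# compare the mean and variance of the bulk pressure over the SPH
# particles of one event with (sb) and without (b) shear viscosity in
# the hydrodynamic evolution.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def percentage_change(pi_sb, pi_b):
    """Return the percentage change of the event mean and variance."""
    d_mean = 100.0 * (mean(pi_sb) - mean(pi_b)) / mean(pi_b)
    d_var = 100.0 * (variance(pi_sb) - variance(pi_b)) / variance(pi_b)
    return d_mean, d_var
```

Averaging these per-event numbers within a centrality class yields the entries reported in the tables.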
  Centrality   $\langle \pi^{00}\rangle$   $\sigma^2_{\pi^{00}}$   $\langle \pi^{12}\rangle$   $\sigma^2_{\pi^{12}}$
  ------------ --------------------------- ----------------------- --------------------------- -----------------------
  0-10%        -17.61%                     -19.09%                 -2.87%                      -8.50%
  10-20%       -17.77%                     -18.53%                 -2.25%                      -8.45%
  20-30%       -19.22%                     -18.56%                 -3.48%                      -8.44%
  30-40%       -22.98%                     -18.53%                 -3.26%                      -8.35%
  40-50%       -38.11%                     -19.37%                 -2.81%                      -8.01%
  50-60%       -44.63%                     -19.61%                 -5.05%                      -7.68%

  : The percentage change in the mean values and variance of the $\pi^{00}$ and $\pi^{12}$ components of the shear stress tensor $\pi^{\mu\nu}$ averaged over all events and all SPH particles due to the inclusion of bulk viscosity in the time evolution. These quantities are computed taking into account the parts of the fluid that have frozen out throughout the whole time evolution.[]{data-label="perchangshear"}

While the effects of shear on the bulk pressure are not large, the effects of bulk viscosity on the shear stress tensor are not so trivial. In Table \[perchangshear\] we show the percentage change in the mean values and variance of the $\pi^{00}$ and $\pi^{12}$ components of the shear stress tensor $\pi^{\mu\nu}$ averaged over all events and all SPH particles due to the inclusion of bulk viscosity in the time evolution. These quantities are computed taking into account the parts of the fluid that have frozen out throughout the whole time evolution. We note that since $\pi^{\mu\nu}$ is traceless, $\pi^{00} = \pi^{11}+\pi^{22}+\tau^2 \pi^{33}$. One can see that the inclusion of bulk viscosity considerably affects $\langle \pi^{00}\rangle$: there is a suppression in its average value that increases towards more peripheral collisions while its variance also decreases by $\sim 20\%$ for all centralities due to the nonzero bulk viscosity. Therefore, the distribution of $\pi^{00}$ has a smaller mean and becomes sharper around the mean due to bulk viscosity.
This occurs because bulk viscosity dampens out radial disturbances of pressure and flow, which in turn should decrease the diagonal components of the shear stress tensor. On the other hand, $\pi^{12}$ is only slightly affected by bulk viscosity both in terms of its mean and variance, which could be expected from symmetry arguments. In Table \[perchangshearearly\] we show the corresponding quantities obtained from the parts of the fluid that have already frozen out within 1 fm of the initial time $\tau_0$. In this case, one can see that at early times the modification of the fluid velocity due to bulk viscosity has not yet significantly affected the shear stress tensor. The mean value of $\pi^{00}$ decreases by $< 7\%$ in a way that is almost independent of centrality. This should be contrasted with the results in Table \[perchangshear\], which took into account the modification in this component throughout the whole time evolution due to bulk viscosity, which becomes more significant in peripheral collisions. Its variance decreases by $\sim 13\%$ in the most central collisions while for peripheral collisions the suppression is $\sim 15\%$. Once more, the distribution of $\pi^{12}$ is only slightly affected ($< 10\%$) by the presence of bulk viscosity.
  Centrality   $\langle \pi^{00}\rangle_{early}$   $(\sigma^2_{\pi^{00}})_{early}$   $\langle \pi^{12}\rangle_{early}$   $(\sigma^2_{\pi^{12}})_{early}$
  ------------ ----------------------------------- --------------------------------- ----------------------------------- ---------------------------------
  0-10%        -6.66%                              -12.79%                           -5.94%                              -10.66%
  10-20%       -5.32%                              -11.31%                           -4.87%                              -9.46%
  20-30%       -6.07%                              -12.72%                           -4.81%                              -9.15%
  30-40%       -7.01%                              -14.08%                           -4.80%                              -9.19%
  40-50%       -4.75%                              -9.00%                            -4.75%                              -8.99%
  50-60%       -6.83%                              -15.02%                           -4.63%                              -8.76%

  : The percentage change in the mean values and variance of the $\pi^{00}$ and $\pi^{12}$ components of the shear stress tensor $\pi^{\mu\nu}$ averaged over all events and all SPH particles due to the inclusion of bulk viscosity in the time evolution. These quantities are computed taking into account only the parts of the fluid that have frozen out at early times (between $\tau=\tau_0$ and $\tau=2$ fm). []{data-label="perchangshearearly"}

  Centrality   $\langle \pi^{00}\rangle_{late}$   $(\sigma^2_{\pi^{00}})_{late}$   $\langle \pi^{12}\rangle_{late}$   $(\sigma^2_{\pi^{12}})_{late}$
  ------------ ---------------------------------- -------------------------------- ---------------------------------- --------------------------------
  0-10%        -17.68%                            -29.13%                          -5.94%                             -10.80%
  10-20%       -15.98%                            -29.09%                          -4.80%                             -9.38%
  20-30%       -15.45%                            -28.56%                          -4.77%                             -9.06%
  30-40%       -14.97%                            -28.28%                          -4.88%                             -9.34%
  40-50%       -13.83%                            -27.91%                          -4.80%                             -9.20%
  50-60%       -12.75%                            -26.18%                          -4.50%                             -8.51%

  : The percentage change in the mean values and variance of the $\pi^{00}$ and $\pi^{12}$ components of the shear stress tensor $\pi^{\mu\nu}$ averaged over all events and all SPH particles due to the inclusion of bulk viscosity in the time evolution. These quantities are computed taking into account only the parts of the fluid that have frozen out during the last fm of the time evolution.[]{data-label="perchangshearlate"}

At later times we see a larger effect on the shear stress tensor components from the bulk viscosity.
In Table \[perchangshearlate\] we see that for almost every case both the mean and variance, regardless of centrality class, are significantly larger for late freeze-out (the last $\Delta\tau=1$ fm of the hydrodynamical evolution). This indicates that as the hydrodynamical evolution progresses the effects of bulk viscosity become more visible, which in the end is consistent with the results in Table \[perchangshear\]. This occurs because it takes some time for the bulk viscosity to affect the flow and, consequently, the shear stress tensor. Furthermore, one expects that by lowering the freeze-out temperature (here we use $T_{0}=150$ MeV) one can increase the effects from bulk viscosity. For the same reason, going from RHIC to LHC energies one would expect that bulk viscosity becomes more relevant to the dynamical evolution of the system since at large energies the fluid stays in the QGP phase for a longer period of time. Tables \[percchangebulk\]-\[perchangshearlate\] suggest that the interplay between bulk and shear viscosities has a very non-trivial, non-linear effect on the shear stress tensor and bulk pressure already during the hydrodynamical evolution itself. While the shear only slightly increases the mean value of $\Pi$, the inclusion of bulk viscosity leads to a significant suppression of the shear stress tensor components. This indicates that the expected suppression of flow harmonics due to shear viscosity can be softened by the presence of bulk viscosity. In fact, in our previous work [@Noronha-Hostler:2013gga] we suggested that it may be possible for the bulk viscosity-driven enhancement of the integrated flow harmonics $v_n$ to compensate for the expected damping of these coefficients due to shear viscosity. However, it appears that their relationship is more complicated than we initially believed.
Results for the Flow Harmonics {#sec:results}
==============================

In this section we show the results for both the $p_T$-integrated and the differential flow harmonics, taking into account the effects of bulk and shear viscosities. We use the event plane method [@Poskanzer:1998yz] to calculate the event plane angles $\psi_n$; a detailed explanation of the method as implemented in v-USPhydro can be found in [@Noronha-Hostler:2013gga]. Additionally, in each centrality class we consider 150 events on an event-by-event basis (we have checked that the results found here are robust with respect to the inclusion of more events). Unless stated otherwise, for the $p_T$-integrated $v_n$’s we take the limits of integration to be $p_T=0-5$ GeV. However, due to the previously discussed issues with the bulk viscous corrections within Cooper-Frye freeze-out [@Noronha-Hostler:2013gga], if the viscous transport coefficients are large the overall viscous correction to the particle distribution at freeze-out can become larger in magnitude (and negative) than the equilibrium contribution at intermediate values of $p_T$, which would lead to a negative particle spectrum there (for the coefficients shown in Fig. \[fig:transco\] this problem does not occur). In order to avoid such problems in the spectra and the integrated $v_n$’s when $\zeta/s$ is 10 times larger than that in Eq. (\[eqn:adszeta\]), we did not take into account the negative contribution from the corresponding part of the integral over $p_T$.
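As a rough illustration of the event-plane extraction mentioned above, the sketch below computes $\psi_n$ from the $Q$-vector of one event and then $v_n = \langle\cos n(\phi-\psi_n)\rangle$. The function name and the synthetic event are our own; the actual v-USPhydro implementation, including the resolution correction, is described in [@Noronha-Hostler:2013gga] and is not reproduced here.

```python
import numpy as np

def event_plane_vn(phi, n):
    """Event-plane angle psi_n and flow harmonic v_n from particle azimuthal
    angles phi (1D array); a bare-bones sketch without the event-plane
    resolution correction."""
    psi_n = np.arctan2(np.mean(np.sin(n * phi)), np.mean(np.cos(n * phi))) / n
    vn = np.mean(np.cos(n * (phi - psi_n)))
    return psi_n, vn

# Synthetic single event with dN/dphi proportional to 1 + 2*0.1*cos(2*(phi - 0.3)),
# sampled by acceptance-rejection, so the method should recover v2 ~ 0.1.
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 2.0 * np.pi, 400_000)
accept = rng.uniform(0.0, 1.2, phi.size) < 1.0 + 0.2 * np.cos(2.0 * (phi - 0.3))
psi2, v2 = event_plane_vn(phi[accept], 2)
```

With this many sampled particles, `v2` lands close to the input value $0.1$ and `psi2` close to the input plane angle $0.3$.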
Integrated $v_n$’s {#sec:intvns}
------------------

![(Color online) Ratio between the integrated $v_n$’s of viscous and ideal hydrodynamics of direct pions over all centralities at RHIC computed using the Moments Method (MOM) for the bulk viscosity contribution at freeze-out. The circles correspond to the case where only shear viscosity is taken into account, the squares represent the case with both shear and bulk viscosity, while the diamonds correspond to the case where shear and bulk are included but $\zeta/s$ is multiplied by a factor of 10.[]{data-label="tab:vintmom"}](figs/vint_moments1.eps "fig:"){width="50.00000%"} ![Second panel of the same comparison.](figs/vint_moments2.eps "fig:"){width="50.00000%"}

In Fig. \[tab:vintmom\] we show the ratio between the integrated $v_n$’s of viscous and ideal hydrodynamics for direct pions at RHIC across all centrality classes investigated in this paper. These quantities were computed using the moments method to determine the viscous correction to the particle distribution at freeze-out, with the transport coefficients defined in Section \[sec:bulkshear\]. The dependence of $v_n$ on $n$ is steepest when only shear viscosity is included. Adding bulk viscosity on top of shear slightly increases the $v_n$’s, in accordance with the conclusions of [@Noronha-Hostler:2013gga]. The increase is small because our chosen $\zeta/s$ is significantly smaller than $\eta/s$. This suppression of shear viscosity effects occurs because, as shown in the previous section, the inclusion of bulk viscosity decreases the magnitude of the shear stress tensor components. Finally, when one includes the effect of a “large" bulk viscosity, i.e., $10\,\zeta/s$, the $v_n$’s are shifted upwards, much closer to the ideal case; in this case the bulk viscosity-driven suppression of the shear stress tensor is very significant.
We note here that our initial bulk viscosity is so small that, even after multiplying by a factor of 10, its peak is still not as large as the minimum of the shear viscosity (see Fig. \[fig:transco\]). Thus, we do not expect the bulk viscosity to completely counteract shear viscous effects even in this case. Furthermore, due to the above-mentioned limitations in the $\delta f$, in the case with $10\,\zeta/s$ one can only integrate up to $p_T=0.8$ GeV (which in any case is the integration interval that gives the dominant contribution to this quantity) before the spectrum becomes negative. We note that the inclusion of bulk viscosity has the net effect of decreasing the difference between $v_2$ and $v_3$ in central collisions. In fact, in the case of $10\,\zeta/s$ there is very little difference between $v_2$ and $v_3$ in the most central collisions, though this is no longer true towards peripheral collisions. If the bulk viscosity of the QGP is in fact not much smaller than $\eta/s$, further improvements to the viscous correction $\delta f$ involving bulk and shear are necessary for a more accurate calculation of the $v_n$’s, in order to verify the trend regarding $v_2$ and $v_3$ in central collisions found here.
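The truncation of the negative part of the $p_T$ integral described above can be sketched as follows. The arrays and function name here are hypothetical illustrations (tabulated spectrum and differential $v_n$ on a $p_T$ grid), not the actual freeze-out routine:

```python
import numpy as np

def integrated_vn_clipped(pT, spectrum, vn_pT):
    """pT-integrated v_n weighted by the particle spectrum dN/dpT, with
    bins where the delta-f correction has driven the spectrum negative
    discarded -- mirroring the truncation described in the text."""
    w = np.where(spectrum > 0.0, spectrum, 0.0)  # drop unphysical negative yield
    # simple trapezoidal rule, written out for portability
    trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(pT)))
    return trap(w * vn_pT) / trap(w)
```

Because the weight is set to zero wherever the spectrum is negative, the high-$p_T$ tail where the viscous correction overwhelms the equilibrium contribution simply stops contributing, which is the behaviour described for the $10\,\zeta/s$ case.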
![(Color online) Ratio between the integrated $v_n$’s of viscous and ideal hydrodynamics of direct pions over all centralities at RHIC computed using the Moments Method (MOM) and the Monnai-Hirano (MH) formulas for the bulk viscosity contribution at freeze-out. The filled triangles correspond to the case with only bulk viscosity with $\delta f^{Bulk}$ from MOM, while the empty triangles correspond to the analogous case computed with $\delta f^{Bulk}$ from MH. The solid squares correspond to the case with shear and bulk viscosities with $\delta f^{Bulk}$ from MOM, while the empty squares correspond to the analogous case computed with $\delta f^{Bulk}$ from MH.[]{data-label="tab:vintdfcomp"}](figs/vint_dfcomp1.eps "fig:"){width="50.00000%"} ![Second panel of the same comparison.](figs/vint_dfcomp2.eps "fig:"){width="50.00000%"}
In Fig. \[tab:vintdfcomp\] we compare two different choices of $\delta f^{Bulk}$ viscous corrections within Cooper-Frye freeze-out. As discussed in the previous section, the moments method is derived in [@Denicol:2012cn; @Denicol:2012yr], while MH comes from [@Monnai:2009ad]. At least for the integrated $v_n$’s, there is almost no difference between the two methods for any centrality class or harmonic order. The moments method yields slightly larger integrated $v_n$’s for both the bulk and shear+bulk cases, but the change is very small. This is because the integrated $v_n$’s are dominated by the low-$p_T$ region, where there is little difference between MOM and MH.
Differential $v_n$’s {#eqn:difvns}
--------------------

![(Color online) Comparison of $v_2$ of direct pions across the centrality classes $0-10\%$, $10-20\%$, $20-30\%$ (in column a.), $30-40\%$, $40-50\%$, and $50-60\%$ (in column b.) for RHIC. The solid black line denotes the ideal hydro result, the short-dashed blue line was computed taking into account only shear viscosity, the long-dashed green curve was computed using shear+bulk with the moments method expression for the freeze-out, while the dark red dotted-dashed curve corresponds to shear+bulk with the MH formula.[]{data-label="tab:v2"}](figs/v2c1.eps "fig:"){width="50.00000%"} ![Column b. of the same comparison.](figs/v2c2.eps "fig:"){width="50.00000%"}

![(Color online) Comparison of $v_3$ of direct pions across the centrality classes $0-10\%$, $10-20\%$, $20-30\%$ (in column a.), $30-40\%$, $40-50\%$, and $50-60\%$ (in column b.) for RHIC. The solid black line denotes the ideal hydro result, the short-dashed blue line was computed taking into account only shear viscosity, the long-dashed green curve was computed using shear+bulk with the moments method expression for the freeze-out, while the dark red dotted-dashed curve corresponds to shear+bulk with the MH formula.[]{data-label="tab:v3"}](figs/v3c1.eps "fig:"){width="50.00000%"} ![Column b. of the same comparison.](figs/v3c2.eps "fig:"){width="50.00000%"}

![(Color online) Comparison of $v_4$ of direct pions across the centrality classes $0-10\%$, $10-20\%$, $20-30\%$ (in column a.), $30-40\%$, $40-50\%$, and $50-60\%$ (in column b.) for RHIC. The solid black line denotes the ideal hydro result, the short-dashed blue line was computed taking into account only shear viscosity, the long-dashed green curve was computed using shear+bulk with the moments method expression for the freeze-out, while the dark red dotted-dashed curve corresponds to shear+bulk with the MH formula.[]{data-label="tab:v4"}](figs/v4c1.eps "fig:"){width="50.00000%"} ![Column b. of the same comparison.](figs/v4c2.eps "fig:"){width="50.00000%"}

![(Color online) Comparison of $v_5$ of direct pions across the centrality classes $0-10\%$, $10-20\%$, $20-30\%$ (in column a.), $30-40\%$, $40-50\%$, and $50-60\%$ (in column b.) for RHIC. The solid black line denotes the ideal hydro result, the short-dashed blue line was computed taking into account only shear viscosity, the long-dashed green curve was computed using shear+bulk with the moments method expression for the freeze-out, while the dark red dotted-dashed curve corresponds to shear+bulk with the MH formula.[]{data-label="tab:v5"}](figs/v5c1.eps "fig:"){width="50.00000%"} ![Column b. of the same comparison.](figs/v5c2.eps "fig:"){width="50.00000%"}
In this section we consider the $p_T$-dependent $v_n$’s across all centrality classes and look for the effects of bulk and shear viscosity. In Figs. \[tab:v2\]-\[tab:v5\] we show $v_2(p_T)$-$v_5(p_T)$, respectively. We do not include the $\eta/s+10\,\zeta/s$ case because, for such a large $\zeta/s$, we are only able to calculate the spectrum reliably up to $p_T < 1$ GeV for MOM (for MH one can integrate up to $p_T< 1.5$ GeV). One can see that the viscous corrections considerably change the $v_n(p_T)$’s, especially at large $p_T$. For all the $v_n(p_T)$’s and all centralities we see the same pattern: ideal hydrodynamics (solid black line) gives the largest $v_n(p_T)$, followed by the viscous case with shear and bulk computed using the MH formula (dark red dotted-dashed curve), and then shear+bulk computed using the MOM expression (long-dashed green line), whose $v_n(p_T)$’s are slightly larger than those of the shear-only case (short-dashed blue line) for $p_T < 1.5$ GeV. These results show that the bulk viscosity-driven suppression of shear effects also occurs for the differential flow harmonics. In the previous section we showed that both MH and MOM give very similar integrated $v_n$’s; the same cannot be said about the $p_T$-differential flow harmonics.
Using the MOM correction, we found that including $\eta/s+\zeta/s$ effects uniformly decreases the $v_n(p_T)$’s, with the strongest effect in the most peripheral collisions. Additionally, higher order $v_n$’s are more strongly affected by the combined effect of shear and bulk viscosities. We also note that for $v_2(p_T)$ the MH curves with shear and bulk nearly match the ideal curves for all centrality classes; the same does not occur for the higher order flow coefficients. In fact, as discussed in [@Noronha-Hostler:2013gga], the MH correction to the particle distribution diverges quickly at large $p_T$. The curves computed with the MOM method start to decrease (faster than those including only shear viscosity) for $p_T > 1.5$ GeV.

Equal Shear and Bulk viscosities {#eqn:equal}
--------------------------------

Up to this point we have always assumed that the bulk viscosity is significantly smaller than the shear viscosity. However, given our very limited knowledge of the magnitude of $\zeta/s$ in the QGP, there is no a priori reason why that must be the case. While the hydrodynamic code itself places no restriction on using larger values of $\zeta/s$ (aside from possible cavitation effects at sufficiently large $\zeta/s$), limitations of the $\delta f$ corrections for bulk viscosity unfortunately prevent us from including a bulk viscosity as large as the generally accepted shear viscosity $\sim 1/4\pi$. However, in order to understand what happens when the bulk and shear viscosities have equal magnitude, we can instead consider a very small shear viscosity of the same order of magnitude as our bulk viscosity. In this section we consider the temperature-independent situation where $\zeta/s=\eta/s=0.007$ and compare it to the case where only shear viscosity $\eta/s=0.007$ is included.
Because the bulk viscosity generally increases the $v_n$’s (both integrated and $p_T$-dependent) and the shear viscosity generally decreases them, it is possible that, when both are of the same order of magnitude, they reproduce the ideal results (an indication of this possibility was already found in the previous section when comparing the shear+bulk MH results for $v_2(p_T)$ with the ideal values). Here we only look at the $20-30\%$ centrality class.

![(Color online) Ratio between the integrated $v_n$’s of viscous and ideal hydrodynamics of direct pions in the $20-30\%$ centrality class at RHIC computed using the Moments Method (MOM) for the pure shear case $\eta/s=0.007$ (black dots) and the bulk + shear calculation where $\zeta/s=\eta/s=0.007$ (red squares). []{data-label="fig:vnequal"}](figs/equal.eps){width="50.00000%"}

In Fig. \[fig:vnequal\] the integrated $v_n$’s of direct pions are shown for RHIC’s $20-30\%$ most central collisions. While such a small shear viscosity of $\eta/s=0.007$ has an extremely small effect on the integrated $v_n$’s, one can still see that it decreases them, with the higher order $n$’s most strongly affected. However, when bulk viscosity is included, the integrated $v_n$’s return almost precisely to the ideal hydrodynamics result (with the exception of $v_5$, which still remains slightly below it). This indicates that it may be possible for bulk viscosity to compensate for the effects of shear viscosity in the integrated $v_n$’s when they are both of the same order of magnitude.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![(Color online) Comparison between the $v_n(p_T)$’s of direct pions for the centrality class $20-30\%$ at RHIC when bulk and shear viscosities have equal magnitude. The solid black curves denote the ideal hydrodynamics results, the short-dashed blue curve shows the result in the case where there is only shear viscosity $\eta/s=0.007$ while the long-dashed green curve corresponds to the case where $\zeta/s=\eta/s=0.007$ computed using the MOM approach.[]{data-label="fig:vnequalpt"}](figs/vnptsmall1.eps "fig:"){width="50.00000%"} ![(Color online) Comparison between the $v_n(p_T)$’s of direct pions for the centrality class $20-30\%$ at RHIC when bulk and shear viscosities have equal magnitude. 
The solid black curves denote the ideal hydrodynamics results, the short-dashed blue curve shows the result in the case where there is only shear viscosity $\eta/s=0.007$, while the long-dashed green curve corresponds to the case where $\zeta/s=\eta/s=0.007$ computed using the MOM approach.[]{data-label="fig:vnequalpt"}](figs/vnptsmall2.eps "fig:"){width="50.00000%"} -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- In Fig. \[fig:vnequalpt\] we observe the $p_T$ dependent flow harmonics of direct pions in the centrality class $20-30\%$ for the small $\eta/s$ case. One can see that there is no visible difference between the ideal case and that of a small shear viscosity of $\eta/s=0.007$. However, when the bulk and shear viscosities are identical the $p_T$ dependent $v_n$’s increase, which indicates that the effect of bulk viscosity dominates the $p_T$ dependent $v_n$’s when bulk and shear are of the same order of magnitude. This was already hinted at in Table
\[tab:vintmom\] when we used $10\zeta/s$. However, in that case we were limited to a small $p_T$ range over which we could integrate our spectrum due to the problems with the $\delta f$. Here we avoid that problem due to the small values of $\zeta/s$ and $\eta/s$. Conclusions {#sec:conclu} =========== In this paper we used v-USPhydro (a 2+1 Lagrangian hydrodynamical model) to study the effects of both bulk and shear viscosities on the hydrodynamical evolution of the QGP and the resulting anisotropic collective flow harmonics at RHIC. We found that even though in our equations of motion the shear stress tensor $\pi^{\mu\nu}$ and the bulk scalar $\Pi$ do not couple directly, their indirect coupling via the flow velocity is still strong enough for them to influence each other in a nonlinear fashion. We found that the inclusion of even a small bulk viscosity decreases the well-known shear viscosity-induced suppression of both the integrated and differential flow harmonics, bringing them closer to their values in ideal hydrodynamical calculations. This is a new effect brought in by bulk viscosity in event-by-event hydrodynamic simulations. Furthermore, we found that when the bulk and shear viscosities are roughly of the same order of magnitude the bulk viscosity negates a significant portion of the contribution from shear viscosity to the integrated flow harmonics. In fact, for the differential flow harmonics the bulk viscosity dominates and actually increases the flow harmonics, the opposite of the effect found when only shear viscosity is considered. Even for a small $\zeta/s$, we find that the bulk viscosity tempers the suppression typically produced by the shear viscosity. When one looks only at the relativistic hydrodynamical expansion, one can clearly see that the presence of bulk viscosity suppresses the components of the shear stress tensor. Additionally, we find that this effect plays a larger role the longer the system expands.
Thus, for lower freeze-out temperatures and for higher collision energies, where more time is spent in the hydrodynamical expansion, one would expect the effects of bulk viscosity to be more relevant. The effects of viscosity are most relevant for peripheral collisions and for higher order flow harmonics, for both the integrated and differential $v_n$’s. Furthermore, we find for the integrated $v_n$’s that, when we include a large $\zeta/s$, $v_2\approx v_3$ for the most central collisions but $v_3$ is more significantly suppressed for more peripheral collisions. When $\eta/s=\zeta/s$ (but both are small) $v_4$ essentially returns to its value in ideal hydrodynamics, while the same is not true for $v_5$. This could indicate that the higher order flow harmonics may be vital in helping us estimate $\eta/s$ and $\zeta/s$ within relativistic heavy-ion collisions, because they do not allow for a complete compensation of shear viscosity effects by bulk viscosity. Our calculations need to be improved in a number of ways. First, in this paper we did not look into the flow harmonics of hadronic species other than pions, and no particle decays or hadronic afterburner effects have been included. Clearly, this must be improved to allow for a comparison to data and to adequately evaluate the role played by bulk viscosity in the flow harmonics of the QGP formed in heavy-ion collisions. Also, different sets of initial conditions (we have only used MC Glauber in this paper) and different collision systems and energies should be investigated. Furthermore, the considerable difference found in the differential anisotropic flow coefficients computed using two different $\delta f$ formulas is an issue that should serve as a motivation for finding a better behaved expression for the species-dependent viscous corrections at freeze-out including both bulk and shear effects.
Moreover, as discussed in [@Dumitru:2007qr], constraints on the entropy production generate a correlation between the values of transport coefficients and the initial time for hydrodynamics. If bulk viscosity effects compensate the effects from shear, $\eta/s$ could be larger than used here and, consequently, $\tau_0$ may actually be larger than usually considered in hydrodynamic simulations. As mentioned above, the bulk viscosity-driven suppression of the shear stress tensor found here occurs in a very indirect way mediated by the modification of the flow velocity due to bulk viscosity. Indeed, in our equations of motion we do not include the known terms [@Denicol:2012cn] which display a direct coupling between $\Pi$ and $\pi^{\mu\nu}$. It would be interesting to see if the effect discussed here remains when the more general equations of motion of [@Denicol:2012cn] and transport coefficients [@Denicol:2014vaa] are used in the hydrodynamical evolution. We hope to address this question in the near future [@comment]. We remark that the actual magnitude (and temperature dependence) of $\zeta/s$ in heavy ion collisions is largely unknown and it is conceivable that depending on the value of $\zeta/s$ in the QGP, the bulk viscosity-driven suppression of shear viscosity effects on the flow harmonics found here may require a re-evaluation of the previous estimates of $\eta/s$ extracted from comparisons of hydrodynamic calculations (which did not include bulk viscosity effects) to heavy ion data. Acknowledgements {#acknowledgements .unnumbered} ================ We thank G. S. Denicol for helping us with the implementation of shear viscosity effects in the v-USPhydro code. We thank G. Torrieri for insightful discussions about the effects of bulk viscosity in heavy ion collisions and A. Dumitru for comments about the thermalization time and the values of transport coefficients. 
This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). This work was also financially supported by the Helmholtz International Center for FAIR within the framework of the LOEWE program (Landesoffensive zur Entwicklung Wissenschaftlich-Ökonomischer Exzellenz) launched by the State of Hesse. Tests of the numerical code =========================== Because of the complexity of the viscous relativistic hydrodynamical equations, it is vital to test the accuracy of our numerical code. Fortunately, there are now well-known numerical and semi-analytical solutions for this purpose. TECHQM ------ One aspect of the TECHQM bulk evolution working group [@techqm] was to ensure the overall accuracy of relativistic hydrodynamical codes, and solutions for both ideal and viscous hydro evolution (with only shear viscosity) have become available. We use an ideal equation of state and the $b=0$ fm central Au-Au optical Glauber initial condition at RHIC $\sqrt{s}=200$ A GeV. We take the starting time for hydrodynamics to be $\tau_0=0.6$ fm, $\eta/s=0.08$, $\tau_{\pi}=3\eta/(sT)$, and a freeze-out temperature of $T_0=130$ MeV. In Fig. \[tab:techqm\] we compare our results (dots) to the TECHQM results (lines) [@techqm] at the times $\tau=0.6$ (solid black lines), $1.6$ (blue short dashed lines), and $2.6$ fm (red long dashed lines). One can clearly see that the results match well for multiple time steps. In our code, the SPH smoothing parameter is $h=0.2$ fm, the total number of SPH particles is $N_{SPH}= 25432$, and the time step is $d\tau=0.02$ fm.
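A much simpler check of the same kind, stated here only to illustrate what such code tests verify (it is not part of the TECHQM comparison itself): for 0+1D boost-invariant Bjorken flow with a conformal ideal equation of state $P=\varepsilon/3$, energy conservation $d\varepsilon/d\tau=-(\varepsilon+P)/\tau$ has the closed-form solution $\varepsilon(\tau)=\varepsilon_0(\tau_0/\tau)^{4/3}$, which any hydro code must reproduce in this limit. A sketch with illustrative numbers:

```python
def bjorken_ideal(eps0, tau0, tau1, dtau=1e-4):
    """Integrate d(eps)/d(tau) = -(eps + P)/tau with the conformal
    equation of state P = eps/3, using a simple Euler step (sufficient
    at this step size for a percent-level check)."""
    eps, tau = eps0, tau0
    while tau < tau1 - 1e-12:
        p = eps / 3.0
        eps += -(eps + p) / tau * dtau
        tau += dtau
    return eps

eps_num = bjorken_ideal(30.0, 0.6, 2.6)        # illustrative GeV/fm^3 scale
eps_exact = 30.0 * (0.6 / 2.6) ** (4.0 / 3.0)  # analytic tau^(-4/3) law
```

The numerical and analytic results agree at the sub-permille level, mirroring the kind of agreement shown in the TECHQM comparison figures.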
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![(Color online) Comparison between the TECHQM results (lines) and our results (dots) for the energy density $\varepsilon$, the shear stress components $\pi^{11}$ and $\pi^{33}$, as well as the x-component of the flow velocity. The comparison is made at the times $\tau=0.6$ (solid black lines), $1.6$ (blue short dashed lines), and $2.6$ fm (red long dashed lines).[]{data-label="tab:techqm"}](figs/etechqm.eps "fig:"){width="50.00000%"} ![(Color online) Comparison between the TECHQM results (lines) and our results (dots) for the energy density $\varepsilon$, the shear stress components $\pi^{11}$ and $\pi^{33}$, as well as the x-component of the flow velocity. The comparison is made at the times $\tau=0.6$ (solid black lines), $1.6$ (blue short dashed lines), and $2.6$ fm (red long dashed lines).[]{data-label="tab:techqm"}](figs/pi11techqm.eps "fig:"){width="50.00000%"} ![(Color online) Comparison between the TECHQM results (lines) and our results (dots) for the energy density $\varepsilon$, the shear stress components $\pi^{11}$ and $\pi^{33}$, as well as the x-component of the flow velocity. 
The comparison is made at the times $\tau=0.6$ (solid black lines), $1.6$ (blue short dashed lines), and $2.6$ fm (red long dashed lines).[]{data-label="tab:techqm"}](figs/pi33techqm.eps "fig:"){width="50.00000%"} ![(Color online) Comparison between the TECHQM results (lines) and our results (dots) for the energy density $\varepsilon$, the shear stress components $\pi^{11}$ and $\pi^{33}$, as well as the x-component of the flow velocity. The comparison is made at the times $\tau=0.6$ (solid black lines), $1.6$ (blue short dashed lines), and $2.6$ fm (red long dashed lines).[]{data-label="tab:techqm"}](figs/vxtechqm.eps "fig:"){width="50.00000%"} ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- $SO(3)\otimes SO(1,1)\otimes Z_2$ Test of Conformal Israel-Stewart Dynamics {#sheartest} --------------------------------------------------------------------------- Using symmetry arguments, Gubser [@Gubser:2010ze] found analytical solutions of ideal and Navier-Stokes conformal hydrodynamics that are invariant under $SO(3)\otimes SO(1,1)\otimes Z_2$ (a subgroup of $SO(2,4)$). 
The symmetries imply that the solution is radially symmetric in the transverse plane and boost invariant with the flow $$\begin{aligned} u_{\tau}&=&-\cosh \left[\tanh^{-1}\left(\frac{2q^2\tau r}{1+q^2\tau^2+q^2 r^2}\right)\right]\nonumber\\ u_{r}&=&\sinh \left[\tanh^{-1}\left(\frac{2q^2\tau r}{1+q^2\tau^2+q^2 r^2}\right)\right]\nonumber\\ u_{\phi}&=&u_{\eta}=0\end{aligned}$$ where $q$ is a free parameter with dimensions of energy. This approach has been used in [@Marrochio:2013wla] to find the first analytical and semi-analytical solutions of conformal Israel-Stewart hydrodynamics which include nontrivial dynamics in the transverse plane. These solutions are described in detail in [@Marrochio:2013wla] and they provide a very stringent test of the accuracy of viscous hydrodynamic codes. Since the equations of motion involving shear viscosity used here have the same structure as those in [@Marrochio:2013wla], we can directly test the accuracy of v-USPhydro in this case. We also note that novel analytical solutions of conformal Israel-Stewart hydrodynamics with full 3+1 dynamics can be found in [@Hatta:2014gqa; @Hatta:2014gga]. Here we compare the results from v-USPhydro to the semi-analytical solution [@Marrochio:2013wla] in the case where $\eta/s=0.2$, $\tau_\pi=5(\eta/s)/T$, and $q=1$ fm$^{-1}$. The comparison involving the temperature, the flow, and a few components of the shear stress tensor can be found in Fig. \[tab:gubser\]. For our comparison we used $h=0.1$, $d\tau=0.001$ fm, $\tau_0=1$ fm, and a total of $N_{SPH}= 40401$ SPH particles. We see that v-USPhydro is able to match the semi-analytical solution quite well at early times (and the agreement remains at later times).
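As a quick numerical illustration (a sketch with sample points, not part of the comparison itself), the Gubser profile above can be checked for proper normalization: $u_\tau$ and $u_r$ share the same hyperbolic argument, so $u_\tau^2-u_r^2=1$ at every point (the overall sign of $u_\tau$ in the formula reflects the metric convention for the covariant component).

```python
import math

def gubser_flow(tau, r, q=1.0):
    """Magnitudes of u_tau and u_r of the Gubser profile, with argument
    k = arctanh( 2 q^2 tau r / (1 + q^2 tau^2 + q^2 r^2) )."""
    kappa = 2 * q**2 * tau * r / (1 + q**2 * tau**2 + q**2 * r**2)
    k = math.atanh(kappa)  # kappa < 1 for all tau, r (by AM-GM), so atanh is safe
    return math.cosh(k), math.sinh(k)

# q = 1 fm^-1 as in the comparison above; sample a few (tau, r) points
for tau, r in [(1.0, 0.0), (1.0, 0.5), (1.2, 2.0), (1.5, 3.0)]:
    u_tau, u_r = gubser_flow(tau, r)
    assert abs(u_tau**2 - u_r**2 - 1.0) < 1e-12
```

The same normalization check is a useful runtime diagnostic for the flow field produced by any hydro code being validated against this solution.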
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![(Color online) Comparison between the semi-analytical solutions of [@Marrochio:2013wla] ( lines) and v-USPhydro (dots). Here we compare for the time steps $\tau=1.0$ fm (first time step, which sets the initial condition for the fields) (solid black lines), $1.2$ fm (blue short dashed lines), and $1.5$ fm (red long dashed lines).[]{data-label="tab:gubser"}](figs/Tgub.eps "fig:"){width="50.00000%"} ![(Color online) Comparison between the semi-analytical solutions of [@Marrochio:2013wla] ( lines) and v-USPhydro (dots). Here we compare for the time steps $\tau=1.0$ fm (first time step, which sets the initial condition for the fields) (solid black lines), $1.2$ fm (blue short dashed lines), and $1.5$ fm (red long dashed lines).[]{data-label="tab:gubser"}](figs/uxgub.eps "fig:"){width="50.00000%"} ![(Color online) Comparison between the semi-analytical solutions of [@Marrochio:2013wla] ( lines) and v-USPhydro (dots). 
Here we compare for the time steps $\tau=1.0$ fm (first time step, which sets the initial condition for the fields) (solid black lines), $1.2$ fm (blue short dashed lines), and $1.5$ fm (red long dashed lines).[]{data-label="tab:gubser"}](figs/pi11gub.eps "fig:"){width="50.00000%"} ![(Color online) Comparison between the semi-analytical solutions of [@Marrochio:2013wla] ( lines) and v-USPhydro (dots). Here we compare for the time steps $\tau=1.0$ fm (first time step, which sets the initial condition for the fields) (solid black lines), $1.2$ fm (blue short dashed lines), and $1.5$ fm (red long dashed lines).[]{data-label="tab:gubser"}](figs/pi33gub.eps "fig:"){width="50.00000%"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Details of the Cooper-Frye freeze-out in the SPH formalism {#detailsCF} ========================================================== Our distribution function for a single particle species that includes effects from both shear and bulk viscosity is $$f_{p}=f_{0p}\left[1+\left(1-af_{0p}\right)\left(\delta f^{Bulk}_{p}+\delta f^{Shear}_{p}\right)\right]$$ where $p^\mu$ is the particle on-shell momentum. 
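The derivation that follows expands the equilibrium factor $1/(\mathrm{e}^{x}+a)$, with $x=p^\mu u_\mu/T_0>0$, as the geometric series $\sum_n(-a)^n \mathrm{e}^{-(n+1)x}$. A quick numerical sanity check of this expansion (an illustration only, not part of the v-USPhydro code):

```python
import math

def f0(x, a):
    """Equilibrium distribution: a = +1 for fermions, -1 for bosons,
    0 for classical Boltzmann statistics."""
    return 1.0 / (math.exp(x) + a)

def f0_series(x, a, n_terms=100):
    """Geometric-series form used in the hypersurface integrals (x > 0)."""
    return sum((-a) ** n * math.exp(-(n + 1) * x) for n in range(n_terms))

# the truncated series reproduces the closed form for all three statistics
for a in (1, -1, 0):
    for x in (0.5, 1.0, 3.0):
        assert abs(f0(x, a) - f0_series(x, a)) < 1e-12
```

The series converges fastest for the Fermi case (alternating signs) and slowest for the Bose case at small $x$, which is why the freeze-out integrals keep a sizable number of terms in $n$.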
The ideal distribution, $f_{0p}$, is defined as $$f_{0p}=\frac{1}{e^{(p^{\mu}u_{\mu})/T_0}+a}$$ where $a=1$ for fermions, $-1$ for bosons, and $0$ for classical Boltzmann statistics. For the ideal component we have $$f_{0p}=\sum_{n=0}^{\infty}\left(-a\right)^n e^{-(n+1)\frac{p^{\mu}u_{\mu}}{T_0}}$$ whereas $$f_{0p}\left(1-af_{0p}\right)=\sum_{n=0}^{\infty}\left( n+1\right)\left(-a\right)^n e^{-(n+1)\frac{p^{\mu}u_{\mu}}{T_0}}$$ and $$\label{eqn:cordis} f_{p}=\sum_{n=0}^{\infty}\left(-a\right)^n e^{-(n+1)\frac{p^{\mu}u_{\mu}}{T_0}}\left(1+ \left( n+1\right) \left[\delta f^{Bulk}_{p}+\delta f^{Shear}_{p}\right]\right)\,.$$ In Cartesian coordinates the scalar product of the particle momentum and the hypersurface normal vector (see Appendix \[normalvector\]), $n^\mu$, is $$p^{\mu}\cdot n_{\mu}=E n_t+p^x n_x+p^y n_y +p^z n_z\,.$$ Switching to hyperbolic coordinates, we have $$\begin{aligned} p^{\mu}\cdot u_{\mu}&=& m_{\perp} u_{\tau} \text{cosh}\left(\eta-y\right) - \vec{p}_{\perp}\cdot \vec{u}_{\perp} \\ p^{\mu}\cdot n_{\mu}&=& m_{\perp} n_{\tau} \text{cosh}\left(\eta-y\right) - \vec{p}_{\perp}\cdot \vec{n}_{\perp} \\ u^{\mu} \cdot n_{\mu}&=& u_{\tau}n_{\tau}+u^{x}n_x+u^y n_y\,.\end{aligned}$$ In the SPH formalism the integral over the isothermal hypersurface is written in terms of a sum of SPH particles as $$\label{spectraSPH} \frac{dN}{dyd^2p_T} = \frac{g}{(2\pi)^3}\,\sum_{\alpha=1}^{N_{SPH}} \int_{-\infty}^{\infty}d\eta_\alpha\frac{(p\cdot n)_\alpha}{ (n\cdot u)_\alpha}\frac{\nu_\alpha}{\sigma_\alpha} \,f(T_{FO}, (p\cdot u)_\alpha,\Pi_\alpha,\pi^{\mu\nu}_\alpha)$$ where $N_{SPH}$ is the total number of SPH particles, $(n_\mu)_\alpha$ is the normal vector of the isothermal hypersurface reconstructed using the $\alpha$-th SPH particle, $(u_\mu)_\alpha$ is the 4-velocity of the SPH particle, $\Pi_\alpha$ is the bulk scalar of the SPH particle, there is an integral over the space-time rapidity of each SPH particle $\eta_\alpha$, and $\pi^{\mu\nu}_\alpha$ is the shear stress
tensor of the SPH particle. Then, the contribution from a SPH particle to the ideal distribution function is $$f^{(\alpha)}_{0p}=e^{ \vec{p}_{\perp}\cdot \vec{u}^{(\alpha)}_{\perp}/T_0}e^{- m_{\perp} u^{(\alpha)}_{\tau}/T_0 \text{cosh}\left(\eta_\alpha-y\right)}$$ Substituting this into Eq. (\[eqn:cordis\]), we find $$\label{eqn:cordiswithns} f^{(\alpha)}_{p}=\sum_{n=0}^{\infty}\left(-a_\alpha\right)^n \lambda_\alpha^{n+1} e^{-(n+1)m_{\perp} u^{(\alpha)}_{\tau}/T_0 \text{cosh}\left(\eta_\alpha-y\right)}\left(1+ \left( n+1\right) \left[\delta f^{(\alpha)Bulk}_{p}+\delta f^{(\alpha)Shear}_{p}\right]\right)$$ where $\lambda_\alpha=e^{ \vec{p}_{\perp}\cdot \vec{u}^{(\alpha)}_{\perp}/T_0}$. The integral over the isothermal hypersurface becomes $$\begin{aligned} \frac{dN}{dyd^2p_T}& =& \frac{g}{(2\pi)^3}\,\sum_{\alpha=1}^{N_{SPH}}\frac{1}{ (n\cdot u)_\alpha} \frac{\nu_\alpha}{\sigma_\alpha} \left\{ m_{\perp} n_{\tau}^{(\alpha)} \int_{-\infty}^{\infty}d\eta_\alpha \, \text{cosh}\left(\eta_\alpha-y\right) f(T_{FO}, (p\cdot u)_\alpha,\Pi_\alpha,\pi^{\mu\nu}_\alpha) \right. \nonumber \\ & & + \left. \left[p^x n_x^{(\alpha)} +p^y n_y^{(\alpha)}\right] \int_{-\infty}^{\infty}d\eta_\alpha \,f(T_{FO}, (p\cdot u)_\alpha,\Pi_\alpha,\pi^{\mu\nu}_\alpha)\right\}\end{aligned}$$ where we can then substitute in $$\left(q_{\nu}\right)_{\alpha}=\frac{\left(n_{\nu}\right)_{\alpha}}{\left(n\cdot u\right)_{\alpha}}\frac{\nu_{\alpha}}{\sigma_{\alpha}}$$ such that $$\begin{aligned} \frac{dN}{dyd^2p_T}& =& \frac{g}{(2\pi)^3}\,\sum_{\alpha=1}^{N_{SPH}}\left\{ m_{\perp} q_{0\,\alpha} \int_{-\infty}^{\infty}d\eta_\alpha \, \text{cosh}\left(\eta_\alpha-y\right) f(T_{FO}, (p\cdot u)_\alpha,\Pi_\alpha,\pi^{\mu\nu}_\alpha) \right. \\ & & + \left. 
\left(\mathbf{p}_T\cdot \mathbf{q}_T\right)_\alpha \int_{-\infty}^{\infty}d\eta_\alpha \,f(T_{FO}, (p\cdot u)_\alpha,\Pi_\alpha,\pi^{\mu\nu}_\alpha)\right\}\nonumber\\ & =& \frac{g}{(2\pi)^3}\,\sum_{\alpha=1}^{N_{SPH}}\,% \left[ q_{0 \,\alpha} \,\mathcal{I}_1(\alpha,m,T_{FO})-(\mathbf{p}_T \cdot \mathbf{q}_{T})_\alpha\, \mathcal{I}_2(\alpha,m,T_{FO}) \right]\,\end{aligned}$$ where $$\begin{aligned} \mathcal{I}_1(\alpha,m,T_{FO})&=& m_{\perp} \int_{-\infty}^{\infty}d\eta_\alpha \, \text{cosh}\left(\eta_\alpha-y\right) f(T_{FO}, (p\cdot u)_\alpha,\Pi_\alpha,\pi^{\mu\nu}_\alpha) \nonumber\\ \mathcal{I}_2(\alpha,m,T_{FO})&=& \int_{-\infty}^{\infty}d\eta_\alpha \,f(T_{FO}, (p\cdot u)_\alpha,\Pi_\alpha,\pi^{\mu\nu}_\alpha) \nonumber\\\end{aligned}$$ We can now insert the distribution function in Eq. (\[eqn:cordiswithns\]) $$\begin{aligned} \mathcal{I}_1(\alpha,m,T_{0})&=& m_{\perp}\sum_{n=0}^{\infty}\left(-a_\alpha\right)^n \lambda^{n+1}_\alpha \int_{-\infty}^{\infty}d\eta_\alpha \, \text{cosh}\left(\eta_\alpha-y\right) e^{-(n+1)m_{\perp} \frac{u_{\tau}^{(\alpha)}}{T_0} \text{cosh}\left(\eta_\alpha-y\right)} \nonumber \\ &\times & \left[1+ \left( n+1\right) \left(\delta f^{(\alpha)Bulk}_{p}+\delta f^{(\alpha)Shear}_{p}\right)\right] \nonumber\\ \mathcal{I}_2(\alpha,m,T_{0})&=& \,\sum_{n=0}^{\infty}\left(-a_\alpha\right)^n \lambda^{n+1}_\alpha \int_{-\infty}^{\infty}d\eta_\alpha\, e^{-(n+1)m_{\perp} \frac{u_{\tau}^{(\alpha)}}{T_0} \text{cosh}\left(\eta_\alpha-y\right)}\nonumber \\ &\times &\,\left[1+ \left( n+1\right) \left(\delta f^{(\alpha)Bulk}_{p}+\delta f^{(\alpha)Shear}_{p}\right)\right] \nonumber\\\end{aligned}$$ but we already know a portion of this from the combination of the ideal and bulk in [@Noronha-Hostler:2013gga], which we will refer to here as $I_1^{\alpha+b}$ and $I_2^{\alpha+b}$, so $$\begin{aligned} \mathcal{I}_1(\alpha,m,T_{FO})&=& I_1^{\alpha+b}+ m_{\perp} \sum_{n=0}^{\infty}\left( n+1\right) \left(-a_\alpha\right)^n \lambda^{n+1}_\alpha 
\int_{-\infty}^{\infty}d\eta_\alpha \, \text{cosh}\left(\eta_\alpha-y\right) e^{-(n+1)m_{\perp} u_{\tau}^{(\alpha)}/T_0 \text{cosh}\left(\eta_\alpha-y\right)} \nonumber \\ &\times& \delta f^{(\alpha)Shear}_{p}\ \nonumber\\ \mathcal{I}_2(\alpha,m,T_{FO})&=& I_2^{\alpha+b}+ \,\sum_{n=0}^{\infty}\left( n+1\right) \left(-a_\alpha\right)^n \lambda_\alpha^{n+1} \int_{-\infty}^{\infty}d\eta_\alpha\, e^{-(n+1)m_{\perp} u_{\tau}^{(\alpha)}/T_0 \text{cosh}\left(\eta_\alpha-y\right)}\delta f^{(\alpha)Shear}_{p}\ \nonumber\\\end{aligned}$$ Details about $\delta f$ for shear ---------------------------------- The correction term from shear viscosity effects for a given particle species is $$\begin{aligned} \delta f^{(i)Shear}_{p}&=&\frac{1}{2s_0 \,T_0^3}\pi^{\mu\nu}p_{\mu}p_{\nu}\,\end{aligned}$$ where $s_0$ is the entropy density at freeze-out. Using the properties of the shear stress tensor we find, in explicit form, $$\begin{aligned} \pi^{\mu\nu}p_{\mu}p_{\nu}&=& m_{\perp}^2 \left[\pi^{00} \text{cosh}^2(\eta- y) +\tau^2\pi^{33} \text{sinh}^2(\eta- y) \right]+p^2_x\pi^{11}+p_y^2\pi^{22}+2p_x p_y \pi^{12}\end{aligned}$$ Substituting that expression in the shear correction term, one obtains for each SPH particle $$\begin{aligned} \mathcal{I}_1(\alpha,m,T_{0})&=& I_1^{\alpha+b}+ \frac{m_{\perp}}{2s_0 T_0^3}\sum_{n=0}^{\infty}\left( n+1\right) \left(-a_\alpha\right)^n \lambda_\alpha^{n+1} \int_{-\infty}^{\infty}d\eta_\alpha \, \text{cosh}\left(\eta_\alpha-y\right) \nonumber \\ &\times& e^{-(n+1)m_{\perp} u_{\tau}^{(\alpha)}/T_0 \text{cosh}\left(\eta_\alpha-y\right)} \pi_\alpha^{\mu\nu}p_{\mu}p_{\nu}\nonumber\\ \mathcal{I}_2(\alpha,m,T_{0})&=& I_2^{\alpha+b}+ \frac{1}{2s_0 T_0^3}\sum_{n=0}^{\infty}\left( n+1\right) \left(-a_\alpha\right)^n \lambda_\alpha^{n+1} \int_{-\infty}^{\infty}d\eta_\alpha e^{-(n+1)m_{\perp} u_{\tau}^{(\alpha)}/T_0 \text{cosh}\left(\eta_\alpha-y\right)}\pi_\alpha^{\mu\nu}p_{\mu}p_{\nu}\,. 
\nonumber \\\end{aligned}$$ After some manipulations, our final equations become $$\begin{aligned} \mathcal{I}_1(\alpha,m,T_{0})&=& I_1^{\alpha+b}+ \frac{1}{s_0 T_0^3}\sum_{n=0}^{\infty}\left( n+1\right) \left(-a_\alpha\right)^n \lambda_\alpha^{n+1} \nonumber\\ & \times & \left\{ \left(E\left[p^2_x\pi_\alpha^{11}+p_y^2\pi_\alpha^{22}+2p_x p_y \pi_\alpha^{12}\right]+\frac{1}{4} E^3 \left[3\pi_\alpha^{00}-\tau^2\pi_\alpha^{33}\right] \right)K_1\left((n+1)\frac{E u_{\tau}^{(\alpha)}}{T_0}\right)\right.+\nonumber\\ & & +\left.\frac{1}{4} m_{\perp}^3\left[ \pi_\alpha^{00}+\tau^2\pi_\alpha^{33} \right] K_3\left((n+1)\frac{m_{\perp} u_{\tau}^{(\alpha)}}{T_0}\right) \right\} \nonumber\\ \mathcal{I}_2(\alpha,m,T_{0})&=& I_2^{\alpha+b}+ \frac{1}{s_0 T_0^3}\sum_{n=0}^{\infty}\left( n+1\right) \left(-a_\alpha\right)^n \lambda_\alpha^{n+1} \nonumber\\ &\times & \left\{\left( p^2_x\pi_\alpha^{11}+p_y^2\pi_\alpha^{22}+2p_x p_y \pi_\alpha^{12}+\frac{1}{2}m_{\perp}^2\left[\pi_\alpha^{00}-\tau^2\pi_\alpha^{33}\right]\right) K_0\left((n+1) \frac{m_{\perp} \gamma_\alpha}{T_0}\right)\right.+\nonumber\\ & & +\left.\frac{1}{2}m_{\perp}^2 \left(\pi_\alpha^{00}+\tau^2\pi_\alpha^{33}\right)K_2\left((n+1) \frac{ m_{\perp} \gamma_\alpha}{T_0}\right)\right\} \nonumber\\\end{aligned}$$ where $K_\beta(x)$ is the modified Bessel function of the second kind. Normal vector of isothermal surface and the SPH formalism {#normalvector} ========================================================= The normalized normal vector to the isothermal surface is $$n_\mu = \frac{\left(\partial_\tau T,\partial_x T, \partial_y T\right)}{\sqrt{\left(\partial_\tau T\right)^2-\left(\partial_x T\right)^2-\left(\partial_y T\right)^2}}\,.$$ Since in the SPH method the spatial gradients of the pressure are known [@Hama:2004rr], using the Gibbs-Duhem relation $\partial_\mu T = \partial_\mu P/s$ we just need to determine $\partial_\tau T$ to obtain $n_\mu$.
Using that $DP = \gamma \partial_\tau P + \left(\mathbf{u}\cdot \nabla P\right)$, $dP/d\varepsilon=c_s^2$ hence $DP = c_s^2 \,D\varepsilon$, and the energy conservation equation $$D\varepsilon + (\varepsilon+P+\Pi)\theta - \pi_{\mu\nu}\sigma^{\mu\nu}=0$$ we find that $$\partial_\tau P = \frac{1}{\gamma} \left[-c_s^2\,\theta \left(\varepsilon+P+\Pi \right)+c_s^2\, \pi_{\mu\nu}\sigma^{\mu\nu}- \left(\mathbf{u}\cdot \nabla P\right)\right]$$ and thus $$\label{eqn:normtau} \partial_\tau T = \frac{1}{\gamma\,s} \left[-c_s^2\,\theta \left(\varepsilon+P+\Pi \right)+c_s^2\, \pi_{\mu\nu}\sigma^{\mu\nu}- \left(\mathbf{u}\cdot \nabla P\right)\right]\,.$$ Both $\eta/s$ and $\zeta/s$ are still relatively small in all of our calculations, which means that the contributions from the bulk pressure and shear stress tensor in Eq. (\[eqn:normtau\]) are still in general very small compared to that from the energy density and pressure ($\varepsilon+P\approx 1.5$ whereas the components of $\pi^{\mu\nu}\approx10^{-3}$ and $\Pi\approx10^{-3}-10^{-1}$). This then means that the primary contribution to the uncorrected flow harmonics comes from the viscous corrected flow and not from the details of the freeze-out hypersurface (which remains very similar to the one found in the ideal hydro case).
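Given the temperature gradients at a freeze-out cell, the construction of $n_\mu$ above is straightforward to check numerically. A sketch with made-up gradient values (for the normalization to be real the gradient must be timelike, $(\partial_\tau T)^2>(\partial_x T)^2+(\partial_y T)^2$, consistent with the small viscous corrections just discussed):

```python
import math

def surface_normal(dT_tau, dT_x, dT_y):
    """Normalize (dT_tau, dT_x, dT_y) with the Minkowski norm
    sqrt(dT_tau^2 - dT_x^2 - dT_y^2), as in the expression for n_mu."""
    norm2 = dT_tau ** 2 - dT_x ** 2 - dT_y ** 2
    if norm2 <= 0:
        raise ValueError("surface normal is not timelike")
    norm = math.sqrt(norm2)
    return dT_tau / norm, dT_x / norm, dT_y / norm

# Illustrative gradient values (arbitrary units): temperature drops in time,
# varies weakly in the transverse plane
n_tau, n_x, n_y = surface_normal(-0.05, 0.02, 0.01)
assert abs(n_tau ** 2 - n_x ** 2 - n_y ** 2 - 1.0) < 1e-12
```

The assertion simply verifies $n\cdot n=1$ for this signature choice, the defining property of the normalized hypersurface normal.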
--- abstract: - 'The energy spectrum of a central divalent impurity in a spherical quantum dot (QD) is calculated using the effective mass approximation. The dipole moments and oscillator strengths of interlevel transitions are determined. The dependence of the linear absorption coefficient on the QD size and on the electromagnetic wave frequency is analyzed. The obtained results are compared with those for a univalent impurity. divalent impurity, linear absorption coefficient 73.21.La, 78.20.Ci' address: - | Department of Theoretical Physics, Ivan Franko Drohobych State Pedagogical University,\ 3 Stryiska St., 82100 Drohobych, Ukraine author: - 'V.I. Boichuk, R.Ya. Leshko[^1]' date: 'Received March 19, 2014, in final form May 19, 2014' title: Interlevel absorption of electromagnetic waves by a nanocrystal with a divalent impurity --- Introduction ============ Semiconductor quantum dots (QDs) are widely used in opto- and nanoelectronics due to their unique properties. Lasers, light sources, and LEDs are constructed based on nanosystems. QD-based sources of terahertz radiation occupy a special place among them [@Wu]. A key feature of terahertz radiation is that, unlike X-rays, it practically does not ionize materials while still being capable of penetrating them.
That is why this kind of radiation is widely used in medical tomography [@Wang], in security systems, and in producing high-resolution images of microscopic objects [@Huber]. The possibilities of developing high-speed THz communication systems are also studied [@Piesiewicz]. A QD-based detector of terahertz radiation has been proposed [@Wei1; @Wei2]. Taking into consideration that the energy of interlevel transitions corresponds to the terahertz range, the study of interlevel transitions became the basis for the theoretical description and prognostication of the properties of terahertz detectors and sources. Single-electron states in the QD, which depend on the QD size and on the presence of defects, especially impurities, are the basis of interlevel transition analysis. At present, the theory of shallow hydrogenic donor impurities in QDs is well developed. An exact solution of the Schrödinger equation for the central impurity was derived [@Tkach], and the energy spectrum of the off-central impurity was obtained using different methods in spherical [@Boichuk1] and ellipsoidal [@Sadeghi] QDs. Cubic QDs [@Rezaei1] have been analysed as well. Since a QD can contain several impurities, the problem of a QD with two impurities was also solved [@Holovatsky1; @Boichuk2]. Based on the obtained results, the linear and nonlinear optical properties of QDs with impurities [@Boichuk1; @Sadeghi; @Rezaei1; @Boichuk2; @Vahdani; @Rezaei2] were calculated using the density matrix and iteration method [@Tang]. Experimental data show that QDs can be doped with divalent impurities [@Korb]. In particular, in that work it was shown that zinc impurities penetrate CdS QDs. This leads to changes of the optical properties which are connected with interband (high-energy) and interlevel intraband (low-energy) transitions.
These considerations, together with the lack of a consistent theory of central divalent impurities in spherical QDs that would allow the calculation of the ground and excited states, motivate us to consider a divalent impurity in a spherical QD: to determine the energy spectrum of this impurity; to calculate interlevel transitions in the QD with a divalent impurity; and to compare the obtained results with the corresponding results for a monovalent impurity.

Eigenvalues and eigenfunctions
==============================

We consider a spherical nanosize heterosystem: a nanocrystal of radius $a$ with electron effective mass $m_1^*$, embedded in a matrix with electron effective mass $m_2^*$. A divalent impurity is located at the center of the QD. Let the heterosystem be made of crystals with close values of the dielectric permittivity, which makes it possible to introduce an average dielectric permittivity $\varepsilon$. The effective-mass Hamiltonian of this system can be written as follows: $$\label{1} \hat H = \hat H_1^{} + \hat H_2^{} + \frac{{{e^2}}}{{4\pi {\varepsilon _0}\varepsilon {r_{12}}}}\, ,$$ where $$\label{2} \hat H_i^{} = - \frac{{{\hbar ^2}}}{2}{\nabla _i}\frac{1}{{{m^*}\left( {{r_i}} \right)}}{\nabla _i} + U({r_i}) - \frac{{Z{e^2}}}{{4\pi {\varepsilon _0}\varepsilon {r_i}}} = \hat H_i^{(0)} - \frac{{Z{e^2}}}{{4\pi {\varepsilon _0}\varepsilon {r_i}}}\,,$$ and $Z=2$. The potential energy caused by the heterostructure band mismatch is given by: $$\label{3} U({r_i}) = \left\{ \begin{array}{ll} 0, & \hbox{${r_i} \leqslant a$},\\ {U_0},& \hbox{${r_i} > a$}. \end{array} \right.$$ The Schrödinger equation with the Hamiltonian (\[1\]) cannot be solved exactly; therefore, the Ritz variational method has been used herein. Since electrons are fermions, the wave function must be antisymmetric. The variational function has been chosen following the approach of [@Boichuk1; @Boichuk3; @Boic4].
However, in [@Boichuk3; @Boic4] only the ground state energy of the divalent impurity was calculated, while in [@Boichuk1] the energies of the ground state and the first excited states of the monovalent impurity were obtained. In both cases, a single variational parameter was used. To improve the accuracy, two variational parameters are introduced in the present paper into the coordinate wave functions of the ground state and of some excited states of the divalent impurity: $$\begin{aligned} \label{4.1} {\psi _1} &= {c_1}\left| {1s,{{\vec r}_1},{\alpha _1}} \right\rangle \left| {1s,{{\vec r}_2},{\beta _1}} \right\rangle,\\ \label{4.2} {\psi _2} &= {c_2}\left( {\left| {1s,{{\vec r}_1},{\alpha _2}} \right\rangle \left| {1p,{{\vec r}_2},{\beta _2}} \right\rangle - \left| {1p,{{\vec r}_1},{\alpha _2}} \right\rangle \left| {1s,{{\vec r}_2},{\beta _2}} \right\rangle } \right),\\ \label{4.3} {\psi _3} &= {c_3}\left( {\left| {1s,{{\vec r}_1},{\alpha _3}} \right\rangle \left| {1p,{{\vec r}_2},{\beta _3}} \right\rangle + \left| {1p,{{\vec r}_1},{\alpha _3}} \right\rangle \left| {1s,{{\vec r}_2},{\beta _3}} \right\rangle } \right),\\ \label{4.4} {\psi _4} &= {c_4}\left( {\left| {1s,{{\vec r}_1},{\alpha _4}} \right\rangle \left| {1d,{{\vec r}_2},{\beta _4}} \right\rangle - \left| {1d,{{\vec r}_1},{\alpha _4}} \right\rangle \left| {1s,{{\vec r}_2},{\beta _4}} \right\rangle } \right),\\ \label{4.5} {\psi _5} &= {c_5}\left( {\left| {1s,{{\vec r}_1},{\alpha _5}} \right\rangle \left| {1d,{{\vec r}_2},{\beta _5}} \right\rangle + \left| {1d,{{\vec r}_1},{\alpha _5}} \right\rangle \left| {1s,{{\vec r}_2},{\beta _5}} \right\rangle } \right),\end{aligned}$$ where $$\begin{aligned} \label{5} \left| {j,{{\vec r}_i},{\gamma _q}} \right\rangle &=& R_j^{}\left( {{r_i},{\gamma _q}} \right) Y_{{l_j}}^{{m_j}}\left( {{\theta _i},{\varphi _i}} \right) \nonumber\\ &=&
{A_j}Y_{{l_j}}^{{m_j}}\left( {{\theta _i},{\varphi _i}} \right) \left\{ \begin{array}{ll} {{\bf{j}}_{{l_j}}}\left( {{k_{{n_j},{l_j}}}{r_i}} \right)\exp \left( { - {\gamma _q}{r_i}} \right), & \hbox{${r_i} \leqslant a$},\\ {{\bf{k}}_{{l_j}}}\left( {{x_{{n_j},{l_j}}}{r_i}} \right)\exp \left\{ { - {\gamma _q}\left[ {\frac{{{m_2}^*}}{{{m_1}^*}} \left( {a - {r_i}} \right) - a} \right]} \right\}, & \hbox{${r_i} > a$}, \end{array} \right.\end{aligned}$$ $j=1s, 1p, 1d$; the index $q=1, 2, 3, 4, 5$ enumerates the variational parameters for the states (\[4.1\])–(\[4.5\]); $\gamma=\alpha, \beta$ are variational parameters; the index $i=1, 2$ enumerates the electrons; $l_{1s}=0$, $l_{1p}=1$, $l_{1d}=2$; $m_{1s}=0$, $m_{1p}=-1, 0, 1$; $m_{1d}=-2, -1, 0, 1, 2$. The spherical Bessel function of the first kind $j_\textrm{b}(z)$ and the modified spherical Bessel function of the second kind $k_\textrm{b}(z)$ are the radial solutions of the Schrödinger equation for a particle in the spherical potential well with the Hamiltonian $\hat H_i^{(0)}$, $$\begin{aligned} \label{} {k_{{n_j},{l_j}}} = \sqrt {\frac{{2{m_1}^*}}{{{\hbar ^2}}}E_{{n_j},{l_j}}^{(0)}} \,, \qquad {x_{{n_j},{l_j}}} = \sqrt {\frac{{2{m_2}^*}}{{{\hbar ^2}}}\left( {{U_0} - E_{{n_j},{l_j}}^{(0)}} \right)}\,, \nonumber\end{aligned}$$ and $n_{1s}$, $n_{1p}$, $n_{1d}$ enumerate the solutions of the dispersion equation at fixed $l$. $A_j$ is found from the normalization condition for the function (\[5\]). $\psi_1$, $\psi_3$, $\psi_5$ are singlet-state functions; $\psi_2$, $\psi_4$ are triplet-state functions. Orthogonality of the total wave functions (coordinate part times spin part) is provided by the orthogonality of the spin parts and by the orthogonality of the spherical harmonics. The single-particle wave functions ensure that the boundary conditions are satisfied.
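The energies $E_{n_j,l_j}^{(0)}$ entering $k_{n_j,l_j}$ and $x_{n_j,l_j}$ come from the dispersion equation of the impurity-free finite spherical well. As a minimal numerical sketch (not part of the paper's calculation), the $l=0$ dispersion equation with BenDaniel–Duke matching, $(ka\cot ka - 1)/m_1^* = -(xa+1)/m_2^*$, can be solved by bisection for the CdS/SiO$_2$ parameters quoted below; the form of the matching condition and the conversion constant $\hbar^2/2m_0 \approx 3.81$ eV·Å$^2$ are assumptions of this illustration.

```python
import math

HBAR2_2M0 = 3.81  # hbar^2 / (2 m0) in eV * Angstrom^2 (approximate)

def dispersion(E, a, m1=0.2, m2=0.42, U0=2.7):
    """BenDaniel-Duke matching condition for the l = 0 state of a finite
    spherical well; a root E (in eV) of this function is a bound-state energy."""
    k = math.sqrt(E * m1 / HBAR2_2M0)          # wave number inside, 1/Angstrom
    x = math.sqrt((U0 - E) * m2 / HBAR2_2M0)   # decay constant outside
    return (k * a / math.tan(k * a) - 1.0) / m1 + (x * a + 1.0) / m2

def ground_state_energy(a, m1=0.2, m2=0.42, U0=2.7):
    """Lowest l = 0 level of a well of radius a (Angstrom), by plain bisection.
    The root lies below the energy at which k*a reaches pi."""
    lo = 1e-9
    hi = min(U0, HBAR2_2M0 / m1 * (math.pi / a) ** 2) - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dispersion(lo, a, m1, m2, U0) * dispersion(mid, a, m1, m2, U0) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The resulting level lies below the infinite-well value $\hbar^2\pi^2/2m_1^*a^2$ and decreases with the radius, in line with the size dependence discussed next.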
After substitution of (\[4.1\])–(\[4.5\]) into the Schrödinger equation with the Hamiltonian (\[1\]), a functional was obtained which depends on two variational parameters for the excited states and on one variational parameter for the ground state. Numerical minimization yields the corresponding energy levels and the values of the variational parameters, and thus ultimately determines the wave functions. The discrete electron energies were calculated for the CdS/SiO$_2$ heterostructure with the following parameters: $m_1^* = 0.2m_0$, $m_2^* = 0.42m_0$, $\varepsilon = \left( {5.5 + 3.9} \right)/2 = 4.7$, ${U_0} = 2.7$ eV, where $m_0$ is the free electron mass. The energy spectrum of the divalent impurity is presented in figure \[fig1\]. Due to spherical symmetry, the ground and excited states are degenerate in the magnetic quantum number. Figure \[fig1\] shows that an increase of the QD radius leads to a decrease of the ground state energy, which quickly saturates. For larger QD radii, the energies of the excited states tend to the values corresponding to the bulk crystal. A similar dependence was observed for a monovalent impurity [@Boichuk1]. This behaviour is caused by the small effective Bohr radius $a_\textrm{b}^*=12.44$ [Å]{} and the strong confinement. Although the effective Bohr radius is small, the volume $a_\textrm{b}^{*3}$ contains approximately 10–12 elementary cells. This justifies using the model Coulomb potential for the interaction of the electrons with the impurity. ![The energy of the divalent impurity as a function of the QD radius. Numbers denote the energies of the respective states: 1 — $\psi_1$, 2 — $\psi_2$, 3 — $\psi_3$, 4 — $\psi_4$, 5 — $\psi_5$. Horizontal lines correspond to the energy of the divalent impurity in the bulk CdS.[]{data-label="fig1"}](fig1){width="55.00000%"} An important characteristic of the QD with a divalent or monovalent impurity is the binding energy.
In the case of a divalent impurity, $E_\textrm{b}$ is calculated by a similar formula [@Safwan]: $$\label{ionze} E_{\textrm{b},II}=E_0+E_{1s,Z=2}-E_1\,,$$ where $E_0$ is the electron energy of the QD without impurities, $E_{1s,Z=2}$ is the ground state energy of the QD with a singly ionized divalent impurity, and $E_1$ is the energy of the state $\psi_1$ of the divalent impurity (\[4.1\]). In the case of a univalent impurity, the binding energy is defined by the formula: $$\label{bind} E_{\textrm{b},I}=E_0-E_{1s,Z=1},$$ where $E_{1s,Z=1}$ is the energy of the univalent impurity. As the QD radius decreases, the binding energy increases in both cases; for very small radii, however, $E_\textrm{b}$ decreases (figure \[fig-r1\]). This is caused by the increased probability of finding the electrons outside the QD in both cases. If the QD contains a divalent impurity, the binding energy is larger. Optical properties ================== The energy spectrum and wave functions make it possible to calculate interlevel transitions. Spin selection rules imply that transitions are possible only between singlet–singlet and triplet–triplet states. ![(Color online) The binding energy of the univalent impurity (curve 1) and of the divalent impurity (curve 2).[]{data-label="fig-r1"}](fig-r1){width="55.00000%"} Let the QD be irradiated by light linearly polarized along the $z$ direction. Then, in the dipole approximation, interlevel transitions are possible between the states $\psi_1$ and $\psi_3$; $\psi_2$ and $\psi_4$; $\psi_3$ and $\psi_5$.
The dipole transition matrix elements between those states are given by: $$\label{6} {d_{13}} = \left\langle {\psi _1^{}} \right|ez\left| {\psi _3^{}} \right\rangle , \qquad{d_{24}} = \left\langle {\psi _2^{}} \right|ez\left| {\psi _4^{}} \right\rangle , \qquad {d_{35}} = \left\langle {\psi _3^{}} \right|ez\left| {\psi _5^{}} \right\rangle.$$ The dependence of the square of the dipole transition matrix element on the QD radius is presented in figure \[fig2\] on a logarithmic scale; ${\left| {{d_{1s - 1p}}/e} \right|^2}$ for the monovalent impurity is plotted as well. Figure \[fig2\] shows that the corresponding values for a monovalent impurity are larger than for the divalent one. This is due to the difference in the average distances of the electrons in the respective states. Moreover, for large QD radii all the curves tend to the values corresponding to the bulk crystal. ![The square of the dipole moment of interlevel transitions. Solid curves correspond to the divalent impurity. The dotted curve corresponds to the monovalent impurity. The horizontal dashed curve denotes the square of the dipole moment of interlevel transitions of the monovalent and divalent impurity in the bulk crystal.[]{data-label="fig2"}](fig2){width="55.00000%"} The oscillator strength of interlevel transitions is defined as $$\label{7} {f_{mn}} = \frac{{2m_1^*}}{{{\hbar ^2}{e^2}}}\left( {{E_n} - {E_m}} \right){\left| {{d_{mn}}} \right|^2}.$$ The dependences are presented in figure \[fig3\] on a logarithmic scale; the oscillator strength of interlevel transitions for a monovalent impurity in the center of the QD is plotted as well. This is in agreement with the results of other works [@Boichuk1; @Holovatsky2]. Similarly to the dipole moment, the oscillator strength of the divalent impurity is smaller than that of the monovalent impurity.
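The oscillator strength (\[7\]) is dimensionless and straightforward to evaluate once $E_n-E_m$ and $d_{mn}$ are known. The sketch below plugs in purely illustrative values ($E_n-E_m = 0.1$ eV and $|d/e| = 1$ nm are assumptions of this example, not numbers read off the figures):

```python
M0, HBAR, E_CH = 9.109e-31, 1.0546e-34, 1.602e-19  # kg, J*s, C

def oscillator_strength(dE_eV, d_over_e_m, m_rel=0.2):
    """Eq. (7): f = 2 m1* (E_n - E_m) |d|^2 / (hbar^2 e^2), dimensionless.
    dE_eV: transition energy in eV; d_over_e_m: |d/e| in metres;
    m_rel: effective mass in units of m0 (0.2 for CdS here)."""
    return 2 * m_rel * M0 * (dE_eV * E_CH) * d_over_e_m ** 2 / HBAR ** 2
```

With these inputs the result is of order one, the typical scale for a strong dipole-allowed transition.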
This dependence is caused by the behaviour of the dipole moment and of the transition energy $E_\textrm{tr}=E_n-E_m$ (figure \[fig4\]). ![The oscillator strength of interlevel transitions. Solid curves correspond to the divalent impurity. The dotted curve corresponds to the monovalent impurity. Horizontal curves denote the oscillator strength of interlevel transitions of the monovalent and divalent impurity in the bulk crystal.[]{data-label="fig3"}](fig3){width="55.00000%"} ![The transition energy. Solid curves correspond to the divalent impurity. The dashed curve corresponds to the monovalent impurity.[]{data-label="fig4"}](fig4){width="55.00000%"} The above mentioned behaviour of the dipole moment and of the transition oscillator strength affects the height of the absorption peaks. For a two-level system, the density matrix and an iterative procedure were used to derive the absorption coefficient [@Vahdani; @Rezaei2; @Tang]. In this approach, the linear absorption coefficient can be expressed as follows: $$\label{9} {\alpha _{m,n}}\left( \omega \right) = \omega \sqrt {\frac{{{\mu _0}}}{{{\varepsilon _0}\varepsilon }}} \frac{{N{{\left| {{d_{m,n}}} \right|}^2}\hbar \Gamma }}{{{{({E_n} - {E_m} - \hbar \omega )}^2} + {{\left( {\hbar \Gamma } \right)}^2}}}\,,$$ where $\varepsilon_0$ is the electric constant, $\mu_0$ is the magnetic constant, $N \approx 3 \cdot 10^{16}$ cm$^{-3}$ is the carrier concentration, and $\hbar \Gamma$ is the level broadening caused by the electron-phonon interaction and by other scattering mechanisms.
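Equation (\[9\]) is a Lorentzian in the detuning $E_n-E_m-\hbar\omega$, with an extra factor $\omega$. A minimal SI-unit sketch follows; the dipole moment $|d/e| = 1$ nm and broadening $\hbar\Gamma = 3$ meV used in the test are illustrative assumptions, while $N = 3\cdot10^{16}$ cm$^{-3}$ and $\varepsilon = 4.7$ are taken from the text.

```python
import math

EPS0, MU0 = 8.854e-12, 4 * math.pi * 1e-7   # vacuum permittivity / permeability
HBAR, E_CH = 1.0546e-34, 1.602e-19          # J*s, C

def alpha(hw_eV, E_tr_eV, d_Cm, N=3e22, eps=4.7, hGamma_eV=3e-3):
    """Linear absorption coefficient of eq. (9), in 1/m.
    hw_eV: photon energy; E_tr_eV: transition energy E_n - E_m (both in eV);
    d_Cm: dipole matrix element in C*m; N: carrier concentration in 1/m^3."""
    hw, E_tr, hG = (x * E_CH for x in (hw_eV, E_tr_eV, hGamma_eV))
    omega = hw / HBAR
    return (omega * math.sqrt(MU0 / (EPS0 * eps))
            * N * d_Cm ** 2 * hG / ((E_tr - hw) ** 2 + hG ** 2))
```

At a detuning of one linewidth the coefficient drops to about half its resonance value, as expected for a Lorentzian profile.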
In the limit $\hbar \Gamma \to 0$, one obtains: $$\label{10} {\alpha _{m,n}}\left( \omega \right) = \mathop {\lim }\limits_{\hbar \Gamma \to 0} \left( {\omega \sqrt {\frac{{{\mu _0}}}{{{\varepsilon _0}\varepsilon }}} \frac{{N{{\left| {{d_{m,n}}} \right|}^2}\hbar \Gamma }}{{{{({E_n} - {E_m} - \hbar \omega )}^2} + {{\left( {\hbar \Gamma } \right)}^2}}}} \right) = \omega \pi \sqrt {\frac{{{\mu _0}}}{{{\varepsilon _0}\varepsilon }}} N{\left| {{d_{m,n}}} \right|^2}\delta \left( {{E_n} - {E_m} - \hbar \omega } \right).$$ In practice, one deals with ensembles of QDs embedded in crystal or polymer matrices or dispersed in solutions. Whatever growth method is used, a QD ensemble is always characterized by a size dispersion. Let the QD size distribution be approximated by a Gaussian: $$\label{11} g\left( {s,\bar a,a} \right) = \frac{1}{{s\sqrt {2\pi } }}\exp \left( { - \frac{{{{\left( {a - \bar a} \right)}^2}}}{{2{s^2}}}} \right),$$ where $a$ is the QD radius (the variable) and $s$ is the half-width of the distribution (\[11\]), expressed through the average radius $\bar{a}$ and the relative size dispersion $\sigma$ (in percent): $s = \bar a\sigma /100$.
Taking into account the size distribution (\[11\]), the absorption coefficient of the QD ensemble is $${\alpha _{m,n;\textrm{system}}}\left( \omega \right) = \omega \pi \sqrt {\frac{{{\mu _0}}}{{{\varepsilon _0}\varepsilon }}} N\int {g\left( {s,\bar a,a} \right)\,\,{{\left| {{d_{m,n}}\left( a \right)} \right|}^2}\delta \left( {{E_n}\left( a \right) - {E_m}\left( a \right) - \hbar \omega } \right)\rd a}.$$ Using the properties of the delta function we obtain: $$\label{12} {\alpha _{m,n;\textrm{system}}}\left( \omega \right) = \omega \pi \sqrt {\frac{{{\mu _0}}}{{{\varepsilon _0}\varepsilon }}} N\int {g\left( {s,\bar a,a} \right)\,\,{{\left| {{d_{m,n}}\left( a \right)} \right|}^2}\sum\limits_i {\frac{{\delta \left( {a - {a_{0i}}} \right)}}{{{{\left| {\frac{\rd}{{\rd a}}\left( {{E_n}\left( a \right) - {E_m}\left( a \right) - \hbar \omega } \right)} \right|}_{a = {a_{0i}}}}}}} \rd a},$$ where $a_{0i}$ are the simple zeros of the function $F\left( a \right) = {E_n}\left( a \right) - {E_m}\left( a \right) - \hbar \omega$. Therefore, $$\label{13} {\alpha _{m,n;\textrm{system}}}\left( \omega \right) = \omega \pi \sqrt {\frac{{{\mu _0}}}{{{\varepsilon _0}\varepsilon }}} N\sum\limits_i {\frac{{g\left( {s,\bar a,{a_{0i}}} \right)\,\,{{\left| {{d_{m,n}} \left( {{a_{0i}}} \right)} \right|}^2}}}{{{{\left| {\frac{\rd}{{\rd a}}\left( {{E_n} \left( a \right) - {E_m}\left( a \right) - \hbar \omega } \right)} \right|}_{a = {a_{0i}}}}}}}.$$ ![The absorption coefficient of the QD system with the average radius $\bar{a}=$ 40 [Å]{}. Curve 1 denotes the QD system with $\sigma=5\%$, curve 2 — $\sigma=10\%$, curve 3 — $\sigma=15\%$.[]{data-label="fig5"}](fig5){width="55.00000%"} ![(Color online) The absorption coefficient of the QD system. Solid curves 1, 2, 3 denote the absorption coefficient of the QD with divalent impurity (transitions between singlet states $\psi_1$, $\psi_3$), dashed curves 1’, 2’, 3’ denote the absorption coefficient of the QD with univalent impurity. 
1, 1’ — average radius is 30 [Å]{}; 2, 2’ — average radius is 40 [Å]{}; 3, 3’ — average radius is 50 [Å]{}. []{data-label="fig6"}](fig6){width="55.00000%"} The dependence of the absorption coefficient on the photon energy for different average radii and dispersions $\sigma$ was plotted using expression (\[13\]). Figure \[fig5\] shows, for a univalent impurity in the QD, the absorption coefficient caused by the $1s-1p$ transition for three different values of $\sigma$. For highly dispersed QD ensembles, the height of the absorption peak decreases and the absorption band blurs; this leads to an overlap with absorption bands caused by transitions between other allowed states. For monodisperse systems, or systems with low $\sigma$, those transitions are clearly resolved. The situation is similar for the divalent impurity in the spherical QD. In what follows we therefore consider a system of QDs with $\sigma$=5%. ![(Color online) Absorption coefficient of the system of QDs with divalent impurity. Solid curves 1, 2, 3 denote absorption between singlet states $\psi_1$–$\psi_3$, dashed curves 1’, 2’, 3’ denote absorption between triplet states $\psi_2$–$\psi_4$. Curves 1, 1’ — average radius 30 [Å]{}; 2, 2’ — 40 [Å]{}; 3, 3’ — 50 [Å]{}.[]{data-label="fig7"}](fig7){width="55.00000%"} The absorption coefficient is plotted in figure \[fig6\]. It shows that, for the same average QD radius $\bar{a}$, the absorption coefficient of the $1s-1p$ transition in the QD with a monovalent impurity is larger than the corresponding value for the QD with a divalent impurity. This is caused by the larger oscillator strength and dipole moment of the interlevel transition in the case of the univalent impurity.
The values of $|d_{m,n}|^2$ are larger in the case of the univalent impurity because $\left| {\left\langle {{r_{n}}} \right\rangle - \left\langle {{r_{m}}} \right\rangle } \right|$ is larger for the univalent impurity than for the divalent one. A similar explanation of the heights of the absorption bands was given in our previous works [@Boichuk1; @Boichuk2]. For both the monovalent and divalent impurity, an increase of the average QD radius shifts the absorption bands into the low-energy range. When the average QD radius is less than 55 [Å]{}, the absorption band caused by the $1s-1p$ transition of the monovalent impurity lies at higher energy than that of the $\psi_1$–$\psi_3$ transition of the divalent impurity; for larger $\bar{a}$, this relation is reversed, as can also be seen in figure \[fig4\]. It should be noted that the absorption of electromagnetic waves by the system of QDs with a divalent impurity is stronger for transitions between singlet states than for the corresponding transitions between triplet states (figure \[fig7\]). In addition, the transition energy between the triplet states $\psi_2$, $\psi_4$ is smaller than that between the singlet states $\psi_1$, $\psi_3$; the respective absorption bands are therefore shifted into the low-energy region. For both singlet–singlet and triplet–triplet transitions, at small $\sigma$ the absorption bands do not overlap and are clearly identified.
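The structure of (\[13\]) — the Gaussian weight evaluated at the roots $a_{0i}$, divided by $|\rd E_\textrm{tr}/\rd a|$ — can be illustrated with a toy transition law $E_\textrm{tr}(a)=A/a^2$. This law, the constant dipole moment, and the numerical values below are assumptions of the sketch (the real $E_\textrm{tr}(a)$ comes from the variational calculation), but the sketch reproduces the trend in figure \[fig5\]: the peak lowers as $\sigma$ grows.

```python
import math

def gauss(s, abar, a):
    """Size distribution of eq. (11)."""
    return math.exp(-(a - abar) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def alpha_system(hw, abar, sigma_pct, A=200.0, d=1.0, prefac=1.0):
    """Eq. (13) for the toy law E_tr(a) = A / a**2, which has a single
    simple zero a0 of E_tr(a) - hw.  All quantities in arbitrary units."""
    s = abar * sigma_pct / 100.0          # half-width from the percentage spread
    a0 = math.sqrt(A / hw)                # unique root of E_tr(a) - hw = 0
    dF_da = 2.0 * A / a0 ** 3             # |d/da (E_tr(a) - hw)| at a0
    return prefac * gauss(s, abar, a0) * d * d / dF_da
```

With $\bar a = 40$ and $A = 200$ the resonance sits at $\hbar\omega = A/\bar a^2 = 0.125$; increasing $\sigma$ from 5% to 15% lowers the peak, and detuning away from resonance suppresses the signal through the Gaussian factor.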
Summary ======= The present paper studied the optical properties of the CdS/SiO$_2$ QD heterosystem with a divalent impurity in the center of the QD, which made it possible: - to determine the energy spectrum of the QD with a divalent impurity and to show that this energy is smaller than the energy of the monovalent impurity in the same QD; - to calculate the dipole moments and oscillator strengths of the interlevel transitions and to find that the absorption between the singlet states is stronger than between the triplet states; - to establish that the absorption bands of low-dispersion QD systems caused by transitions between the allowed states are clearly visible and do not overlap; - to show that in the presence of a monovalent impurity the absorption coefficient is larger than in the presence of a divalent impurity. The results obtained are valid at very low temperatures; corrections due to the temperature dependence will be considered in our future work.

[99]{}
Wang S., Zhang X.C., J. Phys. D: Appl. Phys., 2004, **37**, R1.
Boichuk V.I., Bilynskyi I.V., Leshko R.Ya., Turyanska L.M., Physica E, 2011, **44**, 476.
Sadeghi E., Avazpour A., Physica B, 2011, **406**, 241.
Holovatsky V., Frankiv I., J. Phys. Stud., 2012, **1/2**, 1706 (in Ukrainian).
Boichuk V.I., Bilynskyi I.V., Leshko R.Ya., Turyanska L.M., Physica E, 2013, **54**, 281.
Vahdani M.R.K., Rezaei G., Phys. Lett. A, 2009, **373**, No. 34, 3079.
Tang C.L., Fundamentals of Quantum Mechanics for Solid State Electronics and Optics, Cambridge University Press, Cambridge, 2005.
Boichuk V.I., Bilynskyi I.V., Leshko R.Ya., Condens. Matter Phys., 2008, **11**, 653.
Holovatsky V.A., Makhanets O.M., Voitsekhivska O.M., Physica E, 2009, **41**, 1522.

[^1]: E-mail: leshkoroman@gmail.com
--- abstract: 'For any nonnegative Borel-measurable function $f$ such that $f(x)=0$ if and only if $x=0$, the best constant $c_f$ in the inequality ${\operatorname{\mathsf{E}}}f(X-{\operatorname{\mathsf{E}}}X){\leqslant}c_f{\operatorname{\mathsf{E}}}f(X)$ for all random variables $X$ with a finite mean is obtained. Properties of the constant $c_f$ in the case when $f=|\cdot|^p$ for $p>0$ are studied. Applications to concentration of measure in the form of Rosenthal-type bounds on the moments of separately Lipschitz functions on product spaces are given.' address: | Department of Mathematical Sciences\ Michigan Technological University\ Houghton, Michigan 49931, USA\ E-mail: author: - bibliography: - 'C:/Users/Iosif/Dropbox/mtu/bib\_files/citations.bib' title: 'Optimal re-centering bounds, with applications to Rosenthal-type concentration of measure inequalities' --- Introduction {#intro} ============ In many situations (as e.g. in [@nonlinear]), one starts with zero-mean random variables (r.v.’s), which need to be truncated in some manner, and then the means no longer have to be zero. So, to utilize such tools as the Rosenthal inequality for sums of independent zero-mean r.v.’s, one has to re-center the truncated r.v.’s. Then one will usually need to bound moments of the re-centered truncated r.v.’s in terms of the corresponding moments of the original r.v.’s. To be more specific, let $Z$ be a given r.v., possibly (but not necessarily) of zero mean. Next, let ${{\tilde{Z}}}$ be a truncated version of $Z$ such that $|{{\tilde{Z}}}|{\leqslant}|Z|$; possibilities here include letting ${{\tilde{Z}}}$ equal $Z{\,\mathbf{I}\{Z{\leqslant}z\}}$ or $Z{\,\mathbf{I}\{|Z|{\leqslant}z\}}$ or $Z\wedge z$, for some $z>0$; cf. [@winzor; @tilted-mean]. Assume that ${\operatorname{\mathsf{E}}}|{{\tilde{Z}}}|<\infty$. 
Then for any $p{\geqslant}1$ one can use the inequalities $|x-y|^p{\leqslant}2^{p-1}(|x|^p+|y|^p)$ and $({\operatorname{\mathsf{E}}}|{{\tilde{Z}}}|)^p{\leqslant}{\operatorname{\mathsf{E}}}|{{\tilde{Z}}}|^p$, to write $$\label{eq:Z} {\operatorname{\mathsf{E}}}|{{\tilde{Z}}}-{\operatorname{\mathsf{E}}}{{\tilde{Z}}}|^p{\leqslant}2^p{\operatorname{\mathsf{E}}}|{{\tilde{Z}}}|^p{\leqslant}2^p{\operatorname{\mathsf{E}}}|Z|^p,$$ as is oftentimes done. However, the factor $2^p$ in can be significantly improved, especially for $p{\geqslant}2$. For instance, it is clear that for $p=2$ this factor can be reduced from $2^2=4$ to $1$. More generally, for every real $p>1$ we shall provide the best constant factor $C_p$ in the inequality $$\label{eq:p} {\operatorname{\mathsf{E}}}|X-{\operatorname{\mathsf{E}}}X|^p{\leqslant}C_p{\operatorname{\mathsf{E}}}|X|^p$$ for all r.v.’s $X$ with a finite mean ${\operatorname{\mathsf{E}}}X$. In particular, $C_p$ improves the factor $2^p$ more than $6$ times for $p=3$, and for large $p$ this improvement is asymptotically $\sqrt{8ep}$ times; see parts and of Theorem \[prop:p\] and the left panel in Figure \[fig:graphs\] in this paper. In fact, in Theorem \[prop:centring\] below we shall present an extended version of the exact inequality , for a quite general class of moment functions $f$ in place of the power functions $|\cdot|^p$. Another natural application of these results is to concentration of measure for separately Lipschitz functions on product spaces. In Section \[concentr\] of this paper, we shall give Rosenthal-type bounds on the moments of such functions. Similar extensions of the von Bahr–Esseen inequality were given in [@bahr-esseen]. Summary and discussion {#summary} ====================== Let $f\colon{\mathbb{R}}\to{\mathbb{R}}$ be any nonnegative Borel-measurable function such that $f(x)=0$ if and only if $x=0$. Let $X$ stand for any random variable (r.v.) with a finite mean ${\operatorname{\mathsf{E}}}X$. 
\[prop:centring\] One has $$\label{eq:centring} {\operatorname{\mathsf{E}}}f(X-{\operatorname{\mathsf{E}}}X){\leqslant}c_f{\operatorname{\mathsf{E}}}f(X),$$ where $$\label{eq:cf} c_f:=\sup\Big\{\frac{af(b)+bf(-a)}{af(b-t)+bf(-a-t)}\colon a\in(0,\infty), b\in(0,\infty), t\in{\mathbb{R}}\Big\}$$ is the best possible constant factor in (over all r.v.’s $X$ with a finite mean). All necessary proofs will be given in Section \[proofs\]. Note that for all $a\in(0,\infty)$, $b\in(0,\infty)$, and $t\in{\mathbb{R}}$ both the numerator and the denominator of the ratio in are strictly positive (since $f$ is nonnegative and vanishes only at $0$). So, $c_f$ is correctly defined, with possible values in $(0,\infty]$. It is possible to say much more about the optimal constant factor $c_f$ in the important case when $f$ is the power function $|\cdot|^p$. To state the corresponding result, let us introduce more notation. Take any $a\in(0,\infty)$ and $b\in(0,\infty)$, and let $X_{a,b}$ be any zero-mean r.v. with values $-a$ and $b$, so that $${\operatorname{\mathsf{P}}}(X_{a,b}=b)=\frac a{a+b}=1-{\operatorname{\mathsf{P}}}(X_{a,b}=-a).$$ Note that $$ X_{b,a}{\overset{\operatorname{D}}=}-X_{a,b},$$ where ${\overset{\operatorname{D}}=}$ denotes the equality in distribution. Take any $$\label{eq:p in} p\in(1,\infty)$$ and introduce $$\label{eq:R} R(p,b):=(b^{p - 1} + (1 - b)^{p - 1}) \big(b^{\frac1{p - 1}} + (1 - b)^{\frac1{p - 1}}\big)^{p - 1} \quad\text{for any $b\in[0,1]$.}$$ \[lem:\] If $p\ne2$ then there exists $b_p\in(0,\frac12)$ such that (i) ${\operatorname{\partial}_{b}{R(p,b)}}>0$ for $b\in(0,b_p)$ and hence $R(p,b)$ is (strictly) increasing in $b\in[0,b_p]$; (ii) ${\operatorname{\partial}_{b}{R(p,b)}}<0$ for $b\in(b_p,\frac12)$ and hence $R(p,b)$ is decreasing in $b\in[b_p,\frac12]$. So, $b_p$ is the unique maximizer of $R(p,b)$ over all $b\in[0,\frac12]$. 
In Proposition \[lem:\] and in the sequel, ${\operatorname{\partial}_{\cdot}{}}$ denotes the partial differentiation with respect to the argument in the subscript. \[prop:p\]  (i) \[ineq\] Inequality holds with the constant factor $$\begin{gathered} C_p:=c_{|\cdot|^p}=\sup_{b\in[0,1]}R(p,b)=\max_{b\in(0,1/2)}R(p,b)=R(p,b_p), \label{eq:C_p} \end{gathered}$$ where $R(p,b)$ is as in and $b_p$ is as in Proposition \[lem:\]. In particular, $C_2=R(2,b)=1$ for all $b\in[0,1]$. (ii) \[best\] $C_p$ is the best possible constant factor in . More specifically, the equality in obtains if and only if one of the following three conditions holds: (a) ${\operatorname{\mathsf{E}}}|X|^p=\infty$; (b) $p=2$, ${\operatorname{\mathsf{E}}}X^2<\infty$, and ${\operatorname{\mathsf{E}}}X=0$; (c) $p\ne2$ and $X{\overset{\operatorname{D}}=}{\lambda}(X_{1-b_p,b_p}-t_{b_p})$ for some ${\lambda}\in{\mathbb{R}}$, where $$\label{eq:t_b} t_b:=b-\frac{b^{1/(p-1)}}{b^{1/(p-1)}+(1-b)^{1/(p-1)}}$$ for all $b\in(0,1)$, and $b_p$ is as in Proposition \[lem:\]. (iii) \[symm\] One has the symmetries $$\label{eq:symm} C_p^{1/\sqrt{p-1}}=C_q^{1/\sqrt{q-1}}\quad\text{and}\quad b_p=b_q,$$ where $q$ is dual to $p$ in the sense of $L^p$-spaces: $$\frac1p+\frac1q=1.$$ (iv) \[asymp\] For $p\to\infty$, $$\label{eq:C_p sim} C_p\sim\frac{2^p}{\sqrt{8ep}};$$ as usual, $A\sim B$ means that $A/B\to1$. (v) \[C\_p mono\] $C_p$ is strictly log-convex and hence continuous in $p\in(1,\infty)$; moreover, $C_p$ decreases in $p\in(1,2]$ from $2$ to $1$ and increases in $p\in[2,\infty)$ from $1$ to $\infty$. (vi) \[C\_3\] The values of $C_p$, $b_p$, and $t_{b_p}$ are algebraic whenever $p$ is rational; in particular, $C_3=\frac1{27} (17+7 \sqrt 7)=1.315...$, $b_3=\frac12-\frac16\,\sqrt{1+2 \sqrt{7}} =0.0819...$, and $t_{b_3}=-\frac{1}{3} \sqrt{\frac{1}{2} \left(13 \sqrt{7}-34\right)}=-0.148...$. By parts and of Theorem \[prop:p\], $C_p$ can in principle be however closely bracketed for any real $p\in(1,\infty)$. 
However, such a calculation may in many cases be inefficient. On the other hand, Proposition \[lem:\] allows one to bracket the maximizer $b_p$ of $R(p,b)$ however closely and thus, perhaps more efficiently, compute $C_p$ with any degree of accuracy. A part of the graph of $C_p$ is shown in Figure \[fig:Cp-graph\], and those of $2^p/C_p$ and $b_p$ are shown in Figure \[fig:graphs\]. ![$C_p$ decreases in $p\in(1,2]$ from $2$ to $1$ and increases in $p\in[2,\infty)$ from $1$ to $\infty$. []{data-label="fig:Cp-graph"}](Cp-graph.pdf){width=".8\textwidth"} ![By , $2^p/C_p\sim\sqrt{8ep}$ as $p\to\infty$. By , $b_p=b_q$; note here also that $p\in(1,2]\iff q\in[2,\infty)$; by , $b_p\sim(p-1)/2$ as $p\downarrow1$.[]{data-label="fig:graphs"}](graphs.pdf){width="100.00000%"} What if, instead of the condition , one has $p\in(0,1]$? It is easy to see that the inequality holds for $p=1$ with $C_1=2$ (cf. ), which is then the best possible factor, as seen by letting $$\label{eq:p=1} \text{$X=X_{1-b,b}-b$ with $b\downarrow0$.}$$ However, the equality ${\operatorname{\mathsf{E}}}|X-{\operatorname{\mathsf{E}}}X|=2{\operatorname{\mathsf{E}}}|X|$ obtains only if $X{\overset{\operatorname{D}}=}0$; one may also note here that, by part of Theorem \[prop:centring\], $C_{1+}=2=C_1$. As to $p\in(0,1)$, for each such value of $p$ the best possible factor $C_p$ in is $\infty$; indeed, consider $X$ as in . 
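As an independent numerical check of Theorem \[prop:p\], one can simply scan $R(p,\cdot)$ on a fine grid over $[0,\frac12]$. The sketch below (a brute-force scan, not a proof) recovers $C_2=1$, the algebraic value $C_3=\frac1{27}(17+7\sqrt7)$, and the symmetry $C_p^{1/\sqrt{p-1}}=C_q^{1/\sqrt{q-1}}$ for the dual pair $p=3$, $q=3/2$:

```python
import math

def R(p, b):
    """R(p, b) from eq. (eq:R); its maximum over b in [0, 1/2] equals C_p."""
    q = 1.0 / (p - 1.0)
    return (b ** (p - 1) + (1 - b) ** (p - 1)) * (b ** q + (1 - b) ** q) ** (p - 1)

def C(p, n=200000):
    """Grid maximum of R(p, .) over b = i/n, i = 1..n/2 -- a numerical check."""
    return max(R(p, i / n) for i in range(1, n // 2 + 1))
```

For $p\ne2$ the maximum is attained strictly inside $(0,\frac12)$ at $b_p$, so a fine enough grid approximates $C_p$ to many digits.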
Application: Rosenthal-type concentration inequalities for separately Lipschitz functions on product spaces {#concentr} =========================================================================================================== It is well known that for every $p\in[2,\infty)$ there exist finite positive constants $c_1(p)$ and $c_2(p)$, depending only on $p$, such that for any independent real-valued zero-mean r.v.’s $X_1,\dots,X_n$ $$ {\operatorname{\mathsf{E}}}|Y|^p{\leqslant}c_1(p)A_p+c_2(p)B^p,$$ where $Y:=X_1+\dots+X_n$, $A_p:={\operatorname{\mathsf{E}}}|X_1|^p+\dots+{\operatorname{\mathsf{E}}}|X_n|^p$, and $B:=({\operatorname{\mathsf{E}}}X_1^2+\dots+{\operatorname{\mathsf{E}}}X_n^2)^{1/2}$. An inequality of this form was first proved by Rosenthal [@rosenthal], and has since been very useful in many applications. It was generalized to martingales [@burk (21.5)], including martingales in Hilbert spaces [@pin80] and, further, in $2$-smooth Banach spaces [@pin94]. The constant factors $c_1(p)$ and $c_2(p)$ were actually allowed in [@pin80] and [@pin94] to depend on certain freely chosen parameters, which provided for optimal in a certain sense sizes of $c_1(p)$ and $c_2(p)$, for any given positive value of the Lyapunov ratio $A_p/B^p$. Best possible Rosenthal-type bounds for sums of independent real-valued zero-mean r.v.’s were given, under different conditions, by Utev [@utev-extr] and Ibragimov and Sharakhmetov [@ibr-shar97; @ibr-sankhya]. Also for sums of independent real-valued zero-mean r.v.’s $X_1,\dots,X_n$, Lata[ł]{}a [@latala-moments] obtained an expression ${\mathcal{E}}$ in terms of $p$ and the individual distributions of the $X_i$’s such that $a_1{\mathcal{E}}{\leqslant}\|Y\|_p{\leqslant}a_2{\mathcal{E}}$ for some positive absolute constants $a_1$ and $a_2$. 
Given a Rosenthal-type upper bound for real-valued martingales, one can use the Yurinski[ĭ]{} martingale decomposition [@yurinskii] and (say) Theorem \[prop:p\] to obtain a corresponding upper bound on the $p$th absolute *central* moment of the norm of the sum of independent random vectors in an arbitrary separable Banach space; even more generally, one can obtain such a measure-concentration inequality for separately Lipschitz functions on product spaces. To state such a result, let $X_1,\dots,X_n$ be independent r.v.’s with values in measurable spaces ${{\mathfrak{X}}}_1,\dots,{{\mathfrak{X}}}_n$, respectively. Let $g\colon{{\mathfrak{P}}}\to{\mathbb{R}}$ be a measurable function on the product space ${{\mathfrak{P}}}:={{\mathfrak{X}}}_1\times\dots\times{{\mathfrak{X}}}_n$. Let us say (cf. [@bent-isr; @normal]) that $g$ is [*separately Lipschitz*]{} if it satisfies a Lipschitz-type condition in each of its arguments: $$\label{eq:Lip} |g(x_1,\dots,x_{i-1},\tilde x_i,x_{i+1},\dots,x_n) - g(x_1,\dots,x_n)| {\leqslant}\rho_i(\tilde x_i,x_i)$$ for some measurable functions $\rho_i\colon{{\mathfrak{X}}}_i\times{{\mathfrak{X}}}_i\to{\mathbb{R}}$ and all $i\in{\overline{1,n}}$, $(x_1,\dots,x_n)\in{{\mathfrak{P}}}$, and $\tilde x_i\in{{\mathfrak{X}}}_i$. Take now any separately Lipschitz function $g$ and let $$Y:=g(X_1,\dots,X_n).$$ Suppose that the r.v. $Y$ has a finite mean. On the other hand, take any $p\in[2,\infty)$ and suppose that positive constants $c_1(p)$ and $c_2(p)$ are such that for all real-valued martingales $(\zeta_j)_{j=0}^n$ with $\zeta_0=0$ and differences $\xi_i:=\zeta_i-\zeta_{i-1}$ $$\label{eq:mart} {\operatorname{\mathsf{E}}}|\zeta_n|^p{\leqslant}c_1(p)\sum_1^n{\operatorname{\mathsf{E}}}|\xi_i|^p+c_2(p)\Big(\sum_1^n\|{\operatorname{\mathsf{E}}}_{i-1}\xi_i^2\|_\infty\Big)^{p/2},$$ where ${\operatorname{\mathsf{E}}}_j$ denotes the expectation given $\zeta_0,\dots,\zeta_j$. 
Then one has \[cor:concentr\] For each $i\in{\overline{1,n}}$, take any $x_i$ and $y_i$ in ${{\mathfrak{X}}}_i$. Then $$\label{eq:concentr} {\operatorname{\mathsf{E}}}|Y-{\operatorname{\mathsf{E}}}Y|^p {\leqslant}C_p c_1(p)\sum_1^n{\operatorname{\mathsf{E}}}\rho_i(X_i,x_i)^p+c_2(p)\Big(\sum_1^n{\operatorname{\mathsf{E}}}\rho_i(X_i,y_i)^2\Big)^{p/2},$$ where $C_p$ is as in . An example of separately Lipschitz functions $g:{{\mathfrak{X}}}^n\to{\mathbb{R}}$ is given by the formula $$\label{eq:g=sum} g(x_1,\dots,x_n)=\|x_1+\dots+x_n\|$$ for all $x_1,\dots,x_n$ in a separable Banach space $({{\mathfrak{X}}},\|\cdot\|)$. In this case, one may take $\rho_i(\tilde x_i,x_i)\equiv\|\tilde x_i-x_i\|$. Thus, one immediately obtains \[cor:conc-sums\] Let $X_1,\dots,X_n$ be independent random vectors in a Banach space $({{\mathfrak{X}}},\|\cdot\|)$. Let here $Y:=\|X_1+\dots+X_n\|$. For each $i\in{\overline{1,n}}$, take any $x_i$ and $y_i$ in ${{\mathfrak{X}}}$. Then $$\label{eq:sum} {\operatorname{\mathsf{E}}}|Y-{\operatorname{\mathsf{E}}}Y|^p {\leqslant}C_p c_1(p)\sum_1^n{\operatorname{\mathsf{E}}}\|X_i-x_i\|^p+c_2(p)\Big(\sum_1^n{\operatorname{\mathsf{E}}}\|X_i-y_i\|^2\Big)^{p/2}.$$ Particular cases of separately Lipschitz functions more general than the norm of the sum as in were discussed earlier in [@ineqs-largedev11] and [@viniti10 pages 20–23]. For $p=2$, it is obvious that the inequality holds with $c_1(2)=1$ and $c_2(2)=0$, and then the inequalities and do so. Thus, for $p=2$ becomes $$\label{eq:concentr,p=2} {\operatorname{\mathsf{Var}}}Y{\leqslant}\sum_1^n{\operatorname{\mathsf{E}}}\|X_i-x_i\|^2,$$ since $C_2=1$. The inequality was presented in [@viniti10 page 29] and [@pin-sakh Theorem 4], based on an improvement of the method of Yurinskiĭ [@yurinskii]; cf. [@mcdiarmid89; @mcdiarmid98; @bent-isr], [@normal Section 4], and [@pin94 Proposition 2.5]. The proof of Corollary \[cor:concentr\] is based in part on the same kind of improvement. 
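To see the $p=2$ bound ${\operatorname{\mathsf{Var}}}Y{\leqslant}\sum_1^n{\operatorname{\mathsf{E}}}\|X_i-x_i\|^2$ in action, here is a self-contained check (with invented discrete distributions in ${\mathbb{R}}^2$) that computes the variance of $Y=\|X_1+X_2+X_3\|$ exactly by enumeration; since the $x_i$ are arbitrary, we take $x_i={\operatorname{\mathsf{E}}}X_i$:

```python
import itertools, math

# Each X_i takes finitely many values in R^2 with the given probabilities (invented data).
supports = [
    ([( 1.0, 0.0), (-1.0, 0.0)], [0.5, 0.5]),
    ([( 0.0, 2.0), ( 0.0,-1.0)], [0.4, 0.6]),
    ([( 1.0, 1.0), (-2.0, 0.5)], [0.7, 0.3]),
]

def norm(v):
    return math.hypot(v[0], v[1])

# Exact distribution of Y = ||X_1 + X_2 + X_3|| by enumeration of the product space.
EY = EY2 = 0.0
for combo in itertools.product(*[list(zip(vals, probs)) for vals, probs in supports]):
    pr, s = 1.0, [0.0, 0.0]
    for v, q in combo:
        pr *= q
        s[0] += v[0]; s[1] += v[1]
    y = norm(s)
    EY += pr * y
    EY2 += pr * y * y
var_Y = EY2 - EY ** 2

# Right-hand side with the (allowed, arbitrary) centers x_i = E X_i.
rhs = 0.0
for vals, probs in supports:
    mean = [sum(q * v[j] for v, q in zip(vals, probs)) for j in (0, 1)]
    rhs += sum(q * norm((v[0] - mean[0], v[1] - mean[1])) ** 2 for v, q in zip(vals, probs))

assert var_Y <= rhs + 1e-12
```

Here the right-hand side evaluates to $1+2.16+1.9425=5.1025$, and the exact variance of $Y$ falls below it, as the corollary guarantees.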
The case $p=3$ is also of particular importance in applications, especially to Berry–Esseen-type bounds; cf. e.g. [@bolt93 Lemma A1], [@chen-shao05 Lemma 6.3], and [@nonlinear]. It follows from the main result of [@pin80] that holds for $p=3$ with $c_1(3)=1$ and $c_2(3)=3$, whereas, by part  of Theorem \[prop:p\], $C_3<1.316$. Thus, one has an instance of with rather small constant factors: $$ {\operatorname{\mathsf{E}}}|Y-{\operatorname{\mathsf{E}}}Y|^3 {\leqslant}1.316\,\sum_1^n{\operatorname{\mathsf{E}}}\|X_i-x_i\|^3+3\Big(\sum_1^n{\operatorname{\mathsf{E}}}\|X_i-y_i\|^2\Big)^{3/2}.$$ Similarly, the more general inequality holds for $p=3$ with $1.316$ and $3$ in place of $C_p c_1(p)$ and $c_2(p)$. As can be seen from the proof given in Section \[proofs\], both Corollaries \[cor:concentr\] and \[cor:conc-sums\] will hold even if the separately-Lipschitz condition is relaxed to $$\label{eq:LipE} |{\operatorname{\mathsf{E}}}g(x_1,\dots,x_{i-1},\tilde x_i,X_{i+1},\dots,X_n) - {\operatorname{\mathsf{E}}}g(x_1,\dots,x_i,X_{i+1},\dots,X_n)|{\leqslant}\rho_i(\tilde x_i,x_i).$$ Note also that in Corollaries \[cor:concentr\] and \[cor:conc-sums\] the r.v.’s $X_i$ do not have to be zero-mean, or even to have any definable mean; at that, the arbitrarily chosen $x_i$’s and $y_i$’s may act as the centers, in some sense, of the distributions of the corresponding $X_i$’s. Other inequalities for the distributions of separately Lipschitz functions on product spaces were given in [@bent-isr; @normal; @bahr-esseen]. Clearly, the separate-Lipschitz (sep-Lip) condition is easier to check than a joint-Lipschitz one. Also, sep-Lip (especially in the relaxed form ) is more generally applicable. On the other hand, when a joint-Lipschitz condition is satisfied, one can generally obtain better bounds. 
Literature on the concentration of measure phenomenon, almost all of it for joint-Lipschitz settings, is vast; let us mention here only [@ledoux-tala; @ledoux_book; @lat-olesz; @bouch-etal; @ledoux-olesz]. Proofs ====== It is well known that any zero-mean probability distribution on ${\mathbb{R}}$ is a mixture of zero-mean distributions on sets of at most two elements; see e.g.  [@disintegr Proposition 3.18]. So, there exists a Borel probability measure $\mu$ on the set $$S:={\mathbb{R}}\times(0,1/2]$$ such that $$\label{eq:g} {\operatorname{\mathsf{E}}}g(X-{\operatorname{\mathsf{E}}}X)=\int_S{\operatorname{\mathsf{E}}}g({\lambda}X_{1-b,b})\,\mu({{\operatorname{d}}}{\lambda}\times{{\operatorname{d}}}b) $$ for all nonnegative Borel functions $g$; the measure $\mu$ depends on the distribution of the r.v. $X-{\operatorname{\mathsf{E}}}X$. Letting now $$\label{eq:S_0} S_0:=({\mathbb{R}}\setminus\{0\})\times(0,1/2]$$ and using the condition $f(0)=0$, one has $$\begin{aligned} {\operatorname{\mathsf{E}}}f(X-{\operatorname{\mathsf{E}}}X)&=\int_S{\operatorname{\mathsf{E}}}f({\lambda}X_{1-b,b})\,\mu({{\operatorname{d}}}{\lambda}\times{{\operatorname{d}}}b) \notag \\ &=\int_{S_0}{\operatorname{\mathsf{E}}}f({\lambda}X_{1-b,b})\,\mu({{\operatorname{d}}}{\lambda}\times{{\operatorname{d}}}b) \notag \\ &{\leqslant}{{\tilde{c}}}_f \int_{S_0}{\operatorname{\mathsf{E}}}f({\lambda}X_{1-b,b}+{\operatorname{\mathsf{E}}}X)\,\mu({{\operatorname{d}}}{\lambda}\times{{\operatorname{d}}}b) \label{eq:le tc_f} \\ &{\leqslant}{{\tilde{c}}}_f \int_S{\operatorname{\mathsf{E}}}f({\lambda}X_{1-b,b}+{\operatorname{\mathsf{E}}}X)\,\mu({{\operatorname{d}}}{\lambda}\times{{\operatorname{d}}}b) \label{eq:le int S} \\ &={{\tilde{c}}}_f {\operatorname{\mathsf{E}}}f\big((X-{\operatorname{\mathsf{E}}}X)+{\operatorname{\mathsf{E}}}X\big)={{\tilde{c}}}_f {\operatorname{\mathsf{E}}}f(X), \notag \end{aligned}$$ where $$\begin{aligned} 
{{\tilde{c}}}_f&:=\sup\{{{\tilde{\rho}}}_f({\lambda},b,t)\colon({\lambda},b)\in S_0, t\in{\mathbb{R}}\} \quad\text{and} \label{eq:tc_f} \\ {{\tilde{\rho}}}_f({\lambda},b,t)&:=\frac{{\operatorname{\mathsf{E}}}f({\lambda}X_{1-b,b})}{{\operatorname{\mathsf{E}}}f\big({\lambda}(X_{1-b,b}-t)\big)}, \label{eq:trho} \end{aligned}$$ so that $$\label{eq:tc_f=c_f} {{\tilde{c}}}_f=c_f.$$ Now the inequality in follows from the above multi-line display and , and (together with and ) also shows that $c_f$ is the best possible constant factor in . It is straightforward to check the symmetry $$\label{eq:R-symm} R(p,b)^{1/\sqrt{p-1}}=R(q,b)^{1/\sqrt{q-1}}$$ for all $b\in[0,1]$, where $q$ is dual to $p$. So, it remains to consider $p\in(1,2)$. Also assume that $b\in(0,1/2)$ and introduce $$\label{eq:r,x,z} r:=p-1,\quad x:=\frac b{1-b},\quad\text{and}\quad z:=-\frac{\ln x}r,$$ so that $$\text{$r\in(0,1)$,\quad $x\in(0,1)$,\quad and\quad $z\in(0,\infty)$. }$$ Now introduce $$\begin{aligned} D_1(x)&:=D_1(r,x):=(1-b)\frac{x^r+1}{x^{r-1}-1}\,{\operatorname{\partial}_{b}{\ln R(p,b)}} =r - \frac{(x-x^{1/r}) (1 + x^r)}{(x^r-x) (1 + x^{1/r})} \label{eq:D1} \\ \intertext{and} D_2(x)&:=D_2(r,x):=r x^3 (1 + x^{1/r})^2 (x^{r-1}-1)^2\,D_1'(x), \label{eq:D2}\end{aligned}$$ so that $D_1(x)$ and $D_2(x)$ are equal in sign to ${\operatorname{\partial}_{b}{\ln R(p,b)}}$ and $D_1'(x)$, respectively. One can verify the identity $$\label{eq:D2=} D_2(x)e^{(1 + r + r^2)z}/2=D_{21}(z) + (1 - r)D_{22}(z),$$ where $$\begin{aligned} D_{21}(z)&:=r^2 {\operatorname{sh}}((1 - r) z) + {\operatorname{sh}}(r(1 - r) z) - r {\operatorname{sh}}((1 - r^2) z), \\ D_{22}(z)&:=h(z)-h(rz), \quad h(u):={\operatorname{sh}}ru-r{\operatorname{sh}}u; \end{aligned}$$ we use ${\operatorname{sh}}$ and ${\operatorname{ch}}$ for $\sinh$ and $\cosh$. 
Note that $h'(u)=r({\operatorname{ch}}ru-{\operatorname{ch}}u)<0$ for $u>0$ and hence $$D_{22}(z)<0.$$ Next, $$\frac{D_{21}'(z)}{(1 - r) r } = \big({\operatorname{ch}}[(1 - r) r z] - {\operatorname{ch}}[(1 - r^2) z]\big) +r \big({\operatorname{ch}}[(1 - r) z] - {\operatorname{ch}}[(1 - r^2) z]\big) <0,$$ since $(1 - r) r < 1 - r < 1 - r^2$. So, $D_{21}(z)$ is decreasing (in $z>0$) and, obviously, $D_{21}(0+)=0$. Hence, $D_{21}(z)<0$ as well. Thus, by , $D_2(x)<0$, which shows that $D_1'(x)<0$ and $D_1(x)$ is decreasing in $x\in(0,1)$. Moreover, $D_1(0+)=r>0>r-1/r=D_1(1-)$. It follows, in view of , that $D_1(x)$ changes in sign exactly once, from $+$ to $-$, as $x$ increases from $0$ to $1$. Equivalently, by , ${\operatorname{\partial}_{b}{\ln R(p,b)}}$ changes in sign exactly once, from $+$ to $-$, as $b$ increases from $0$ to $1/2$. This completes the proof of Proposition \[lem:\].   **** To begin the proof of part of Theorem \[prop:p\], note that the last two inequalities in follow by the obvious symmetry $$\label{eq:Rsymm} R(p,b)=R(p,1-b) \quad\text{for all}\ b\in[0,1]$$ and Proposition \[lem:\]. Next, in view of the definition of $C_p$ in , inequality is a special case of . Moreover, by the definition of ${{\tilde{\rho}}}$ in and the homogeneity of the power function $|\cdot|^p$, $$\label{eq:trho=} {{\tilde{\rho}}}_{|\cdot|^p}({\lambda},b,t)=\rho_p(b,t):={{\tilde{\rho}}}_{|\cdot|^p}(1,b,t)=\frac{{\operatorname{\mathsf{E}}}|X_{1-b,b}|^p}{{\operatorname{\mathsf{E}}}|X_{1-b,b}-t|^p}$$ for all $({\lambda},b)\in S_0$ and $t\in{\mathbb{R}}$, where $S_0$ is as in . Next, the denominator ${\operatorname{\mathsf{E}}}|X_{1-b,b}-t|^p$ decreases in $t\in(-\infty,b-1]$, increases in $t\in[b,\infty)$, and attains its minimum over all $t\in[b-1,b]$ (and thus over all $t\in{\mathbb{R}}$) only at $t=t_b$, where $t_b$ is as in . 
So, $$\label{eq:max trho=} \max_{{\lambda}\in{\mathbb{R}}\setminus\{0\},\,t\in{\mathbb{R}}}{{\tilde{\rho}}}_{|\cdot|^p}({\lambda},b,t)=\max_{t\in{\mathbb{R}}}\rho_p(b,t)=\rho_p(b,t_b)=R(p,b)$$ for all $b\in(0,1/2]$, in view of . Now , , and yield $$c_{|\cdot|^p}=\sup_{b\in(0,1/2]}R(p,b)=\sup_{b\in[0,1]}R(p,b). $$ Thus, the proof of and all of part of Theorem \[prop:p\] is complete. **** That the equality in obtains under either of the conditions (a) or (b) in part of Theorem \[prop:p\] is trivial. If the condition (c) of part holds with ${\lambda}=0$, then $X{\overset{\operatorname{D}}=}0$, and again the equality in is trivial. If now (c) holds with some ${\lambda}\in{\mathbb{R}}\setminus\{0\}$ – so that $X{\overset{\operatorname{D}}=}{\lambda}(X_{1-b_p,b_p}-t_{b_p})$, then , , and imply $$ C_p=R(p,b_p)=\rho_p(b_p,t_{b_p})=\frac{{\operatorname{\mathsf{E}}}|X_{1-b_p,b_p}|^p}{{\operatorname{\mathsf{E}}}|X_{1-b_p,b_p}-t_{b_p}|^p} =\frac{{\operatorname{\mathsf{E}}}|X-{\operatorname{\mathsf{E}}}X|^p}{{\operatorname{\mathsf{E}}}|X|^p},$$ whence the equality in follows. Thus, for the equality in to hold it is sufficient that one of the conditions (a), (b), or (c) be satisfied. Let us now verify the necessity of one of these three conditions. W.l.o.g. condition (a) fails to hold, so that ${\operatorname{\mathsf{E}}}|X|^p<\infty$. If now $p=2$ then $C_p=C_2=1$, and the necessity of the condition ${\operatorname{\mathsf{E}}}X=0$ for the equality in is obvious. It remains to consider the case when $p\ne2$ and ${\operatorname{\mathsf{E}}}|X|^p<\infty$. Suppose that one has the equality in and let $f=|\cdot|^p$. Then, by the definition of $C_p$ in and the equality , equalities take place in and . In view of the condition ${\operatorname{\mathsf{E}}}|X|^p<\infty$, the integrals in and are both finite and equal to each other. So, the equality in means that $|{\operatorname{\mathsf{E}}}X|^p\,\mu\big(\{0\}\times(0,1/2]\big)=0$. 
If now $\mu\big(\{0\}\times(0,1/2]\big)\ne0$ then ${\operatorname{\mathsf{E}}}X=0$, and the equality in takes the form ${\operatorname{\mathsf{E}}}|X|^p=C_p{\operatorname{\mathsf{E}}}|X|^p$; but, by part of Theorem \[prop:p\] (to be proved a bit later), the condition $p\ne2$ implies $C_p>1$, which yields ${\operatorname{\mathsf{E}}}|X|^p=0$, and so, $X{\overset{\operatorname{D}}=}{\lambda}(X_{1-b_p,b_p}-t_{b_p})$ for ${\lambda}=0$. It remains to consider the case when $p\ne2$, ${\operatorname{\mathsf{E}}}|X|^p<\infty$, and $\mu\big(\{0\}\times(0,1/2]\big)=0$. Then $\mu(S_0)=\mu(S)=1$, and the equality in (again with $f=|\cdot|^p$), together with and , will imply that ${\operatorname{\mathsf{E}}}|{\lambda}X_{1-b,b}|^p=C_p{\operatorname{\mathsf{E}}}|{\lambda}X_{1-b,b}+{\operatorname{\mathsf{E}}}X|^p$ for $\mu$-almost all $({\lambda},b)\in S_0$. In view of , , Proposition \[lem:\], and , this in turn yields $$\rho_p(b,-{\operatorname{\mathsf{E}}}X/{\lambda})=R(p,b_p){\geqslant}R(p,b)=\rho_p(b,t_b)$$ for $\mu$-almost all $({\lambda},b)\in S_0$. Now recall that for each $b\in(0,1/2]$ the maximum of $\rho_p(b,t)$ in $t\in{\mathbb{R}}$ is attained only at $t=t_b$. It follows that for $\mu$-almost all $({\lambda},b)\in S_0$ one has (i) $R(p,b_p)=R(p,b)$ and hence, by Proposition \[lem:\], $b=b_p$ and (ii) $-{\operatorname{\mathsf{E}}}X/{\lambda}=t_b=t_{b_p}$ or, equivalently, ${\lambda}=-{\operatorname{\mathsf{E}}}X/t_b=-{\operatorname{\mathsf{E}}}X/t_{b_p}=:{\lambda}_p$. Therefore, $({\lambda},b)=({\lambda}_p,b_p)$ for $\mu$-almost all $({\lambda},b)\in S_0$ and thus for $\mu$-almost all $({\lambda},b)\in S$. Now shows that $X+{\lambda}_p t_{b_p}=X-{\operatorname{\mathsf{E}}}X{\overset{\operatorname{D}}=}{\lambda}_p X_{1-b_p,b_p}$ or, equivalently, $X{\overset{\operatorname{D}}=}{\lambda}_p(X_{1-b_p,b_p}-t_{b_p})$, which completes the proof of part of Theorem \[prop:p\]. 
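Stepping back to the start of this proof: the representation of a zero-mean law as a mixture of zero-mean laws on at most two points can be made fully constructive for finitely supported distributions. The greedy pairing below is our own illustrative sketch, not the construction behind [@disintegr Proposition 3.18]:

```python
def two_point_mixture(pmf, tol=1e-12):
    """Decompose a finite zero-mean pmf {value: prob} into a mixture of
    zero-mean laws supported on at most two points.
    Returns a list of (weight, component_pmf)."""
    assert abs(sum(x * p for x, p in pmf.items())) < tol  # zero mean
    comps = []
    if pmf.get(0.0, 0.0) > tol:
        comps.append((pmf[0.0], {0.0: 1.0}))
    neg = [[x, p] for x, p in sorted(pmf.items()) if x < 0]
    pos = [[x, p] for x, p in sorted(pmf.items()) if x > 0]
    while neg and pos:
        (a, pa), (b, pb) = neg[0], pos[0]
        u, v = -a, b
        # zero-mean two-point law on {a, b}: P(a) = v/(u+v), P(b) = u/(u+v);
        # take the largest mixture weight that exhausts one of the two atoms
        w = min(pa * (u + v) / v, pb * (u + v) / u)
        comps.append((w, {a: v / (u + v), b: u / (u + v)}))
        neg[0][1] -= w * v / (u + v)
        pos[0][1] -= w * u / (u + v)
        if neg[0][1] <= tol: neg.pop(0)
        if pos[0][1] <= tol: pos.pop(0)
    return comps

# Reconstruction check on an invented zero-mean law on four points.
pmf = {-2.0: 0.25, 0.0: 0.15, 0.5: 0.2, 1.0: 0.4}
comps = two_point_mixture(pmf)
recon = {}
for w, c in comps:
    for x, p in c.items():
        recon[x] = recon.get(x, 0.0) + w * p
assert all(abs(recon.get(x, 0.0) - p) < 1e-9 for x, p in pmf.items())
assert abs(sum(w for w, _ in comps) - 1.0) < 1e-9
```

Each iteration exhausts at least one atom, so the loop terminates after at most one step per atom, and every component has zero mean by construction.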
**** Part of Theorem \[prop:p\] follows immediately by the symmetry of $R(p,b)$ in $p$ and the definitions of $C_p$ and $b_p$ in and Proposition \[lem:\], respectively. **** As in , let $r:=p-1$, so that $r\to\infty$. For a moment, take any $k\in(0,\infty)$ and choose $b=\frac kr$. Then, by , $x\sim b=\frac kr$, and now yields $D_1(r,x)\sim(1-\frac1{2k})r$, whence $D_1(r,x)$ is eventually (i.e., for all large enough $r$) positive or negative according as $k$ is greater or less than $\frac12$. So, again by , for any real $\check k$ and $\hat k$ such that $0<\check k<\frac12<\hat k$, eventually ${\operatorname{\partial}_{b}{R(p,b)}}\big|_{b=\check k/r}<0<{\operatorname{\partial}_{b}{R(p,b)}}\big|_{b=\hat k/r}$. It follows by Proposition \[lem:\] that $$\label{eq:b_p sim} b_p\sim\frac1{2r},$$ that is, $b_p={\kappa}/r$ for some ${\kappa}$ varying with $r$ so that ${\kappa}\to1/2$. Hence, $$\label{eq:factor1} (1-b_p)^r+b_p^r=(1-{\kappa}/r)^r+({\kappa}/r)^r\to e^{-1/2}.$$ Next, $b_p^{1/r}=({\kappa}/r)^{1/r}=\exp\big(\frac1r\,\ln\frac{\kappa}r\big)=1+\frac1r\,\ln\frac{\kappa}r+O\big(\big(\frac1r\,\ln\frac{\kappa}r\big)^2\big)$ and $(1-b_p)^{1/r}=1+O(1/r^2)$, whence $$\begin{aligned} \big((1-b_p)^{1/r}+b_p^{1/r}\big)^r &=\Big[2\Big(1+\frac1{2r}\,\ln\frac{\kappa}r+O\Big(\frac{\ln^2 r}{r^2}\Big)\Big)\Big]^r \\ &=\Big[2\exp\Big\{\frac1{2r}\,\ln\frac{\kappa}r+o\Big(\frac1r\Big)\Big\}\Big]^r \sim2^r\sqrt{\frac{\kappa}r}\sim\frac{2^p}{\sqrt{8p}}. \end{aligned}$$ Recalling now , , and , one obtains . **** Take any $b\in(0,1/2)$. Then $$d_{2,1}(r):={\operatorname{\partial}_{r}{}}{\operatorname{\partial}_{r}{}}\ln\big(b^r + (1 - b)^r\big)=\frac{(1-b)^r b^r}{\big(b^r + (1 - b)^r\big)^2}\, \ln^2\frac{1-b}b>0$$ for all $r>0$. Moreover, $d_{2,2}(r):={\operatorname{\partial}_{r}{}}{\operatorname{\partial}_{r}{}}\ln\big[\big(b^{1/r} + (1 - b)^{1/r}\big)^r\big]=d_{2,1}(1/r)/r^3>0$ for all $r>0$. 
So, ${\operatorname{\partial}_{p}{}}{\operatorname{\partial}_{p}{}}\ln R(p,b)=d_{2,1}(p-1)+d_{2,2}(p-1)>0$, which shows that $R(p,b)$ is strictly log-convex in $p\in(1,\infty)$. Also, ${\operatorname{\partial}_{p}{}}\ln R(p,b)\big|_{p=2}=0$, so that $R(p,b)$ decreases in $p\in(1,2]$ and increases in $p\in[2,\infty)$, with $R(2,b)=1$. Therefore, in view of (note in particular the attainment of the supremum there), $C_p$ is strictly log-convex and hence continuous in $p\in(1,\infty)$; it also follows that $C_p$ decreases in $p\in(1,2]$ and increases in $p\in[2,\infty)$, with $C_2=1$. Next, shows that $C_p\to\infty$ as $p\to\infty$. Letting now $p\downarrow1$ and using , one has $q\to\infty$ and hence $C_p=C_q^{1/(q-1)}=\big(2^q/\sqrt{(8+o(1))eq}\,\big)^{1/(q-1)}\to2$. This completes the proof of part of Theorem \[prop:p\]. ****The proof of part of Theorem \[prop:p\] is straightforward, in view of , Proposition \[lem:\], , and .   The proof is based on ideas presented in [@viniti10; @pin-sakh] concerning the use of the mentioned Yurinski[ĭ]{} martingale decomposition; similar ideas were also used e.g. in [@bent-isr; @normal; @bahr-esseen]. Consider the martingale defined by the formula $\zeta_j:={\operatorname{\mathsf{E}}}_j(Y-{\operatorname{\mathsf{E}}}Y)$ for $j\in{\overline{0,n}}$, where ${\operatorname{\mathsf{E}}}_j$ stands for the conditional expectation given the ${\sigma}$-algebra generated by $(X_1,\dots,X_j)$, with ${\operatorname{\mathsf{E}}}_0:={\operatorname{\mathsf{E}}}$, and then consider the differences $\xi_i:=\zeta_i-\zeta_{i-1}$. Next, for each $i\in{\overline{1,n}}$ introduce the r.v.  $$\eta_i :={\operatorname{\mathsf{E}}}_i(Y - \tilde Y_i),$$ where $\tilde Y_i := g(X_1,\dots,X_{i-1},x_i,X_{i+1},\dots,X_n)$, so that $\xi_i=\eta_i-{\operatorname{\mathsf{E}}}_{i-1}\eta_i$, since the r.v.’s $X_1,\dots,X_n$ are independent. 
Also, in view of or , for all $i\in{\overline{1,n}}$ and $z_i\in{{\mathfrak{X}}}_i$ one has $|\eta_i|{\leqslant}\rho_i(X_i,z_i)$, whence, by , $$\begin{aligned} {\operatorname{\mathsf{E}}}_{i-1}|\xi_i|^r={\operatorname{\mathsf{E}}}_{i-1}|\eta_i-{\operatorname{\mathsf{E}}}_{i-1}\eta_i|^r{\leqslant}C_r{\operatorname{\mathsf{E}}}_{i-1}|\eta_i|^r &{\leqslant}C_r{\operatorname{\mathsf{E}}}_{i-1}\rho_i(X_i,z_i)^r \\ &=C_r{\operatorname{\mathsf{E}}}\rho_i(X_i,z_i)^r \end{aligned}$$ for all $r\in(1,\infty)$. Now follows from , since $\zeta_n=Y-{\operatorname{\mathsf{E}}}Y$ and $C_2=1$.
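As a numerical postscript to the proofs above (ours, not the authors'): the closed form of $R(p,b)$ is not restated in this excerpt, so the expression used below, $\ln R(p,b)=\ln\big(b^r+(1-b)^r\big)+r\ln\big(b^{1/r}+(1-b)^{1/r}\big)$ with $r=p-1$, is reconstructed from the two logarithmic terms differentiated in the proof of log-convexity and should be treated as an assumption. Under that reconstruction, the stated properties of $R$ and $C_p$ check out numerically:

```python
import math

def R(p, b):
    # Assumed closed form, reconstructed from the terms behind d_{2,1} and d_{2,2}.
    r = p - 1.0
    return (b**r + (1 - b)**r) * (b**(1 / r) + (1 - b)**(1 / r))**r

# R(2, b) = 1 for all b
assert all(abs(R(2.0, b) - 1.0) < 1e-12 for b in (0.1, 0.25, 0.5, 0.9))

# symmetry R(p,b)^{1/sqrt(p-1)} = R(q,b)^{1/sqrt(q-1)} for the dual exponent q
p, b = 3.0, 0.2
q = p / (p - 1.0)
assert abs(R(p, b)**(1 / math.sqrt(p - 1)) - R(q, b)**(1 / math.sqrt(q - 1))) < 1e-12

# C_3 = sup_b R(3,b): a grid search gives about 1.3156, consistent with C_3 < 1.316
grid = [i / 20000.0 for i in range(1, 10001)]          # b in (0, 1/2]
vals = [R(3.0, b) for b in grid]
C3 = max(vals)
assert 1.315 < C3 < 1.316

# R(3, .) increases and then decreases on (0, 1/2], as Proposition [lem:] asserts
i_max = vals.index(C3)
assert all(vals[i] <= vals[i + 1] + 1e-9 for i in range(i_max))
assert all(vals[i] >= vals[i + 1] - 1e-9 for i in range(i_max, len(vals) - 1))
```

That the reconstructed formula reproduces the exact $p\leftrightarrow q$ symmetry and the bound $C_3<1.316$ quoted earlier lends some confidence to the reconstruction.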
--- abstract: 'Over the past decade, a large number of jet substructure observables have been proposed in the literature, and explored at the LHC experiments. Such observables attempt to utilize the internal structure of jets in order to distinguish those initiated by quarks, gluons, or by boosted heavy objects, such as top quarks and $W$ bosons. This report, originating from and motivated by the BOOST2013 workshop, presents original particle-level studies that aim to improve our understanding of the relationships between jet substructure observables, their complementarity, and their dependence on the underlying jet properties, particularly the jet radius and jet transverse momentum. This is explored in the context of quark/gluon discrimination, boosted $W$ boson tagging and boosted top quark tagging.' bibliography: - 'boost2013\_report.bib' date: 'Received: date / Accepted: date' subtitle: 'Report of BOOST2013, hosted by the University of Arizona, 12$^{th}$-16$^{th}$ of August 2013.' title: Towards an Understanding of the Correlations in Jet Substructure --- Introduction {#sec:intro} ============ Monte Carlo Samples {#sec:samples} =================== Jet Algorithms and Substructure Observables {#sec:algssubstructure} =========================================== Multivariate Analysis Techniques {#sec:multivariate} ================================ Quark-Gluon Discrimination {#sec:qgtagging} ========================== Boosted $W$-Tagging {#sec:wtagging} =================== Top Tagging {#sec:toptagging} =========== Summary & Conclusions {#sec:conclusions} =====================
--- abstract: 'Motivated by modern applications in which one constructs graphical models based on a very large number of features, this paper introduces a new class of cluster-based graphical models. Unlike standard graphical models, variable clustering is applied as an initial step for reducing the dimension of the feature space. We employ model assisted clustering, in which the clusters contain features that are similar to the same unobserved latent variable. Two different cluster-based Gaussian graphical models are considered: the latent variable graph, corresponding to the graphical model associated with the unobserved latent variables, and the cluster-average graph, corresponding to the vector of features averaged over clusters. We derive estimates tailored to these graphs, with the goal of pattern recovery under false discovery rate (FDR) control. Our study reveals that likelihood based inference for the latent graph is analytically intractable, and we develop alternative estimation and inference strategies. We replace the likelihood of the data by appropriate empirical risk functions that allow for valid inference in both graphical models under study. Our main results are Berry-Esseen central limit theorems for the proposed estimators, which are proved under weaker assumptions than those employed in the existing literature on Gaussian graphical model inference. As a corollary of the main results, we make explicit the implications of the asymptotic approximations on graph recovery under FDR control, and show when it can be controlled asymptotically. Our analysis takes into account the uncertainty induced by the initial clustering step. We find that in the model-assisted clustering framework, the errors induced by clustering are asymptotically ignorable in the follow-up analysis, under no further restrictions on the parameter space for which inference is valid. 
The theoretical properties of the proposed procedures are verified on simulated data and an fMRI data analysis.' author: - 'Carson Eisenach [^1] Florentina Bunea [^2] Yang Ning[^3] Claudiu Dinicu[^4]' bibliography: - 'gblock.bib' title: 'High-Dimensional Inference for Cluster-Based Graphical Models' --- **Keywords:** Berry-Esseen bound, Graphical model, High-dimensional inference, Clustering, False discovery rate Introduction {#sec:intro} ============ High dimensional graphical models have become increasingly popular, over the last several decades, for understanding independence and conditional independence relationships among components of high dimensional random vectors. The challenges posed by the estimation and statistical analysis of a graphical model with many more nodes than the number of observations led to renewed interest in these models, and to a very large body of literature, including [@Meinshausen06; @Yuan07; @Friedman08; @verzelen2008gaussian; @Lam09; @rothman2008sparse; @peng2009partial; @Ravikumar11; @Yuan10; @Cai11a; @Sun12b; @liu2012high; @xuezou2012; @ning2013high; @jankova2014confidence; @cai2016estimating; @jankova2017honest; @fan2017high], to give only an incomplete list. However, in practice, when the dimension is very large and the sample size is small, the dependency among variables may become very weak, if it exists, and difficult to detect without auxiliary information. Moreover, when the number of variables is in the tens of thousands, it is difficult to form an opinion of their overall dependency structure, even at the visual level, from a graph estimated via a graphical model. A solution to these issues is to employ an initial dimension reduction procedure on the very high dimensional vector. 
For example, in neuroscience applications, a typical functional magnetic resonance image (fMRI) consists of blood-oxygen-level-dependent (BOLD) activities measured at 200,000+ voxels of the brain, over a period of time. Instead of analyzing voxel-level data directly, scientists routinely cluster voxels into several regions of interest (ROI) with homogeneous functions based on the domain knowledge, and then carry out the analysis at the ROI-level. In this example, the group structure of variables may boost the dependency signals, in the context of graphical models. Similar pre-processing steps are used in other application domains, such as genomics, finance and economics. Motivated by a rich set of applications, we consider variable clustering as the initial dimension reduction step applied to the observed vector $\bX =: (X_1, \ldots, X_d) \in \RR^d$. To the best of our knowledge, very little is known about the effect of clustering on down-stream analysis and, consequently, on the induced graphical models. Our contribution is the provision of a framework that allows for such an analysis. We introduce cluster-based graphical models and show how they can be estimated, under FDR control. The model is built on the assumption that the observed variables $\bX =(X_1, \ldots, X_d) \in \RR^d$ can be partitioned into $K$ unknown clusters $G^* = \{{G^*_{1}}, \ldots, {G^*_{K}}\}$ such that variables in the same cluster share the same behavior. Following the intuition behind the widely-used $K$-means type procedures, we define a population-level cluster as a group of variables that are noise corrupted versions of a hub-like variable, which is not directly observable, and treated as a latent factor. Formally, we assume there exists a latent random vector $\bZ \in \RR^K$, with mean zero and covariance matrix $\Cov(\bZ) = {\Cb_{ }^*}$, such that $$\label{eqn:g_latent_model} \bX=\Ab \bZ+\bE,$$ for a zero mean error vector $\bE$ with independent entries. 
The entries of the $d \times K$ matrix $\Ab$ are given by $A_{jk}=\II\{j\in G^*_{k}\}$. A cluster of variables consists of those components of $\bX$ with indices in the same ${G^*_{k}}$. We denote $\Cov(\bE)={\bGamma^*}$, a diagonal matrix with entries $\Gamma^*_{jj} ={\gamma_{j}^*}$ for any $1\leq j\leq d$. We also assume that the mean-zero noise $\bE$ is independent of $\bZ$. The clusters are uniquely defined by model (\[eqn:g\_latent\_model\]) provided that the smallest cluster contains at least two variables and that ${\Cb_{ }^*}$ is strictly positive definite, as shown in [@Bunea2016a], and this result holds irrespective of distributional assumptions. To keep the presentation focused, in this work we assume that $\bZ\sim \cN(0,{\Cb_{ }^*})$ and $\bE \sim \cN(0,{\bGamma^*})$, which implies $\bX \sim \cN(0,{\bSigma_{ }^*})$ with ${\bSigma_{ }^*}=\Ab{\Cb_{ }^*}\Ab^T+{\bGamma^*}$. In this context we consider two related, but different, graphical models: - The *latent variable graph*, associated with the sparsity pattern of the precision matrix $$\label{teta} \bTheta^*:=\Cb^{*-1}$$ of the Gaussian vector $\bZ \in \RR^K$. The latent variable graph encodes conditional independencies (CI’s) among the unobservable latent variables $\bZ$. - The *cluster-average graph*, associated with the sparsity pattern of the precision matrix $$\label{omega} \bOmega^*:=\bS^{*-1},$$ where ${\bS_{ }^*}$ is the covariance matrix of $\bar{\bX} \in \RR^{K}$, and $\bar{\bX} =: (\bar{X}_1, \ldots, \bar{X}_K)$ is the within cluster average given by $\bar{X}_k =: \frac{1}{|{G^*_{k}}|}\sum_{i \in {G^*_{k}}} X_i$. The cluster-average graph encodes CI’s among averages of observable random variables. In particular, we have $${\bS_{ }^*}= {\Cb_{ }^*}+ \bar{\bGamma^*},$$ where $\bar {\bGamma^*}=\textrm{diag}(\bar\gamma^*_{1},...,\bar\gamma^*_K)$ with $\bar\gamma^*_k=\frac{1}{|G_k^*|^2}\sum_{j\in G_k^*}\gamma_j^*$. 
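The identity ${\bS_{ }^*}= {\Cb_{ }^*}+ \bar{\bGamma^*}$ is easy to confirm numerically; the following dependency-free sketch uses invented toy values for $d$, $K$, the clusters, $\Cb^*$ and $\bGamma^*$:

```python
# Toy instance of the latent model: d = 5 features, K = 2 clusters (all values invented).
G = [[0, 1, 2], [3, 4]]                       # clusters G*_1, G*_2 (0-indexed)
d, K = 5, 2
C = [[1.0, 0.3], [0.3, 2.0]]                  # C* = Cov(Z), positive definite
gamma = [0.5, 0.4, 0.6, 0.7, 0.3]             # diagonal of Gamma*

A = [[1.0 if j in G[k] else 0.0 for k in range(K)] for j in range(d)]  # A_{jk} = 1{j in G*_k}

# Sigma* = A C* A^T + Gamma*
Sigma = [[sum(A[i][k] * C[k][l] * A[j][l] for k in range(K) for l in range(K))
          + (gamma[i] if i == j else 0.0)
          for j in range(d)] for i in range(d)]

# S* = Cov(bar X), where bar X_k averages X over cluster G*_k
S = [[sum(Sigma[i][j] for i in G[k] for j in G[l]) / (len(G[k]) * len(G[l]))
      for l in range(K)] for k in range(K)]

# Claimed identity: S* = C* + bar Gamma*, with bar gamma*_k = |G*_k|^{-2} sum_{j in G*_k} gamma_j
gamma_bar = [sum(gamma[j] for j in G[k]) / len(G[k]) ** 2 for k in range(K)]
for k in range(K):
    for l in range(K):
        target = C[k][l] + (gamma_bar[k] if k == l else 0.0)
        assert abs(S[k][l] - target) < 1e-12
```

Since $\bS^*$ differs from $\Cb^*$ by a diagonal matrix, the precision matrices $\bTheta^*=\Cb^{*-1}$ and $\bOmega^*=\bS^{*-1}$ generally differ entrywise, which is why neither graph can serve as a proxy for the other.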
Although both graphs correspond to vectors of dimension $K$, possibly much lower than $d$, they are in general different, as the sparsity patterns of $\bTheta^*$ and $\bOmega^*$ will typically differ, and have different interpretations. It would therefore be misleading to use one as a proxy for the other when drawing conclusions in practice. For instance, in the neuroscience example, if we interpret each latent variable as the function of a ROI, the latent variable graph encodes the CI relationships between functions, which may be one question of scientific interest. The relationship between $\bTheta^*$ and $\bOmega^*$ shows that this question will typically not be answered by studying the cluster-average graph, although that may be tempting to do. On the other hand, the cluster-average graph may be of independent interest, as it encodes CI relations among the average signals within each ROI. Since the two cluster-based graphical models introduced above can both be of interest in a large array of applications, we provide inferential tools for each of them in this work. We assume that we observe $n$ i.i.d. copies $\bX_1, \ldots, \bX_n$ of $\bX$. The focus of our work is on post-clustering and post-regularization inference for these two sparse precision matrices, with the ultimate goal of sparsity pattern recovery under FDR control. Inference for the entries of a Gaussian precision matrix has received a large amount of attention in the past few years, most notably post-regularization inference, for instance [@Ren2013; @Zhang2014; @jankova2014confidence; @gu2015local; @barber2015rocket; @jankova2017honest; @javanmard2013confidence; @van2013asymptotically; @ning2017general; @cai2017confidence; @neykov2015unified]. 
These works generalize to the high-dimensional setup the classical ideas of one-step estimation [@bickel1975one], by first constructing a sparse estimator of the precision matrix, via regularization, and then building de-sparsified updates that are asymptotically normal. The effect of the initial regularization step is controlled in the second step, and inference after regularization becomes valid. Whereas we will also consider a similar estimation strategy, we differ from the existing literature in important ways. In our work, we add another layer of data-dependent dimension reduction, via clustering, and provide a framework within which the variability induced by clustering can be controlled. Even after controlling for the clustering variability, we note that the existing procedures for estimation and, especially, post-regularization inference in Gaussian graphical models are not immediately applicable to our problem for the following reasons: (1) They are developed for variables that can be observed directly. From this perspective, they could, in principle, be applied to the cluster-average graph, but are not directly extendable to the latent graph; (2) To the best of our knowledge, all existing methods for precision matrix inference require the largest eigenvalue of the corresponding covariance matrix to be upper bounded by a constant. Such an assumption implies, in turn, that the Euclidean norm of each row of the covariance matrix is bounded, which reduces significantly the parameter space for which inference is valid. The assumption holds, for instance, when the number of variables is bounded, or when the entries of each row are appropriately small. To overcome these limitations, we take a different approach in this work, that allows us to lift unpleasant technical conditions associated with other procedures, while maintaining the validity of inference for both the latent and the average graph. 
We summarize our contributions below.\ **1.** **Post-clustering inference does not impose additional restrictions on the parameter space.** We discuss, in Section \[sec:introduction:glatent\], clustering methods tailored to model (\[eqn:g\_latent\_model\]), where the number of clusters $K$ is unknown and is allowed to grow with $n$. Using the results of [@Bunea2016a], these methods yield a partition $\widehat G = G^*$, with high probability, provided that $\lambda_{\text{min}}(\Cb^*) > c$, for a small positive quantity $c$ made precise in Section \[sec:introduction:glatent\]. A lower bound on the smallest eigenvalue of the covariance matrix is the minimal condition under which inference in any general graphical model can be performed. Therefore, consistent clustering via model (\[eqn:g\_latent\_model\]) does not require a further reduction of the parameter space for which the more standard post-regularized inference can be developed. Moreover, as Section \[sec:main\_results\] shows, asymptotic inference based on the estimated clusters reduces to asymptotic inference relative to the true clusters, $G^*$, without any need for data splitting. This fact is in contrast with the phenomenon encountered in post-model selection inference, for instance in variable selection in linear regression [@lockhart2014significance; @lee2013exact; @taylor2014post]. In that case, reducing inference to the consistently selected set of variables can only be justified over a reduced part of the parameter space [@bunea2004consistent].\ **2.** **Methods for estimation tailored to high dimensional inference in cluster-based graphical models.** We develop a new estimation strategy tailored to our final goal, that of constructing approximately Gaussian estimators for the entries of the precision matrices $\bTheta^*$ and $\bOmega^*$ given in (\[teta\]) and (\[omega\]) above. 
Although we work under the assumption that the data is Gaussian, likelihood based estimators may be unsatisfactory, as their analysis may require stringent assumptions, as explained above, or may become analytically intractable, as argued in Section \[sec\_LVG\], for the latent graph. We do propose a method that mimics very closely the principles underlying the construction of an efficient score function for estimation in the presence of high dimensional nuisance parameters, see for instance [@bickel1993efficient], but we do not base it on the corresponding likelihood-derived quantities. We explain the underlying principles in Section \[sec:general\].\ [**3.** ]{} [**Berry-Esseen-type bounds for Gaussian approximations and FDR control.**]{} Our goal is to estimate the cluster-average and latent graphs, respectively, and to provide guarantees on the recovery error. Existing literature [focuses]{} on constructing approximate confidence intervals for one or a finite number of entries of a CI graph, or known linear functionals of such entries [@Ren2013; @Zhang2014; @jankova2014confidence; @gu2015local; @barber2015rocket; @jankova2017honest; @javanmard2013confidence; @van2013asymptotically; @ning2017general; @cai2017confidence; @neykov2015unified]. In all these cases, deriving the asymptotic limiting distribution of appropriate test statistics suffices. Our focus here is on the estimation of the sparsity pattern, which can be equivalently viewed as a multiple-testing problem. It is well known that the exact sparsity pattern can be recovered, with high probability, only if the entries of each precision matrix are above the minimax optimal noise level $O(\sqrt{\log d/n})$ [@Ravikumar11; @Meinshausen06]. Since our aim is inference on the sparsity pattern without further restrictions on the parameter space, the next best type of error that we can control is the False Discovery Rate (FDR) [@BH95]. 
For this, we first derive Gaussian approximations for the distribution of our estimates. We establish Berry-Esseen type bounds on the difference between the cumulative distribution function of our estimators and that of a standard Gaussian random variable that are valid for each $K, d$ and $n$, and are presented in Theorems \[thm:xi\_asymptotic\] and \[thm:theta\_asymptotic\], respectively. In Section \[sec:main\_results:fdr\_control\] we use these results for pattern recovery under FDR control, and explain the effect of the asymptotic approximations on this quantity. The paper is organized as follows. Section \[sec:background\] below contains a brief summary of existing results on model-assisted clustering, via model (\[eqn:g\_latent\_model\]). Section \[sec:inference\] describes the estimation procedures for the latent variable graph and the cluster-average graph, respectively. In Section \[sec:main\_results\] we establish Berry-Esseen type central limit theorems for the estimators derived in Section \[sec:inference\], and provide bounds on the FDR associated with each graphical model under study, respectively. Section \[sec:numerical\] gives numerical results using both simulated and real data sets. Background {#sec:background} ========== Notation {#sec:introduction:notation} -------- The following notation is adopted throughout this paper. Let $d$ denote the ambient dimension, $n$ the sample size, $K$ the number of clusters and $m$ the minimum cluster size. The matrix ${\Cb_{ }^*}$ denotes the population covariance of the latent vector $\bZ$. Likewise, the matrices ${\bGamma^*}$, ${\bSigma_{ }^*}$, ${\bTheta_{ }^*}$, ${\bS_{ }^*}$ and ${\bOmega_{ }^*}$ denote population-level quantities. For $\vb=(v_1,...,v_d)^{T} \in \mathbb{R}^d$, and $1 \leq q \leq \infty$, we define $\|{\vb}\|_q=(\sum_{i=1}^d |v_i|^q)^{1/q}$, $\|{\vb}\|_0=|\textrm{supp}(\vb)|$, where $\textrm{supp}(\vb)=\{j: v_j\neq 0\}$ and $|A|$ is the cardinality of a set $A$. 
Denote $\|{\vb}\|_{\infty}=\max_{1\leq i \leq d} |v_i|$ and $\vb^{\otimes 2}=\vb\vb^T$. Assume that $\vb$ can be partitioned as $\vb=(\vb_1,\vb_2)$. Let $\nabla f(\vb)$ denote the gradient of the function $f(\vb)$, and $\nabla_1 f(\vb)=\partial f(\vb)/\partial \vb_1$. Similarly, let $\nabla^2 f(\vb)$ denote the Hessian of the function $f(\vb)$ and $\nabla^2_{12} f(\vb)=\partial^2 f(\vb)/\partial \vb_1\partial \vb_2$. For a $d\times d$ matrix $\Mb=[M_{jk}]$, let $\|{\Mb}\|_{\max}=\max_{jk}|M_{jk}|$, $\|{\Mb}\|_1=\sum_{jk}|M_{jk}|$, and $\|{\Mb}\|_{\infty}=\max_{k}\sum_{j}|M_{jk}|$. If the matrix $\Mb$ is symmetric, then $\lambda_{\min}(\Mb)$ and $\lambda_{\max}(\Mb)$ are the minimal and maximal eigenvalues of $\Mb$. Let $[d]=\{1,2,\ldots,d\}$. For any $j\in [d]$, we denote the $j$th row and $j$th column of $\Mb$ as $\Mb_{j\cdot}$ and $\Mb_{\cdot j}$, respectively. Similarly, let $\Mb_{-j,-k}$ be the sub-matrix of $\Mb$ with the $j^{th}$ row and $k^{th}$ column removed. Define $\Mb^{\otimes 2}=\Mb\otimes\Mb$. The notation $\cS^{d\times d}$ refers to the set of all real, symmetric $d \times d$ matrices. Likewise, ${\cS^{d \times d}_+} \subset \cS^{d\times d}$ is the positive semi-definite cone. We use $\otimes$ and $\circ$ to denote the Kronecker and Hadamard product of two matrices, respectively. Let $\eb_{j}$ denote the vector of all zeros except for a one in the $j^{th}$ position. The vector $\bone$ is the vector of all ones. $a\vee b=\max(a,b)$. Model Assisted Variable Clustering {#sec:introduction:glatent} ---------------------------------- In this section we review existing results on variable clustering that will be used throughout this paper. If model (\[eqn:g\_latent\_model\]) is used as a background for defining clusters of variables, then [@Bunea2016a] showed that these clusters are uniquely defined, up to label switching, as soon as $m=: \min_{1 \leq k \leq K} |G_k^*| \geq 2$ and the components of the latent vector $\bZ$ are different a.s. 
: $$\Delta({\Cb_{ }^*})=: \min_{j < k} \EE(Z_j - Z_k)^2 > 0.$$ Since $$\Delta({\Cb_{ }^*})= \min_{j<k} (\eb_{j}-\eb_{k})^T{\Cb_{ }^*}(\eb_{j}-\eb_{k})\geq 2 {\lambda_{\min}\left({\Cb_{ }^*}\right)},$$ the clusters are uniquely defined as soon as ${\lambda_{\min}\left({\Cb_{ }^*}\right)} > 0$, which is the minimal condition under which one can study properties of the inverse of ${\Cb_{ }^*}$. Moreover, [@Bunea2016a] developed two algorithms, PECOK and COD, that are shown to recover the clusters exactly, from $n$ i.i.d. copies $\bX_1, \ldots, \bX_n$ of $\bX$, as soon as $${\lambda_{\min}\left({\Cb_{ }^*}\right)} \geq c,$$ for a positive quantity $c$ that approaches 0 as $n$ [grows. Specifically, for the COD procedure, $$c = O\left(\|\bSigma^*\|_{\max}\sqrt{\log (d \vee n)/n}\right).$$ On the other hand, for the PECOK procedure, $$c = O\left(\|\bGamma^*\|_{\max}\sqrt{K\log (d \vee n)/mn}\right),$$ which can be much smaller when one has a few, balanced, clusters.]{} These values of $c$ are further shown to be minimax or near-minimax optimal for cluster recovery. We refer to Theorems 3 and 4 in [@Bunea2016a] for the precise expressions and details. Under these minimal conditions on ${\lambda_{\min}\left({\Cb_{ }^*}\right)}$, the exact recovery of the clusters holds with probability larger than $1 - 1/(d \vee n)$. This will allow us to show, in Section \[sec:main\_results\], that inference in cluster-based graphical models is not hampered by the clustering step. For completeness, we outline the PECOK algorithm below, which consists of a convex relaxation of the $K$-means algorithm, further tailored to estimation of clusters $G^* = \{{G^*_{1}}, \ldots, {G^*_{K}}\}$ defined via the interpretable model (\[eqn:g\_latent\_model\]). The PECOK algorithm consists of the following three steps: 1. Compute an estimator $\tilde\bGamma$ of the matrix $\bGamma^*$. 2. 
Solve the semi-definite program (SDP) $$\label{eqn:pecok_sdp} \widehat \Bb =\argmax _{\Bb \in \cD}\langle \widehat{\bSigma} - \widetilde{\bGamma}, \Bb \rangle,$$ where $\widehat \bSigma$ is the sample covariance matrix and $$\label{eq:domain} \cD:=\left\{ \Bb \in \RR^{d\times d}: \begin{array}{l} \bullet\ \Bb \succcurlyeq 0 \ \ \text{(symmetric and positive semidefinite)} \\ \bullet\ \sum_a B_{ab} = 1,\ \forall b\\ \bullet\ B_{ab}\geq 0,\ \forall a,b\\ \bullet\ \tr(\Bb) = K \end{array} \right\}.$$ 3. Compute $\widehat G$ by applying a clustering algorithm on the rows (or equivalently columns) of $\widehat \Bb$. The construction of an accurate estimator $\widetilde \bGamma$ of $\bGamma^*$, before the cluster structure is known, is a crucial step for guaranteeing the statistical optimality of the PECOK estimator. Its construction is given in [@Bunea2016a], and included in Appendix \[pregamma\], for the convenience of the reader. We will employ an efficient algorithm for solving (\[eqn:pecok\_sdp\]). [Standard black-box SDP solvers, for a fixed precision, exhibit $\cO(d^7)$ running time on (\[eqn:pecok\_sdp\]), which is prohibitively expensive. [@Eisenach2017] recently introduced the FORCE algorithm, which requires worst-case $\cO(d^{6}K^{-2})$ time to solve the SDP, and in practice often performs the clustering rapidly.]{} The key idea behind the FORCE algorithm is that an optimal solution to (\[eqn:pecok\_sdp\]) can be attained by first transforming it into an eigenvalue problem, and then using a first-order method. Iterations of the first-order method are interleaved with a dual step that rounds the current iterate to an integer solution of the clustering problem, followed by a search for an optimality certificate. By using knowledge of both the primal and the dual SDPs, FORCE is able to find the solution much faster than a standard SDP solver. We refer to [@Eisenach2017] for the detailed algorithm. 
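As a sanity check on the feasible set $\cD$ in (\[eq:domain\]), note that the normalized partnership matrix of a partition, with $B^*_{ab} = 1/|G_k|$ when $a, b \in G_k$ and $0$ otherwise, satisfies all four constraints. A minimal numpy sketch (the partition below is purely illustrative):

```python
import numpy as np

# Hypothetical partition of d = 6 variables into K = 3 clusters.
groups = [[0, 1], [2, 3], [4, 5]]
d, K = 6, len(groups)

# Partnership matrix: B*_{ab} = 1/|G_k| if a, b lie in the same cluster G_k.
B = np.zeros((d, d))
for g in groups:
    for a in g:
        for b in g:
            B[a, b] = 1.0 / len(g)

# Check the four constraints defining the feasible set D of the PECOK SDP.
assert np.all(np.linalg.eigvalsh(B) >= -1e-10)   # B is positive semidefinite
assert np.allclose(B.sum(axis=0), np.ones(d))    # column sums equal 1
assert np.all(B >= 0)                            # nonnegative entries
assert np.isclose(np.trace(B), K)                # trace equals K
```

When the clusters are exactly recoverable, the SDP optimum $\widehat \Bb$ is close to such a partnership matrix, which is why clustering the rows of $\widehat \Bb$ in step 3 recovers $\widehat G$.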
Estimation of Cluster-based Graphical Models {#sec:inference} ============================================ In this section, we propose a unified estimation approach that utilizes similar loss functions for estimation and inference in the cluster-average and the latent variable graphs. We first describe our general principle, and then apply it to the two graphical models, respectively. One-step Estimators for High-Dimensional Inference {#sec:general} -------------------------------------------------- \[sec:inference:z\_est\] Assume that we observe $n$ i.i.d. realizations $\bX_1,...,\bX_n$ of $\bX \in \RR^d$. Let $Q(\bbeta, \bX)$ denote a known function of $\bbeta$ and $\bX$, where $\bbeta$ is a $q$-dimensional unknown parameter that parametrizes the distribution of $\bX$. We define the target parameter $\bbeta^*$ as $$\bbeta^*=\argmin \EE(Q(\bbeta, \bX)).$$ Let us partition $\bbeta$ as $\bbeta=(\theta,\bgamma)$, where $\theta \in \RR$ is a univariate parameter of interest, and $\bgamma \in \RR^{q-1}$ is a nuisance parameter. Our goal is to construct an $n^{1/2}$-consistent and asymptotically normal estimator for $\theta$ in high-dimensional models with $q=\textrm{dim}(\bbeta)\gg n$. In this case, the dimension of the nuisance parameter $\bgamma$ is large, which makes inference on $\theta$ challenging. We start from the empirical risk function over $n$ observations defined as $$\label{eqQn} Q_n(\bbeta)=\frac{1}{n} \sum_{i=1}^nQ(\bbeta, \bX_i).$$ One standard instance of $Q_n$ is the negative log-likelihood function of the data. In this work, we will conduct inference based on an alternative loss function, as the analysis of the log-likelihood may require unpleasant technical conditions that we would like to avoid, as discussed in Section \[sec:main\_results:assumption\]. However, we mimic as much as possible the likelihood principles, in order to aid the understanding of the construction below. 
For these reasons we will refer to $Q_n(\bbeta)$ as the negative pseudo-likelihood function. For now, we will assume that $Q_n(\bbeta)$ is given, and a detailed discussion of its respective choice for inference in the latent variable graph and the cluster-average graph will be given in the following two subsections. The pseudo-information matrix for one observation is defined as $\Ib=\EE(\nabla^2 Q(\bbeta^*, \bX_i))$, which can be further partitioned as $$\label{eqpartition} \Ib= \begin{bmatrix} \Ib_{11} & \Ib_{12} \\ \Ib_{21} & \Ib_{22} \\ \end{bmatrix},$$ relative to the partition of $\bbeta=(\theta,\bgamma)$. When $Q_n$ is the negative log-likelihood function, and the dimension of the parameter is independent of $n$, then $h(\theta; \bgamma)$ given by (\[eqeff\]) is called the efficient score function for $\theta$, and classical theory shows that it admits solutions that are consistent, asymptotically normal and attain the information bound given by the reciprocal of (\[eqinfor\]) [@van; @bickel1993efficient]. With these goals in mind, we similarly define the corresponding pseudo-score function for estimating $\theta$ in the presence of the nuisance parameter $\bgamma$ as $$\label{eqeff} h(\theta; \bgamma)=\nabla_1Q_n(\bbeta)-\Ib_{12}\Ib_{22}^{-1}\nabla_2Q_n(\bbeta)=\frac{1}{n}\sum_{i=1}^n \Big(\nabla_1Q(\bbeta, \bX_i)-\Ib_{12}\Ib_{22}^{-1}\nabla_2Q(\bbeta, \bX_i)\Big),$$ and define the pseudo information of $\theta$, in the presence of the nuisance parameter $\bgamma$, as $$\label{eqinfor} \Ib_{1\mid 2}=\Ib_{11}-\Ib_{12}\Ib_{22}^{-1}\Ib_{21}.$$ When the dimension of $\bgamma$ is fixed, one can easily estimate $\Ib_{12}$ and $\Ib_{22}$ in (\[eqeff\]) by their sample versions $\hat \Ib_{12}$ and $\hat \Ib_{22}$. However, such a simple procedure fails when the dimension of $\bgamma$ is greater than the sample size, as $\hat \Ib_{22}$ is rank deficient. 
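The rank deficiency mentioned above is easy to see numerically: when $q-1 > n$, a sample version of $\Ib_{22}$ built from $n$ outer products has rank at most $n$, hence is singular. A small illustration (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q_minus_1 = 10, 50   # more nuisance parameters than samples

# Sample analogue of I_22 for a quadratic loss: an average of n outer products.
X = rng.standard_normal((n, q_minus_1))
I22_hat = X.T @ X / n                      # (q-1) x (q-1) matrix

# Its rank cannot exceed n, so it is singular and cannot be inverted.
assert np.linalg.matrix_rank(I22_hat) <= n
assert np.linalg.matrix_rank(I22_hat) < q_minus_1
```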
To overcome this difficulty, rather than estimating $\Ib_{12}$ and $\Ib_{22}^{-1}$ separately, we directly estimate $$\label{w} \wb^T=\Ib_{12}\Ib_{22}^{-1}$$ by $$\label{eqwd} \hat\wb=\argmin \|{\wb}\|_1, ~~~~\textrm{s.t.}~~~~ \|{\nabla^2_{12} Q_n(\hat\bbeta)- \wb^T\nabla^2_{22} Q_n(\hat\bbeta)}\|_\infty\leq\lambda',$$ where $\lambda'$ is a non-negative tuning parameter, and $\hat\bbeta=(\hat\theta,\hat\bgamma)$ is an initial estimator, which is usually defined case by case, for a given model. Then, we can plug $\hat\wb$ and $\hat\bgamma$ into the pseudo-score function, which gives $$\label{eqeffest} \hat h(\theta, \hat\bgamma)=\nabla_1Q_n(\theta, \hat\bgamma)-\hat\wb^T\nabla_2Q_n(\theta, \hat\bgamma).$$ Following the Z-estimation principle [@van; @bickel1993efficient], one could define the final estimator of $\theta$ as the solution of the pseudo-score function $\hat h(\theta, \hat\bgamma)$. However, in many examples, the pseudo-score function $\hat h(\theta, \hat\bgamma)$ may have multiple solutions and it becomes unclear which root serves as a consistent estimator; see [@small2000eliminating] for further discussion in the general estimating function context. To bypass this issue, we consider the following simple one-step estimation approach. Given the initial estimator $\hat\theta$ from the partition of $\hat\bbeta$, we perform a Newton-Raphson update based on the pseudo-score function $\hat h(\theta, \hat\bgamma)$, to obtain $\tilde{\theta}$, which is classically referred to as a one-step estimator by [@bickel1975one]. Specifically, we construct $$\label{eqest} \tilde\theta=\hat\theta-\hat \Ib_{1|2}^{-1}\hat h(\hat\theta,\hat\bgamma),$$ where $\hat \Ib_{1|2}$ is an estimator of the partial information matrix $\Ib_{1\mid 2}$. 
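For a quadratic empirical risk $Q_n(\bbeta)=\frac{1}{2}\bbeta^T H\bbeta - b^T\bbeta$ (with $H$ and $b$ generic stand-ins for the relevant sample quantities), the mechanics of (\[eqwd\])–(\[eqest\]) can be checked in closed form: with the exact $\wb^T=H_{12}H_{22}^{-1}$ and $\hat\Ib_{1|2}=H_{11}-H_{12}H_{22}^{-1}H_{21}$, the one-step update lands on the $\theta$-coordinate of the full minimizer $H^{-1}b$, regardless of the initial estimator. A hedged numerical sketch on synthetic inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
q = 5
A = rng.standard_normal((q, q))
H = A @ A.T + q * np.eye(q)          # positive definite "pseudo-information"
b = rng.standard_normal(q)

H11, H12, H22 = H[0, 0], H[0, 1:], H[1:, 1:]
w = np.linalg.solve(H22, H12)        # w^T = H_12 H_22^{-1}
I_1_2 = H11 - H12 @ w                # partial pseudo-information

# Arbitrary (even poor) initial estimates of theta and the nuisance gamma.
theta0 = 3.7
gamma0 = rng.standard_normal(q - 1)

# Pseudo-score h(theta, gamma) = grad_1 Q_n - w^T grad_2 Q_n at (theta0, gamma0).
grad1 = H11 * theta0 + H12 @ gamma0 - b[0]
grad2 = H12 * theta0 + H22 @ gamma0 - b[1:]
h = grad1 - w @ grad2

# One-step (Newton-Raphson) update, as in the general recipe.
theta_tilde = theta0 - h / I_1_2

# It coincides with the theta-coordinate of the global minimizer H^{-1} b.
assert np.isclose(theta_tilde, np.linalg.solve(H, b)[0])
```

In practice $\wb$ and $\Ib_{1|2}$ must themselves be estimated, which is where the constrained $\ell_1$ program (\[eqwd\]) enters; the exactness above explains why a single Newton-Raphson step suffices for a quadratic loss.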
In Sections \[sec\_average\] and \[sec\_LVG\] below we show that, under appropriate conditions, the one-step estimator $\tilde\theta$ constructed relative to the empirical risk functions $Q_n$ defined in (\[eqloss\]) and (\[eqloss2\]), respectively, satisfies $$\label{eq2} n^{1/2}(\tilde{\theta}-\theta^*)=-\Ib^{-1}_{1\mid 2}n^{1/2} h(\bbeta^*)+o_p(1).$$ By applying the central limit theorem to $h(\bbeta^*)$, we can establish the asymptotic normality of $\tilde{\theta}$ in Theorems \[thm:xi\_asymptotic\] and \[thm:theta\_asymptotic\]. When $Q_n(\bbeta)$ is the negative log-likelihood of the data, this approach has been successfully used in [@ning2017general] and, moreover, the estimator $\tilde\theta$ is asymptotically equivalent to the de-biased estimator in [@Zhang2014; @van2013asymptotically]. As will be explained in the following subsections, the analysis based on the log-likelihood becomes intractable for the latent graphical model and requires stringent technical conditions for the cluster-average graphical model. To overcome this difficulty, we employ the pseudo score functions relative to the empirical risk functions $Q_n(\bbeta)$ defined in (\[eqloss\]) and (\[eqloss2\]). The resulting one-step estimator still attains the information bound established in the literature, and more importantly requires weaker technical assumptions than the existing methods. In addition to (\[eq2\]), we also derive explicitly the speed at which the normal approximation is attained. Estimation of the Cluster-Average Graph {#sec_average} --------------------------------------- Recall that we assume $\bZ\sim \cN(0,{\Cb_{ }^*})$ and $\bE \sim \cN(0,{\bGamma^*})$, which implies $\bX \sim \cN(0,{\bSigma_{ }^*})$ with ${\bSigma_{ }^*}=\Ab{\Cb_{ }^*}\Ab^T+{\bGamma^*}$. The within-cluster average $\bar{\bX} =: (\bar{X}_1, \ldots, \bar{X}_K)\in\RR^K$ is given by $\bar{X}_k =: \frac{1}{|{G^*_{k}}|}\sum_{i \in {G^*_{k}}} X_i$, corresponding to the population level clusters. 
Because $\bX \sim \cN(0,{\bSigma_{ }^*})$, we can verify that $\bar \bX \sim \cN(0,{\bS_{ }^*})$, where $${\bS_{ }^*}= {\Cb_{ }^*}+ \bar{\bGamma^*}, \numberthis \label{eqn:s_star_definition}$$ and $\bar {\bGamma^*}=\textrm{diag}(\bar\gamma^*_{1},...,\bar\gamma^*_K)$ with $\bar\gamma^*_k=\frac{1}{|G_k^*|^2}\sum_{j\in G_k^*}\gamma_j^*$. Recall that the precision matrix of $\bar\bX$ is $$\bOmega^*={\bS^{*}}^{-1} = ({\Cb_{ }^*}+ \bar{\bGamma^*})^{-1}.$$ In this section we give the construction of the estimators of the cluster-average graph corresponding to $\bar{\bX}$. Specifically, we use the generic strategy outlined in the previous section in order to construct $n^{1/2}$-consistent and asymptotically normal estimators for each component $\bOmega^{*}_{t,k}$ of the precision matrix $\bOmega^{*}$, for $1\leq t<k\leq K$. For the estimation of each entry, the remaining $K(K+1)/2-1$ parameters in $\bOmega^{*}$ are treated as nuisance parameters. Since we observe $n$ i.i.d. samples of $\bX \in \RR^d$, if the clusters and their number were known, then we would implicitly observe $n$ i.i.d. samples of $\bar \bX \in \RR^K$. To explain our method, we first assume that the clustering is given, and then explain how to lift this assumption. Following our general principle, we would naturally tend to choose the negative log-likelihood function of the cluster-averages $(\bar \bX_1,...,\bar\bX_n)$ as the empirical risk function $Q_n(\bbeta)$ in (\[eqQn\]). Along this line, [@jankova2014confidence] proposed the de-biased estimator for Gaussian graphical models. However, the inference requires the irrepresentable condition [@ravikumar2011high] on $\bS^*$, which can be restrictive. The alternative methods proposed by [@Ren2013; @jankova2017honest] imposed the condition that the largest eigenvalue of $\bS^*$ is bounded. These technical conditions on $\bS^*$ are difficult to justify and can be avoided by using our approach. We propose to estimate each sparse row of $\bOmega^*$ as explained below. 
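The identity ${\bS_{ }^*}={\Cb_{ }^*}+\bar{\bGamma^*}$ in (\[eqn:s\_star\_definition\]) follows because within-cluster averaging leaves $\bZ$ intact while averaging the independent noise coordinates; it can be verified numerically by applying the cluster-averaging matrix to $\bSigma^*=\Ab\Cb^*\Ab^T+\bGamma^*$. A short sketch with a synthetic assignment matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
groups = [[0, 1, 2], [3, 4], [5, 6, 7]]      # illustrative clusters, d=8, K=3
d, K = 8, 3

# Membership matrix A (d x K) and the averaging matrix M (K x d).
A = np.zeros((d, K))
for k, g in enumerate(groups):
    A[g, k] = 1.0
M = A.T / A.sum(axis=0)[:, None]             # row k averages over G_k

# Population covariances: C* (K x K, positive definite) and diagonal Gamma*.
B = rng.standard_normal((K, K))
C = B @ B.T + np.eye(K)
gamma = rng.uniform(0.5, 2.0, size=d)
Sigma = A @ C @ A.T + np.diag(gamma)         # Cov(X)

# Covariance of the cluster averages: S* = M Sigma M^T = C* + Gamma_bar.
S = M @ Sigma @ M.T
Gamma_bar = np.diag([gamma[g].sum() / len(g) ** 2 for g in groups])
assert np.allclose(S, C + Gamma_bar)
```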
Let $\bar\bS=n^{-1}\sum_{i=1}^n\bar\bX_i\bar\bX_i^T$ denote the sample covariance matrix of $\bar\bX_i$. When $K$ is small, the maximum likelihood estimator of $\bOmega^*$ is $\bar\bS^{-1}$, which can be viewed as the solution of the following equation $\bar\bS\bOmega-\Ib_K=0$. Thus, in the low dimensional setting, this equation defines the maximum likelihood estimator. Since we are only interested in $\bOmega^*_{t,k}$, we can extract the $k$th column from the left hand side of the above equation, and use it as the pseudo-score function $\bU_n(\bOmega_{\cdot k})=\bar\bS\bOmega_{\cdot k}-\eb_k$. To apply the inference strategy in Section \[sec:general\], we need to construct a valid empirical risk function $Q_n(\bOmega_{\cdot k})$ such that $\nabla Q_n(\bOmega_{\cdot k})=\bU_n(\bOmega_{\cdot k})$. Simple algebra shows that a possible choice is $$Q_n(\bOmega_{\cdot k})=\frac{1}{2}\bOmega_{\cdot k}^T\bar\bS\bOmega_{\cdot k}-\eb_k^T\bOmega_{\cdot k}=\frac{1}{n}\sum_{i=1}^n (\frac{1}{2}\bOmega_{\cdot k}^T \bar\bX_i\bar\bX_i^T\bOmega_{\cdot k}-\eb_k^T\bOmega_{\cdot k}),\label{eqloss}$$ which we view in the sequel as the empirical risk corresponding to the population level risk $$\label{risk-ave} \EE Q(\bOmega_{\cdot k},\bar\bX) = \frac{1}{2} \bOmega_{\cdot k}^T \bS^*\bOmega_{\cdot k} -\eb_k^T\bOmega_{\cdot k},$$ based on the loss function $$\label{loss1} Q(\bOmega_{\cdot k},\bar\bX)=: \frac{1}{2} \bOmega_{\cdot k}^T \bar\bX\bar\bX^T\bOmega_{\cdot k} -\eb_k^T\bOmega_{\cdot k}.$$ Since $$\label{qlike} \nabla \EE Q(\bOmega^*_{\cdot k},\bar\bX) =\bS^*\bOmega^{*}_{\cdot k}-\eb_k=0$$ and $$\label{qlike2} \nabla^2 \EE Q(\bOmega_{\cdot k}^*,\bar\bX)=\bS^*,$$ then the theoretical risk $ \EE Q(\bOmega^*_{\cdot k},\bar\bX)$ has the rows $\bOmega^{*}_{\cdot k}$ of the target precision matrix $\bOmega^*$ as the unique minimizers, as desired, provided that $\bS^*$ is positive definite, an assumption we make in Section \[sec:main\_results:assumption\]. 
We note that the choice of the empirical risk $Q_n(\cdot)$ and that of the corresponding pseudo-score $\bU_n(\cdot)$ is not unique. We chose the particular form (\[eqloss\]) because it is quadratic in $\bOmega_{\cdot k}$, which greatly simplifies the theoretical analysis and leads to weaker technical assumptions. Moreover, the property (\[qlike\]) is the same as that of the score function corresponding to the negative log-likelihood function, supporting our terminology. We use the general strategy presented in Section \[sec:inference:z\_est\] to construct estimators that employ the empirical risk $Q_n(\cdot)$ defined by (\[eqloss\]) above. We first recall that $Q_n(\cdot)$ depends on the unknown cluster structure $G^*$ via $\bar\bX_i$. We note that in general the estimated group $\hat G_k$ may differ from $G_k^*$ by a label permutation. For notational simplicity, we ignore this label permutation issue and treat $\hat G_k$ as an estimate of $G_k^*$ (rather than $G_j^*$ for some $j\neq k$). To define the estimator of $\bOmega^{*}_{t,k}$, we first replace $\bar \bX_i$ by $\hat \bX_i$ and denote $\hat\bS=n^{-1}\sum_{i=1}^n\hat\bX_i\hat\bX_i^T$, where $\hat X_{ik}= \frac{1}{|\hat G_k|}\sum_{j \in \hat G_k} X_{ij}$.\ Let $(t,k)$ be arbitrary, fixed. Replacing $\bar\bS$ by $\hat\bS$ in $Q_n(\cdot)$, we follow Section \[sec:general\] to define the pseudo-score function $$h(\bOmega_{\cdot k}) = \vb_t^{*T}(\hat\bS\bOmega_{\cdot k} - \eb_k),$$ where ${\vb}^*_{t}$ is a $K$-dimensional vector with $(\vb_{t}^*)_t=1$ and $(\vb_{t}^*)_{-t}=-\wb^*_{t}$ with $\wb^*_t=(\bS^*_{-t,-t})^{-1}\bS^*_{-t,t}$ consistent with the definition in (\[w\]) above. To make inference based on $h(\bOmega_{\cdot k})$, we need to further estimate $\wb^*_t$ and $\bOmega^*_{\cdot k}$. Following (\[eqwd\]), an estimate of $\wb^*_{t}$ is given by $$\label{eqw} \hat \wb_{t}=\argmin \|\wb\|_1, ~~\textrm{s.t}~~\|\hat\bS_{t,-t}-\wb^T\hat\bS_{-t,-t}\|_\infty\leq\lambda',$$ where $\lambda'$ is a tuning parameter. 
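At the population level, the vector $\vb_t^*$ defined above satisfies $\bS^*\vb_t^*=\eb_t/\Omega^*_{t,t}$ by the block matrix inverse formula, so the pseudo-score $\vb_t^{*T}(\bS^*\bOmega^*_{\cdot k}-\eb_k)$ vanishes at the truth and the partial pseudo-information $1/\Omega^*_{t,t}$ appears naturally. A short numerical check on a synthetic $\bS^*$:

```python
import numpy as np

rng = np.random.default_rng(3)
K, t = 6, 2
B = rng.standard_normal((K, K))
S = B @ B.T + K * np.eye(K)                  # synthetic S*, positive definite
Omega = np.linalg.inv(S)                     # Omega* = S*^{-1}

# w*_t = (S*_{-t,-t})^{-1} S*_{-t,t} and the corresponding v*_t.
idx = [j for j in range(K) if j != t]
w = np.linalg.solve(S[np.ix_(idx, idx)], S[idx, t])
v = np.zeros(K)
v[t] = 1.0
v[idx] = -w

# Block-inversion identity: S* v*_t = e_t / Omega*_{t,t}.
e_t = np.eye(K)[t]
assert np.allclose(S @ v, e_t / Omega[t, t])

# Hence the population pseudo-score is zero at the true column Omega*_{. k}.
k = 4
assert np.isclose(v @ (S @ Omega[:, k] - np.eye(K)[k]), 0.0)
```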
Then we can define $\hat \vb_{t}$ accordingly, and $$\label{eqscore} \hat h(\bOmega_{\cdot k})=\hat\vb_{t}^T(\hat{\bS}\bOmega_{\cdot k}-\eb_k).$$ Recall that the construction of the one-step estimator (\[eqest\]) requires an initial estimator of $\bOmega^*_{\cdot k}$. To be concrete, we consider the following initial estimator of $\bOmega^*_{\cdot k}$, $$\label{eqclime1} \hat \bOmega_{\cdot k}=\argmin \|\bbeta\|_1, ~~\textrm{s.t}~~\|\hat\bS\bbeta-\eb_k\|_{\max}\leq\lambda,$$ where $\lambda$ is a tuning parameter. This estimator has the same form as the CLIME estimator for the $k$-th column of $\bOmega$ [@Cai11a]. However, unlike the CLIME estimator which requires $\lambda \asymp \|\bOmega^*_{\cdot k}\|_1\sqrt{\log K/n}$, in our Theorem \[thm:xi\_asymptotic\] we assume $\lambda =C \sqrt{\log (K\vee n)/n}$, where $C$ only depends on the minimum eigenvalue of $\Cb^*$ and the largest diagonal entries of $\Cb^*$ and $\bGamma^*$ which are assumed bounded by constants in Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. With this choice of $\lambda$, we show in Lemma \[lem:group\_averages\_consistency\] in Appendix \[proofmain1\] that $$\label{eqrateomega} \|\hat\bOmega_{\cdot k}-\bOmega_{\cdot k}^*\|_1\lesssim s_1\sqrt{\frac{\log (K\vee n)}{n}},$$ with high probability, where $s_1$ is the sparsity level of $\bOmega_{\cdot k}^*$. As a comparison, Theorem 6 in [@Cai11a] only implies $\|\hat\bOmega_{\cdot k}-\bOmega_{\cdot k}^*\|_1\lesssim s_1\|\bOmega_{\cdot k}^*\|_1^2\sqrt{\frac{\log K}{n}}$. For many sparse matrices, the $\ell_1$ norm of a column, $\|\bOmega_{\cdot k}^*\|_1$, can grow to infinity with $K$ or $s_1$, and thus (\[eqrateomega\]) gives a faster rate. This is possible to obtain under the Gaussian assumption on $\bX$, which can be easily relaxed to sub-Gaussian. 
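Both constrained $\ell_1$ programs (\[eqw\]) and (\[eqclime1\]) are linear programs after the standard split $\bbeta=\ub-\vb$ with $\ub,\vb\geq 0$. A minimal sketch of (\[eqclime1\]) using `scipy.optimize.linprog` (the instance below is synthetic, and the fixed $\lambda$ is for illustration only; in practice $\lambda$ is chosen as in Theorem \[thm:xi\_asymptotic\]):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
K, k = 5, 1
B = rng.standard_normal((K, K))
S_hat = B @ B.T + K * np.eye(K)              # stand-in for the K x K matrix S-hat
e_k = np.eye(K)[k]
lam = 0.05                                   # tuning parameter lambda

# minimize ||beta||_1  s.t.  ||S_hat beta - e_k||_max <= lam, written as an LP
# in (u, v) with beta = u - v and u, v >= 0:
#   S_hat (u - v) <= e_k + lam,   -S_hat (u - v) <= lam - e_k.
c = np.ones(2 * K)
A_ub = np.block([[S_hat, -S_hat], [-S_hat, S_hat]])
b_ub = np.concatenate([e_k + lam, lam - e_k])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * K))
beta_hat = res.x[:K] - res.x[K:]

# The constraint holds at the optimum, and the l1 norm cannot exceed that of
# the feasible point S_hat^{-1} e_k (the exact k-th column of the inverse).
assert res.status == 0
assert np.max(np.abs(S_hat @ beta_hat - e_k)) <= lam + 1e-6
assert np.abs(beta_hat).sum() <= np.abs(np.linalg.solve(S_hat, e_k)).sum() + 1e-6
```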
In this case, and when $\lambda_{\min}(\Cb^*) > c$, Lemma \[lem:group\_averages\_gradient\_hessian\] is instrumental in showing that the extra $\|\bOmega_{\cdot k}^*\|_1^2$ factor in the rate of the original CLIME estimator can be avoided, whereas if only the marginal components of $\bX$ are assumed to be sub-Gaussian, as in [@Cai11a], it may be unavoidable, without further conditions on $\bOmega^{*}$. Based on the block matrix inverse formula, we can show that the partial pseudo information matrix reduces to $\Ib_{1|2}=1/\Omega_{t,t}^{*}$. Finally, the one-step estimator is defined as $$\label{eqOmega} \tilde\Omega_{t,k}= \hat\Omega_{t,k}-\hat h(\hat\bOmega_{\cdot k}) \hat\Omega_{t,t},$$ in accordance with (\[eqest\]). In Section \[sec:main\_results\], we will show that under mild regularity conditions $n^{1/2}(\tilde\Omega_{t,k}-\Omega_{t,k}^*)\leadsto N(0,s_{tk}^2)$, where $s_{tk}^2=\Omega_{t,k}^{*2} + \Omega^*_{t,t}\Omega^*_{k,k}$. Let $\hat s_{tk}^2=\hat\Omega_{t,k}^2 + \hat\Omega_{t,t}\hat\Omega_{k,k}$ be a consistent estimator of the asymptotic variance. Then, a $(1-\alpha)\times 100\%$ confidence interval for $\Omega^*_{t,k}$ is $$[\tilde\Omega_{t,k}-z_{1-\alpha/2}\hat s_{tk}/n^{1/2}, \tilde\Omega_{t,k}+z_{1-\alpha/2}\hat s_{tk}/n^{1/2}],$$ where $z_{\alpha}$ is the $\alpha$-quantile of a standard normal distribution. Equivalently, we can use the scaled test statistic $\tilde{\Omega}_{t,k}$ to construct a test for $H_0: \Omega^*_{t,k}=0$ versus $H_1: \Omega^*_{t,k} \neq 0$ at significance level $\alpha$. Namely, the null hypothesis is rejected if and only if the above $(1-\alpha)\times 100\%$ confidence interval does not contain $0$. 
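Given the point estimate $\tilde\Omega_{t,k}$, the variance estimate $\hat s_{tk}^2$ and $n$, the confidence interval and the level-$\alpha$ test above are each one line. A sketch (the numerical inputs are made up for illustration; `statistics.NormalDist` supplies the quantile $z_{1-\alpha/2}$):

```python
from statistics import NormalDist

def ci_and_test(omega_tilde, s_hat, n, alpha=0.05):
    """(1-alpha) CI for Omega*_{t,k} and the decision for H0: Omega*_{t,k} = 0."""
    z = NormalDist().inv_cdf(1 - alpha / 2)          # z_{1-alpha/2}
    half_width = z * s_hat / n ** 0.5
    lo, hi = omega_tilde - half_width, omega_tilde + half_width
    reject_h0 = not (lo <= 0.0 <= hi)                # reject iff 0 is outside the CI
    return (lo, hi), reject_h0

# Illustrative values.
(lo, hi), reject = ci_and_test(omega_tilde=0.8, s_hat=1.2, n=400, alpha=0.05)
assert lo < 0.8 < hi
assert reject   # half-width ~ 1.96 * 1.2 / 20 ~ 0.118, so 0 is excluded
```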
Latent Variable Graph {#sec_LVG} --------------------- Recall that the structure of the latent variable graph is encoded by the sparsity pattern of ${\bTheta_{ }^*}={{\Cb_{ }^*}}^{-1}$, which is generally different from the cluster-average graph as ${{\Cb_{ }^*}}^{-1}$ and $\bOmega^*=({\Cb_{ }^*}+ \bar\bGamma^*)^{-1}$ may have different sparsity patterns. In this section, we focus on inference on the component ${\Theta_{t,k}^*}$, for some $1\leq t<k\leq K$. Similar to the cluster-average graph, we first discuss the likelihood approach. The negative log-likelihood corresponding to model (\[eqn:g\_latent\_model\]) indexed by the parameter $(\bTheta, \bGamma)$ is $$\ell(\bTheta, \bGamma)=-\textrm{tr}(\hat\bSigma(\Ab\bTheta^{-1}\Ab^T+\bGamma)^{-1})+\log\det\left(\hat\bSigma(\Ab\bTheta^{-1}\Ab^T+\bGamma)^{-1}\right),$$ where $\hat{\bSigma}=n^{-1}\sum_{i=1}^n \bX_i\bX^T_i$. After tedious algebra, we show that the Fisher information matrix for $(\bTheta, \bGamma)$ is given by $$\begin{aligned} \Ib&= \begin{bmatrix}(\Mb^*\Ab^T\bGamma^{*-1}\bSigma^*\bGamma^{*-1}\Ab\Mb^*)^{\otimes 2} & (\Mb^*\Ab^T\bGamma^{*-1}\Fb^{*T})^{\otimes 2}\Db_d \\ \Db_d^T(\Fb^*\bSigma^{*-1}\bGamma^{*-1}\Ab\Mb^*)^{\otimes 2} & \Db_d^T(\Fb^*\bSigma^{*-1}\Fb^{*T})^{\otimes 2}\Db_d\end{bmatrix}, \label{eqinforlatent}\end{aligned}$$ where $\Db_d = (\Ib_d \otimes 1_d^T)\circ(1_d^T \otimes \Ib_d)$, $\Mb^*=(\bTheta^*+\Ab^T\bGamma^{*-1}\Ab)^{-1}$ and $\Fb^*=\Ib_d-\Ab\Mb^*\Ab^T\bGamma^{*-1}$. As seen in Section \[sec:general\], the inference based on the likelihood or equivalently efficient score function (\[eqeff\]) requires the estimation of $\Ib_{12}\Ib_{22}^{-1}$ which, given the complicated structure of the information matrix (\[eqinforlatent\]), becomes analytically intractable. A solution to this problem is inference based on an empirical risk function similar to (\[eqloss\]), but tailored to the latent variable graph. 
With a slight abuse of notation, and reasoning as in (\[qlike\]) and (\[qlike2\]), we notice that, for each $k$, $$\label{risk-latent} \EE Q(\bTheta_{\cdot k},\bX) = \frac{1}{2}\bTheta_{\cdot k}^T {\Cb_{ }^*}\bTheta_{\cdot k} -\eb_k^T\bTheta_{\cdot k},$$ has the target $\bTheta^{*}_{\cdot k}$ as a unique minimizer, where the loss function $Q(\bTheta_{\cdot k},\bX)$ is defined as $$\label{loss-latent} Q(\bTheta_{\cdot k},\bX)=\frac{1}{2}\bTheta_{\cdot k}^T \bar\Cb \bTheta_{\cdot k} -\eb_k^T\bTheta_{\cdot k},$$ and the matrix $\bar \Cb :=(\bar C_{jk})_{j,k}$ has entries $$\label{eqCi} \bar C_{jk}=\frac{1}{|G^*_j||G^*_k|}\sum_{a\in G^*_j, b\in G^*_k} (X_a X_b-\bar{\Gamma}_{ab}),$$ and $\bar{\Gamma}_{ab}=0$ if $a\neq b$ and $\bar{\Gamma}_{aa}=X_aX_a-\frac{1}{|G^*_k|-1}\sum_{j\in G^*_k, j\neq a}X_aX_j$, for $a\in G^*_k$. Since $\EE(\bar \Cb)=\Cb^*$, the risk relative to the loss function in (\[loss-latent\]) is indeed (\[risk-latent\]), and the empirical risk is $$\label{eqloss2} Q_n(\bTheta_{\cdot k})=\frac{1}{n}\sum_{i=1}^n (\frac{1}{2}\bTheta_{\cdot k}^T \bar\Cb^{(i)} \bTheta_{\cdot k} -\eb_k^T\bTheta_{\cdot k}),$$ where $\bar\Cb^{(i)}$ is obtained by replacing $\bX$ in $\bar C_{jk}$ by $\bX_i$. Similar to the cluster-average graph, $Q_n(\bTheta_{\cdot k})$ also depends on the unknown cluster structure. We estimate $G^*_k$ by $\hat G_k$, and define $\hat\bGamma=(\hat\Gamma_{ab})$, where $\hat{\Gamma}_{ab}=0$ if $a\neq b$ and $\hat{\Gamma}_{aa}=\frac{1}{n}\sum_{i=1}^n\big(X_{ia}X_{ia}-\frac{1}{|\hat G_k|-1}\sum_{j\in \hat G_k, j\neq a}X_{ia}X_{ij}\big)$, for $a\in\hat G_k$, and $\hat\Cb=(\hat C_{jk})$ where $\hat C_{jk}=\frac{1}{n}\sum_{i=1}^n(\frac{1}{|\hat G_j||\hat G_k|}\sum_{a\in \hat G_j, b\in \hat G_k} (X_{ia} X_{ib}-\hat{\Gamma}_{ab}))$. 
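A direct implementation of the estimators $\hat\bGamma$ and $\hat\Cb$ above, with the per-coordinate cross-product average taken over the other members $j\neq a$ of the same cluster, can be checked on synthetic data drawn from model (\[eqn:g\_latent\_model\]); the sketch below takes the cluster assignment as known and verifies consistency on one simulated draw:

```python
import numpy as np

rng = np.random.default_rng(5)
groups = [[0, 1, 2], [3, 4, 5]]              # assume clusters known/estimated
K, d, n = 2, 6, 40000

# Generate X = A Z + E from the latent model with known C* and Gamma*.
C = np.array([[2.0, 0.7], [0.7, 1.5]])
gamma = rng.uniform(0.5, 1.5, size=d)
A = np.zeros((d, K))
for k, g in enumerate(groups):
    A[g, k] = 1.0
Z = rng.multivariate_normal(np.zeros(K), C, size=n)
X = Z @ A.T + rng.standard_normal((n, d)) * np.sqrt(gamma)

# Gamma-hat: for a in G_k, subtract the mean cross-product with the other
# members j != a of the same cluster (unbiased for gamma_a).
Gamma_hat = np.zeros(d)
for g in groups:
    for a in g:
        others = [j for j in g if j != a]
        Gamma_hat[a] = np.mean(X[:, a] ** 2 - X[:, a] * X[:, others].mean(axis=1))

# C-hat: averaged cross-products between clusters, with Gamma-hat removed
# from the diagonal (a = b) terms.
C_hat = np.zeros((K, K))
for j, gj in enumerate(groups):
    for k, gk in enumerate(groups):
        cross = X[:, gj].sum(axis=1) * X[:, gk].sum(axis=1) / (len(gj) * len(gk))
        corr = sum(Gamma_hat[a] for a in gj if a in gk) / (len(gj) * len(gk))
        C_hat[j, k] = cross.mean() - corr

assert np.max(np.abs(Gamma_hat - gamma)) < 0.1
assert np.max(np.abs(C_hat - C)) < 0.1
```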
We replace $\bar \Cb$ by $\hat\Cb$ in (\[eqloss2\]) above and follow exactly the strategy of Section \[sec\_average\], with $\hat \bS$ replaced by $\hat \Cb$, to construct the corresponding pseudo-score function $\hat h(\bTheta_{\cdot k})$, similarly to (\[eqscore\]), and the initial estimator $\hat \bTheta_{\cdot k}$, similarly to (\[eqclime1\]). We combine these quantities, following the general strategy (\[eqest\]), as above, to obtain the final one-step estimator of $\Theta_{t,k}^*$, defined as $$\label{tteta} \tilde\Theta_{t,k}= \hat\Theta_{t,k}-\hat h(\hat\bTheta_{\cdot k}) \hat\Theta_{t,t},$$ after observing that, in this case, $\Ib_{1|2}=1/ {\bTheta}^{*}_{t,t}$. Although the form of this estimator is similar to (\[eqOmega\]), derived for the cluster-average graph, the study of the asymptotic normality of $\tilde\Theta_{t,k}$ reveals that its asymptotic variance is much more involved, and will be discussed in detail in Section \[hard\]. Main Theoretical Results {#sec:main_results} ======================== Assumptions {#sec:main_results:assumption} ----------- In this section we state the two assumptions under which all our results are proved. \[asmp:bounded\_latent\_covariance\] The covariance matrix ${\Cb_{ }^*}$ of $\bZ$ satisfies: $c_1 \leq {\lambda_{\min}\left({\Cb_{ }^*}\right)}$ and $\max_{t}{C_{t,t}^*} \leq c_2$, for some absolute constants $c_1, c_2 > 0$. \[asmp:bounded\_errors\] The matrix ${\bGamma^*}$ satisfies: $\max_{ 1 \leq i \leq d} \gamma^*_i \leq c_3$ for some absolute constant $c_3 > 0$, where ${\gamma_{i}^*}$ are the entries of the diagonal matrix ${\bGamma^*}$. Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] are minimal conditions for inference on precision matrices. Furthermore, they imply the conditions needed for clustering consistency derived in [@Bunea2016a] and discussed in Section \[sec:introduction:glatent\], for $n$ large enough. 
The latter only require that ${\lambda_{\min}\left({\Cb_{ }^*}\right)}$ is bounded from below by a sequence that converges to zero, as soon as $\|\bSigma^*\|_{\max}$ and $\|\bGamma^*\|_{\max}$ are bounded. Our assumptions are thus slightly stronger. In particular, a constant lower bound on ${\lambda_{\min}\left({\Cb_{ }^*}\right)}$ is standard in any inference on graphical models and needed to show the asymptotic normality of the estimator introduced above [@Ren2013; @jankova2014confidence; @jankova2017honest]. Asymptotic Normality via Berry-Esseen-type Bounds ------------------------------------------------- ### Results for the Cluster-Average Graph In this section, we show that the estimators $\tilde\Omega_{t,k}$ given by (\[eqOmega\]) are asymptotically normal, for all $t<k$. We define the sparsity of the cluster-average graph as $s_1 \in \NN$ such that $$\max_{1\leq j\leq K}\sum_{k=1}^K \II(\Omega^*_{j,k} \neq 0)\leq s_1.$$ Recall that the estimators (\[eqw\]) and (\[eqclime1\]) depend on the tuning parameters $\lambda$ and $\lambda'$. In the following theorem, we choose $\lambda \asymp \lambda' \asymp \sqrt{\frac{\log (K\vee n)}{n}}$. For notational simplicity, we use $C$ to denote a generic constant, the value of which may change from line to line. \[thm:xi\_asymptotic\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, we have $$\label{eqxi_asymptotic2} \max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big|\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{\hat s_{tk}}\leq x\Big)-\Phi(x)\Big|\leq \frac{C}{(d \vee n)^3} + \frac{Cs_1\log (K\vee n)}{n^{1/2}}+ \frac{C}{(K\vee n)^3}$$ where $\hat s_{tk}^2=\hat\Omega_{t,k}^2 + \hat\Omega_{t,t}\hat\Omega_{k,k}$ and $C$ is a positive constant. Theorem \[thm:xi\_asymptotic\], proved in Appendix \[proofmain1\], gives the rate of the normal approximation of the distribution of the scaled and centered entries $\tilde\Omega_{t,k}$.
The right hand side in (\[eqxi\_asymptotic2\]) is non-asymptotic and is valid for each $K$, $n$ and $d$. Its first, small, term is the price to pay for having first used the data for clustering, and it is dominated by the other two terms. From this perspective, the clustering step is the least taxing, as long as we can ensure its consistency, which in turn can be guaranteed under the minimal assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] already needed for the remaining steps. The second, and dominant, term regards the normal approximation of the distribution of $$\label{unscaled} {n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}.$$ Specifically, as an intermediate step, Proposition \[prop:asymptotic\_normality\_group\_averages\] in Appendix \[proofmain1\] shows that the difference between the c.d.f. of (\[unscaled\]), scaled by $s_{tk} = \sqrt{\Omega_{t,k}^{*2} + \Omega^*_{t,t}\Omega^*_{k,k}}$, and that of a standard Gaussian random variable is bounded by $\frac{s_1\log (K\vee n)}{n^{1/2}}$. Therefore, asymptotic normality holds as soon as this quantity converges to zero, which agrees with the weakest sparsity conditions for Gaussian graphical model inference in the literature [@Ren2013; @jankova2017honest]. In addition, the asymptotic variance $s_{tk}^2$ agrees with the minimum variance bound in Gaussian graphical models [@jankova2017honest]. Thus, inference based on the empirical risk function (\[eqloss\]) does not lead to any asymptotic efficiency loss. Unlike the previous works, we do not require the bounded operator norm condition, $\lambda_{\max}(\bS^*)\leq C$. This condition is avoided in our analysis by using a more convenient empirical risk function (\[eqloss\]), as opposed to the log-likelihood in [@jankova2014confidence], and a CLIME-type initial estimator (\[eqclime1\]) satisfying (\[eqrateomega\]), as opposed to the node-wise Lasso estimator in [@jankova2017honest]. 
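The claim that the asymptotic variance is $s_{tk}^2=\Omega_{t,k}^{*2}+\Omega^*_{t,t}\Omega^*_{k,k}$, the minimum variance bound, can be sanity-checked by Monte Carlo in a low-dimensional setting, where the plain inverse of the sample covariance (not the paper's one-step estimator, which is needed only in high dimensions) attains the same limit. The toy precision matrix, sample sizes, and seed below are our choices:

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, reps = 3, 300, 1000
# A toy positive-definite precision matrix (our choice, not from the paper).
Omega = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])
Sigma = np.linalg.inv(Omega)
t, k = 0, 1
draws = np.empty(reps)
for r in range(reps):
    X = rng.multivariate_normal(np.zeros(K), Sigma, size=n)
    Omega_hat = np.linalg.inv(X.T @ X / n)   # low-dimensional MLE of the precision matrix
    draws[r] = np.sqrt(n) * (Omega_hat[t, k] - Omega[t, k])
s2 = Omega[t, k] ** 2 + Omega[t, t] * Omega[k, k]   # claimed asymptotic variance
print(draws.var(), s2)   # the empirical variance should be close to s2
```

With these settings the empirical variance of the centered, $\sqrt{n}$-scaled entry matches $s_{tk}^2$ up to Monte Carlo error.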
The last term in the normal approximation is $O(\frac{1}{(K\vee n)^3})$, which is dominated by the second one, and is associated with the replacement of the theoretical variance $s_{tk}^{2}$ by the estimate $\hat s_{tk}^2$. Finally, we note that the powers of the first and the third term in the right hand side of (\[eqxi\_asymptotic2\]) can be replaced by $2 +\delta$, for any $\delta > 0$, and a change in this power also changes the associated constant $C$ in the term $\frac{Cs_1\log (K\vee n)}{n^{1/2}}$. As shown in Proposition \[thm:fdr\_bound\_av\], to obtain valid FDR control, we need $K^2/(K\vee n)^{2+\delta}=o(1)$, which holds for any $\delta>0$. For simplicity, we choose $\delta=1$ which gives the power 3. ### Results for the Latent Variable Graph {#hard} In this section we show that the estimators $\tilde{\Theta}_{t,k}$ given by (\[tteta\]) are asymptotically normal, for all $t<k$. We define the sparsity of the latent graph as $s_0 \in \NN$ such that $$\max_{1\leq j\leq K}\sum_{k=1}^K \II({\Theta_{j,k}^*} \neq 0)\leq s_0.$$ Inference for the estimator $\tilde\Theta_{t,k}$ follows the general approach outlined in Section \[sec:inference:z\_est\]. We prove in Proposition \[prop:asymptotic\_normality\_latent\] in Appendix \[proofmain2\] that $$\label{key} n^{1/2}(\tilde\Theta_{t,k}-\Theta_{t,k}^*)=\frac{1}{n^{1/2}}\sum_{i=1}^n \Theta_{t,t}^*\vb_t^{*T}(\bar \Cb^{(i)}\bTheta^*_{\cdot k}-\eb_k)+o_p(1),$$ where ${\vb}^*_{t}$ is a $K$-dimensional vector with $(\vb_{t}^*)_t=1$ and $(\vb_{t}^*)_{-t}=-\wb^*_{t}$ with $\wb^*_t=(\Cb^*_{-t,-t})^{-1}\Cb^*_{-t,t}$, and $\bar \Cb^{(i)}$ is defined in (\[eqCi\]). The terms of the sum in display (\[key\]) are mean zero random variables, and their variance is $$\sigma^2_{tk}=\EE(\Theta_{t,t}^*\vb_t^{*T}(\bar \Cb^{(i)}\bTheta^*_{\cdot k}-\eb_k))^2,$$ which does not have an explicit closed form, unlike the asymptotic variance of the estimates of the entries of $\bOmega^*$ above.
However, we show in Proposition \[lem:latent\_variable\_variance\] in Appendix \[proofmain2\] that $\sigma^2_{tk}$ admits an approximation that is easy to estimate: $$\Big|\sigma^2_{tk}-[({\Theta_{t,k}^*})^2 + {\Theta_{k,k}^*}{\Theta_{t,t}^*}]\Big|\lesssim \frac{s_0}{m},$$ where $m=\min_{1 \leq k \leq K} | {G^*_{k}} |$. Guided by this approximation, we estimate $\sigma^2_{tk}$ by $$\hat\sigma^2_{tk}=\hat\Theta_{t,k}^2 + \hat\Theta_{k,k}\hat\Theta_{t,t}.$$ When all clusters have equal size, we have $K=d/m$. Thus the $O(\frac{s_0}{m})$ terms can be ignored asymptotically, in the sense that $\frac{s_0}{m}=\frac{s_0K}{d}\leq \frac{K^2}{d}=o(1)$, when the clusters are approximately balanced and their number satisfies $K^2=o(d)$. This is a reasonable assumption in most applied clustering problems. We note that the estimator $\hat\sigma^2_{tk}$ may be inconsistent when the size of some clusters is too small. However, we recall that our ultimate goal is to use these estimators for recovering the sparsity pattern of $\bTheta^*$ under FDR control. To evaluate the sensitivity of our overall procedure to the size of the smallest cluster, we conduct simulation studies in Section \[sec:numerical\]. The results show that the proposed method works well as soon as $m>4$. The following theorem gives the Berry–Esseen normal approximation bound for the estimators of the entries of the precision matrix corresponding to the latent variable graph. \[thm:theta\_asymptotic\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then $$\label{eqtheta_asymptotic2} \max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big|\PP\Big(\frac{n^{1/2}(\tilde\Theta_{t,k}-\Theta^*_{t,k})}{\hat\sigma_{tk}}\leq x\Big)-\Phi(x)\Big|\leq \frac{C}{(d \vee n)^3} + \frac{C}{(K\vee n)^3}+\frac{Cs_0\log (K\vee n)}{n^{1/2}}+\frac{Cs_0}{m},$$ where $C$ is a positive constant.
Compared to the average graph, the Berry-Esseen bound in (\[eqtheta\_asymptotic2\]) contains an additional $\cO(\frac{s_0}{m})$ term, stemming from the approximation of the analytically intractable asymptotic variance by an estimable quantity. The proof is deferred to Appendix \[proofmain2\]. Post-clustering FDR Control {#sec:main_results:fdr_control} ---------------------------- Given the edge-wise inferential results for the cluster-average and latent variable graphs established above, we explain in this section how to combine them to control graph-wise inferential uncertainty. We view the problem of recovering the sparsity pattern of a precision matrix as a [*multiple*]{} hypothesis testing problem, by setting: $$\begin{aligned} \label{eq:multtest1} \Hb_{0;tk}: \Omega^*_{t,k} =0 \quad {\rm vs. } \quad \Hb_{1;tk}: \Omega^*_{t,k} \neq 0 \quad \text{for all } 1\leq t<k\leq K,\end{aligned}$$ for the cluster-average graph, and $$\begin{aligned} \label{eq:multtest2} \Hb_{0;tk}^{'}: {\Theta_{t,k}^*} =0 \quad {\rm vs. } \quad \Hb_{1;tk}^{'}: {\Theta_{t,k}^*} \neq 0 \quad \text{for all } 1\leq t<k\leq K,\end{aligned}$$ for the latent variable graph. In the following, we describe our procedure for the estimation of the cluster-average graph. The same procedure applies to the latent variable graph. Define the set of true null hypotheses, $\cH_{0}\coloneqq \{(t,k):\, 1\leq t<k\leq K,\textrm{ such that } \Omega^*_{t,k} = 0\}$, as the set of indices $(t,k)$ for which there is no edge between the nodes $t$ and $k$. To control the error incurred by multiple testing, we focus on the false discovery rate (FDR), which is the average number of Type I errors relative to the total number of discoveries [@BH95]. Recall that $\tilde\Omega_{t,k}$ is a consistent and asymptotically normal estimator of $\Omega^*_{t,k}$.
We consider the natural test statistic $\tilde W_{t,k}=n^{1/2}\tilde\Omega_{t,k}/\hat s_{tk}$ for $\Hb_{0;tk}$, where $\hat s^2_{tk}=\hat\Omega_{t,k}^2 + \hat\Omega_{t,t}\hat\Omega_{k,k}$. Given a cutoff $\tau>0$, the total number of discoveries is $$R_{\tau} := \sum_{1\leq t < k \leq K} \II[|\tilde W_{t,k}| > \tau ].$$ Similarly, the number of false positives or false discoveries is given by $$V_{\tau} := \sum_{(t, k) \in\cH_{0}} \II[|\tilde W_{t,k}| > \tau ].$$ The FDR is formally defined as the expected ratio of $V_\tau$ over $R_{\tau}$, $${\mathrm{FDR}}(\tau) := \EE\left[\frac{V_\tau}{R_\tau}\II[R_\tau > 0] \right],$$ where the indicator function is included to remove the trivial case $R_\tau=0$. Our goal is to find a data-dependent cutoff $\tau$ such that ${\mathrm{FDR}}(\tau)\leq \alpha+o(1)$ for any given $0 < \alpha < 1$. This is the best one can hope for when, as in our case, the distribution of the test statistics $\tilde W_{t,k}$ is only available asymptotically. The Berry-Esseen type bounds derived in Theorems \[thm:xi\_asymptotic\] and \[thm:theta\_asymptotic\] above allow us to quantify precisely the price that needs to be paid for the asymptotic approximation, and become instrumental for understanding asymptotic FDR control. In addition, the test statistics $\tilde W_{t,k}$ for different hypotheses are dependent. To allow for the dependence, instead of the standard B-H procedure [@BH95], we consider the more flexible B-Y procedure by [@Benjamini2001]. The resulting FDR procedure is as follows: reject all hypotheses such that $|\tilde W_{t,k}| \geq \hat\tau$, where $$\label{eqn:selection_rule_av} \hat \tau := \min\left\{\tau > 0 : \tau \geq \Phi^{-1}\left(1-\frac{\alpha R_{\tau}}{2N_{BY}|\cH|} \right) \right\} \text{ and } N_{BY} = \sum_{i=1}^{|\cH|} \frac{1}{i},$$ where $|\cH|=K(K-1)/2$ is the total number of hypotheses. Our next result shows when the FDR based on our test statistics is guaranteed to be no greater than $\alpha$, asymptotically.
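The selection rule (\[eqn:selection\_rule\_av\]) can be implemented in the equivalent $p$-value step-up form of the Benjamini–Yekutieli procedure, applied to the two-sided $p$-values $p_{tk}=2(1-\Phi(|\tilde W_{t,k}|))$. A minimal sketch (function name ours), using only the standard library's normal c.d.f.:

```python
import numpy as np
from statistics import NormalDist

def by_reject(W, alpha):
    """Benjamini-Yekutieli step-up applied to two-sided z-statistics W.

    Equivalent to the cutoff rule: reject |W_j| >= tau_hat, with
    tau_hat = min{ tau : tau >= Phi^{-1}(1 - alpha*R_tau / (2*N_BY*H)) }.
    Returns a boolean rejection mask aligned with W.
    """
    W = np.asarray(W, dtype=float)
    H = W.size
    N_BY = sum(1.0 / i for i in range(1, H + 1))          # harmonic correction
    Phi = NormalDist().cdf
    p = np.array([2.0 * (1.0 - Phi(abs(w))) for w in W])  # two-sided p-values
    order = np.argsort(p)
    thresh = alpha * np.arange(1, H + 1) / (N_BY * H)     # BY step-up line
    below = p[order] <= thresh
    reject = np.zeros(H, dtype=bool)
    if below.any():
        i_max = below.nonzero()[0].max()                  # largest index under the line
        reject[order[: i_max + 1]] = True
    return reject
```

In our setting, `W` would hold the $K(K-1)/2$ statistics $\tilde W_{t,k}$, one per pair $t<k$.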
\[thm:fdr\_bound\_av\]   1. Assume that the conditions in Theorem \[thm:xi\_asymptotic\] hold. For any $0 < \alpha < 1$, we have $$\label{eqfdr} {\mathrm{FDR}}(\hat \tau) \leq \alpha +2 |\cH_0|b_n,$$ where $b_n=\frac{C}{(d\vee n)^3}+\frac{C}{(K\vee n)^3}+\frac{Cs_1\log (K\vee n)}{n^{1/2}}$, and $|\cH_0|$ is the number of true null hypotheses in (\[eq:multtest1\]). 2. Assume that the conditions in Theorem \[thm:theta\_asymptotic\] hold. If we define the test statistic as $\tilde W_{t,k}=n^{1/2}\tilde\Theta_{t,k}/\hat \sigma_{tk}$, and $\hat\tau$ as in (\[eqn:selection\_rule\_av\]), we have $$\label{eqfdr_av} {\mathrm{FDR}}(\hat \tau) \leq \alpha +2 |\cH'_0|c_n,$$ where $c_n=\frac{C}{(d\vee n)^3}+\frac{C}{(K\vee n)^3}+\frac{Cs_0\log (K\vee n)}{n^{1/2}}+\frac{Cs_0}{m}$, and $|\cH'_0|$ is the number of true null hypotheses in (\[eq:multtest2\]). Thus, this result implies that our method can control the FDR asymptotically, in the sense that ${\mathrm{FDR}}(\hat\tau)\leq \alpha+o_p(1)$, provided that $s_1|\cH_0|\log (K\vee n)=o(n^{1/2})$, for the average graph, and $s_0|\cH^{'}_0|\log (K\vee n)=o(n^{1/2})$ for the latent graph. Gaussian graphical model estimation under FDR control was recently studied by [@liu2013gaussian]. Their approach is based on the following Cramér-type moderate deviation result, stated in our terminology: $$\label{eqdeviation} \max_{(t,k)\in\cH_0}\sup_{0\leq x\leq 2\sqrt{\log K}} \Big|\frac{\PP(\hat T_{t,k}\geq x)}{2-2\Phi(x)}-1\Big|=o(1),$$ where $\hat T_{t,k}$ is the test statistic they proposed for estimation of the Gaussian graphical model structure. The above result (\[eqdeviation\]) controls the relative error of the Gaussian approximation within the moderate deviation regime $[0, 2\sqrt{\log K}]$, whereas our result is based on the control of the absolute error via the Berry-Esseen-type Gaussian approximation.
One of the main advantages of their result is that the number of clusters is allowed to be $K=o(n^r)$, where $r$ is a constant that can be greater than 1. However, to prove (\[eqdeviation\]), they required that the number of strong signals tends to infinity, that is $|\{(t,k): \Omega_{t,k}^*\geq C\sqrt{\log K/n}\}|\rightarrow\infty$, which significantly reduces the parameter space for which inference is valid. In contrast, the aim of this work is the study of pattern recovery under no conditions on the signal strength of the entries of the target precision matrices, as in practice it is difficult to assess whether these conditions are met. The overall message conveyed by Proposition \[thm:fdr\_bound\_av\] is that, in the absence of any signal strength assumptions, cluster-based graphical models can still be recovered, under FDR control, provided that the number of clusters $K$ is not very high relative to $n$, and provided that the clusters are not very small. This further stresses the importance of an initial dimension reduction step in high-dimensional graphical model estimation. For instance, results similar to those of Theorem \[thm:xi\_asymptotic\] can be derived along the same lines for the estimation of the sparsity pattern of $\bSigma^{-1}$, for a generic, unstructured, covariance matrix of $\bX$, where one replaces $K$ by $d$ throughout, and $s_1$ is replaced by $s$, the number of non-zero entries in the $d \times d$ matrix $\bSigma^{-1}$. Then, the analogue of (\[eqfdr\_av\]) of Proposition \[thm:fdr\_bound\_av\] shows that FDR control in generic graphical models, based on [*asymptotic approximations of $p$-values*]{}, cannot generally be guaranteed if $d > n$. Our work shows that extra structural assumptions, for instance those motivated by clustering, do alleviate this problem. The simulation study presented in the next section provides further support to our findings.
Numerical Results {#sec:numerical} ================= This section contains simulations and a real data analysis that illustrate the finite sample performance of the inferential procedures developed in the previous sections for the latent variable graph and the cluster-average graph. Synthetic Datasets {#sec:numerical:sim} ------------------ In this subsection, we demonstrate the effectiveness of the FDR control procedures on synthetic datasets. We consider two settings $(n,d)=(800,400)$ and $(500,1000)$, and in each setting we vary the value of $K$ and $m$. The error variable $\bE$ is sampled from the multivariate normal distribution with covariance $\bGamma^*$ whose entries ${\gamma_{i}^*}$ are generated from $U[0.25,0.5]$. Recall that the latent variable $\bZ$ satisfies $\bZ\sim \cN(0,{\Cb_{ }^*})$. We consider three different models to generate the graph structure of $\bZ$. Once the graph structure is determined, the corresponding adjacency matrix $\Wb$ is found, and the precision matrix ${\bTheta_{ }^*}=\Cb^{*-1}$ is taken as ${\bTheta_{ }^*}= c\Wb + (|{\lambda_{\min}\left(\Wb\right)}| + 0.2)\Ib$, where $c=0.3$ when $d=400$ and $c=0.5$ when $d=1000$. Finally, we assign the cluster labels for all variables so that all clusters have approximately equal size, which gives us the matrix $\Ab$. Given $\Ab, \bZ$ and $\bE$, we can generate $\bX$ according to the model (\[eqn:g\_latent\_model\]). We consider the following three generating models for the graph structure of $\bZ$: - [*Scale-Free Graph –* ]{} The Scale-Free model is a generative model for network data, whose degree distribution follows a power law. To be concrete, we generate the graph one node at a time, starting with a 2-node chain. For nodes $3,\dots,K$, node $t$ is added and one edge is added between $t$ and one of the $t-1$ previous nodes. Denoting by $k_i$ the current degree of node $i$ in the graph, the probability that node $t$ and node $i$ are connected is $p_i = k_i / \sum_j k_j$.
The number of edges in the resulting graph is always $K-1$: one from the initial chain, plus one for each of the $K-2$ subsequently added nodes. - [*Hub Graph –* ]{} The $K$ nodes of the graph are partitioned evenly into groups of size $N$. Within each group, one node is selected to be the group hub, and an edge is added between it and the remainder of its group. $N$ is either 5 or 6 depending upon the choice of $K$. The number of edges in the graph is $K(N-1)/N$, so for $K=100$ with $N=5$, the number of edges in the resulting graph is 80. - [*Band3 Graph –* ]{} This model generates a graph with a Toeplitz adjacency matrix. There is an edge between node $i$ and node $j$ if and only if $|i - j| \leq B$, where we set $B=3$ in this scenario. In general, the number of edges in a band graph with $K$ nodes is given by $BK - \frac{B(B+1)}{2}$, provided $K > B$. So, for $K=100$ and $B=3$, the number of edges in the graph is 294. Recall that $\bar \bX \sim \cN(0,{\bS_{ }^*})$, where ${\bS_{ }^*}$ is defined in (\[eqn:s\_star\_definition\]). To determine the structure of the average graph, we numerically compute $\bS^{*-1}$ and threshold the matrix at $10^{-8}$. We examine the empirical FDR of our procedures on some synthetic datasets. The following protocol is followed in all the experiments: 1. Generate the graph structure of $\bZ$ as specified above. 2. Simulate $n$ observations from our model (\[eqn:g\_latent\_model\]). 3. Estimate the cluster partition $\hat G$. For computational convenience, we apply the FORCE algorithm [@Eisenach2017] when $d=400$ and the COD algorithm [@Bunea2016a] when $d=1000$. 4. Construct the test statistic $\tilde W_{t,k}$ defined in Section \[sec:main\_results:fdr\_control\]. The regularization parameters $\lambda$ and $\lambda'$ are chosen by 5-fold cross validation. 5. Find the FDR cutoff (\[eqn:selection\_rule\_av\]) at level $\alpha$, where we consider three cases $\alpha=0.05, 0.1, 0.2$. The simulation is repeated 100 times.
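The three graph generators described above can be sketched as follows. The function names, and the convention that the first node of each group serves as its hub, are our choices; the scale-free generator follows the preferential attachment process described in the text:

```python
import numpy as np

def scale_free(K, rng):
    """Start from a 2-node chain; attach each new node t by one edge
    to an existing node chosen with probability proportional to degree."""
    adj = np.zeros((K, K), dtype=int)
    adj[0, 1] = adj[1, 0] = 1
    for t in range(2, K):
        deg = adj.sum(axis=0)
        p = deg[:t] / deg[:t].sum()
        i = rng.choice(t, p=p)
        adj[t, i] = adj[i, t] = 1
    return adj

def hub(K, N):
    """Partition K nodes into groups of size N; connect each group's
    first node (taken here as the hub) to the rest of its group."""
    adj = np.zeros((K, K), dtype=int)
    for start in range(0, K, N):
        for j in range(start + 1, start + N):
            adj[start, j] = adj[j, start] = 1
    return adj

def band(K, B):
    """Edge between nodes i and j iff 0 < |i - j| <= B."""
    i, j = np.indices((K, K))
    return ((np.abs(i - j) <= B) & (i != j)).astype(int)
```

For $K=100$ these reproduce the edge counts discussed above, e.g. 80 edges for the hub graph with $N=5$ and 294 for the band graph with $B=3$.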
To compare with our Benjamini-Yekutieli based FDR procedure, we also report the empirical FDR based on the more classical Benjamini-Hochberg procedure. That is, we apply the same steps 1-4, but in step 5 we replace the FDR cutoff in (\[eqn:selection\_rule\_av\]) with the Benjamini-Hochberg (B-H) cutoff, i.e., $$\hat \tau_{BH} := \min\left\{\tau > 0 : \tau \geq \Phi^{-1}\left(1-\frac{\alpha R_{\tau}}{2|\cH|} \right) \right\}.$$ Table \[fig:synth\_fdr\] compares the empirical FDR based on our method with the B-H procedure under different $m, K$ settings when $d=400$. When $m$ is relatively large (e.g., $m=20$), both methods can control FDR on average, although our method is relatively more conservative. As expected, the FDR control problem becomes more challenging for large $K$ and small $m$. In this case the graph contains more nodes and each cluster contains fewer variables. We observe that when $m=5$ our method can still control FDR reasonably well but the B-H method produces empirical FDR far beyond the nominal level, especially for the hub graph. The inferior performance of the B-H procedure is due to the fact that the dependence among the test statistics is not accounted for by the B-H method. Finally, we examine the empirical power of the FDR procedure under each scenario, which is defined as $$\textrm{Average}\left[\sum_{(t,k) \in \cH_1}\frac{\II[|\tilde W_{t,k}| \geq \hat \tau]}{|\cH_1|} \right],$$ where $\cH_1$ is the set of alternative hypotheses. Table \[fig:synth\_fdr\_pow\] gives the empirical power of our FDR procedure and of the B-H procedure when $d=400$. It shows that our procedure and the B-H procedure have very high power in all scenarios. The same phenomenon is observed when $d$ is large, i.e., $d=1000$; see Tables \[fig:synth\_fdr\_highd\] and \[fig:synth\_fdr\_pow\_highd\]. In summary, the proposed procedure can identify most signals in the graph with well controlled FDR.
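The empirical FDR and power reported in the tables are averages over the 100 replications of the false discovery proportion and true positive proportion. For a single replication these can be computed as follows (function name ours):

```python
import numpy as np

def empirical_fdr_power(reject, true_edges):
    """False discovery proportion and power for one replication.

    reject     : boolean mask over the K(K-1)/2 hypotheses (pairs t < k),
                 True where the procedure rejected.
    true_edges : boolean mask, True where the precision entry is non-zero
                 (the alternative set H_1).
    """
    reject = np.asarray(reject, dtype=bool)
    true_edges = np.asarray(true_edges, dtype=bool)
    R = reject.sum()
    V = np.sum(reject & ~true_edges)      # false discoveries
    fdp = V / R if R > 0 else 0.0         # set to 0 in the trivial case R = 0
    power = (np.sum(reject & true_edges) / true_edges.sum()
             if true_edges.any() else 0.0)
    return fdp, power
```

Averaging `fdp` over replications gives the empirical FDR; averaging `power` gives the empirical power.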
fMRI Dataset ------------ [@Power2011] finds that the human brain can be divided into [*regions of interest*]{} (ROIs) which can be further organized into [*functional networks*]{}. We apply our inferential procedures to these regions of interest in publicly available resting-state fMRI data from the Neuro-bureau pre-processed repository [@Bellec2015]. Specifically, we use the data from patient 1018959, session 1 in the KKI dataset. This fMRI data was pre-processed using the Athena pipeline and mapped to T1 MNI152 coordinate space. We choose this dataset to make our experiments easily reproducible, as the data are available pre-processed using standard alignment and denoising procedures. Using the T1 MNI152 coordinates, we extract the 264 ROIs identified in [@Power2011], which gives us $d=264$ mean activities across $n=148$ time periods. We apply the FORCE algorithm to cluster the 264 ROIs, and obtain an estimate of $K=53$ and the corresponding clusters. Using the FDR control procedures with $\alpha=0.01$, we obtain the networks shown in Figures \[fig:graph\_fmri\_latent\] and \[fig:graph\_fmri\_averages\]. For clarity, in these two figures, we only display the nodes and connections corresponding to the 10 largest groups. The groups are colored according to the functional network the majority of their nodes belong to, as given in [@Power2011]. In the latent graph, the nodes $Z_5$, $Z_{29}$, $Z_{30}$, and $Z_{32}$ are highly connected. This finding is consistent with [@Power2011], where the authors note that the graph of the observable variables is highly connected within a functional group. By contrast, the average graph shows completely different patterns. For instance, $\bar{X}_5$, $\bar{X}_{29}$, $\bar{X}_{30}$, and $\bar{X}_{32}$, which belong to the same functional group, are not connected in the cluster-averages graph.
If one uses the cluster-averages graph to interpret the dependence structure of the functional network, the scientific results could be misleading. ![Recovered latent graph structure between 10 largest clusters in fMRI data with FDR level $\alpha=0.01$ colored according to their functions.[]{data-label="fig:graph_fmri_latent"}](sfnwmrda1018959_264_latent_graph.pdf){width="90.00000%"} ![Recovered cluster-averages graph structure between 10 largest clusters in fMRI data with FDR level $\alpha=0.01$.[]{data-label="fig:graph_fmri_averages"}](sfnwmrda1018959_264_averages_graph.pdf){width="90.00000%"} S&P 500 Stock Price Data {#sec:numerical:real} ------------------------ In this subsection we apply our inferential tools to the stock price data from January 1st, 2010 through December 31st, 2013. Because the S&P 500 membership is not fixed, we remove stocks that have missing data points, leaving $d=468$ stocks. The remaining data are standardized before proceeding. The S&P 500 categorizes its stocks into 11 categories, so we set $K = 11$ in the clustering algorithm. Applying the clustering step, we recover 5 clusters that each consist of exactly one S&P 500 category, 3 clusters in which at least 90% are of the same category, and 3 that are evenly mixed between several categories. It should be noted that in the 3 clusters in which at least 90% of the stocks are of the same category, the stocks not from the category are for companies whose products are integrally related to the business of the companies in the same category. The evenly mixed clusters consist largely of a mixture of “Consumer Discretionary” and “Consumer Staples”. Examining the individual stocks, in many cases there is little to distinguish a company in “Consumer Discretionary” from one in “Consumer Staples”.
Table \[fig:sp500\_K11\_clusters\] gives a summary of each cluster recovered by FORCE. After estimation of $\hat G$, we apply our FDR procedure to estimate the latent graph and cluster-average graph. Figure \[fig:graph\_structures\_sp500\] shows the recovered graph structures after applying the FDR control procedures described in Section \[sec:main\_results:fdr\_control\], for $\alpha = 0.01$. ![Recovered Graph Structures from S&P 500 stock price data.[]{data-label="fig:graph_structures_sp500"}](sp_500lr_K11_latent_graph.pdf "fig:"){width="48.00000%"} ![Recovered Graph Structures from S&P 500 stock price data.[]{data-label="fig:graph_structures_sp500"}](sp_500lr_K11_av_graph.pdf "fig:"){width="48.00000%"} Acknowledgement {#acknowledgement .unnumbered} =============== Florentina Bunea was partially supported by NSF-DMS 712709. Proofs Regarding Estimation in the Cluster-Average Graph {#proofmain1} ======================================================== In this section we provide the proofs of the results needed for establishing the asymptotic normality of the estimators of the entries of $\bOmega^*$. The results below make use of the fact that consistent clustering is possible, under our assumptions, as stated below. \[lemcluster\] Let $\cE:= \{\hat G=G^*\}$, for $\hat{G}$ estimated by either the COD or the PECOK algorithm of [@Bunea2016a]. Then, under Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\], we have $$\PP(\cE)\geq 1-\frac{C}{(d\vee n)^3}.$$ The conclusion of this lemma is proved in Theorem 3, for the COD algorithm, and Theorem 4, for the PECOK algorithm, of [@Bunea2016a]. Lemma \[lemcluster\] allows us to replace $\hat{G}$ by $G^*$ in all the results below, while incurring a small error, of order $O\left(\frac{1}{(d\vee n)^3}\right)$, which will be shown to be dominated by other error bounds.
Main Proofs for the Cluster-Average Graph Estimators ---------------------------------------------------- The proof relies crucially on Proposition \[prop:asymptotic\_normality\_group\_averages\], stated and proved after the end of this proof. Assuming that this result has been obtained, the proof of Theorem \[thm:xi\_asymptotic\] follows the standard steps explained below. Denote $\cE'=\{\max_{1\leq t< k\leq K}|\hat s_{tk}/s_{tk}-1|\leq r\}$, where $r=C\sqrt{\frac{s_1\log (K\vee n)}{n}}$, and $\bar\cE'$ is the complement of the event $\cE'$. We first consider the bound $$\begin{aligned} \PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{\hat s_{tk}}\leq x\Big)-\Phi(x)&\leq \PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{\hat s_{tk}}\leq x, \cE', \cE\Big)-\Phi(x)+\PP(\bar\cE')+\PP(\bar\cE)\\ &=\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x\frac{\hat s_{tk}}{s_{tk}}, \cE',\cE\Big)-\Phi(x)+\PP(\bar\cE')+\PP(\bar\cE).\end{aligned}$$ Proposition \[prop:convergence\_group\_averages\_variance\] below implies $\PP(\bar\cE')\leq C(K\vee n)^{-3}$ and Lemma \[lemcluster\] above implies $\PP(\bar\cE)\leq C(d\vee n)^{-3}$ for some constant $C$. 
In addition, for $x\geq 0$, $$\begin{aligned} &\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x\frac{\hat s_{tk}}{s_{tk}}, \cE',\cE \Big)-\Phi(x) \leq \PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x(1+r),\cE \Big)-\Phi(x)\nonumber\\ &=\Big\{\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x(1+r), \cE \Big)-\Phi(x(1+r))\Big\}+\Big\{\Phi(x(1+r))-\Phi(x)\Big\}.\label{eqthmxi1}\end{aligned}$$ For the first term, Proposition \[prop:asymptotic\_normality\_group\_averages\] implies $$\max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big|\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x(1+r),\cE\Big)-\Phi(x(1+r))\Big|\lesssim \frac{s_1\log (K\vee n)}{n^{1/2}}.$$ By the mean value theorem, the second term $\Phi(x(1+r))-\Phi(x)=\phi(x(1+ur))xr$, for some $u\in[0,1]$. It is easily seen that $\sup_{x\in\RR}\sup_{u\in[0,1]}|\phi(x(1+ur))x|\leq C$ for some constant $C$. Plugging it into (\[eqthmxi1\]), we obtain $$\label{eqthmxi2} \max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big\{\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x\frac{\hat s_{tk}}{s_{tk}}, \cE',\cE\Big)-\Phi(x)\Big\}\lesssim \frac{s_1\log (K\vee n)}{n^{1/2}}+r\lesssim \frac{s_1\log (K\vee n)}{n^{1/2}}.$$ When $x<0$, similar to (\[eqthmxi1\]), the bound is $$\begin{aligned} &\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x\frac{\hat s_{tk}}{s_{tk}}, \cE',\cE\Big)-\Phi(x)\\ &\leq \Big\{\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x(1-r),\cE\Big)-\Phi(x(1-r))\Big\}+\Big\{\Phi(x(1-r))-\Phi(x)\Big\}.\end{aligned}$$ Thus (\[eqthmxi2\]) holds for $x<0$ as well.
Combining these results, we obtain $$\max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big\{\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{\hat s_{tk}}\leq x\Big)-\Phi(x)\Big\}\lesssim \frac{s_1\log (K\vee n)}{n^{1/2}}+\frac{1}{(K\vee n)^3}+\frac{1}{(d\vee n)^3}.$$ By a similar argument, we can also derive $$\max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big\{\Phi(x)-\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{\hat s_{tk}}\leq x\Big)\Big\}\lesssim \frac{s_1\log (K\vee n)}{n^{1/2}}+\frac{1}{(K\vee n)^3}+\frac{1}{(d\vee n)^3}.$$ This completes the proof. \[prop:asymptotic\_normality\_group\_averages\] Under the same conditions as in Theorem \[thm:xi\_asymptotic\], we have $$\max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big|\PP\Big(\frac{n^{1/2}(\tilde\Omega_{t,k}-\Omega^*_{t,k})}{s_{tk}}\leq x, \cE\Big)-\Phi(x)\Big|\leq \frac{C}{(K\vee n)^3}+\frac{Cs_1\log (K\vee n)}{n^{1/2}}.$$ The proof is done in two steps. In Step 1, we show that, on the event $\cE$, $$\label{eqn:asymptotic_normality_ga_1} n^{1/2}|({\tilde{\Omega}_{t,k}} - {\Omega_{t,k}^*}) /s_{t,k} + {\Omega_{t,t}^*}h({\bOmega_{ \cdot k}^*})/s_{t,k}| \leq \frac{s_1 \log (K\vee n)}{n^{1/2} },$$ with probability at least $1-C/(K\vee n)^3$, and then use Lemma \[lem:group\_averages\_clt\] to obtain the result.
To prove , we decompose it as $$\begin{aligned} &n^{1/2}|({\tilde{\Omega}_{t,k}} - {\Omega_{t,k}^*}) + {\Omega_{t,t}^*} h(\bOmega_{\cdot k}^*)| \\ &= n^{1/2}|({\hat{\Omega}_{t,k}} - {\Omega_{t,k}^*}) -{\hat{\Omega}_{t,t}} \hat h(\hat\bOmega_{\cdot k}) + {\Omega_{t,t}^*} h(\bOmega_{\cdot k}^*)| \\ &\leq \underlabel{n^{1/2} |({\hat{\Omega}_{t,k}} - {\Omega_{t,k}^*}) - {\Omega_{t,t}^*} (h(\hat\bOmega_{\cdot k})-h(\bOmega_{\cdot k}^*))|}{I.1} \\ &\quad + \underlabel{n^{1/2} |{\Omega_{t,t}^*} (\hat h(\hat\bOmega_{\cdot k})-h(\hat\bOmega_{\cdot k}))|}{I.2} + \underlabel{n^{1/2} |({\hat{\Omega}_{t,t}}-{\Omega_{t,t}^*})\hat h(\hat\bOmega_{\cdot k}) |}{I.3}.\end{aligned}$$ In the following, we study these three terms separately. Recall that $h(\bOmega_{\cdot k}) = \vb_t^{*T}(\hat\bS\bOmega_{\cdot k} - \eb_k),$ and $\hat h(\bOmega_{\cdot k})=\hat\vb_{t}^T(\hat{\bS}\bOmega_{\cdot k}-\eb_k)$, where ${\vb}^*_{t}$ is a $K$-dimensional vector with $(\vb_{t}^*)_t=1$ and $(\vb_{t}^*)_{-t}=-\wb^*_{t}$ with $\wb^*_t=(\bS^*_{-t,-t})^{-1}\bS^*_{-t,t}$. Term I.1 reduces to $$\begin{aligned} |I.1|&=n^{1/2} |({\hat{\Omega}_{t,k}} - {\Omega_{t,k}^*}) - {\Omega_{t,t}^*} \vb_t^{*T}\hat\bS(\hat\bOmega_{\cdot k}-\bOmega_{\cdot k}^*)|\nonumber\\ &\leq n^{1/2}|({\hat{\Omega}_{t,k}} - {\Omega_{t,k}^*})(1-\Omega_{t,t}^*(\hat\bS_{t,t}-\wb^{*T}_t\hat\bS_{-t,t}))|\label{eqasym1}\\ &+n^{1/2}{\Omega_{t,t}^*} |(\hat \bS_{t,-t}-\wb^{*T}_t\hat \bS_{-t,-t})(\hat\bOmega_{-t,k}-\bOmega^*_{-t,k})|. \label{eqasym2}\end{aligned}$$ Note that $1/\Omega_{t,t}^*=\bS^*_{t,t}-\wb^{*T}_t\bS^*_{-t,t}$. 
The term in (\[eqasym1\]) can be bounded by $$\begin{aligned} &n^{1/2}|({\hat{\Omega}_{t,k}} - {\Omega_{t,k}^*})\Omega_{t,t}^*(\hat\bS_{t,t}-\bS_{t,t}^*)|+n^{1/2}|({\hat{\Omega}_{t,k}} - {\Omega_{t,k}^*})\Omega_{t,t}^*\wb_t^{*T}(\hat\bS_{-t,t}-\bS_{-t,t}^*)|\\ &\leq n^{1/2}|\Omega_{t,t}^*| \|\hat\bOmega_{\cdot k}-\bOmega_{\cdot k}^*\|_1 \max(\|\hat\bS-\bS^*\|_{\max}, \|\wb_t^{*T}(\hat\bS_{-t\cdot}-\bS_{-t\cdot}^*)\|_\infty)\\ &\leq \frac{Cs_1\log(K\vee n)}{n^{1/2}},\end{aligned}$$ with probability at least $1-(K\vee n)^{-3}$, by $\lambda_{\max}(\bOmega^*)\leq C$ and the concentration and error bound results in Lemmas \[lem:group\_av\_S\_consistency\], \[lem:group\_averages\_gradient\_hessian\], \[lem:group\_averages\_consistency\]. The term in (\[eqasym2\]) can be bounded by $$n^{1/2}\Omega^*_{t,t} \|\hat \bS_{t,-t}-\wb^{*T}_t\hat \bS_{-t,-t}\|_\infty\|\hat\bOmega_{-t,k}-\bOmega^*_{-t,k}\|_1\leq \frac{Cs_1\log(K\vee n)}{n^{1/2}},$$ with probability at least $1-(K\vee n)^{-3}$, again by Lemmas \[lem:group\_averages\_gradient\_hessian\], \[lem:group\_averages\_consistency\]. Thus, $|\text{I.1}|\leq \frac{Cs_1\log(K\vee n)}{n^{1/2}}$ with probability at least $1-(K\vee n)^{-3}$. For term I.2, we have $$\begin{aligned} |I.2|&=n^{1/2}\Omega_{t,t}^* |(\hat \vb_t-\vb_t^*)^T(\hat\bS\hat\bOmega_{\cdot k} - \eb_k)|\\ &\leq n^{1/2}\Omega_{t,t}^* \|\hat \vb_t-\vb_t^*\|_1\|\hat\bS\hat\bOmega_{\cdot k} - \eb_k\|_\infty\leq \frac{Cs_1\log(K\vee n)}{n^{1/2}},\end{aligned}$$ with probability at least $1-(K\vee n)^{-3}$ by Lemma \[lem:group\_averages\_consistency\] and the constraint of the CLIME-type estimator.
To control term I.3, first we observe that $$\begin{aligned} |\hat h(\hat\bOmega_{\cdot k})|&=|\hat\vb_t^T(\hat\bS\hat\bOmega_{\cdot k}-\eb_k)|\\ &\leq |\vb_t^{*T}(\hat\bS \bOmega^*_{\cdot k}-\eb_k)|+ |\vb_t^{*T}\hat\bS (\hat\bOmega_{\cdot k}-\bOmega^*_{\cdot k})|+|(\hat\vb_t-\vb_t^{*})^T(\hat\bS \hat\bOmega_{\cdot k}-\eb_k)|\\ &\leq \|\vb^*_t\|_1\|\hat\bS \bOmega^*_{\cdot k}-\eb_k\|_\infty+\|\hat\bS_{t,-t}-\wb_t^{*T}\hat\bS_{-t,-t}\|_\infty\|\hat\bOmega_{\cdot k}-\bOmega^*_{\cdot k}\|_1+\|\hat\vb_t-\vb_t^{*}\|_1\|\hat\bS \hat\bOmega_{\cdot k}-\eb_k\|_\infty.\end{aligned}$$ As shown in the proof of Lemma \[lem:group\_averages\_clt\], $\|\vb^*_t\|_1\leq s_1^{1/2}\|\vb^*_t\|_2\leq Cs_1^{1/2}$. The bounds on the remaining terms follow easily from Lemmas \[lem:group\_averages\_gradient\_hessian\], \[lem:group\_averages\_consistency\]. Thus, we have $|\hat h(\hat\bOmega_{\cdot k})|\leq C(s_1\log (K\vee n)/n)^{1/2}$ with high probability. Since $$|\hat\Omega_{t,t}-\Omega_{t,t}^*|\leq C(s_1\log (K\vee n)/n)^{1/2},$$ by Lemma [\[lem:group\_averages\_consistency\]]{}, we obtain that $|\text{I.3}|\leq \frac{Cs_1\log(K\vee n)}{n^{1/2}}$ with probability at least $1-(K\vee n)^{-3}$. It is easily seen that $\Omega_{t,t}^*\geq \frac{1}{S^*_{t,t}}\geq C>0$; see Remark \[rem:averages\_assumptions\]. This implies that $s^2_{t,k}=\Omega_{t,t}^*\Omega_{k,k}^*+\Omega_{t,k}^{*2}$ is lower bounded by a positive constant. The proof of (\[eqn:asymptotic\_normality\_ga\_1\]) is complete. In Step 2, we need to verify that $$\max_{1\leq t< k\leq K}\sup_{x \in \RR}\Big|\PP\Big(\frac{n^{1/2}\Omega_{t,t}^*h(\bOmega_{\cdot k}^*)}{s_{tk}}\leq x\Big)-\Phi(x)\Big|\leq \frac{C}{(K\vee n)^3}+\frac{C}{n^{1/2}},$$ which has been done in Lemma \[lem:group\_averages\_clt\]. Thus, combining with result (\[eqn:asymptotic\_normality\_ga\_1\]), we can use the same simple union bound as in the proof of Theorem \[thm:xi\_asymptotic\] to obtain the desired result.
\[prop:convergence\_group\_averages\_variance\] Under the same conditions as in Theorem \[thm:xi\_asymptotic\], we have $$\max_{1\leq t< k\leq K}|\hat s_{t,k}^2 - s_{t,k}^2| \leq C \sqrt{\frac{s_1\log(K \vee n)}{n}},~~ \max_{1\leq t< k\leq K}\Big|\frac{\hat s_{t,k}}{s_{t,k}}-1\Big| \leq C \sqrt{\frac{s_1\log(K \vee n)}{n}},$$ with probability at least $1-(K\vee n)^{-3}$. By Lemma \[lem:group\_averages\_consistency\], under the event $\hat G=G^*$, we have $$\max_{1\leq t,k\leq K} |\hat\Omega_{t,k}-\Omega^*_{t,k}|\leq \max_{1\leq k\leq K} \|\hat\bOmega_{\cdot k}-\bOmega^*_{\cdot k}\|_2\leq C_1 \sqrt{\frac{s_1\log (K\vee n)}{n}},$$ with probability at least $1-\frac{C_4}{(K\vee n)^3}$. Under this event, $$\begin{aligned} \max_{1\leq t< k\leq K}|\hat s_{t,k}^2 - s_{t,k}^2|&=\max_{1\leq t< k\leq K}|\hat\Omega_{t,k}^2 + \hat\Omega_{t,t}\hat\Omega_{k,k}-(\Omega_{t,k}^{*2} + \Omega_{t,t}^*\Omega_{k,k}^*)|\\ &\leq \max_{1\leq t< k\leq K}|(\hat\Omega_{t,k}-\Omega_{t,k}^*)(\hat\Omega_{t,k}+\Omega_{t,k}^*)|+\hat\Omega_{t,t}|\hat\Omega_{k,k}-\Omega_{k,k}^*|+\Omega_{k,k}^*|\hat\Omega_{t,t}-\Omega_{t,t}^*|\\ &\leq C_1 \sqrt{\frac{s_1\log (K\vee n)}{n}}(4\|\bOmega^*\|_{\max}+\delta)\leq C \sqrt{\frac{s_1\log(K \vee n)}{n}},\end{aligned}$$ for some constant $\delta>0$ since $\|\bOmega^*\|_{\max}\leq \lambda_{\max}(\bOmega^*)\leq C$. It is easily seen that $\Omega_{t,t}^*\geq \frac{1}{S^*_{t,t}}\geq C>0$, see Remark \[rem:averages\_assumptions\]. This implies that $s^2_{t,k}=\Omega_{t,t}^*\Omega_{k,k}^*+\Omega_{t,k}^{*2}$ is lower bounded by a positive constant. 
Thus, $$\max_{1\leq t< k\leq K}\Big|\frac{\hat s_{t,k}}{s_{t,k}}-1\Big| \leq \max_{1\leq t< k\leq K}\Big|\frac{\hat s^2_{t,k}-s^2_{t,k}}{s_{t,k}(s_{t,k}+\hat s_{t,k})}\Big|\leq C \sqrt{\frac{s_1\log(K \vee n)}{n}}.$$

Key Lemmas for Estimators of the Cluster-Average Graph
------------------------------------------------------

\[rem:averages\_assumptions\] While Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] are made for $\Cb^*$, they do imply that $c_1 \leq {\lambda_{\min}\left({\bS_{ }^*}\right)}$ and $\max_{t}{S_{t,t}^*} \leq c_2 + c_3$ hold for $\bS^*$. Furthermore, Lemma \[lem:latent\_re\_condition\] implies that $\bS^*$ satisfies the same restricted eigenvalue condition as ${\Cb_{ }^*}$. In the following proofs, we always assume that the event $\cE=\{\hat G=G^*\}$ holds. By an argument similar to that used in the proof of Theorem \[thm:xi\_asymptotic\], the following bounds then hold with probability at least $1 - \frac{C}{(K\vee n)^3}$. \[lem:group\_av\_S\_consistency\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then with probability greater than $1 - \frac{C}{(K\vee n)^3}$, $$\|{\hat{\bS}_{ }}- {\bS_{ }^*}\|_{\max}\leq C\sqrt{\frac{\log (K\vee n)}{n}}$$ for some constant $C$ dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. The proof follows from the proof of Theorem 1 in [@Cai11a].
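As a concrete illustration of the object being controlled (a standalone numerical sketch, not part of the proof; the membership matrix and covariance are chosen arbitrarily): with a cluster-membership matrix $\Ab$, the sandwich $(\Ab^{T}\Ab)^{-1}\Ab^{T}\hat\bSigma\Ab(\Ab^{T}\Ab)^{-1}$ defining the cluster-average covariance simply averages $\hat\bSigma$ over group-by-group blocks.

```python
import numpy as np

# Membership matrix A: d=5 variables in K=2 groups (arbitrary toy example).
A = np.zeros((5, 2))
A[[0, 1, 2], 0] = 1.0   # group 1 = {0, 1, 2}
A[[3, 4], 1] = 1.0      # group 2 = {3, 4}

base = np.arange(25, dtype=float).reshape(5, 5)
Sigma_hat = (base + base.T) / 2.0  # any symmetric "covariance estimate"

B = A.T @ A  # diagonal matrix of group sizes
S_hat = np.linalg.inv(B) @ A.T @ Sigma_hat @ A @ np.linalg.inv(B)

# Entry (s,t) of S_hat is the average of Sigma_hat over the
# (group s) x (group t) block.
groups = [[0, 1, 2], [3, 4]]
for s in range(2):
    for t in range(2):
        block_avg = Sigma_hat[np.ix_(groups[s], groups[t])].mean()
        assert np.isclose(S_hat[s, t], block_avg)
print("sandwich transform = blockwise averaging verified")
```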
\[lem:group\_averages\_gradient\_hessian\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then with probability greater than $1-\frac{C}{(K\vee n)^3}$, we have that - $\max_{1\leq k\leq K}\|{\hat{\bS}_{ }}{\bOmega_{ \cdot k}^*}-\eb_k\|_{\infty}\leq C_1\sqrt{\frac{\log (K\vee n)}{n}}$, - $\max_{1\leq k\leq K}\|{\hat{\bS}_{ t,-t }}-\wb_t^{*T}{\hat{\bS}_{ -t,-t }}\|_\infty\leq C_2\sqrt{\frac{\log (K\vee n)}{n}}$, and - $\max_{1\leq k\leq K}\|\wb_t^{*T}({\hat{\bS}_{ -t,-t }}-{\bS_{ -t,-t }^*})\|_\infty\leq C_2\sqrt{\frac{\log (K\vee n)}{n}}$, for some constants $C_1$ and $C_2$ dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. We start from the decomposition $$\begin{aligned} {\hat{\bS}_{ }}- {\bS_{ }^*}&=(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bSigma}-\bSigma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}.\end{aligned}$$ Writing $\Bb^*=\Ab^{*T}\Ab^*$, we have $$\begin{aligned} &(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bSigma}-\bSigma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}\nonumber\\ &=\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}\Big\{(\Ab^*\bZ_i+\bE_i)(\Ab^*\bZ_i+\bE_i)^T-\Ab^*\Cb^*\Ab^{*T}-\bGamma^*\Big\}\Ab^*\Bb^{*-1}\nonumber\\ &=\frac{1}{n}\sum_{i=1}^n\bZ_i\bZ_i^T-\Cb^*+\frac{1}{n}\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}+\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}\bE_i\bZ_i^T\nonumber\\ &~~~+\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-\bGamma^*)\Ab^*\Bb^{*-1}.\label{eqn:group_averages_decomp}\end{aligned}$$ Note that ${\hat{\bS}_{ }}{\bOmega_{ \cdot k}^*}-\eb_k=({\hat{\bS}_{ }}-{\bS_{ }^*}){\bOmega_{ \cdot k}^*}$, and $||{\bOmega_{ \cdot k}^*}||_2 \leq {\lambda_{\max}\left({\bOmega_{ }^*}\right)} \leq {\lambda_{\min}\left({\bS_{ }^*}\right)}^{-1}$. As seen in Remark \[rem:averages\_assumptions\], we can bound the smallest eigenvalue of ${\bS_{ }^*}$ from below by ${\lambda_{\min}\left({\Cb_{ }^*}\right)}$. We therefore obtain that $||{\bOmega_{ \cdot k}^*}||_2 \leq c_1^{-1}$.
By using the triangle inequality, we can apply Lemma \[lem:conc\_sum\_ZiZi\], Lemma \[lem:conc\_sum\_ZiEi\] and Lemma \[lem:conc\_sum\_EiEi\] to bound (\[eqn:group\_averages\_decomp\]). Combining these results, we obtain $$\max_{1\leq k\leq K}\|{\hat{\bS}_{ }}{\bOmega_{ \cdot k}^*}-\eb_k\|_{\infty}\leq C_1\sqrt{\frac{\log (K\vee n)}{n}}$$ with probability at least $1 - \frac{C}{(K\vee n)^3}$, concluding the proof of claim (a). For the remaining two claims, we can rewrite $$\wb_t^* = -\left({S_{t,t}^*} - \bS^{*T}_{-t,t}({\bS_{ -t,-t }^*})^{-1}{\bS_{ -t,t }^*}\right) {\bOmega_{ -t,t}^*} = -\frac{1}{{\Omega_{t,t}^*}}{\bOmega_{ -t,t}^*}$$ by the block matrix inverse formula. Using Lemma \[lem:pd\_matrix\_diag\], it follows that $||\wb^*_t||_2 \leq {\lambda_{\max}\left({\bOmega_{ }^*}\right)} \max_t {S_{t,t}^*}$. Then we can see that $$\begin{aligned} \max_{1\leq k\leq K}||{\hat{\bS}_{ t,-t }}-\wb_t^{*T}{\hat{\bS}_{ -t,-t }}||_{\infty} &= \max_{1\leq k\leq K}||({\hat{\bS}_{ t,-t }}-{\bS_{ t,-t }^*})-\wb_t^{*T}({\hat{\bS}_{ -t,-t }}-{\bS_{ -t,-t }^*})||_{\infty} \\ &\leq \underlabel{\max_{1\leq k\leq K}||{\hat{\bS}_{ t,-t }}-{\bS_{ t,-t }^*}||_\infty}{(i)} + \underlabel{\max_{i \neq t}|\wb_t^{*T}({\hat{\bS}_{ -t,i }}-{\bS_{ -t,i }^*})|}{(ii)}.\end{aligned}$$ By using Lemma \[lem:group\_av\_S\_consistency\], (i) is bounded with high probability. Similar to the proof of Theorem 1 in [@Cai11a], we can show that $(ii)\leq C_2\sqrt{\frac{\log (K\vee n)}{n}}$, with probability at least $1 - \frac{C}{(K\vee n)^3}$. Part (c) is the same as term (ii), which concludes the proof.
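The block-inverse relation between $\wb_t^*$ and the $t$-th column of $\bOmega^*$, together with the norm bound above, can be checked numerically. The sketch below uses an arbitrary positive-definite matrix purely for illustration, and compares absolute values entrywise so that the check is insensitive to the sign convention for $\wb_t^*$:

```python
import numpy as np

# An arbitrary positive-definite "covariance" S and its precision matrix Omega.
S = np.array([[2.0, 0.6, 0.2, 0.1],
              [0.6, 1.8, 0.3, 0.0],
              [0.2, 0.3, 1.5, 0.4],
              [0.1, 0.0, 0.4, 1.2]])
Omega = np.linalg.inv(S)
lam_max = np.linalg.eigvalsh(Omega).max()

for t in range(4):
    idx = [i for i in range(4) if i != t]
    # w_t = S_{-t,-t}^{-1} S_{-t,t}, the regression coefficients of X_t on X_{-t}.
    w_t = np.linalg.solve(S[np.ix_(idx, idx)], S[idx, t])
    # Up to sign, w_t equals the t-th column of Omega rescaled by 1/Omega_{t,t}.
    assert np.allclose(np.abs(w_t), np.abs(Omega[idx, t]) / Omega[t, t])
    # Hence ||w_t||_2 <= lambda_max(Omega) * max_t S_{t,t}.
    assert np.linalg.norm(w_t) <= lam_max * S.diagonal().max() + 1e-12
print("block-inverse relation and norm bound verified")
```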
\[lem:group\_averages\_consistency\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then, - $\max_{1\leq k\leq K}\|{\hat{\bOmega}_{ \cdot k}}-{\bOmega_{ \cdot k}^*} \|_1\leq C_1 s_1\sqrt{\frac{\log (K\vee n)}{n}}$, $\max_{1\leq k\leq K}\|{\hat{\bOmega}_{ \cdot k}}-{\bOmega_{ \cdot k}^*} \|_2\leq C_1 \sqrt{\frac{s_1\log (K\vee n)}{n}}$, - $\max_{1\leq t\leq K}\|\hat\vb_{t}-\vb_{t}^*\|_1 \leq C_2 s_1\sqrt{\frac{\log (K\vee n)}{n}}$, and - $\max_{1\leq k\leq t\leq K}|(\hat\vb_{t}-\vb_{t}^*)^T{\hat{\bS}_{ }}({\hat{\bOmega}_{ \cdot k}}-{\bOmega_{ \cdot k}^*})|\leq C_3 \frac{s_1\log (K\vee n)}{n}$, with probability at least $1-\frac{C_4}{(K\vee n)^3}$. $C_1$, $C_2$, $C_3$ and $C_4$ are constants depending only upon $c_0$, $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. The proof follows from the same argument as in the proof of Lemma \[lem:latent\_consistency\] with Lemma \[lem:latent\_gradient\_hessian\] replaced by Lemma \[lem:group\_averages\_gradient\_hessian\]. \[lem:group\_averages\_clt\] Recall that $s_{tk}^2=\Omega_{t,k}^{*2} + \Omega^*_{t,t}\Omega^*_{k,k}$. Let $F_n$ denote the CDF of $n^{1/2}\vb_t^{*T}(\hat\bS{\bOmega_{ \cdot k}^*}-\eb_k)/(s_{tk}/\Omega^*_{t,t})$. If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then $$\max_{1\leq t< k\leq K} \sup_{x\in \RR}|F_{n}(x) - \Phi(x) | \leq C(n^{-1/2}+(d\vee n)^{-3}),$$ where $C$ is a constant dependent only upon $c_0$, $c_1$, and $c_2$. We have the bound $F_{n}(x) - \Phi(x)\leq \tilde F_n(x)-\Phi(x)+\PP(\bar\cE)$, where $\cE$ is the event $\hat G = G^*$, and $\tilde F_n(x)$ is the CDF of $n^{-1/2}\sum_{i=1}^n\vb_t^{*T}(\bar\bX_i\bar\bX_i^T\bOmega^*_{\cdot k}-\eb_k)/(s_{tk}/\Omega^*_{t,t})$. Lemma \[lemcluster\] implies that $\PP(\bar\cE)\leq (d\vee n)^{-3}$. We can similarly verify the Lyapunov condition to control $\tilde F_n(x)-\Phi(x)$.
Finally, $\EE[(\vb_t^{*T}(\bar\bX_i\bar\bX_i^T\bOmega^*_{\cdot k}-\eb_k))^2]=(s_{tk}/\Omega^*_{t,t})^2$ by applying Isserlis' theorem, which gives $\Var(\bv_1^T\bar\bX\bar\bX^T\bv_2)=(\bv_1^T\bS^*\bv_1)(\bv_2^T\bS^*\bv_2)+(\bv_1^T\bS^*\bv_2)^2$ for any vectors $\bv_1$ and $\bv_2$. The proof follows verbatim that of Theorem 8.5 in [@giraud2014introduction], except that exact $p$-values are replaced by approximate $p$-values, including the corresponding rate of approximation. We include the full proof for the convenience of the reader. By the definition of the FDR, $${\mathrm{FDR}}(\hat \tau)=\EE\Big[\frac{ \sum_{(t, k) \in\cH_{0}} \II[|\tilde W_{t,k}| > \hat\tau ] \II[R_{\hat\tau} > 0]}{R_{\hat\tau}}\Big]=\sum_{(t, k) \in\cH_{0}}\EE\Big[\frac{\II[|\tilde W_{t,k}| > \hat\tau ] \II[R_{\hat\tau} > 0]}{R_{\hat\tau}}\Big].\label{eqfdr1}$$ To handle the $R_{\hat\tau}$ in the denominator, we use the identity $1=\sum_{i=R_{\hat\tau}}^\infty\frac{R_{\hat\tau}}{i(i+1)}$. This implies $$1/R_{\hat\tau}=\sum_{i=R_{\hat\tau}}^\infty\frac{1}{i(i+1)}=\sum_{i=1}^\infty\frac{\II[i\geq R_{\hat\tau}]}{i(i+1)}.$$ Plugging this into (\[eqfdr1\]) and bringing the expectation inside the summation gives that $$\begin{aligned} {\mathrm{FDR}}(\hat \tau)&=\sum_{(t, k) \in\cH_{0}}\sum_{i=1}^\infty\frac{1}{i(i+1)}\EE\Big[\II[|\tilde W_{t,k}| > \hat\tau ] \II[R_{\hat\tau} > 0]\II[i\geq R_{\hat\tau}]\Big]\nonumber\\ &\leq \sum_{(t, k) \in\cH_{0}}\sum_{i=1}^\infty\frac{1}{i(i+1)}\EE\Big[\II[|\tilde W_{t,k}| > \Phi^{-1}\left(1-\frac{\alpha R_{\hat\tau}}{2N_{BY}|\cH|} \right) ] \II[R_{\hat\tau} > 0]\II[i\geq R_{\hat\tau}]\Big]\nonumber\\ &\leq \sum_{(t, k) \in\cH_{0}}\sum_{i=1}^\infty\frac{1}{i(i+1)}\EE\Big[\II[|\tilde W_{t,k}| > \Phi^{-1}\left(1-\frac{\alpha (i\wedge |\cH|)}{2N_{BY}|\cH|} \right) ] \Big],\label{eqfdr2}\end{aligned}$$ where the second line follows from the definition of the FDR cutoff and the last inequality holds since $R_{\hat\tau}\leq (i\wedge |\cH|)$.
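The identity $1/R_{\hat\tau}=\sum_{i\geq R_{\hat\tau}}\frac{1}{i(i+1)}$ used above is a telescoping sum, since $\frac{1}{i(i+1)}=\frac{1}{i}-\frac{1}{i+1}$. A quick exact check with rational arithmetic (a standalone sketch, truncating the infinite tail at a cutoff $M$):

```python
from fractions import Fraction

# 1/(i(i+1)) = 1/i - 1/(i+1), so summing from i = R to M telescopes
# to 1/R - 1/(M+1); letting M -> infinity gives exactly 1/R.
M = 2000
for R in [1, 2, 7, 40]:
    tail = sum(Fraction(1, i * (i + 1)) for i in range(R, M + 1))
    assert tail == Fraction(1, R) - Fraction(1, M + 1)
print("telescoping identity verified")
```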
The Berry-Esseen bound in Theorem \[thm:xi\_asymptotic\] implies that $$\PP(|\tilde W_{t,k}| > \Phi^{-1}\left(1-\frac{\alpha (i\wedge |\cH|)}{2N_{BY}|\cH|} \right))\leq \frac{\alpha (i\wedge |\cH|)}{N_{BY}|\cH|}+2b_n.$$ Thus, it follows that $$\begin{aligned} {\mathrm{FDR}}(\hat \tau)&\leq \sum_{(t, k) \in\cH_{0}}\sum_{i=1}^\infty \frac{1}{i(i+1)}\Big(\frac{\alpha (i\wedge |\cH|)}{N_{BY}|\cH|}+2b_n\Big)\\ &=\alpha\frac{|\cH_0|}{|\cH|}\Big(\sum_{i=1}^{|\cH|}\frac{i}{i(i+1)N_{BY}}+\sum_{i=|\cH|+1}^{\infty}\frac{|\cH|}{i(i+1)N_{BY}}\Big)+2|\cH_0| b_n\\ &=\alpha\frac{|\cH_0|}{|\cH|}\Big(\sum_{i=1}^{|\cH|}\frac{1}{i+1}\frac{1}{N_{BY}}+\frac{|\cH|}{|\cH|+1}\frac{1}{N_{BY}}\Big)+2|\cH_0| b_n\\ &\leq \alpha+2|\cH_0| b_n,\end{aligned}$$ where the last step follows from $\frac{|\cH_0|}{|\cH|}\leq 1$ and the definition of $N_{BY}$. This completes the proof. Proofs Regarding Estimation in the Latent Variable Graph {#proofmain2} ======================================================== This section contains the proofs that establish the asymptotic normality of the estimator of $\bTheta^*$. Main Proofs for Estimators of the Latent Variable Graph ------------------------------------------------------- The proof follows exactly the line of the proof of Theorem \[thm:xi\_asymptotic\], but invokes Proposition \[prop:convergence\_latent\_variance\], instead of Proposition \[prop:convergence\_group\_averages\_variance\], and Proposition \[prop:asymptotic\_normality\_latent\], instead of Proposition \[prop:asymptotic\_normality\_group\_averages\], as one needs to establish different intermediate results, specifically tailored to estimation of the latent graph. 
\[prop:asymptotic\_normality\_latent\] Under the same conditions as in Theorem \[thm:theta\_asymptotic\], we get $$\max_{1\leq t< k\leq K}\sup_{x\in\RR} \Big|\PP\left( \frac{n^{1/2}(\tilde \Theta_{t,k} - {\Theta_{t,k}^*})}{\sigma_{t,k}}<x , \cE \right) - \Phi(x)\Big| \leq \frac{C}{(K\vee n)^3}+\frac{Cs_0\log (K\vee n)}{n^{1/2}}.$$ The proof follows all the steps of Proposition \[prop:asymptotic\_normality\_group\_averages\], with Lemma \[lem:group\_av\_S\_consistency\] replaced by Lemma \[lem:latent\_C\_consistency\], \[lem:group\_averages\_gradient\_hessian\] by \[lem:latent\_gradient\_hessian\], \[lem:group\_averages\_consistency\] by \[lem:latent\_consistency\] and \[lem:group\_averages\_clt\] by \[lem:latent\_clt\]. \[prop:convergence\_latent\_variance\] Under the same conditions as in Theorem \[thm:theta\_asymptotic\], we get $$\max_{1\leq t< k\leq K}|\hat \sigma_{t,k}^2 - \sigma_{t,k}^2| \leq C \sqrt{\frac{s_0\log(K \vee n)}{n}}+ \frac{Cs_0}{m},~~ \max_{1\leq t< k\leq K}\Big|\frac{\hat \sigma_{t,k}}{\sigma_{t,k}}-1\Big| \leq C \sqrt{\frac{s_0\log(K \vee n)}{n}}+ \frac{Cs_0}{m},$$ with probability at least $1-(K\vee n)^{-3}$. Similar to the proof of Proposition \[prop:convergence\_group\_averages\_variance\], we can prove that $$\max_{1\leq t< k\leq K}|\hat \sigma_{t,k}^2 - (\Theta_{t,k}^{*2}+\Theta^*_{t,t}\Theta^*_{k,k})| \leq C \sqrt{\frac{s_0\log(K \vee n)}{n}},$$ with probability at least $1-(K\vee n)^{-3}$. Then, by Lemma \[lem:latent\_variable\_variance\], we obtain $$\max_{1\leq t< k\leq K}|\hat \sigma_{t,k}^2 - \sigma_{t,k}^2| \leq C \sqrt{\frac{s_0\log(K \vee n)}{n}}+ \frac{Cs_0}{m}.$$ The second statement can be similarly derived. 
Key Lemmas for Estimators of the Latent Graph {#pfb2}
---------------------------------------------

\[lem:latent\_C\_consistency\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then with probability greater than $1 - \frac{C}{(K\vee n)^3}$, $$\|\hat \Cb-\Cb^*\|_{\max}\leq C\sqrt{\frac{\log (K\vee n)}{n}},$$ for some constant $C$ dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. Let $\cE$ denote the event that $\hat G = G^*$. Under the event $\cE$, we have $$\label{eqn:c_hat_minus_c_star} \hat\Cb-\Cb^* =(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bSigma}-\bSigma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}-(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bGamma}-\bGamma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}.$$ We can write, for $i \in {G^*_{k}}$, $$\begin{aligned} (\hat\bGamma - {\bGamma^*})_{i,i} &= \hat \gamma_i - \gamma_i^*\\ &= {\hat{\Sigma}_{ i, i }} - \frac{1}{|{G^*_{k}}| - 1}\sum_{j \in {G^*_{k}}, j\neq i} {\hat{\Sigma}_{ i, j }} - \gamma_i^*\\ &= {\hat{\Sigma}_{ i, i }} - \frac{1}{|{G^*_{k}}| - 1}\sum_{j \in {G^*_{k}}, j\neq i} {\hat{\Sigma}_{ i, j }} - {\Sigma_{ i, i }^*} + \frac{1}{|{G^*_{k}}| - 1}\sum_{j \in {G^*_{k}}, j\neq i} {\Sigma_{ i, j }^*}\\ &= {\hat{\Sigma}_{ i, i }} - {\Sigma_{ i, i }^*} - \frac{1}{|{G^*_{k}}| - 1}\sum_{j \in {G^*_{k}}, j\neq i}\left[{\hat{\Sigma}_{ i, j }} - {\Sigma_{ i, j }^*}\right],\end{aligned}$$ which implies that $||{\hat{\bGamma}}- {\bGamma^*}||_{\max} \leq 2||{\hat{\bSigma}_{ }}- {\bSigma_{ }^*}||_{\max}$. Therefore from Lemma \[lem:ATAIA\] we see that $$||(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bGamma}-\bGamma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}||_{\max}\leq \frac{2}{m}||(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bSigma}-\bSigma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}||_{\max},$$ demonstrating that it suffices to bound the first term in (\[eqn:c\_hat\_minus\_c\_star\]), which we now do. Let $\Bb^*=\Ab^{*T}\Ab^*$.
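The derivation above shows in particular that $\hat\gamma_i$ is exact at the population level: if $\hat\bSigma$ is replaced by $\bSigma^*=\Ab^*\Cb^*\Ab^{*T}+\bGamma^*$, then every within-group off-diagonal entry equals $C^*_{k,k}$, so the within-group averaging recovers $\gamma^*_i$ exactly. A small numerical sketch (toy sizes and parameter values chosen arbitrarily for illustration):

```python
import numpy as np

# Population model: Sigma* = A C A^T + Gamma, with groups {0,1,2} and {3,4}.
A = np.zeros((5, 2))
A[[0, 1, 2], 0] = 1.0
A[[3, 4], 1] = 1.0
C = np.array([[1.5, 0.4],
              [0.4, 1.2]])
gamma = np.array([0.3, 0.5, 0.2, 0.7, 0.4])
Sigma = A @ C @ A.T + np.diag(gamma)

groups = [[0, 1, 2], [3, 4]]
for G in groups:
    for i in G:
        others = [j for j in G if j != i]
        # gamma_hat_i = Sigma_{i,i} - average of Sigma_{i,j} over the group.
        gamma_hat = Sigma[i, i] - Sigma[i, others].mean()
        assert np.isclose(gamma_hat, gamma[i])  # exact recovery at the population
print("noise-variance estimator exact at the population covariance")
```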
For the first term in (\[eqn:c\_hat\_minus\_c\_star\]), we have $$\begin{aligned} &(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bSigma}-\bSigma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}\nonumber\\ &=\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}\Big\{(\Ab^*\bZ_i+\bE_i)(\Ab^*\bZ_i+\bE_i)^T-\Ab^*\Cb^*\Ab^{*T}-\bGamma^*\Big\}\Ab^*\Bb^{*-1}\nonumber\\ &=\frac{1}{n}\sum_{i=1}^n\bZ_i\bZ_i^T-\Cb^*+\frac{1}{n}\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}+\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}\bE_i\bZ_i^T\nonumber\\ &~~~+\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-\bGamma^*)\Ab^*\Bb^{*-1}.\label{eqlemconcen11}\end{aligned}$$ Using the triangle inequality, we can apply Lemma \[lem:conc\_sum\_ZiZi\], Lemma \[lem:conc\_sum\_ZiEi\] and Lemma \[lem:conc\_sum\_EiEi\] to bound (\[eqlemconcen11\]). Combining these results with the fact that $\PP(\cE)\geq 1-c_0/(d\vee n)^3$, we obtain $$\|\hat \Cb-\Cb^*\|_{\max}\leq C\sqrt{\frac{\log (K\vee n)}{n}}$$ with probability at least $1 - \frac{C}{(K\vee n)^3}$ for some constant $C$ dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. \[lem:latent\_gradient\_hessian\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then with probability greater than $1-\frac{C}{(K\vee n)^3}$, we have that - $\max_{1\leq k\leq K}\|\hat \Cb\bTheta^*_{\cdot k}-\eb_k\|_{\infty}\leq C_1\sqrt{\frac{\log (K\vee n)}{n}}$, - $\max_{1\leq k\leq K}\|\hat \Cb_{t,-t}-\wb_t^{*T}\hat\Cb_{-t,-t}\|_\infty\leq C_2\sqrt{\frac{\log (K\vee n)}{n}}$, and - $\max_{1\leq k\leq K}\|\wb_t^{*T}(\hat\Cb_{-t,-t}-\Cb^*_{-t,-t})\|_\infty\leq C_2\sqrt{\frac{\log (K\vee n)}{n}}$, for absolute constants $C_1$ and $C_2$ dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. Let $\cE$ denote the event that $\hat G = G^*$. Note that $\hat \Cb\bTheta^*_{\cdot k}-\eb_k=(\hat \Cb-\Cb^*)\bTheta^*_{\cdot k}$.
Under $\cE$, following the decomposition (\[eqlemconcen11\]), we can similarly show that $$\begin{aligned} &(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bSigma}-\bSigma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}\bTheta^*_{\cdot k}\nonumber\\ &=\frac{1}{n}\sum_{i=1}^n\bZ_i\bZ_i^T\bTheta^*_{\cdot k}-\Cb^*\bTheta^*_{\cdot k}+\frac{1}{n}\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}\bTheta^*_{\cdot k}+\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}\bE_i\bZ_i^T\bTheta^*_{\cdot k}\nonumber\\ &~~~+\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-\bGamma^*)\Ab^*\Bb^{*-1}\bTheta^*_{\cdot k} \numberthis \label{eqn:latent_gradient1}.\end{aligned}$$ As in the proof of Lemma \[lem:latent\_C\_consistency\], we have that $$||(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bGamma}-\bGamma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}\bTheta^*_{\cdot k}||_{\infty}\leq \frac{2}{m}||(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}(\hat{\bSigma}-\bSigma^*)\Ab^*(\Ab^{*T}\Ab^*)^{-1}\bTheta^*_{\cdot k}||_{\infty},$$ demonstrating that again it suffices to bound the first term in (\[eqn:latent\_gradient1\]). Note that $||{\bTheta_{ \cdot k}^*}||_2 \leq {\lambda_{\max}\left({\bTheta_{ }^*}\right)} \leq c_1^{-1}$. Therefore, by using the triangle inequality, we can apply Lemma \[lem:conc\_sum\_ZiZi\], Lemma \[lem:conc\_sum\_ZiEi\] and Lemma \[lem:conc\_sum\_EiEi\] to bound the first term in (\[eqn:latent\_gradient1\]). Combining these results and $\PP(\cE)\geq 1-C/(d\vee n)^3$, we obtain $$\max_{1\leq k\leq K}\|\hat \Cb\bTheta^*_{\cdot k}-\eb_k\|_{\infty}\leq C_1\sqrt{\frac{\log (K\vee n)}{n}},$$ with probability at least $1 - \frac{C}{(K\vee n)^3}$ for some constant $C_1$ dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. For the remaining two claims, we can rewrite $$\label{eqn:norm_wstar_bound} \wb_t^* = -\left({C_{t,t}^*} - \Cb^{*T}_{-t,t}({\Cb_{ -t,-t }^*})^{-1}{\Cb_{ -t,t }^*}\right) {\bTheta_{ -t,t}^*} = -\frac{1}{{\Theta_{t,t}^*}}{\bTheta_{ -t,t}^*}$$ by the block matrix inverse formula.
Using Lemma \[lem:pd\_matrix\_diag\], it follows that $||\wb^*_t||_2 \leq {\lambda_{\max}\left({\bTheta_{ }^*}\right)} \max_t {C_{t,t}^*}$. Then we see that $$\begin{aligned} \max_{1\leq k\leq K}||\hat \Cb_{t,-t}-\wb_t^{*T}\hat\Cb_{-t,-t}||_{\infty} &= \max_{1\leq k\leq K}||(\hat \Cb_{t,-t}-\Cb^*_{t,-t})-\wb_t^{*T}(\hat\Cb_{-t,-t}-\Cb^*_{-t,-t})||_{\infty} \\ &\leq \underlabel{\max_{1\leq k\leq K}||(\hat \Cb_{t,-t}-\Cb^*_{t,-t})||}{(i)} + \underlabel{\max_{i \neq t}|\wb_t^{*T}(\hat\Cb_{-t,i}-\Cb^*_{-t,i})|}{(ii)}\end{aligned}$$ Clearly, using Lemma \[lem:latent\_C\_consistency\], (i) is bounded with high probability. Likewise, Lemma \[lem:latent\_C\_consistency\] demonstrates that $\hat\Cb_{-t,i}-\Cb^*_{-t,i}$ is a sub-exponential random vector with parameters dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. Thus $\wb_t^{*T}(\hat\Cb_{-t,i}-\Cb^*_{-t,i})$ is sub-exponential and because $||\wb^*_t||_2 \leq {\lambda_{\max}\left({\bTheta_{ }^*}\right)} \max_t {C_{t,t}^*}$, we obtain that $$\max_{1\leq k\leq K}\|\hat \Cb_{t,-t}-\wb_t^{*T}\hat\Cb_{-t,-t}\|_\infty\leq C_2\sqrt{\frac{\log (K\vee n)}{n}}$$ with probability at least $1 - \frac{C}{(K\vee n)^3}$ for some constant $C_2$. $C_2$ is dependent only on $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. The final result is bounded by the previous one, concluding the proof. 
\[lem:latent\_consistency\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then - $\max_{1\leq k\leq K}\|\hat\Theta_{\cdot k}-\Theta_{\cdot k}^*\|_1 \leq C_1 s_0\sqrt{\frac{\log (K\vee n)}{n}}$, $\max_{1\leq k\leq K}\|\hat\Theta_{\cdot k}-\Theta_{\cdot k}^*\|_2 \leq C_1 \sqrt{\frac{s_0\log (K\vee n)}{n}}$, - $\max_{1\leq t\leq K}\|\hat\vb_{t}-\vb_{t}^*\|_1 \leq C_2 s_0\sqrt{\frac{\log (K\vee n)}{n}}$, and - $\max_{1\leq k\leq t\leq K}|(\hat\vb_{t}-\vb_{t}^*)^T\hat \Cb(\hat\bTheta_{\cdot k}-\bTheta_{\cdot k}^*)| \leq C_3 \frac{s_0\log (K\vee n)}{n}$, with probability at least $1-\frac{C_4}{(K\vee n)^3}$. $C_1$, $C_2$, $C_3$, $C_4$ are constants, dependent only upon $c_0$, $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. Below, the constants $C_a$, $C_a'$, $C_b$, $C_b'$, $C_b''$, $C_c$ and $C_c'$ will depend only upon $c_0$, $c_1$, $c_2$, and $c_3$ from Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. To prove the first claim, define $D({\hat{\bTheta}_{ \cdot k}},{\bTheta_{ \cdot k}^*}) =({\hat{\bTheta}_{ \cdot k}} - {\bTheta_{ \cdot k}^*})^T{\hat{\Cb}_{ }}({\hat{\bTheta}_{ \cdot k}} - {\bTheta_{ \cdot k}^*})$ and denote $\hat\bDelta = {\hat{\bTheta}_{ \cdot k}} - {\bTheta_{ \cdot k}^*}$. From the KKT conditions, it follows that ${\hat{\Cb}_{ }}{\hat{\bTheta}_{ \cdot k}} - e_k = -\lambda \bZ$, where $\bZ \in \partial\| {\hat{\bTheta}_{ \cdot k}}\|_1$ is any vector in $\RR^K$ satisfying $$Z_i = \begin{cases} -1 & \text{ if } {\hat{\Theta}_{i,k}} < 0 \\ \in [-1,1] & \text{ if } {\hat{\Theta}_{i,k}} = 0 \\ 1 & \text{ if } {\hat{\Theta}_{i,k}} > 0. \end{cases}$$ Let $S = \supp({\bTheta_{ \cdot,k}^*})$. 
Thus $$\begin{aligned} D({\hat{\bTheta}_{ \cdot k}},{\bTheta_{ \cdot k}^*}) &= \hat\bDelta^T{\hat{\Cb}_{ }}\hat\bDelta\\ &= \hat\bDelta^T\left[{\hat{\Cb}_{ }}{\hat{\bTheta}_{ \cdot k}} - {\hat{\Cb}_{ }}{\bTheta_{ \cdot k}^*} \right] \\ &= \hat\bDelta^T\left[-(\be_k - {\hat{\Cb}_{ }}{\hat{\bTheta}_{ \cdot k}}) + (\be_k - {\hat{\Cb}_{ }}{\bTheta_{ \cdot k}^*}) \right]\\ &= - \lambda\hat\bDelta^T_{\bar{S}}\bZ_{\bar{S}} - \lambda\hat\bDelta_S^T\bZ_S + \hat\bDelta^T(\be_k - {\hat{\Cb}_{ }}{\bTheta_{ \cdot k}^*}).\end{aligned}$$ The KKT conditions give that $ \lambda\hat\bDelta^T_{\bar{S}}\bZ_{\bar{S}} = \lambda||\hat\bDelta_{\bar S}||_1$. Choose $\lambda = 2C_a\sqrt{\frac{\log (K\vee n)}{n}}$. It follows, using Lemma \[lem:latent\_gradient\_hessian\] part (a) and Hölder's inequality, that $$\begin{aligned} D({\hat{\bTheta}_{ \cdot k}},{\bTheta_{ \cdot k}^*}) &\leq -\lambda||\hat\bDelta_{\bar S}||_1 + \lambda||\hat\bDelta_S||_1 + ||\hat\bDelta||_1||\be_k - {\hat{\Cb}_{ }}{\bTheta_{ \cdot k}^*}||_{\infty}\\ &\leq C_a \sqrt{\frac{\log (K\vee n)}{n}}\left(3||\hat\bDelta_S||_1 - ||\hat\bDelta_{\bar S}||_1 \right), \numberthis \label{eqn:latent_theta_consistency}\end{aligned}$$ with probability at least $1-\frac{C_a'}{(K\vee n)^3}$. Furthermore, ${\hat{\Cb}_{ }}$ is positive semidefinite, thus $D({\hat{\bTheta}_{ \cdot k}},{\bTheta_{ \cdot k}^*}) \geq 0$ and so $||\hat\bDelta_{\bar S}||_1 \leq 3||\hat\bDelta_S||_1 $. In addition, by Lemma \[lem:latent\_re\_condition\], $D({\hat{\bTheta}_{ \cdot k}},{\bTheta_{ \cdot k}^*}) \geq c_1 ||\hat \bDelta||_2^2$. From (\[eqn:latent\_theta\_consistency\]) we have that $D({\hat{\bTheta}_{ \cdot k}},{\bTheta_{ \cdot k}^*}) \leq 3C_a\sqrt{\frac{s_0 \log (K\vee n)}{n}}||\hat \bDelta||_2$. Thus, $||\hat\bDelta||^2_2 \leq \frac{3C_a}{c_1} \sqrt{\frac{s_0 \log (K\vee n)}{n}}||\hat \bDelta||_2$.
Finally, $$||\hat\bDelta||_1 \leq 4||\hat\bDelta_S||_1 \leq 4\sqrt{s_0}\, ||\hat\bDelta_S||_2 \leq 4\sqrt{s_0}\,||\hat\bDelta||_2 \leq \frac{12C_a s_0}{c_1} \sqrt{\frac{\log (K\vee n)}{n}},$$ with probability at least $1-\frac{C_a'}{(K\vee n)^3}$, as desired. We next prove part (b); the argument is similar to that for part (a). Let $\hat\bDelta = \hat \wb_t - \wb_t^*$, noting that we can consider $\wb_t$ instead of $\vb_t$ as the $t^{th}$ entries in both the estimated and true value are 1. Let $S$ denote the support of $\wb_t^*$. $\wb_t^*$ is $s_0$-sparse because $\wb_t^*$ is a multiple of ${\bTheta_{ -t,t}^*}$, which we know to be $s_0$-sparse. By Lemma \[lem:latent\_gradient\_hessian\], there exists $C_b$ such that for $\lambda \geq C_b \sqrt{\frac{\log (K\vee n)}{n}}$, $\wb_t^*$ is feasible for the CLIME-type program defining $\hat\wb_t$ with probability at least $1-\frac{C_b'}{(K\vee n)^3}$. Assuming $\wb_t^*$ is feasible, then it follows by definition that $||(\wb_t^*)_{S}||_1 \geq ||(\hat\wb_t)_{S}||_1 + ||(\hat\wb_t)_{\bar{S}}||_1$. This in turn implies by the triangle inequality that $||\hat\bDelta_{S}||_1 \geq ||\hat\bDelta_{\bar{S}}||_1$. Letting $\lambda = C_b \sqrt{\frac{\log (K\vee n)}{n}}$, it follows from the triangle inequality that $$||\hat\Cb_{-t,-t} \hat\bDelta||_\infty \leq ||\hat\wb_t^T\hat\Cb_{-t,-t} - \hat\Cb_{t,-t}||_\infty + ||\wb_t^{*T}\hat\Cb_{-t,-t} - \hat\Cb_{t,-t}||_\infty \leq 2C_b\sqrt{\frac{\log (K\vee n)}{n}}.$$ In addition, note that $||\hat\bDelta||_1 \leq 2 ||\hat\bDelta_S||_1 \leq 2 \sqrt{s_0}||\hat\bDelta_S||_2\leq 2\sqrt{s_0}||\hat\bDelta||_2$. Therefore combining with the above, this gives $$\hat\bDelta^T\hat\Cb_{-t,-t}\hat\bDelta \leq ||\hat\bDelta||_1||\hat\Cb_{-t,-t}\hat\bDelta||_\infty \leq 2C_b\sqrt{\frac{\log (K\vee n)}{n}}||\hat\bDelta||_1 \leq 4C_b\sqrt{\frac{s_0\log (K\vee n)}{n}}||\hat\bDelta||_2.$$ From Lemma \[lem:latent\_re\_condition\], $\hat\bDelta^T\hat\Cb_{-t,-t}\hat\bDelta \geq \frac{4c_1}{3} ||\hat\bDelta||_2^2$ with probability at least $1-\frac{C_b''}{(K\vee n)^3}$.
Therefore $$||\hat\bDelta||_2 \leq \frac{3C_b}{c_1}\sqrt{\frac{s_0\log (K\vee n)}{n}} \quad\quad\text{ and }\quad\quad||\hat\bDelta||_1 \leq \frac{6C_b s_0}{c_1}\sqrt{\frac{\log (K\vee n)}{n}},$$ with probability at least $1-\frac{\max\{C_b',C_b''\}}{(K\vee n)^3}$. To obtain part (c), first we apply Hölder's inequality and the triangle inequality, which give $$\begin{aligned} \max_{1\leq k\leq t\leq K}|(\hat\vb_{t}-\vb_{t}^*)^T\hat \Cb(\hat\bTheta_{\cdot k}-\bTheta_{\cdot k}^*)| &\leq \max_{1\leq k\leq t\leq K}||\hat\vb_{t}-\vb_{t}^*||_1||\hat \Cb(\hat\bTheta_{\cdot k}-\bTheta_{\cdot k}^*)||_{\infty}\\ &\leq \max_{1\leq k\leq t\leq K}||\hat\vb_{t}-\vb_{t}^*||_1 \left( ||{\hat{\Cb}_{ }}{\hat{\bTheta}_{ \cdot k}} - \be_k ||_{\infty} + ||{\hat{\Cb}_{ }}{\bTheta_{ \cdot k}^*} - \be_k||_{\infty} \right). \numberthis \label{eqn:latent_consistency_partc1}\end{aligned}$$ With the choice of $\lambda$ above, the KKT conditions give that $||{\hat{\Cb}_{ }}{\hat{\bTheta}_{ \cdot k}} - \be_k ||_{\infty} \leq C_b \sqrt{\frac{\log (K\vee n)}{n}}$. From Lemma \[lem:latent\_gradient\_hessian\], we have that $||{\hat{\Cb}_{ }}{\bTheta_{ \cdot k}^*} - \be_k||_{\infty} \leq C \sqrt{\frac{\log (K\vee n)}{n}}$ with probability at least $1-\frac{C}{(K\vee n)^3}$. Using part (b), we get that $\max_{1\leq t\leq K}\|\hat\vb_{t}-\vb_{t}^*\|_1 \leq Cs_0\sqrt{\frac{\log (K\vee n)}{n}}$ with probability at least $1 - \frac{C}{(K\vee n)^3}$. The desired result now follows from (\[eqn:latent\_consistency\_partc1\]). \[lem:latent\_clt\] Recall that $\sigma^2_{tk}=\EE(\Theta_{tt}^*\vb_t^{*T}(\bar \Cb^{(i)}\bTheta^*_{\cdot k}-\eb_k))^2$ with $\bar \Cb^{(i)}$ defined in (\[eqCi\]). Let $F_{n}$ denote the CDF of $n^{1/2}\vb_t^{*T}(\hat \Cb\bTheta^*_{\cdot k}-\eb_k)/(\sigma_{tk}/\Theta^*_{tt})$.
If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then we have $$\max_{1\leq t< k\leq K}\sup_{x\in \RR}|F_{n}(x) - \Phi(x) | \leq C (n^{-1/2}+(d\vee n)^{-3}),$$ where $C$ is a constant dependent only upon $c_0$, $c_1$, and $c_2$. Denote by $\cE$ the event that $\hat G = G^*$. We have $$F_{n}(x) - \Phi(x)\leq \tilde F_n(x)-\Phi(x)+\PP(\bar\cE),$$ where $\tilde F_n(x)$ is the CDF of $n^{-1/2}\sum_{i=1}^n\vb_t^{*T}(\bar \Cb^{(i)}\bTheta^*_{\cdot k}-\eb_k)/(\sigma_{tk}/\Theta^*_{tt})$. To control $\tilde F_n(x)-\Phi(x)$, we now verify the Lyapunov condition. As in the proof of Lemma \[lem:latent\_gradient\_hessian\], we can write $$\vb_t^{*T}(\bar \Cb^{(i)}\bTheta^*_{\cdot k}-\eb_k) = \vb_t^{*T}(\bar \Cb^{(i)}-\Cb^*)\bTheta^*_{\cdot k}.$$ From Lemmas \[lem:conc\_sum\_ZiZi\] - \[lem:conc\_sum\_EiEi\] we see that the entries in $\bQ_i=(\bar \Cb^{(i)}-\Cb^*)\bTheta^*_{\cdot k}$ are sub-exponential with parameters $\alpha= C_1$ and $\nu = C_2$ which depend only upon ${\lambda_{\max}\left({\bTheta_{ }^*}\right)}$, $\max_k {\gamma_{k}^*}$, and $\max_t {C_{t,t}^*}$. Recall the definition of $\veetS$: $(\veetS)_t = 1$ and $(\veetS)_{-t} = -\wb^*_t = -({\Cb_{ -t,-t }^*})^{-1}{\Cb_{ -t,t }^*}$. By the block matrix inverse formula, we can rewrite $$\wb_t^* = -\left({C_{t,t}^*} - \Cb^{*T}_{-t,t}({\Cb_{ -t,-t }^*})^{-1}{\Cb_{ -t,t }^*}\right) {\bTheta_{ -t,t}^*} = -\frac{1}{{\Theta_{t,t}^*}}{\bTheta_{ -t,t}^*}.$$ Using Lemma \[lem:pd\_matrix\_diag\], it follows that $||\wb^*_t||_2 \leq {\lambda_{\max}\left({\bTheta_{ }^*}\right)} \max_t {C_{t,t}^*}$ and $||\vb^*_t||_2 \leq {\lambda_{\max}\left({\bTheta_{ }^*}\right)} \max_t {C_{t,t}^*} + 1$. From Corollary \[cor:sum\_independent\_subexponential\] and the above, $\vb_t^{*T}\bQ_i$ is sub-exponential with parameters $\alpha = C_1$ and $\nu = ||\vb^*_t||_2 C_2\leq \left({\lambda_{\max}\left({\bTheta_{ }^*}\right)} \max_t {C_{t,t}^*} + 1\right)C_2$. 
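As a sanity check (not part of the formal argument), the block-matrix-inverse relation between $\wb_t^*$ and ${\bTheta_{ -t,t}^*}$ can be verified numerically. The sketch below (random positive definite covariance, all variable names hypothetical) checks that, up to the sign convention, $\wb_t^*$ agrees with ${\bTheta_{ -t,t}^*}/{\Theta_{t,t}^*}$ entrywise — only the magnitudes matter for the $\ell_2$-norm bounds used in the proof — and that the Schur complement equals $1/{\Theta_{t,t}^*}$.

```python
import numpy as np

rng = np.random.default_rng(0)
K, t = 6, 2
A = rng.normal(size=(K, K))
C = A @ A.T + K * np.eye(K)      # random positive definite "covariance" C
Theta = np.linalg.inv(C)         # precision matrix Theta = C^{-1}

mask = np.arange(K) != t
# w_t = C_{-t,-t}^{-1} C_{-t,t}
w = np.linalg.solve(C[np.ix_(mask, mask)], C[mask, t])

# Up to sign convention, w_t equals Theta_{-t,t} / Theta_{t,t} entrywise.
assert np.allclose(np.abs(w), np.abs(Theta[mask, t]) / Theta[t, t])

# Schur-complement identity: C_tt - C_{t,-t} C_{-t,-t}^{-1} C_{-t,t} = 1 / Theta_tt
schur = C[t, t] - C[t, mask] @ w
assert np.isclose(schur, 1.0 / Theta[t, t])
```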
Therefore, $\vb_t^{*T}(\bar \Cb^{(i)}\bTheta^*_{\cdot k}-\eb_k)$ has third moments bounded above by some constant $\rho$ that depends only upon ${\lambda_{\max}\left({\bTheta_{ }^*}\right)}$, $\max_k {\gamma_{k}^*}$, and $\max_t {C_{t,t}^*}$. All three quantities are bounded above by constants per Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\]. Thus, $\max_{1\leq t< k\leq K}\sup_x(\tilde F_n(x)-\Phi(x))\leq Cn^{-1/2}$ by the classical Berry-Esseen Theorem, and therefore $$\max_{1\leq t< k\leq K}\sup_{x\in \RR}(F_{n}(x) - \Phi(x) ) \leq C (n^{-1/2}+(d\vee n)^{-3}).$$ Similarly, it can be shown that $\sup_{x\in \RR}(\Phi(x)-F_{n}(x) ) \leq C (n^{-1/2}+(d\vee n)^{-3}).$ This completes the proof. \[lem:latent\_variable\_variance\] Under Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\], we have that $$\label{eqsigma} \sigma^2_{tk} = \Theta_{t,k}^{*2} + \Theta^*_{t,t}\Theta^*_{k,k} + \Delta,$$ where $|\Delta| \leq \frac{Cs_0}{m}$ and $C$ is a constant dependent only upon $c_1$, $c_2$ and $c_3$. Recall that $\sigma^2_{tk}=\EE(\Theta_{tt}^*\vb_t^{*T}(\bar \Cb^{(i)}\bTheta^*_{\cdot k}-\eb_k))^2$. Using the identity $vec(\Mb_1\Mb_2\Mb_3) = (\Mb_3^T \otimes \Mb_1)vec (\Mb_2)$, we have $$\begin{aligned} \sigma^2_{tk} &= ({\Theta_{t,t}^*})^2(\bTheta_{\cdot k}^{*} \otimes \veetS)^T\EE\left[vec(\bar \Cb^{(i)})vec(\bar \Cb^{(i)})^T \right] (\bTheta_{\cdot k}^{*} \otimes \veetS) \numberthis \label{eqn:latent_variance_decomp}.\end{aligned}$$ [*Computing the expectation*]{}: After some straightforward, albeit lengthy, algebra we can show that $$\EE\left[vec(\bar \Cb^{(i)})vec(\bar \Cb^{(i)})^T \right] = \Mb_1 + \Mb_2 + \Mb_3,$$ where $\Mb_1 := {\Cb_{ }^*}\otimes{\Cb_{ }^*}$ and $\Mb_2 := [{\Cb_{ \cdot j }^*}\Cb_{\cdot i}^{*T}]_{ij}$. The matrices $\Mb_1$ and $\Mb_2$ contribute the first two terms in (\[eqsigma\]). 
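The vec/Kronecker manipulation behind this decomposition can be checked numerically (column-major vec convention); a minimal sketch verifying both the bilinear-form reduction $\ab^T\Mb\bb = (\bb\otimes\ab)^T vec(\Mb)$ and the identity $vec(\Mb_1\Mb_2\Mb_3) = (\Mb_3^T\otimes\Mb_1)vec(\Mb_2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5
M = rng.normal(size=(K, K))
a, b = rng.normal(size=K), rng.normal(size=K)

# a^T M b = (b (x) a)^T vec(M), with column-major (Fortran-order) vec
vecM = M.reshape(-1, order="F")
assert np.isclose(a @ M @ b, np.kron(b, a) @ vecM)

# vec(M1 M2 M3) = (M3^T (x) M1) vec(M2)
M1, M2, M3 = (rng.normal(size=(K, K)) for _ in range(3))
lhs = (M1 @ M2 @ M3).reshape(-1, order="F")
rhs = np.kron(M3.T, M1) @ M2.reshape(-1, order="F")
assert np.allclose(lhs, rhs)
```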
The term $\Mb_3:=\EE\left[vec(\bar \Cb^{(i)})vec(\bar \Cb^{(i)})^T \right] - \Mb_1 - \Mb_2$, however, is unique to the latent graph and contributes the higher order term $\Delta$ in (\[eqsigma\]). [*Evaluating the first order terms*]{}: By (\[eqn:latent\_variance\_decomp\]), we have $$\sigma^2_{tk}= ({\Theta_{t,t}^*})^2(\bTheta_{\cdot k}^{*} \otimes \veetS)^T(\Mb_1+\Mb_2)(\bTheta_{\cdot k}^{*} \otimes \veetS) + \Delta$$ with $\Delta$ defined as $$\label{eqn:latent_variance2} \Delta := ({\Theta_{t,t}^*})^2 (\bTheta_{\cdot k}^{*} \otimes \veetS)^T \Mb_3 (\bTheta_{\cdot k}^{*} \otimes \veetS).$$ Next, we observe that $$\begin{aligned} (\bTheta_{\cdot k}^{*} \otimes \veetS)^T\Mb_1(\bTheta_{\cdot k}^{*} \otimes \veetS) &= \bTheta_{\cdot k}^{*T}{\Cb_{ }^*}\bTheta_{\cdot k}^{*} \otimes \veetST {\Cb_{ }^*}\veetS \\ &= \frac{{\Theta_{k,k}^*}}{{\Theta_{t,t}^*}},\end{aligned}$$ where we used that $\veetST {\Cb_{ }^*}\veetS = ({\Theta_{t,t}^*})^{-1}$. Similarly, we can find that $$(\bTheta_{\cdot k}^{*} \otimes \veetS)^T \Mb_2 (\bTheta_{\cdot k}^{*} \otimes \veetS) = \frac{\Theta_{tk}^{*2}}{\Theta_{tt}^{*2}}.$$ [*Bounding the higher order terms*]{}: What remains is to bound the magnitude of the term $\Delta$ in (\[eqn:latent\_variance2\]). Lengthy algebra yields: $$|(\bTheta_{\cdot k}^{*} \otimes \veetS)^T \Mb_3 (\bTheta_{\cdot k}^{*} \otimes \veetS)| \leq \frac{C'}{m}(\bTheta_{\cdot k}^{*} \otimes \veetS)^T\left(\Mb_4 + \Mb_5 + \Mb_6 + \Mb_7 \right)(\bTheta_{\cdot k}^{*} \otimes \veetS),$$ where $C'$ depends only on $c_1$, $c_2$ and $c_3$. Here, $\Mb_4 = \Ib \otimes (\bone\bone^T)$. 
For $l=5,6,7$ the matrices $\Mb_l$ are defined block-wise by $$\Mb_{5;ij} := \begin{cases} \bone\eb_i^T &\text{ if } i \neq j \\ \bzero &\text{ o/w } \end{cases} \text{ and } \Mb_{6;ij} := \begin{cases} \eb_j\bone^T &\text{ if } i \neq j \\ \bzero &\text{ o/w } \end{cases} \text{ and } \Mb_{7;ij} := \begin{cases} \Ib &\text{ if } i \neq j \\ \bzero &\text{ o/w } \end{cases}.$$ Further lengthy algebra gives: $$|(\bTheta_{\cdot k}^{*} \otimes \veetS)^T \Mb_4 (\bTheta_{\cdot k}^{*} \otimes \veetS)| \le s_0\frac{2}{c_1^2}(1+\frac{c_2^2}{c_1^2}),$$ $$|(\bTheta_{\cdot k}^{*} \otimes \veetS)^T \Mb_5 (\bTheta_{\cdot k}^{*} \otimes \veetS)| \le s_0\frac{2\sqrt{2}}{c_1^2}(1+\frac{c_2^2}{c_1^2}),$$ $$|(\bTheta_{\cdot k}^{*} \otimes \veetS)^T \Mb_6 (\bTheta_{\cdot k}^{*} \otimes \veetS)| \le s_0\frac{2\sqrt{2}(c_1^2 + c_2^2)}{c_1^4}\sqrt{1+\frac{c_2^2}{c_1^2}},$$ and $$|(\bTheta_{\cdot k}^{*} \otimes \veetS)^T \Mb_7 (\bTheta_{\cdot k}^{*} \otimes \veetS)| \le s_0\frac{2(c_1^2 + c_2^2)}{c_1^4}.$$ Plugging these bounds into the expression for $\Delta$ in (\[eqn:latent\_variance2\]), we obtain $$|\Delta| \leq \frac{Cs_0}{m},$$ concluding the proof. Concentration Results {#sec:concentration_of_estimators} ===================== The lemmas below provide important results regarding the concentration properties of some of the estimators ${\hat{\Cb}_{ }}$ and variables $\bZ$. 
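Before the formal statements, the sub-exponential tail behaviour of Gaussian products that recurs throughout this section can be illustrated by simulation; the sketch below (illustrative seed and parameters, loose comparison only) compares empirical upper tails of a product of independent Gaussians against the bound obtained by combining Corollary \[cor:independent\_gaussian\_product\] with Lemma \[lem:tail\_bound\_subexponential\].

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
sigma1, sigma2 = 1.5, 1.0
# independent Gaussian product, mean zero
y = rng.normal(0.0, sigma1, n) * rng.normal(0.0, sigma2, n)

# sub-exponential parameters (alpha, nu) = (sqrt(2) sigma1^2, sqrt(2) sigma1^2)
alpha = nu = np.sqrt(2.0) * sigma1**2
for t in (2.0, 4.0, 6.0):
    emp = np.mean(y >= t)                 # empirical upper tail P(Y1 Y2 >= t)
    if t <= nu**2 / alpha:                # Gaussian-type regime
        bound = np.exp(-t**2 / (2.0 * nu**2))
    else:                                 # exponential-type regime
        bound = np.exp(-t / (2.0 * alpha))
    assert emp <= bound                   # empirical tail sits under the bound
```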
\[lem:conc\_sum\_ZiZi\]    - $ \bZ_i\bZ_i^T$ consists of entries which are sub-exponential with parameters $\alpha = 4\max_{t}({C_{t,t}^*})^2$ and $\nu = 2\sqrt{2}\max_{t}({C_{t,t}^*})^2$, - $\PP\left( \Big\| \frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T - \Cb^*\Big\|_{\max} \geq C \max_{t}({C_{t,t}^*})^2 \sqrt{\frac{\log (K\vee n)}{n}} \right) \leq \frac{2}{(K\vee n)^3}$ - $\bZ_i\bZ_i^T{\bTheta_{ \cdot k}^*}$ consists of entries which are sub-exponential with parameters $\alpha = 4\max_{t}({C_{t,t}^*})^2$ and $\nu = 2\sqrt{2}||{\bTheta_{ \cdot k}^*}||_2 \max_{t}({C_{t,t}^*})^2$, and - $\PP\left( \Big\| \frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T{\bTheta_{ \cdot k}^*} - \Cb^*{\bTheta_{ \cdot k}^*}\Big\|_{\infty} \geq C ||{\bTheta_{ \cdot k}^*}||_2 \max_{t}({C_{t,t}^*})^2 \sqrt{\frac{\log (K\vee n)}{n}} \right) \leq \frac{2}{(K\vee n)^3}$, where $C = 4\sqrt{3}$. From Lemma \[lem:jointly\_gaussian\_product\], each element of the matrix $\bZ_i\bZ_i^T$ is sub-exponential with parameters $\alpha = 4\max_{t}({C_{t,t}^*})^2$ and $\nu = 2\sqrt{2}\max_{t}({C_{t,t}^*})^2$. Therefore by Corollary \[cor:sum\_independent\_subexponential\], the entries in $\frac{1}{n}\sum_{i=1}^n \bZ_i\bZ_i^T$ are sub-exponential with parameters $\alpha = \frac{4}{n}\max_{t}({C_{t,t}^*})^2$ and $\nu = \frac{2\sqrt{2}}{\sqrt{n}}\max_{t}({C_{t,t}^*})^2$. Therefore by the tail bound for sub-exponential random variables, we see that $$\begin{aligned} &\PP\left(\left(\frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T - \Cb^*\right)_{s,t} \geq D_1\sqrt{\frac{\log (K\vee n)}{n}}\right)\\ &\leq \begin{cases} \exp\left(-\frac{(\log (K\vee n)) D_1^2}{16\max_{t}({C_{t,t}^*})^4} \right) & \text{ if } 0 \leq D_1 \sqrt{\frac{\log (K\vee n)}{n}} \leq 2\max_{t}({C_{t,t}^*})^2 \\ \exp\left(\frac{-D_1\sqrt{n\log (K\vee n)} }{8\max_{t}({C_{t,t}^*})^2} \right) & \text{ if } D_1\sqrt{\frac{\log (K\vee n)}{n}} > 2\max_{t}({C_{t,t}^*})^2 \end{cases}\end{aligned}$$ for arbitrary $D_1 > 0$. 
Observe that for $n$ sufficiently large, $D_1 \sqrt{\log (K\vee n) / n} \leq 2\max_{t}({C_{t,t}^*})^2$, and thus we need only consider this case. Choose $D_1 \geq 4 \sqrt{3} \max_{t}({C_{t,t}^*})^2$. Then it is clear that $$\PP\left( \left(\frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T - \Cb^*\right)_{s,t} \geq D_1\sqrt{\frac{\log (K\vee n)}{n}} \right) \leq \frac{1}{(K\vee n)^5}.$$ By applying the union bound across all entries in the matrix, we get the desired result that $$\PP\left( \Big\| \frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T - \Cb^*\Big\|_{\max} \geq D_1\sqrt{\frac{\log (K\vee n)}{n}} \right) \leq \frac{2}{(K\vee n)^3},$$ concluding the proof of parts (a) and (b). The proofs of parts (c) and (d) are similar. To prove part (c), observe that entries in the $K$ dimensional vector $\bZ_i\bZ_i^T{\bTheta_{ \cdot k}^*}$ are sub-exponential with parameters $\alpha = 4\max_{t}({C_{t,t}^*})^2$ and $\nu = 2\sqrt{2}||{\bTheta_{ \cdot k}^*}||_2 \max_{t}({C_{t,t}^*})^2$ by the above and Corollary \[cor:sum\_independent\_subexponential\]. Therefore, letting $N= ||{\bTheta_{ \cdot k}^*}||_2\max_{t}({C_{t,t}^*})^2$, $$\PP\left(\left(\frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T{\bTheta_{ \cdot k}^*} - \Cb^*{\bTheta_{ \cdot k}^*}\right)_{s} \geq D_2\sqrt{\frac{\log K }{n}}\right) \leq \begin{cases} \exp\left(-\frac{(\log K) D_2^2}{16N^2} \right) & \text{ if } 0 \leq D_2 \sqrt{\frac{\log K }{n}} \leq 2N\\ \exp\left(\frac{-D_2\sqrt{n\log K} }{8N} \right) & \text{ if } D_2\sqrt{\frac{\log K }{n}} > 2N \end{cases}$$ for arbitrary $D_2 > 0$. Observe that for $n$ sufficiently large, $D_2 \sqrt{\log K / n} \leq 2||{\bTheta_{ \cdot k}^*}||_2\max_{t}({C_{t,t}^*})^2$. Choose $D_2 \geq 4\sqrt{3}||{\bTheta_{ \cdot k}^*}||_2 \max_{t}({C_{t,t}^*})^2$. 
Then it is clear that $$\PP\left( \left(\frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T{\bTheta_{ \cdot k}^*} - \Cb^*{\bTheta_{ \cdot k}^*}\right)_{s} \geq D_2\sqrt{\frac{\log K}{n}} \right) \leq \frac{1}{K^3}.$$ By applying the union bound across all entries in the vector and across $1\leq k\leq K$, we get the desired result that $$\PP\left( \max_{1\leq k \leq K} \Big\| \frac{1}{n} \sum_{i=1}^n \bZ_i\bZ_i^T{\bTheta_{ \cdot k}^*} - \Cb^*{\bTheta_{ \cdot k}^*}\Big\|_{\infty} \geq D_2\sqrt{\frac{\log K}{n}} \right) \leq \frac{2}{K},$$ concluding the proof of parts (c) and (d). \[lem:conc\_sum\_ZiEi\]    - $\left(\bZ_i\bE_i^T\Ab^*\Bb^{*-1}\right)_{s,t}$ is sub-exponential with parameters $\alpha = \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$ and $\nu = \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$, - $\PP\left( \Big\|\frac{1}{n}\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}\Big\|_{\max} \geq C \max\left(\max_s\sigma_s^2,\max_t{C_{t,t}^*} \right)\sqrt{\frac{\log (K\vee n)}{n}} \right) \leq \frac{2}{(K\vee n)^3}$ - $\left(\bZ_i\bE_i^T\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}\right)_s$ is sub-exponential with parameters $\alpha = \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$ and $\nu = \sqrt{2}||{\bTheta_{ \cdot k}^*}||_2\max\left(\sigma_s^2,{C_{t,t}^*} \right)$, and - $\PP\left( \Big\|\frac{1}{n}\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}\Big\|_{\infty} \geq C \max\left(\max_s\sigma_s^2,\max_t{C_{t,t}^*} \right)||{\bTheta_{ \cdot k}^*}||_2 \sqrt{\frac{\log (K\vee n)}{n}} \right) \leq \frac{2}{(K\vee n)^3}$, where $C = 2\sqrt{3}$ and $\sigma_s^2 = \frac{1}{|{G^*_{s}}|^2} \sum_{i \in {G^*_{s}}} \gamma_i$. Let $\bM =\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}$. From Lemma \[lem:ATAIA\], $\bY_1 = \Bb^{*-1}\Ab^{*T}\bE_1$ is a $K$-dimensional vector where the $k^{th}$ entry is given by $$(Y_1)_k = \frac{1}{|{G^*_{k}}|} \sum_{i \in {G^*_{k}}} (E_1)_i.$$ Because the errors are all independent mean zero Gaussian random variables, $(Y_1)_s \sim \cN(0,\sigma_s^2)$. 
Therefore, as $Y_1$ is independent of $Z_1$ by definition, $\EE[(Y_1)_s (Z_1)_t] = \EE[(Y_1)_s]\EE[(Z_1)_t] = 0$. Further, Lemma \[lem:jointly\_gaussian\_product\] gives that $(Y_1)_s (Z_1)_t$ is sub-exponential with parameters $\alpha = \nu = \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$. Using the independence of the samples, Corollary \[cor:sum\_independent\_subexponential\] gives that $\bM_{s,t}$ is sub-exponential with parameters $\alpha =\sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$ and $\nu = \sqrt{2n}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$. Then, Corollary \[cor:tail\_bound\_sum\] gives that for arbitrary choice of $D_1 > 0$, $$\begin{aligned} &\PP\left(\frac{1}{n}\bM_{s,t} \geq D_1\sqrt{\frac{\log (K\vee n)}{n}}\right)\\ &\leq \begin{cases} \exp\left(-\frac{(\log (K\vee n)) D_1^2}{4\max\left(\sigma_s^2,{C_{t,t}^*} \right)^2} \right) & \text{ if } 0 \leq D_1 \sqrt{\frac{\log (K\vee n)}{n}} \leq \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right) \\ \exp\left(\frac{-D_1\sqrt{n\log (K\vee n)}}{\sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)} \right) & \text{ if } D_1 \sqrt{\frac{\log (K\vee n)}{n}} > \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right). \end{cases}\end{aligned}$$ Observe that for $n$ sufficiently large, $D_1 \sqrt{\log (K\vee n) / n} \leq \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$. If we choose $D_1 \geq 2\sqrt{3}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$, then we obtain that for $n$ sufficiently large, $$\PP\left(\frac{1}{n}\bM_{s,t} \geq D_1\sqrt{\frac{\log (K\vee n)}{n}}\right) \leq \frac{1}{(K\vee n)^5}.$$ Then by the union bound we can obtain $$\PP\left(\Big\|\frac{1}{n}\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}\Big\|_{\max}\geq D_1\sqrt{\frac{\log (K\vee n)}{n}}\right) \leq \frac{2}{(K\vee n)^3}$$ for $D_1 \geq 2\sqrt{3}\max\left(\max_s\sigma_s^2,\max_t{C_{t,t}^*} \right)$, concluding the proof of parts (a) and (b). The proofs of parts (c) and (d) are similar. Now, let $\bV_i = \bZ_i\bE_i^T\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}$. 
We know the entry $s,t$ in $\bZ_i\bE_i^T\Ab^*\Bb^{*-1}$ is sub-exponential with parameters $\alpha =\sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$ and $\nu = \sqrt{2}\max\left(\sigma_s^2,{C_{t,t}^*} \right)$. Thus the entries in the $K$-dimensional vector $\bV_i$ are sub-exponential with parameters $\alpha =\sqrt{2}\max\left(\max_s\sigma_s^2,\max_t{C_{t,t}^*} \right)$ and $\nu = \sqrt{2}||{\bTheta_{ \cdot k}^*}||_2\max\left(\max_s\sigma_s^2,\max_t{C_{t,t}^*} \right)$. Let $N = \max\left(\max_s\sigma_s^2,\max_t{C_{t,t}^*} \right) $. Then, Corollary \[cor:tail\_bound\_sum\] gives that for arbitrary choice of $D_2 > 0$, $$\PP\left(\left(\frac{1}{n}\sum_{i=1}^n \bV_i\right)_s \geq D_2\sqrt{\frac{\log K}{n} }\right) \leq \begin{cases} \exp\left(-\frac{(\log K) D_2^2}{4||{\bTheta_{ \cdot k}^*}||_2^2 N^2} \right) & \text{ if } 0 \leq D_2 \sqrt{\frac{\log K}{n}} \leq \sqrt{2}||{\bTheta_{ \cdot k}^*}||_2 N \\ \exp\left(\frac{-D_2\sqrt{n\log K}}{\sqrt{2}||{\bTheta_{ \cdot k}^*}||_2 N }\right) & \text{ if } D_2 \sqrt{\frac{\log K}{n}} > \sqrt{2}||{\bTheta_{ \cdot k}^*}||_2 N. \end{cases}$$ Observe that for $n$ sufficiently large, $D_2 \sqrt{\log K / n} \leq \sqrt{2}||{\bTheta_{ \cdot k}^*}||_2 N$. If we choose $D_2 \geq 2\sqrt{3}||{\bTheta_{ \cdot k}^*}||_2 N$, then we obtain that for $n$ sufficiently large, $$\PP\left(\left(\frac{1}{n}\sum_{i=1}^n \bV_i\right)_s \geq D_2\sqrt{\frac{\log K}{n} }\right) \leq \frac{1}{K^3}.$$ Then by the union bound we can obtain $$\PP\left(\max_{1\leq k \leq K}\Big\|\frac{1}{n}\sum_{i=1}^n \bZ_i\bE_i^T\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}\Big\|_{\infty}\geq D_2\sqrt{\frac{\log K}{n}}\right) \leq \frac{2}{K},$$ concluding the proof of parts (c) and (d). 
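The max-norm concentration rates established in these lemmas are easy to observe empirically; a small Monte Carlo sketch for $(1/n)\sum_i \bZ_i\bZ_i^T$ in the spirit of Lemma \[lem:conc\_sum\_ZiZi\](b) (the constant $10$ below is loose and illustrative, not the sharp constant of the lemma):

```python
import numpy as np

rng = np.random.default_rng(2)
K, n = 10, 20_000
A = rng.normal(size=(K, K))
C_star = A @ A.T / K + np.eye(K)          # a "true" covariance C*
L = np.linalg.cholesky(C_star)
Z = rng.normal(size=(n, K)) @ L.T         # n i.i.d. N(0, C*) samples

dev = np.abs(Z.T @ Z / n - C_star).max()  # ||(1/n) sum Z_i Z_i^T - C*||_max
rate = np.sqrt(np.log(max(K, n)) / n)     # the sqrt(log(K v n)/n) rate
assert dev <= 10.0 * C_star.diagonal().max() ** 2 * rate
```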
\[lem:conc\_sum\_EiEi\] Recall that $m=\min_{k}|{G^*_{k}}|$. Then: - $\left( \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-{\bGamma^*})\Ab^*\Bb^{*-1}\right)_{t,k}$ is sub-exponential with parameters\ $\alpha_{t,k} = \frac{\sqrt{2}}{|{G^*_{k}}||{G^*_{t}}|}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*}$ and $\nu_{t,k} = \sqrt{\frac{2}{ |{G^*_{k}}| |{G^*_{t}}|}}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*}$, - $\PP\left(\Big\|\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-\bGamma^*)\Ab^*\Bb^{*-1}\Big\|_{\max}\geq C\max_{k} {\gamma_{k}^*} \sqrt{\frac{\log (K\vee n)}{n m^2}}\right) \leq \frac{2}{(K\vee n)^3}$ - $\left(\Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-{\bGamma^*})\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}\right)_t$ is sub-exponential with parameters\ $\alpha_t = \frac{\sqrt{2}}{m^2}\max_{k}{\gamma_{k}^*} $ and $\nu_t = \sqrt{\frac{2}{ m^2}}||{\bTheta_{ \cdot k}^*}||_2\max_{k} {\gamma_{k}^*}$, and - $\PP\left(\Big\|\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-\bGamma^*)\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}\Big\|_{\infty}\geq C||{\bTheta_{ \cdot k}^*}||_2\max_{k} {\gamma_{k}^*} \sqrt{\frac{\log (K\vee n)}{n m^2}}\right) \leq \frac{2}{(K\vee n)^3}$, where $C = 2\sqrt{3}$. 
We bound the sum entrywise $$\left(\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-{\bGamma^*})\Ab^*\Bb^{*-1}\right)_{t,k}.$$ First, from Lemma \[lem:ATAIE\], Corollary \[cor:independent\_gaussian\_product\] and Corollary \[cor:sum\_independent\_subexponential\], we have that $( \Bb^{*-1}\Ab^{*T}\bE_1\bE_1^T\Ab^*\Bb^{*-1})_{t,k}$ is sub-exponential with parameters $$\alpha_{t,k} = \frac{\sqrt{2}}{|{G^*_{k}}||{G^*_{t}}|}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*} \quad \text{and} \quad \nu_{t,k} = \sqrt{\frac{2}{ |{G^*_{k}}| |{G^*_{t}}|}}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*}.$$ Therefore $\frac{1}{n} M_{t,k}$, defined by $\bM := \sum_{i=1}^n \Bb^{*-1}\Ab^{*T}\bE_i\bE_i^T\Ab^*\Bb^{*-1}$, is sub-exponential with parameters $$\alpha = \frac{\sqrt{2}}{n|{G^*_{k}}||{G^*_{t}}|}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*} \quad \text{and}\quad \nu = \sqrt{\frac{2}{ n |{G^*_{k}}| |{G^*_{t}}|}}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*}.$$ Denote $$\mu_{t,k} = \begin{cases} \frac{1}{|{G^*_{t}}|^2}\sum_{p \in {G^*_{t}}}{\gamma_{p}^*} & \text{ if } t = k \\ 0 & \text{ otherwise} \end{cases}$$ and $N = \max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*}$. Then from Lemma \[lem:tail\_bound\_subexponential\] we obtain $$\PP\left(\frac{1}{n}M_{t,k} - \mu_{t,k} \geq D_1\sqrt{\frac{\log (K\vee n)}{n |{G^*_{k}}| |{G^*_{t}}|}}\right) \leq \begin{cases} \exp\left(-\frac{(\log (K\vee n)) D_1^2}{4N^2} \right) & \text{ if } 0 \leq D_1\sqrt{\frac{\log (K\vee n)}{n |{G^*_{k}}| |{G^*_{t}}|}} \leq \sqrt{2} N \\ \exp\left(\frac{-D_1\sqrt{n\log (K\vee n)}}{\sqrt{2}N} \right) & \text{ if } D_1\sqrt{\frac{\log (K\vee n)}{n |{G^*_{k}}| |{G^*_{t}}|}} > \sqrt{2}N. \end{cases}$$ Observe that for $n$ sufficiently large, $D_1\sqrt{\frac{\log (K\vee n)}{n |{G^*_{k}}| |{G^*_{t}}|}} \leq \sqrt{2} N$. 
If we choose $D_1 \geq 2\sqrt{3}N$, then we obtain that for $n$ sufficiently large, $$\PP\left(\frac{1}{n}M_{t,k} - \mu_{t,k} \geq D_1\sqrt{\frac{\log (K\vee n)}{n |{G^*_{k}}| |{G^*_{t}}|}}\right) \leq \frac{1}{(K\vee n)^5}.$$ Therefore by taking the union bound, lower bounding $n |{G^*_{k}}| |{G^*_{t}}|$ by $n m^2$ and choosing $D_1 \geq 2\sqrt{3}\max_{k} {\gamma_{k}^*}$, $$\PP\left(\Big\|\frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-\bGamma^*)\Ab^*\Bb^{*-1}\Big\|_{\max}\geq D_1\sqrt{\frac{\log (K\vee n)}{n m^2}}\right) \leq \frac{2}{(K\vee n)^3},$$ concluding the proof of parts (a) and (b). The proofs of parts (c) and (d) are similar. Now let $$\bV_i = \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-{\bGamma^*})\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}.$$ First, from the above, $(\Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-{\bGamma^*})\Ab^*\Bb^{*-1})_{t,k}$ is sub-exponential with parameters $$\alpha = \frac{\sqrt{2}}{|{G^*_{k}}||{G^*_{t}}|}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*} \quad \text{and}\quad \nu = \sqrt{\frac{2 }{|{G^*_{k}}| |{G^*_{t}}|}}\max_{i \in {G^*_{t}} \cup {G^*_{k}}} {\gamma_{i}^*}.$$ Therefore $(\bV_i)_s$ is sub-exponential with parameters $$\alpha = \frac{\sqrt{2}}{m^2}\max_{k} {\gamma_{k}^*} \quad \text{and}\quad \nu = \sqrt{\frac{2}{m^2}}||{\bTheta_{ \cdot k}^*}||_2\max_{k} {\gamma_{k}^*}.$$ Denote $$\mu_s = {\Theta_{s,s}^*} \frac{1}{|{G^*_{s}}|^2}\sum_{p \in {G^*_{s}}}{\gamma_{p}^*}.$$ Then from Lemma \[lem:tail\_bound\_subexponential\] we obtain $$\PP\left(\left(\frac{1}{n}\sum_{i=1}^n \bV_i\right)_s - \mu_{s} \geq D_2\sqrt{\frac{\log K}{n m^2}} \right) \leq \begin{cases} \exp\left(-\frac{(\log K) D_2^2}{4||{\bTheta_{ \cdot k}^*}||_2^2\max_{k} ({\gamma_{k}^*})^2} \right) & \text{ if } 0 \leq D_2\sqrt{\frac{\log K}{n m^2}} \leq \sqrt{2} \max_{k}{\gamma_{k}^*} ||{\bTheta_{ \cdot k}^*}||_2 \\ \exp\left(\frac{-D_2\sqrt{n\log K}}{\sqrt{2}||{\bTheta_{ \cdot k}^*}||_2\max_{k} {\gamma_{k}^*}} \right) & \text{ if } D_2\sqrt{\frac{\log K}{n m^2}} > 
\sqrt{2} \max_{k} {\gamma_{k}^*} ||{\bTheta_{ \cdot k}^*}||_2. \end{cases}$$ Observe that for $n$ sufficiently large, $D_2\sqrt{\frac{\log K}{n m^2}} \leq \sqrt{2} \max_{k} {\gamma_{k}^*} ||{\bTheta_{ \cdot k}^*}||_2$. If we choose $D_2 \geq 2\sqrt{3}||{\bTheta_{ \cdot k}^*}||_2 \max_{k} {\gamma_{k}^*} $, then we obtain that for $n$ sufficiently large, $$\PP\left(\left(\frac{1}{n}\sum_{i=1}^n \bV_i\right)_s - \mu_{s} \geq D_2\sqrt{\frac{\log K}{n m^2}}\right) \leq \frac{1}{K^3}.$$ By taking the union bound and choosing $D_2 \geq 2\sqrt{3}||{\bTheta_{ \cdot k}^*}||_2\max_{k} {\gamma_{k}^*}$, $$\PP\left(\Big\| \frac{1}{n}\sum_{i=1}^n \Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T-\bGamma^*)\Ab^*\Bb^{*-1}{\bTheta_{ \cdot k}^*}\Big\|_{\infty}\geq D_2\sqrt{\frac{\log K}{n m^2}}\right) \leq \frac{2}{K},$$ concluding the proof of parts (c) and (d). Auxiliary Technical Lemmas {#sec:misc_results} ========================== \[lem:ATAIA\] For $1\leq k\leq K$, denote $m_k=|G_k^*|$. Then the matrix $(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}$ is a $K\times d$ dimensional matrix given as $$[(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}]_{k,i} = \begin{cases} \frac{1}{m_k} & \text{ if } i \in {G^*_{k}} \\ 0 & \text{ otherwise.} \end{cases}$$ First, we must calculate $\Bb^{*-1}\Ab^{*T}$. For $1\leq k\leq K$, denote $m_k=|G_k^*|$ and let $\eb_k$ be a unit vector in $\RR^K$ with $1$ in the $k$th position and $0$ elsewhere. Without loss of generality, we permute the rows of $\Ab^*$ such that for any $1\leq k\leq K$, $\Ab^*_{j\cdot}=\eb_k$ for $\sum_{i=1}^{k-1}m_{i}+1\leq j<\sum_{i=1}^{k}m_{i}+1$ – that is, rows are ordered according to ascending group index. Here, for notational simplicity, we let $m_0=0$. Thus, $\Ab^{*T}\Ab^*=\textrm{diag}(m_1,...,m_K)$ and the result follows immediately. 
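Lemma \[lem:ATAIA\] identifies $(\Ab^{*T}\Ab^*)^{-1}\Ab^{*T}$ with the within-group averaging operator; a direct numerical check on a toy membership matrix (illustrative group assignment):

```python
import numpy as np

groups = [0, 0, 1, 2, 2, 2]            # group index of each of d = 6 variables
d, K = len(groups), 3
A = np.zeros((d, K))
A[np.arange(d), groups] = 1.0          # membership matrix A*

P = np.linalg.inv(A.T @ A) @ A.T       # (A^T A)^{-1} A^T, shape K x d

# Row k has 1/|G_k| on the members of group k and 0 elsewhere.
expected = np.array([[0.5, 0.5, 0, 0, 0, 0],
                     [0, 0, 1.0, 0, 0, 0],
                     [0, 0, 0, 1/3, 1/3, 1/3]])
assert np.allclose(P, expected)
```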
\[lem:ATAIE\] The matrices $\Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T)\Ab^*\Bb^{*-1}$ and $\Bb^{*-1}\Ab^{*T}(\Gamma^*)\Ab^*\Bb^{*-1}$ are given by $$(\Bb^{*-1}\Ab^{*T}(\bE_i\bE_i^T)\Ab^*\Bb^{*-1})_{t,k} = \frac{1}{|{G^*_{t}}||{G^*_{k}}|}\sum_{p \in {G^*_{t}}}\sum_{q \in {G^*_{k}}}E_{i,p}E_{i,q}$$ and $$(\Bb^{*-1}\Ab^{*T}(\Gamma^*)\Ab^*\Bb^{*-1})_{t,k} = \begin{cases} \frac{1}{|{G^*_{t}}|^2}\sum_{p \in {G^*_{t}}}\gamma^*_p & \text{ if } t = k \\ 0 & \text{ otherwise.} \end{cases}$$ The result can be obtained by a straightforward computation. \[lem:latent\_re\_condition\] If Assumptions \[asmp:bounded\_latent\_covariance\] and \[asmp:bounded\_errors\] hold, then the matrix $\hat\Cb$ satisfies with probability at least $1- \frac{C}{(K\vee n)^3}$, $$\kappa \leq \min\left\{\frac{\vb^T \hat\Cb \vb}{||\vb||_2^2} : \vb \in \RR^{K}\setminus\{0\}, ||\vb_{\bar{S}}||_1 \leq 3||\vb_S||_1 \right\}, \text{ and}$$ $$\kappa \leq \min\left\{\frac{\vb^T \hat\Cb_{-t,-t} \vb}{||\vb||_2^2} : \vb \in \RR^{K-1}\setminus\{0\}, ||\vb_{\bar{S'}}||_1 \leq 3||\vb_{S'}||_1 \right\},$$ where $\kappa \geq \frac{3}{4c_1} > 0$. We begin by proving the first claim. By Lemma \[lem:latent\_C\_consistency\], we have that $\|\hat \Cb-\Cb^*\|_{\max}\leq C_1\sqrt{\frac{\log (K\vee n)}{n}}$ with high probability. Therefore, for $n$ sufficiently large and for any $\vb \in \RR^K\setminus \{0\}$ satisfying the cone condition above, $$\frac{\vb^T \hat\Cb \vb}{||\vb||_2^2} \geq \frac{3}{4}\frac{\vb^T \Cb^* \vb}{||\vb||_2^2}.$$ The proof is then done for $\kappa = \frac{3}{4c_1}$ as we assume the minimum eigenvalue of $\Cb^*$ is bounded below by $c_1^{-1}$. The proof of the second claim is identical because $\Cb^*$ is positive definite, and it is well known that the minimum eigenvalue of any principal submatrix $\Cb^*_{-t,-t}$ is bounded below by $\lambda_{\min}(\Cb^*) \geq c_1^{-1}$. \[lem:pd\_matrix\_diag\] Let $\Mb$ be an $n \times n$ positive definite matrix and denote its inverse by $\Lb$. 
Then, for all $i = 1,\dots,n$ $$M_{i,i} L_{i,i} \geq 1.$$ By the block matrix inverse formula, it follows that $$\label{eqn:pd_matrix_diag1} M_{i,i}^{-1} = L_{i,i} - \Lb_{-i,i}^T \Lb_{-i,-i}^{-1}\Lb_{-i,i}.$$ Because $\Mb$ is positive definite, so is $\Lb$. Recall that every principal submatrix of a positive definite matrix is also positive definite. Therefore, $\Lb_{-i,-i}$ is positive definite, as is $\Lb_{-i,-i}^{-1}$. Therefore, $\Lb_{-i,i}^T \Lb_{-i,-i}^{-1}\Lb_{-i,i} \geq 0$ and (\[eqn:pd\_matrix\_diag1\]) becomes $M_{i,i}^{-1} \leq L_{i,i}$. Lastly, since all diagonal elements of a positive definite matrix are positive, multiplying both sides by $M_{i,i} > 0$ gives $M_{i,i}L_{i,i} \geq 1$ as desired. Basic Tail Bounds for Random Variables ====================================== This section collects some basic tail probability results for random variables; proofs that are standard are omitted. \[lem:jointly\_gaussian\_product\] Let $\Yb = (Y_1,Y_2)$ be a jointly Gaussian random vector with covariance matrix $\Cb_Y$. Then $Y_1Y_2$ is sub-exponential with parameters $\alpha = 4{\lambda_{\max}\left(\Cb_Y\right)}$ and $\nu = 2\sqrt{2}{\lambda_{\max}\left(\Cb_Y\right)}$. Denote $\mu_Y = (\Cb_{Y})_{1,2}$ and observe that without loss of generality we can assume that ${\lambda_{\max}\left(\Cb_Y\right)} \geq 1$ because if not we can perform a change of variables on $\gamma$ to rescale $\bY$ without affecting the final result. Then we can bound the moment generating function of $Y_1 Y_2$ as follows. 
$$\begin{aligned} \EE[\exp(\gamma( Y_1 Y_2 - \mu_Y)) ] &= (2\pi)^{-1} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |\Cb_Y|^{-1/2} \exp(\gamma( Y_1 Y_2 - \mu_Y)) \exp(-\frac{1}{2}\Yb^T \Cb_Y^{-1} \Yb) dY_1 dY_2 \\ &= \exp( - \gamma \mu_Y) |\Cb_Y|^{-1/2} |(\Cb_Y^{-1} - \gamma \Bb)^{-1}|^{1/2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (2\pi)^{-1}|\Cb_Y^{-1} - \gamma \Bb|^{1/2}\\ & \quad \times \exp(-\frac{1}{2}\Yb^T(\Cb_Y^{-1} - \gamma \Bb) \Yb) dY_1 dY_2 \\ &=\exp(- \gamma \mu_Y) |\Ib - \gamma \Cb_Y\Bb|^{-1/2},\end{aligned}$$ where $\Bb = \left[ \begin{array}{c c} 0 & 1 \\ 1 & 0 \end{array} \right]$. The eigenvalues of $\Bb$ are $1$ and $-1$, so we see that a lower bound on the minimum eigenvalue of $\Cb_Y^{-1} - \gamma \Bb$ is symmetric in $\gamma$. Thus without loss of generality we can assume $\gamma \geq 0$. What remains is to find a bound on $\gamma$ such that $\Cb_Y^{-1} - \gamma \Bb$ is positive definite. To establish the bound on $\gamma$, recall that for the product of two symmetric matrices $\Ab$ and $\Bb$, ${\lambda_{\max}\left(\Ab\Bb\right)} \leq {\lambda_{\max}\left(\Ab\right)}{\lambda_{\max}\left(\Bb\right)}$. Therefore it follows that $${\lambda_{\min}\left(\Cb_Y^{-1} - \gamma \Bb\right)} \geq \frac{1}{{\lambda_{\max}\left(\Cb_Y\right)}} - \gamma \quad \rightarrow \quad \gamma < \frac{1}{{\lambda_{\max}\left(\Cb_Y\right)}}.$$ Furthermore, we can find $|\Ib - \gamma \Cb_Y\Bb|$ easily by finding the eigenvalues of $\Ib - \gamma \Cb_Y\Bb$. 
These are given as $$1 - \gamma \mu_Y \pm \gamma \sqrt{(C_Y)_{1,1}(C_Y)_{2,2}}.$$ Therefore $$|\Ib - \gamma \Cb_Y\Bb| = 1 - 2\mu_Y\gamma - \gamma^2 |\Cb_Y|.$$ To conclude the proof, we need $c$ such that $$\label{eqn:subexponential_constant_joint_gaussian} \EE[\exp(\gamma( Y_1 Y_2 - \mu_Y)) ] \leq \frac{\exp(-\gamma \mu_Y)}{(1 - 2\mu_Y\gamma - \gamma^2 |\Cb_Y|)^{1/2}} \leq \exp(c^2 \gamma^2).$$ First, consider when both $\gamma$ and $\mu_Y$ have the same sign and without loss of generality, assume both are nonnegative. Then taking the logarithm of both sides we can define $$f(\gamma) = c^2\gamma^2 + \gamma \mu_Y + \frac{1}{2}\log(1 - 2\mu_Y\gamma - \gamma^2 |\Cb_Y|).$$ If we find $c$ and $\gamma^* > 0$ such that $f(\gamma) \geq 0$ for all $\gamma \leq \gamma^*$, then we are done. Clearly $f(0) = 0$, so if $f'(\gamma) \geq 0$ over the entire interval, then the condition is met. Differentiating with respect to $\gamma$ gives $$f'(\gamma) = 2c^2\gamma + \mu_Y - \frac{|\Cb_Y| \gamma + \mu_Y}{1 - 2\gamma\mu_Y - |\Cb_Y|\gamma^2}$$ and thus we need to show that $$\begin{aligned} 2c^2\gamma + \mu_Y - 4c^2\mu_Y \gamma^2 - 2\mu_Y^2\gamma - 2c^2|\Cb_Y|\gamma^3 - |\Cb_Y|\mu_Y\gamma^2 - |\Cb_Y| \gamma - \mu_Y &\geq 0\\ \span \span \Big\Updownarrow \span \span\\ (- 2c^2|\Cb_Y|)\gamma^3 - (4c^2\mu_Y + |\Cb_Y|\mu_Y)\gamma^2 + (2c^2 - 2\mu_Y^2 - |\Cb_Y| )\gamma &\geq0. \numberthis \label{eqn:c_and_gammastar}\end{aligned}$$ If we take $c = 2{\lambda_{\max}\left(\Cb_Y\right)}$ and plug it into (\[eqn:c\_and\_gammastar\]), then we can easily see that for all $\gamma \leq \frac{1}{4{\lambda_{\max}\left(\Cb_Y\right)}}$, the inequality holds. Next, if $\mu_Y$ and $\gamma$ have opposite signs, first consider $\gamma \geq 0$ and $\mu_Y \leq 0$. Then if we take $c = 2{\lambda_{\max}\left(\Cb_Y\right)}$, it follows immediately from (\[eqn:c\_and\_gammastar\]) that for $\gamma \leq \frac{1}{4{\lambda_{\max}\left(\Cb_Y\right)}}$ the inequality holds. 
In the second case, when $\gamma<0$, we can denote $\tilde\gamma=-\gamma$ and $\tilde\mu_Y=-\mu_Y$, and so by the above we have that for $|\gamma| \leq \frac{1}{4{\lambda_{\max}\left(\Cb_Y\right)}}$ the inequality holds. Thus we see that $Y_1Y_2$ is sub-exponential with parameters $\alpha = 4{\lambda_{\max}\left(\Cb_Y\right)}$ and $\nu = 2\sqrt{2}{\lambda_{\max}\left(\Cb_Y\right)}$. \[cor:independent\_gaussian\_product\] Let $Y_1 \sim \cN(0,\sigma_1^2)$ and $Y_2 \sim \cN(0,\sigma_2^2)$ where $\sigma_1^2 \geq \sigma_2^2$. Then $Y_1Y_2$ is sub-exponential with parameters $\alpha = \sqrt{2}\sigma_1^2$ and $\nu = \sqrt{2}\sigma_1^2$. This follows from the proof of Lemma \[lem:jointly\_gaussian\_product\]. Instead, we now have $$f(\gamma) = c^2\gamma^2 + \frac{1}{2}\log(1 - \gamma^2\sigma_1^2\sigma_2^2).$$ Without loss of generality, we can take $\gamma \geq 0$. Then we need that $$f'(\gamma) = 2c^2\gamma - \frac{\sigma_1^2\sigma_2^2 \gamma}{1 - \gamma^2\sigma_1^2\sigma_2^2} \geq 0.$$ Choosing $\alpha = \sqrt{2}\sigma_1^2$ and $c = \sigma_1^2$, we see this condition is satisfied for all $0 \leq \gamma < 1/\alpha$, concluding the proof. \[cor:sum\_independent\_subexponential\] Consider $\sum_{i=1}^n X_i$ where $X_i$ are centered, independent sub-exponential random variables. Then $Y = \sum_{i=1}^n X_i$ is sub-exponential with parameters $\alpha = \max_i \alpha_i$ and $\nu = \sqrt{\sum_{i=1}^n \nu_i^2}$. \[lem:tail\_bound\_subexponential\] Let $X$ be a sub-exponential random variable with mean $\mu$ and parameters $\alpha$ and $\nu$. Then $$\PP(X-\mu \geq t) \leq \begin{cases} \exp(-\frac{t^2}{2\nu^2} ) & \text{ for } 0 \leq t \leq \frac{\nu^2}{\alpha} \\ \exp(-\frac{t}{2\alpha} ) & \text{ for } t > \frac{\nu^2}{\alpha}. \end{cases}$$ \[cor:tail\_bound\_sum\] Consider $Y = \sum_{i=1}^n X_i$, where $X_i$ are centered, independent sub-exponential random variables. Let $\alpha = \max_i \alpha_i$ and $\nu = \sqrt{\sum_{i=1}^n \nu_i^2}$. 
Then, $$\PP(\frac{1}{n}\sum_{i=1}^n X_i \geq t) \leq \begin{cases} \exp(-\frac{n t^2}{2\nu^2 / n} ) & \text{ for } 0 \leq t \leq \frac{\nu^2}{n\alpha} \\ \exp(-\frac{nt}{2\alpha} ) & \text{ for } t > \frac{\nu^2}{n\alpha}. \end{cases}$$ Construction of a pre-clustering estimator of $\Gamma$ {#pregamma} ====================================================== We include in this section the construction of the pre-clustering estimator of $\Gamma$ needed as an input of the PECOK algorithm of Section \[sec:introduction:glatent\] above. For any $a,b\in [d]$, define $$\label{eq:definition_V} V(a,b):= \max_{c,d \in [p]\setminus\{a,b\}} \frac{\left| (\widehat \Sigma_{ac}-\widehat\Sigma_{ad})-(\widehat\Sigma_{bc}-\widehat\Sigma_{bd}) \right|}{\sqrt{\widehat \Sigma_{cc}+ \widehat \Sigma_{dd}-2 \widehat \Sigma_{cd}}}\ ,$$ with the convention $0/0=0$. Guided by the block structure of $\Sigma$, we define $$b_1(a):= \argmin_{b\in [p]\setminus\{a\}}V(a,b)\quad \text{ and }\quad b_2(a):= \argmin_{b\in [p]\setminus\{a,b_1(a)\}}V(a,b) ,$$ to be two elements “close” to $a$, that is, two indices $b_1 = b_1(a)$ and $b_2 = b_2(a)$ such that the empirical covariance difference $ \widehat \Sigma_{b_{i}c}- \widehat \Sigma_{b_{i}d}$, $i =1,2$, is most similar to $ \widehat \Sigma_{ac}- \widehat \Sigma_{ad}$, for all variables $c$ and $d$ not equal to $a$ or $b_{i}$, $i = 1,2$. It is expected that $b_1(a)$ and $b_2(a)$ either belong to the same group as $a$, or belong to some “close” groups. 
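The statistic $V(a,b)$ and the neighbour selection translate directly into code; a minimal (purely illustrative, $O(d^2)$ per pair) numpy sketch, assuming an empirical covariance matrix `Sigma` is available (function names hypothetical):

```python
import numpy as np

def V(Sigma, a, b):
    """V(a, b): max over c, e not in {a, b} of the normalized covariance-difference statistic."""
    p = Sigma.shape[0]
    idx = [i for i in range(p) if i not in (a, b)]
    best = 0.0
    for c in idx:
        for e in idx:
            denom = np.sqrt(max(Sigma[c, c] + Sigma[e, e] - 2.0 * Sigma[c, e], 0.0))
            if denom > 0:  # convention 0/0 = 0
                num = abs((Sigma[a, c] - Sigma[a, e]) - (Sigma[b, c] - Sigma[b, e]))
                best = max(best, num / denom)
    return best

def nearest_two(Sigma, a):
    """Return b1(a), b2(a): the two indices minimizing V(a, .)."""
    p = Sigma.shape[0]
    others = sorted((b for b in range(p) if b != a), key=lambda b: V(Sigma, a, b))
    return others[0], others[1]
```

With `b1, b2 = nearest_two(Sigma, a)` in hand, the diagonal correction combines the four corresponding entries of the empirical covariance, as in the display that defines $\widetilde\Gamma_{aa}$.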
Then, our estimator $\widetilde \Gamma$ is a diagonal matrix, defined by $$\label{eq:estim:gamma2} \widetilde \Gamma_{aa}= \widehat \Sigma_{aa}+ \widehat \Sigma_{b_{1}(a)b_{2}(a)}- \widehat \Sigma_{ab_{1}(a)}- \widehat \Sigma_{ab_{2}(a)}, \quad \text{ for $a=1,\ldots, p$.}$$ Intuitively, $\widetilde \Gamma_{aa}$ should be close to $\Sigma_{aa}+ \Sigma_{b_{1}(a)b_{2}(a)}- \Sigma_{ab_{1}(a)}-\Sigma_{ab_{2}(a)}$, which is equal to $\Gamma_{aa}$ in the favorable event where both $b_1(a)$ and $b_2(a)$ belong to the same group as $a$. In general, $b_1(a)$ and $b_2(a)$ cannot be guaranteed to belong to the same group as $a$. Nevertheless, these two surrogates $b_1(a)$ and $b_2(a)$ are close enough to $a$ so that $\|\widetilde{\Gamma} - \Gamma\|_{\max} \lesssim |\Gamma|_{\max}\sqrt{\log p/n}$. This last fact and the above construction are shown in [@Bunea2016a]. [^1]: Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA; e-mail: [eisenach@princeton.edu](eisenach@princeton.edu) [^2]: Department of Statistical Science, Cornell University, Ithaca, NY 14850, USA; e-mail: [fb238@cornell.edu](fb238@cornell.edu) [^3]: Department of Statistical Science, Cornell University, Ithaca, NY 14850, USA; e-mail: [yn265@cornell.edu](yn265@cornell.edu) [^4]: Department of Statistical Science, Cornell University, Ithaca, NY 14850, USA; e-mail: [cd535@cornell.edu](cd535@cornell.edu)
--- abstract: | This paper tackles the problem of large-scale image-based localization (IBL), where the spatial location of a query image is determined by finding the most similar [*reference images*]{} in a large database. A critical task in solving this problem is to learn discriminative image representations that capture the information relevant for localization. We propose a novel representation learning method with higher location-discriminating power. It provides the following contributions: 1) we represent a place (location) as a set of exemplar images depicting the same landmarks and aim to maximize similarities among intra-place images while minimizing similarities among inter-place images; 2) we model a similarity measure as a probability distribution on $L_2$-metric distances between intra-place and inter-place image representations; 3) we propose a new Stochastic Attraction and Repulsion Embedding (SARE) loss function minimizing the KL divergence between the learned and the actual probability distributions; 4) we give theoretical comparisons between SARE, triplet ranking [@arandjelovic2016netvlad] and contrastive losses [@radenovic2016cnn], providing insights into why SARE is better by analyzing their gradients. Our SARE loss is easy to implement and pluggable into any CNN. Experiments show that our proposed method improves the localization performance on standard benchmarks by a large margin. Demonstrating the broad applicability of our method, we obtained the $3^{rd}$ place out of 209 teams in the 2018 Google Landmark Retrieval Challenge [@Google_Landmark]. Our code and model are available at <https://github.com/Liumouliu/deepIBL>. 
author: - | Liu Liu $^{1,2}$, Hongdong Li $^{1,2}$ and Yuchao Dai $^3$\ $^{1}$ Australian National University, Canberra, Australia\ $^{2}$ Australian Centre for Robotic Vision\ $^{3}$ School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China\ @anu.edu.au; daiyuchao@nwpu.edu.cn bibliography: - 'egbib.bib' title: 'Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization' --- Introduction ============ ![[]{data-label="fig:pipeline"}](pipeline2.pdf){width="48.50000%"} The task of Image-Based Localization (IBL) is to estimate the geographic location where a query image was taken, by comparing it against geo-tagged images from a city-scale image database (a map). IBL has attracted considerable attention recently due to widespread potential applications such as robot navigation [@mur2015orb] and VR/AR [@middelberg2014scalable; @ventura2014global]. Depending on whether or not 3D point-clouds are used in the map, existing IBL methods can be roughly classified into two groups: [*image-retrieval*]{} based methods [@arandjelovic2016netvlad; @kim2017crn; @sattler2017large; @Noh_2017_ICCV; @Vo_2017_ICCV; @radenovic2016cnn] and [*direct 2D-3D matching*]{} based methods [@sattler2011fast; @sattler2017efficient; @li2010location; @Liu_2017_ICCV; @CarlSematic]. This paper belongs to the [*image-retrieval*]{} group for its effectiveness at large scale and robustness to changing conditions [@sattler2017benchmarking]. For [*image-retrieval*]{} based methods, the main challenge is how to represent images discriminatively, so that images depicting the same landmarks have similar representations while those depicting different landmarks have dissimilar representations. The challenge is underpinned by the typically large-scale image database, in which many images may contain repetitive structures and similar landmarks, causing severe ambiguities. 
Convolutional Neural Networks (CNNs) have demonstrated great success on the IBL task [@arandjelovic2016netvlad; @kim2017crn; @Noh_2017_ICCV; @gordo2016deep; @gordo2017end; @radenovic2016cnn]. Typically, CNNs trained for the image classification task are fine-tuned for IBL. As far as we know, all the state-of-the-art IBL methods focus on how to effectively aggregate a CNN feature map to obtain a discriminative image representation, but have overlooked another important aspect which can potentially boost the IBL performance markedly: how to effectively organize the aggregated image representations. So far, all state-of-the-art IBL methods use triplet ranking or contrastive embedding to supervise the representation organization process. This paper fills this gap by proposing a new method to effectively organize the image representations (embeddings). We first define a “place” as a set of images depicting the same landmarks at a location, and then directly enforce intra-place image similarity and inter-place dissimilarity in the embedding space. Our goal is to cluster learned embeddings from the same place while separating embeddings from different places. Intuitively, we are organizing image representations using places as agents. The above idea would directly lead to a multi-class classification problem if we could label each image with a “place” tag. Apart from the time-consuming labeling process, this formulation would also result in too many pre-defined classes, and a large training image set would be needed to train the classification CNN. Recently-proposed methods [@Vo_2017_ICCV; @weyand2016planet] try to solve the multi-class classification problem using large GPS-tagged training datasets. In their setting, a class is defined as images captured from nearby geographic positions, disregarding their visual appearance information. 
Since images within the same class do not necessarily depict the same landmarks, the CNN may only learn high-level information [@Vo_2017_ICCV] for each geographic position, which is inadequate for accurate localization. Can we capture the intra-place image “attraction” and inter-place image “repulsion” relationship with limited data? To capture the “attraction” and “repulsion” relationship, we formulate the IBL task as image similarity-based binary classification in the feature embedding space. Specifically, the similarity for images in the same place is defined as 1, and 0 otherwise. This binary partition of similarity captures the intra-place “attraction” and inter-place “repulsion”. To tackle the limited data issue, we train the CNN on image triplets, each consisting of a query image, a positive image (from the same place as the query), and a negative image (from a different place). Note that a triplet is a minimal set for defining the intra-place “attraction” and inter-place “repulsion”. Our CNN architecture is given in Fig. \[fig:pipeline\]. We name our metric-learning objective Stochastic Attraction and Repulsion Embedding (SARE) since it captures pairwise image relationships under a probabilistic framework. Moreover, our SARE objective can easily be extended to handle multiple negative images coming from different places, allowing each place to compete with multiple other places. In experiments, we demonstrate that with SARE we obtain improved performance on various IBL benchmarks. Validations on standard image retrieval benchmarks further confirm the strong generalization ability of our method. Related Work ============ There is a rich family of work on IBL. We briefly review CNN-based image representation learning methods; please refer to [@DBLP:journals/corr/Wu16e; @zheng2018sift] for an overview. 
While there have been many works [@RazavianBaseline; @gordo2016deep; @gordo2017end; @radenovic2016cnn; @arandjelovic2016netvlad; @Noh_2017_ICCV; @kim2017crn; @sattler2017large] on designing effective CNN feature map aggregation methods for IBL, they almost all exclusively use the triplet or contrastive embedding objective to supervise CNN training. In spirit, both objectives pull together matchable image pairs while pushing apart non-matching image pairs in the $L_2$ metric. While they are effective, we will later show that our SARE objective outperforms them on the IBL task. Three interesting exceptions which do not use the triplet or contrastive embedding objective are PlaNet [@weyand2016planet], IM2GPS-CNN [@Vo_2017_ICCV], and CPlaNet [@seo2018cplanet]. They formulate IBL as a geographic position classification task: they first partition the 2D geographic space into cells using GPS-tags and then define one class per cell. The CNN training process is supervised by the cross-entropy classification loss, which penalizes incorrectly classified images. We also show that our SARE objective outperforms the multi-class classification objective on the IBL task. Although our SARE objective enforces intra-place image “attraction” and inter-place image “repulsion”, it differs from traditional competitive learning methods such as the Self-Organizing Map [@kohonen1998self] and Vector Quantization [@munoz2002expansive]. Both are devoted to learning cluster centers that separate the original vectors; no constraints are imposed on the original vectors themselves. Under our formulation, we directly impose the “attraction-repulsion” relationship on the original vectors to supervise the CNN learning process. Problem Definition and Method Overview ====================================== Given a large geotagged image database, the IBL task is to estimate the geographic position of a query image $q$. 
An image-retrieval based method first identifies the most visually similar database image for $q$, and then uses the location of that database image as the location of $q$. If the identified most similar image comes from the same place as $q$, we deem $q$ successfully localized, and the most similar image is a positive image, denoted $p$. If the identified most similar image comes from a different place than $q$, we have falsely localized $q$, and the most similar image is a negative image, denoted $n$. Mathematically, an image-retrieval based method proceeds as follows. First, the query image and database images are converted to compact representations (vectors). This step is called image feature embedding and is done by a CNN. For example, the query image $q$ is converted to a fixed-size vector $f_\theta(q)$, where $f$ is a CNN and $\theta$ is the CNN weight. Second, we define a similarity function $S(\cdot)$ on pairwise vectors. For example, $S\left ( f_\theta(q), f_\theta(p)\right )$ takes the vectors $f_\theta(q)$ and $f_\theta(p)$, and outputs a scalar value describing the similarity between $q$ and $p$. Since we are comparing against the entire large database to find the most similar image for $q$, $S(\cdot)$ should be simple and efficiently computable to enable fast nearest neighbor search. A typical choice for $S(\cdot)$ is the $L_2$-metric distance, or a function that monotonically increases/decreases with the $L_2$-metric distance. Relying on feature vectors extracted by an untrained CNN to perform nearest neighbor search would often output a negative image $n$ for $q$. Thus, we need to train the CNN using easily obtained geo-tagged training images (Sec.\[sec::implentation\]). The training process in general defines a loss function on the CNN-extracted feature vectors, and uses it to update the CNN weight $\theta$. 
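As an illustration of the retrieval step, nearest-neighbor search under the $L_2$ metric can be sketched as follows (a toy stand-in; `localize` and its arguments are our names, not the paper's code):

```python
import numpy as np

def localize(q_vec, db_vecs, db_locations, N=1):
    """Rank database images by squared L2 distance to the query embedding
    and return the locations of the top-N matches (best first)."""
    d2 = np.sum((db_vecs - q_vec) ** 2, axis=1)  # S(.) realized via the L2 metric
    order = np.argsort(d2)[:N]
    return [db_locations[i] for i in order]
```

For $L_2$-normalized embeddings, ranking by $L_2$ distance is equivalent to ranking by inner product, since $\|u-v\|^2 = 2 - 2\langle u,v\rangle$; this is what makes fast nearest neighbor search practical.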
The state-of-the-art triplet ranking loss (Sec.\[sec::Revisiting\]) takes triplet training images $q,p,n$, and imposes that $q$ is more similar to $p$ than to $n$. The contrastive loss (Sec.\[sec::Contrastive\]) instead tries to separate the $q\sim n$ pair by a pre-defined distance margin (see Fig.\[fig:triplet\_constrastive\]). While these two losses are effective, we construct our metric embedding objective in a substantially different way. Given triplet training images $q,p,n$, we have the prior knowledge that the $q\sim p$ pair is matchable and the $q\sim n$ pair is non-matchable. This simple match-ability prior actually defines a probability distribution: for the $q\sim p$ pair, the match-ability is defined as 1; for the $q\sim n$ pair, it is defined as 0. Can we respect this match-ability prior in the feature embedding space? Our answer is yes. To do so, we directly fit a kernel on the $L_2$-metric distances of the $q\sim p$ and $q\sim n$ pairs and obtain a probability distribution. Our metric-learning objective is to minimize the Kullback-Leibler divergence between the above two probability distributions (Sec.\[sec::SNE\]). What is the benefit of respecting the match-ability prior in the feature embedding space? Conceptually, in this way we capture the intra-place (defined by the $q\sim p$ pair) “attraction” and inter-place (defined by the $q\sim n$ pair) “repulsion” relationship in the feature embedding space. Potentially, the “attraction” and “repulsion” relationship balances the embedded positions of the entire image database well. Mathematically, we use the gradients of the resulting metric-learning objective with respect to the triplet images to analyze its behavior, and find that our objective adaptively adjusts the force (gradient) pulling the $q\sim p$ pair together while pushing the $q\sim n$ pair apart (Sec.\[sec::Gradients\]). 
Deep Metric Embedding Objectives in IBL {#sec::objectives} ======================================== In this section, we first present the two widely-used deep metric embedding objectives in IBL, triplet ranking and contrastive embedding, which are realized by minimizing the triplet ranking loss and the contrastive loss, respectively. We then present our own objective, Stochastic Attraction and Repulsion Embedding (SARE). Triplet Ranking Loss {#sec::Revisiting} -------------------- The triplet ranking loss is defined by [ $$\label{eq::triplet_violating} L_\theta\left (q,p,n \right )=\max\left ( 0,m+ \left \| f_\theta(q)- f_\theta(p)\right \|^2 - \left \| f_\theta(q)- f_\theta(n)\right \|^2\right),$$ ]{} where $m$ is an empirical margin, typically $m=0.1$ [@netVLAD_pami; @arandjelovic2016netvlad; @gordo2016deep; @radenovic2016cnn]. The margin $m$ is used to prune out triplets with $\left \| f_\theta(q)- f_\theta(n)\right \|^2 > m+ \left \| f_\theta(q)- f_\theta(p)\right \|^2$. Contrastive Loss {#sec::Contrastive} ---------------- The contrastive loss imposes a constraint on an image pair $i\sim j$ by: $$\label{Eq:ContrastiveLoss} \begin{split} L_\theta\left (i,j \right ) = & \frac{1}{2} \eta\left \| f_\theta(i)- f_\theta(j)\right \|^2 + \\ & \frac{1}{2} (1-\eta)\left(\max\left ( 0,\tau- \left \| f_\theta(i)- f_\theta(j)\right \|\right )^2\right) \end{split}$$ where $\eta = 1$ for a $q\sim p$ pair and $\eta = 0$ for a $q\sim n$ pair. $\tau$ is an empirical margin used to prune out negative images with $\left \| f_\theta(i)- f_\theta(j)\right \| > \tau$; typically, $\tau = 0.7$ [@radenovic2016cnn]. Intuitions for the above two losses are compared in Fig.\[fig:triplet\_constrastive\]. 
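Both losses are straightforward to compute from embedded vectors; a numpy sketch with the margins quoted above (naming is ours):

```python
import numpy as np

def triplet_ranking_loss(fq, fp, fn, m=0.1):
    """max(0, m + ||f(q)-f(p)||^2 - ||f(q)-f(n)||^2)."""
    dp2 = np.sum((fq - fp) ** 2)
    dn2 = np.sum((fq - fn) ** 2)
    return max(0.0, m + dp2 - dn2)

def contrastive_loss(fi, fj, matchable, tau=0.7):
    """0.5*d^2 for a matchable pair (eta=1); 0.5*max(0, tau-d)^2 otherwise (eta=0)."""
    d = np.linalg.norm(fi - fj)
    if matchable:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, tau - d) ** 2
```

Note how each loss goes exactly to zero in the pruning regimes described in the text: the triplet loss once the negative is more than the margin farther than the positive, the contrastive loss once a negative pair is separated by more than $\tau$.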
![[]{data-label="fig:triplet_constrastive"}](triplet_constrastive.pdf){width="48.50000%"} SARE-Stochastic Attraction and Repulsion Embedding {#sec::SNE} -------------------------------------------------- In this subsection, we present our Stochastic Attraction and Repulsion Embedding (SARE) objective, which is optimized to learn discriminative embeddings for each “place”. A triplet of images $q,p,n$ defines two places: one defined by the $q\sim p$ pair and the other defined by $n$. The intra-place and inter-place similarities are defined in a probabilistic framework. Given a query image $q$, the probability that $q$ picks $p$ as its match is the conditional probability $h_{p|q}$, which equals 1 based on the co-visibility (matchability) prior. The conditional probability $h_{n|q}$ equals 0 following the above definition. Since we are interested in modeling pairwise similarities, we set $h_{q|q} = 0$. Note that the triplet probabilities $h_{q|q}, h_{p|q}, h_{n|q}$ actually define a probability distribution (summing to 1). In the feature embedding space, we would like the CNN-extracted feature vectors to respect the above probability distribution. We define another probability distribution $c_{q|q}, c_{p|q}, c_{n|q}$ in the embedding space, and try to minimize the mismatch between the two distributions. The Kullback-Leibler divergence is employed as the loss and is given by: $$\label{triplet_loss} \begin{split} L_\theta\left (q,p,n \right ) &= h_{p|q}\log\left ( \frac{h_{p|q}}{c_{p|q}} \right ) +h_{n|q}\log\left ( \frac{h_{n|q}}{c_{n|q}} \right ) \\ & =-\log\left ( c_{p|q} \right ). \end{split}$$ To define the probability that $q$ picks $p$ as its match in the feature embedding space, we fit a kernel on pairwise $L_2$-metric feature vector distances. We use three typically-used kernels and compare their effectiveness: the Gaussian, Cauchy, and Exponential kernels. In the next paragraphs, we use the Gaussian kernel to demonstrate our method. 
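With the Gaussian kernel (given next in the text), the loss $-\log c_{p|q}$ reduces to a two-class softmax cross-entropy over squared distances; a numpy sketch (naming is ours):

```python
import numpy as np

def sare_gaussian(fq, fp, fn):
    """SARE loss -log c_{p|q} with the Gaussian kernel:
    c_{p|q} = exp(-d_p^2) / (exp(-d_p^2) + exp(-d_n^2)).
    Algebraically, -log c_{p|q} = log(1 + exp(d_p^2 - d_n^2)),
    computed here in a numerically stable log1p form."""
    dp2 = np.sum((fq - fp) ** 2)
    dn2 = np.sum((fq - fn) ** 2)
    return np.log1p(np.exp(dp2 - dn2))
```

When the positive and negative are equidistant from the query the loss is $\log 2$; it decays toward 0 as the negative moves away and grows without bound as the negative moves closer than the positive.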
Loss functions defined using the Cauchy and Exponential kernels are given in the Appendix. For the Gaussian kernel, we have: [$$\begin{aligned} c_{p|q} &= \frac{\exp\left ( -\left \| f_\theta(q)- f_\theta(p)\right \|^2 \right )}{\exp\left ( -\left \| f_\theta(q)- f_\theta(p)\right \|^2 \right )+\exp\left ( -\left \| f_\theta(q)- f_\theta(n)\right \|^2 \right )}. \label{Eq:Gaussian:cpq} \end{aligned}$$ ]{} In the feature embedding space, the probability that $q$ picks $n$ as its match is given by $c_{n|q} = 1 - c_{p|q}$. If the embedded feature vectors $f_\theta(q)$ and $f_\theta(p)$ are sufficiently near, and $f_\theta(q)$ and $f_\theta(n)$ are far enough under the $L_2$-metric, the conditional probability distributions $c_{\cdot|q}$ and $h_{\cdot|q}$ will be equal. Thus, our SARE objective aims to find an embedding function $f_\theta(\cdot)$ that drives the $L_2$ distance of the $f_\theta(q) \sim f_\theta(p)$ pair toward zero, and that of the $f_\theta(q) \sim f_\theta(n)$ pair toward infinity. Note that although the ratio-loss [@hoffer2015deep] looks similar to our Exponential kernel $\exp(-||x-y||)$ defined loss function, they are theoretically different. The building block of the ratio-loss is $\exp(||x-y||)$, which directly applies $\exp()$ to the distance $||x-y||$. This is problematic since it is not positive-definite (please refer to Propositions 3&4 of [@scholkopf2001kernel] or [@schoenberg1938metric]). Comparing the Three Losses {#sec::Gradients} ========================== In this section, we illustrate the connections between the above three loss functions. This is approached by deriving and comparing their gradients, which are key to the back-propagation stage in network training. Note that a gradient may be interpreted as the resultant force created by a set of springs between an image pair [@maaten2008visualizing]. For the gradient with respect to the positive image $p$, the spring pulls the $q\sim p$ pair together. 
For the gradient with respect to the negative image $n$, the spring pushes the $q\sim n$ pair apart. In Fig. \[fig:gradients\], we compare the magnitudes of the gradients with respect to $p$ and $n$ for the different objectives. The equations of the gradients with respect to $p$ and $n$ for the different objectives are given in Table \[tab::gradients\]. For each objective, the gradient with respect to $q$ is given by ${\partial L}/{\partial f_\theta(q)} =- {\partial L}/{\partial f_\theta(p)} - {\partial L}/{\partial f_\theta(n)}$.

                     [$ {\partial L} /\partial f_\theta(p) $]{}                                                                            [${\partial L} /\partial f_\theta(n)$]{}
  ------------------ --------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------
  Triplet ranking    $2\left(f_\theta(p)-f_\theta(q)\right)$                                                                                $2\left(f_\theta(q)-f_\theta(n)\right)$
  Contrastive        $f_\theta(p)-f_\theta(q)$                                                                                              $-\left ( 1-\tau/\left \| f_\theta(q)-f_\theta(n) \right \| \right )\left ( f_\theta(q)-f_\theta(n) \right )$
  Gaussian SARE      $2\left ( 1-c_{p|q} \right )\left(f_\theta(p)-f_\theta(q)\right)$                                                      $2\left ( 1-c_{p|q} \right )\left(f_\theta(q)-f_\theta(n)\right)$
  Cauchy SARE        $2\left ( 1-\hat{c}_{p|q} \right )\frac{f_\theta(p)-f_\theta(q)}{ 1+\left \| f_\theta(p)- f_\theta(q)\right \|^2 }$    $2\left ( 1-\hat{c}_{p|q} \right )\frac{f_\theta(q)-f_\theta(n)}{ 1+\left \| f_\theta(q)- f_\theta(n)\right \|^2 }$
  Exponential SARE   $\left ( 1-\bar{c}_{p|q} \right )\frac{f_\theta(p)-f_\theta(q)}{ \left \| f_\theta(p)- f_\theta(q)\right \| }$         $\left ( 1-\bar{c}_{p|q} \right )\frac{f_\theta(q)-f_\theta(n)}{ \left \| f_\theta(q)- f_\theta(n)\right \| }$

![image](dLdp_triplet.pdf){width="18.50000%"} ![image](dLdp_contra.pdf){width="19.50000%"} ![image](dLdp_sne.pdf){width="19.50000%"} ![image](dLdp_tsne.pdf){width="19.50000%"} ![image](dLdp_exp.pdf){width="19.50000%"} 
![image](dLdn_triplet.pdf){width="18.50000%"} ![image](dLdn_contra.pdf){width="19.50000%"} ![image](dLdn_sne.pdf){width="19.50000%"} ![image](dLdn_tsne.pdf){width="19.50000%"} ![image](dLdn_exp.pdf){width="19.50000%"} In the case of the triplet ranking loss, $\left \|{\partial L} /\partial f_\theta(p)\right \|$ and $\left \|{\partial L}/{\partial f_\theta(n)}\right \|$ increase linearly with the distances $\left \| f_\theta(q)- f_\theta(p)\right \|$ and $\left \| f_\theta(q)- f_\theta(n)\right \|$, respectively. The saturation regions in which the gradients equal zero correspond to triplets producing a zero loss (Eq. ). For triplets producing a non-zero loss, $\left \|{\partial L} /\partial f_\theta(p)\right \|$ is independent of $n$, and vice versa. Thus, the update of $f_\theta(p)$ disregards the current embedded position of $n$ and vice versa. For the contrastive loss, $\left \|{\partial L} /\partial f_\theta(p)\right \|$ is independent of $n$ and increases linearly with the distance $\left \| f_\theta(q)- f_\theta(p)\right \|$. $\left \|{\partial L}/{\partial f_\theta(n)}\right \|$ decreases linearly with the distance $\left \| f_\theta(q)- f_\theta(n)\right \|$. The area in which $\left \|{\partial L}/{\partial f_\theta(n)}\right \|$ equals zero corresponds to negative images with $\left \| f_\theta(q)-f_\theta(n) \right \| > \tau$. For all kernel-defined SAREs, $\left \|{\partial L} /\partial f_\theta(p)\right \|$ and $\left \|{\partial L} /\partial f_\theta(n)\right \|$ depend on both distances $\left \| f_\theta(q)- f_\theta(p)\right \|$ and $\left \| f_\theta(q)- f_\theta(n)\right \|$. This implicit dependence on the distances comes from the probability $c_{p|q}$ (Eq. ). Thus, the updates of $f_\theta(p)$ and $f_\theta(n)$ take into account the current embedded positions of the triplet images, which is beneficial given the possibly diverse feature distribution in the embedding space. 
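The Gaussian-SARE rows of Table \[tab::gradients\] can be verified numerically against finite differences (a sanity check of ours, not part of the paper):

```python
import numpy as np

def sare_gaussian(fq, fp, fn):
    dp2 = np.sum((fq - fp) ** 2)
    dn2 = np.sum((fq - fn) ** 2)
    return np.log1p(np.exp(dp2 - dn2))  # equals -log c_{p|q}

def analytic_grads(fq, fp, fn):
    """Gradients from the table: 2(1-c_{p|q})(f(p)-f(q)) and 2(1-c_{p|q})(f(q)-f(n))."""
    dp2 = np.sum((fq - fp) ** 2)
    dn2 = np.sum((fq - fn) ** 2)
    c_pq = np.exp(-dp2) / (np.exp(-dp2) + np.exp(-dn2))
    gp = 2 * (1 - c_pq) * (fp - fq)
    gn = 2 * (1 - c_pq) * (fq - fn)
    return gp, gn

def numeric_grad(f, x, eps=1e-6):
    """Central finite differences of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g
```

Running the check on random embeddings confirms the table entries, including the shared factor $(1-c_{p|q})$ that couples both updates to the current positions of all three images.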
The benefit of the kernel-defined SARE objectives can be better understood when combined with the hard-negative mining strategy, which is widely used in CNN training. The strategy returns a set of hard negative images (nearest negatives in the $L_2$ metric) for training. Note that both the triplet ranking loss and the contrastive loss rely on empirical parameters ($m,\tau$) to prune out negatives (the saturation regions). In contrast, our kernel-defined SARE objectives do not rely on such parameters: they automatically account for the current embedded positions. For example, a hard negative with $\left \| f_\theta(q)- f_\theta(p)\right \| > \left \| f_\theta(q)- f_\theta(n)\right \|$ (top-left triangle in the gradient figures) triggers a large force to pull the $q\sim p$ pair together while pushing the $q\sim n$ pair apart. A “semi-hard” [@schroff2015facenet] negative with $\left \| f_\theta(q)- f_\theta(p)\right \| < \left \| f_\theta(q)- f_\theta(n)\right \|$ (bottom-right triangle in the gradient figures) still triggers a force to pull the $q\sim p$ pair together while pushing the $q\sim n$ pair apart; however, this force decays with increasing $\left \| f_\theta(q)- f_\theta(n)\right \|$. Here, a large $\left \| f_\theta(q)- f_\theta(n)\right \|$ may correspond to well-trained samples or noise, and the gradient-decay ability has the potential benefit of reducing over-fitting. ![[ Comparison of the gradients with respect to $n$ for different objectives. $m = 0.1, \tau = 0.7$. ]{}[]{data-label="fig:gradients_compare"}](dLdn_compare.pdf){width="35.00000%"} To better understand the gradient-decay ability of the kernel-defined SARE objectives, we fix $\left \| f_\theta(q)- f_\theta(p)\right \| = \sqrt{2}$, and compare $\left \|{\partial L} /\partial f_\theta(n)\right \|$ for all objectives in Fig. \[fig:gradients\_compare\]. 
Here, $\left \| f_\theta(q)- f_\theta(p)\right \| = \sqrt{2}$ means that for uniformly distributed feature embeddings, if we randomly sample a $q\sim p$ pair, we are likely to obtain samples that are $\sqrt{2}$ apart [@manmatha2017sampling]. Uniformly distributed feature embeddings correspond to an initial untrained/un-fine-tuned CNN. For the triplet ranking loss, Gaussian SARE and Cauchy SARE, $\left \|{\partial L} /\partial f_\theta(n)\right \|$ increases with $\left \| f_\theta(q)- f_\theta(n)\right \|$ when the latter is small. In contrast to the gradual decay of the SAREs, the triplet ranking loss abruptly “closes” the force once the triplet produces a zero loss (Eq. ). For the contrastive loss and Exponential SARE, $\left \|{\partial L} /\partial f_\theta(n)\right \|$ decreases with $\left \| f_\theta(q)- f_\theta(n)\right \|$. Again, the contrastive loss “closes” the force once the negative image produces a zero loss. Handling Multiple Negatives {#sec::SNE_extension} =========================== In this section, we give two methods to handle multiple negative images in the CNN training stage. Equation defines a SARE loss on a triplet and aims to shorten the embedded distance between the query and positive images while enlarging the distance between the query and negative images. Usually, in the task of IBL, the number of positive images is very small, since they should depict the same landmarks as the query image, while the number of negative images is very large, since images from different places are negative. At the same time, the time-consuming hard-negative mining process returns multiple negative images for each query image [@arandjelovic2016netvlad; @kim2017crn]. There are two ways to handle these negative images: one is to treat them independently and the other is to handle them jointly; both strategies are illustrated in Fig. \[fig:multi\_negs\]. 
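The two strategies can be sketched with the Gaussian-kernel loss (naming is ours; the joint form is the softmax over all $N$ negatives given next in the text):

```python
import numpy as np

def sare_independent(fq, fp, fns):
    """Average of N single-negative SARE losses (negatives treated independently)."""
    dp2 = np.sum((fq - fp) ** 2)
    losses = [np.log1p(np.exp(dp2 - np.sum((fq - fn) ** 2))) for fn in fns]
    return float(np.mean(losses))

def sare_joint(fq, fp, fns):
    """-log c*_{p|q}: one softmax in which the positive competes with all N negatives.
    -log [e^{-dp2} / (e^{-dp2} + sum_n e^{-dn2})] = log(1 + sum_n e^{dp2 - dn2})."""
    dp2 = np.sum((fq - fp) ** 2)
    dn2 = np.array([np.sum((fq - fn) ** 2) for fn in fns])
    return float(np.log1p(np.sum(np.exp(dp2 - dn2))))
```

With a single negative the two strategies coincide; with several, the joint form makes the positive compete against all negatives in one normalization, while the independent form averages pairwise competitions.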
![[]{data-label="fig:multi_negs"}](multi_negs.png){width="35.00000%"} Given $N$ negative images, treating them independently results in $N$ triplets, which are substituted into Eq.  to calculate the loss for training the CNN. Each triplet focuses on the competition between two places (positive *vs.* negative). The repulsive and attractive forces from multiple place pairs are averaged to balance the embeddings. Jointly handling multiple negatives aims to balance the distance to the positive against multiple negatives at once. In our formulation, we can easily construct an objective function that pushes away $N$ negative images simultaneously. Specifically, the match-ability priors for all the negative images are defined as zero, $h_{n|q} = 0, n = 1,2,...,N$. The Kullback-Leibler divergence loss over multiple negatives is given by: $$\label{multiNegative_loss0} L_\theta\left (q,p,n \right ) =-\log\left ( c^{\ast}_{p|q} \right ),$$ where for the Gaussian kernel SARE, $c^{\ast}_{p|q}$ is defined as: $$\scriptsize c^{\ast}_{p|q} = \frac{\exp\left ( -\left \| f_\theta(q)- f_\theta(p)\right \|^2 \right )}{\exp\left ( -\left \| f_\theta(q)- f_\theta(p)\right \|^2 \right )+ \sum_{n=1}^{N}\exp\left ( -\left \| f_\theta(q)- f_\theta(n)\right \|^2 \right )}.$$ The gradients of Eq.  can be easily computed to train the CNN. Experiments {#sec::Experiments} =========== This section discusses the performance of the SARE objectives for training CNNs. We show that with SARE, we can improve the IBL performance on various standard place recognition and image retrieval datasets. Implementation Details {#sec::implentation} ---------------------- #### Datasets. Google Street View Time Machine datasets have been widely used in IBL [@torii201524; @arandjelovic2016netvlad; @kim2017crn]. They provide multiple street-level panoramic images taken at different times at close-by spatial locations on the map. The panoramic images are projected into multiple perspective images, yielding the training and testing datasets. 
Each image is associated with a GPS-tag giving its approximate geographic location, which can be used to identify nearby images, though not necessarily ones depicting the same landmark. We follow [@arandjelovic2016netvlad; @Vo_2017_ICCV] to identify the positive and negative images for each query image. For each query image, the positive image is the closest neighbor in the feature embedding space among images at nearby geo-positions, and the negatives are far-away images. The above positive-negative mining method is very efficient, although some outliers may exist in the resultant positive/negative images. If accurate positives and negatives are needed, pairwise image matching with geometric validation [@kim2017crn] or SfM reconstruction [@radenovic2016cnn] can be used; however, they are time-consuming. The Pitts30k-training dataset [@arandjelovic2016netvlad] is used to train the CNN, which has been shown to yield the best model [@arandjelovic2016netvlad]. To test our method for IBL, the Pitts250k-test [@arandjelovic2016netvlad], TokyoTM-val [@arandjelovic2016netvlad], 24/7 Tokyo [@torii201524] and Sf-0 [@chen2011city; @sattler2017large] datasets are used. To show the generalization ability of our method for image retrieval, the Oxford 5k [@philbin2007object], Paris 6k [@philbin2008lost], and Holidays [@jegou2008hamming] datasets are used. #### CNN Architecture. We use the widely-used compact feature vector extraction method NetVLAD [@arandjelovic2016netvlad; @Noh_2017_ICCV; @kim2017crn; @sattler2017large; @sattler2017benchmarking] to demonstrate the effectiveness of our method. Our CNN architecture is given in Fig. \[fig:pipeline\]. #### Evaluation Metric. For the place recognition datasets Pitts250k-test [@arandjelovic2016netvlad], TokyoTM-val [@arandjelovic2016netvlad], 24/7 Tokyo [@torii201524] and Sf-0 [@chen2011city], we use the Precision-Recall curve to evaluate performance. 
Specifically, for Pitts250k-test [@arandjelovic2016netvlad], TokyoTM-val [@arandjelovic2016netvlad], and 24/7 Tokyo [@torii201524], the query image is deemed correctly localized if at least one of the top $N$ retrieved database images is within $d = 25$ meters of the ground truth position of the query image. The percentage of correctly recognized queries (Recall) is then plotted for different values of $N$. For the large-scale Sf-0 [@chen2011city] dataset, the query image is deemed correctly localized if at least one of the top $N$ retrieved database images shares the same building IDs (manually labeled by [@chen2011city]). For the image-retrieval datasets Oxford 5k [@philbin2007object], Paris 6k [@philbin2008lost], and Holidays [@jegou2008hamming], the mean Average Precision (mAP) is reported. #### Training Details. We use the training method of [@arandjelovic2016netvlad] to compare the different objectives. For the state-of-the-art triplet ranking loss, the off-the-shelf implementation [@arandjelovic2016netvlad] is used. For the contrastive loss [@radenovic2016cnn], triplet images are partitioned into $q \sim p$ and $q \sim n$ pairs to calculate the loss (Eq. ) and gradients. For our method treating multiple negatives independently (*Our-Ind.*), we first calculate the probability $c_{p|q}$ (Eq. ). $c_{p|q}$ is then used to calculate the gradients (Table \[tab::gradients\]) with respect to the images. The gradients are back-propagated to train the CNN. For our method handling multiple negatives jointly (*Our-Joint*), we use Eq. to train the CNN. Our implementation is based on MatConvNet [@vedaldi15matconvnet]. ![Comparison of recalls for different kernel-defined SARE objectives. From left to right and top to bottom: Pitts250k-test, TokyoTM-val, 24/7 Tokyo and Sf-0. (Best viewed in color on screen) []{data-label="fig:Kernels"}](recall_curve_pitts250k_kernels.pdf "fig:"){width="23.50000%"} ![Comparison of recalls for different kernel-defined SARE objectives. 
From left to right and top to down: Pitts250k-test, TokyoTM-val, 24/7 Tokyo and Sf-0. (Best viewed in color on screen) []{data-label="fig:Kernels"}](recall_curve_TokyoTimeMachine_kernels.pdf "fig:"){width="23.50000%"} ![Comparison of recalls for different kernel defined SARE-objectives. From left to right and top to down: Pitts250k-test, TokyoTM-val, 24/7 Tokyo and Sf-0. (Best viewed in color on screen) []{data-label="fig:Kernels"}](recall_curve_Tokyo724_kernels.pdf "fig:"){width="23.50000%"} ![Comparison of recalls for different kernel defined SARE-objectives. From left to right and top to down: Pitts250k-test, TokyoTM-val, 24/7 Tokyo and Sf-0. (Best viewed in color on screen) []{data-label="fig:Kernels"}](recall_curve_sf0_kernels.pdf "fig:"){width="23.50000%"} Kernels for SARE ---------------- To assess the impact of kernels on fitting the pairwise $L_2$-metric feature vector distances, we compare CNNs trained by Gaussian, Cauchy and Exponential kernel defined SARE-objectives, respectively. All the hyper-parameters are the same for different objectives, and the results are given in Fig. \[fig:Kernels\]. CNN trained by Gaussian kernel defined SARE generally outperforms CNNs trained by others. We find that handling multiple negatives jointly (*Gaussian-Joint*) leads to better training and validation performances than handling multiple negatives independently (*Gaussian-Ind.*). However, when testing the trained CNNs on Pitts250k-test, TokyoTM-val, and 24/7 Tokyo datasets, the recall performances are similar. The reason may come from the negative images sampling strategy. Since the negative images are dropped randomly from far-away places from the query image using GPS-tags, they potentially are already well-balanced in the whole dataset, thus the repulsion and attractive forces from multiple place pairs are similar, leading to a similar performance of the two methods. *Gaussian-Ind.* behaves surprisingly well on the large-scale Sf-0 dataset. 
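For reference, the three kernel-defined joint SARE objectives compared above can be written out in a few lines. The following is a minimal NumPy sketch matching the loss forms given in the appendix; the function name and interface are ours, not the paper's MatConvNet implementation:

```python
import numpy as np

def sare_joint_loss(q, p, negs, kernel="gaussian"):
    """Joint SARE loss L = -log c*_{p|q} for one (query, positive, negatives) tuple.

    q, p: (D,) embeddings; negs: (N, D). Kernel similarities follow the appendix:
      gaussian:    exp(-||q-x||^2)
      cauchy:      1 / (1 + ||q-x||^2)
      exponential: exp(-||q-x||)
    """
    d2_p = np.sum((q - p) ** 2)
    d2_n = np.sum((q - negs) ** 2, axis=1)
    if kernel == "gaussian":
        s_p, s_n = np.exp(-d2_p), np.exp(-d2_n)
    elif kernel == "cauchy":
        s_p, s_n = 1.0 / (1.0 + d2_p), 1.0 / (1.0 + d2_n)
    elif kernel == "exponential":
        s_p, s_n = np.exp(-np.sqrt(d2_p)), np.exp(-np.sqrt(d2_n))
    else:
        raise ValueError(kernel)
    # probability that p (rather than one of the negatives) is the match of q
    c_pq = s_p / (s_p + s_n.sum())
    return -np.log(c_pq)
```

For each kernel, `-log c*_{p|q}` expands exactly to the corresponding `log(1 + sum ...)` form in the appendix, so the loss shrinks as the positive moves closer to the query relative to the negatives.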
Comparison with state-of-the-art
--------------------------------

We use the Gaussian-kernel SARE objectives to train CNNs, and compare our method with the state-of-the-art NetVLAD [@arandjelovic2016netvlad] and NetVLAD with Contextual Feature Reweighting (CRN) [@kim2017crn]. The complete *Recall@N* performance of the different methods is given in Table \[Comparison\_stateofart\].

| Method | Pitts250k-test (r@1 / r@5 / r@10) | TokyoTM-val (r@1 / r@5 / r@10) | 24/7 Tokyo (r@1 / r@5 / r@10) | Sf-0 (r@1 / r@5 / r@10) |
|---|---|---|---|---|
| Our-Ind. | **88.97** / **95.50** / **96.79** | 94.49 / 96.73 / 97.30 | 79.68 / 86.67 / 90.48 | **80.60** / **86.70** / **89.01** |
| Our-Joint | 88.43 / 95.06 / 96.58 | **94.71** / **96.87** / 97.51 | **80.63** / **87.30** / **90.79** | 77.75 / 85.07 / 87.52 |
| CRN [@kim2017crn] | 85.50 / 93.50 / 95.50 | - | 75.20 / 83.80 / 87.30 | - |
| NetVLAD [@arandjelovic2016netvlad] | 85.95 / 93.20 / 95.13 | 93.85 / 96.77 / **97.59** | 73.33 / 82.86 / 86.03 | 75.58 / 83.31 / 85.21 |

: Comparison with state-of-the-art methods (Recall@N, %).[]{data-label="Comparison_stateofart"}

CNNs trained with the Gaussian-SARE objectives consistently outperform the state-of-the-art CNNs by a large margin on almost all benchmarks. For example, on the challenging 24/7 Tokyo dataset, *Our-Ind.* achieves a recall@1 of 79.68% compared to the second-best 75.20% obtained by CRN [@kim2017crn], an absolute improvement of 4.48% in recall@1. On the large-scale, challenging Sf-0 dataset, *Our-Ind.* achieves a recall@1 of 80.60% compared to the 75.58% obtained by NetVLAD [@arandjelovic2016netvlad], an absolute improvement of 5.02%. Note that we do not use the Contextual Reweighting layer to capture the “context” within images, which has been shown to be more effective than the original NetVLAD structure [@kim2017crn]. Similar improvements can be observed on the other datasets.
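The Recall@N numbers used throughout these comparisons follow the criterion in the evaluation-metric section: a query counts as correctly localized if at least one of its top-$N$ retrieved images is a true positive (within 25 m, or sharing a building ID for Sf-0). A small self-contained sketch of this metric (function and variable names are ours):

```python
def recall_at_n(ranked_db_ids, query_positives, n_values=(1, 5, 10)):
    """Recall@N: fraction of queries with at least one correct image in the top N.

    ranked_db_ids: per-query lists of database ids, sorted by descending similarity.
    query_positives: per-query sets of "correct" database ids (e.g. those within
                     25 m of the query, or sharing its building ID for Sf-0).
    """
    recalls = {}
    for n in n_values:
        hits = sum(1 for ranked, pos in zip(ranked_db_ids, query_positives)
                   if pos.intersection(ranked[:n]))  # any correct id in the top n?
        recalls[n] = hits / len(query_positives)
    return recalls
```

Plotting `recalls[n]` against `n` yields the recall curves shown in the figures.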
This confirms the important premise of this work: formulating the IBL problem in a competitive learning framework and using SARE to supervise the CNN training process learns discriminative yet compact image representations for IBL. We visualize 2D feature embeddings of query images from the 24/7 Tokyo and Sf-0 datasets. Images taken from the same place are mostly embedded to nearby 2D positions despite the significant variations in viewpoint, pose, and configuration.

Qualitative Evaluation
----------------------

To visualize the areas of the input image that are most important for localization, we adopt the method of [@gruen16featurevis] to obtain a heat map showing the importance of different areas of the input image. The results are given in Fig. \[fig:imageRetrieval\]. As can be seen, our method focuses on regions that are useful for image geo-localization while emphasizing the distinctive details on buildings. In contrast, NetVLAD [@arandjelovic2016netvlad] emphasizes local features rather than the overall building style.
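Such importance heat maps can be produced with a generic occlusion-sensitivity procedure: gray out one patch at a time, re-embed the image, and record how much the descriptor moves. The sketch below illustrates this idea and is not necessarily the exact method of [@gruen16featurevis]; `embed_fn` stands for a descriptor-extraction forward pass (e.g. through NetVLAD):

```python
import numpy as np

def occlusion_heatmap(image, embed_fn, reference_desc, patch=32, stride=16):
    """Occlusion-sensitivity map: large values mark regions whose removal
    moves the descriptor furthest from `reference_desc`, i.e. regions the
    embedding relies on. `image` is an (H, W[, C]) array; `embed_fn` maps an
    image to a descriptor vector."""
    H, W = image.shape[:2]
    base = np.linalg.norm(embed_fn(image) - reference_desc)
    heat = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # gray out a patch
            heat[i, j] = np.linalg.norm(embed_fn(occluded) - reference_desc) - base
    return heat
```

Upsampling `heat` to image resolution and overlaying it gives heat maps of the kind shown in the figure.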
![image](images/Query_1.jpg){width="\textwidth"} ![image](images/Our-heatmap_1.jpg){width="\textwidth"} ![image](images/NetVLAD-heatmap_1.jpg){width="\textwidth"} ![image](images/Our-top1_1.jpg){width="\textwidth"} ![image](images/NetVLAD-top1_1.jpg){width="\textwidth"}

![image](images/Query_4.jpg){width="\textwidth"} ![image](images/Our-heatmap_4.jpg){width="\textwidth"} ![image](images/NetVLAD-heatmap_4.jpg){width="\textwidth"} ![image](images/Our-top1_4.jpg){width="\textwidth"} ![image](images/NetVLAD-top1_4.jpg){width="\textwidth"}

Generalization on Image Retrieval Datasets
------------------------------------------

To show the generalization ability of our method, we compare the compact image representations trained by the different methods on standard image retrieval benchmarks (Oxford 5k [@philbin2007object], Paris 6k [@philbin2008lost], and Holidays [@jegou2008hamming]) without any fine-tuning. The results are given in Table \[tab:retrieval\_netvlad\]. Comparing the CNNs trained by our methods with the off-the-shelf NetVLAD [@arandjelovic2016netvlad] and CRN [@kim2017crn], in most cases the mAP of our methods outperforms theirs. Since our CNNs are trained on a city-scale, building-oriented dataset from urban areas, they lack the ability to understand natural landmarks (e.g., water, boats, cars), resulting in lower performance than on the city-scale building-oriented datasets. Training the CNN on images similar to those encountered at test time can increase retrieval performance [@babenko2014neural]. However, our purpose here is to demonstrate the generalization ability of SARE-trained CNNs, which has been confirmed.

| Method | Oxford 5k (full) | Oxford 5k (crop) | Paris 6k (full) | Paris 6k (crop) | Holidays |
|---|---|---|---|---|---|
| Our-Ind. | **71.66** | **75.51** | **82.03** | 81.07 | 80.71 |
| Our-Joint | 70.26 | 73.33 | 81.32 | **81.39** | **84.33** |
| NetVLAD [@arandjelovic2016netvlad] | 69.09 | 71.62 | 78.53 | 79.67 | 83.00 |
| CRN [@kim2017crn] | 69.20 | - | - | - | - |

: Retrieval performance of CNNs on image retrieval benchmarks. No spatial re-ranking or query expansion is performed. Accuracy is measured by the mean Average Precision (mAP).[]{data-label="tab:retrieval_netvlad"}

Comparison with Metric-learning Methods
---------------------------------------

Although deep metric-learning methods have shown their effectiveness in classification and fine-grained recognition tasks, their abilities in the IBL task are unknown. As another contribution of this paper, we evaluate six current state-of-the-art deep metric-learning methods on IBL, and compare our method with: (1) the contrastive loss used by [@radenovic2016cnn]; (2) lifted structure embedding [@oh2016deep]; (3) the N-pair loss [@sohn2016improved]; (4) the N-pair angular loss [@Wang_2017_ICCV]; (5) the Geo-classification loss [@Vo_2017_ICCV]; and (6) the Ratio loss [@hoffer2015deep]. Fig. \[fig:deepmetric\] shows the quantitative comparison between our method and these deep metric-learning methods. Our theoretically-grounded method outperforms the contrastive loss [@radenovic2016cnn] and the Geo-classification loss [@Vo_2017_ICCV], while remaining comparable with the other state-of-the-art methods.

Conclusion
==========

This paper has addressed the problem of learning discriminative image representations specifically tailored for the task of Image-Based Localization (IBL). We have proposed a new Stochastic Attraction and Repulsion Embedding (SARE) objective for this task. SARE directly enforces the “attraction" and “repulsion" constraints on intra-place and inter-place feature embeddings, respectively.
The “attraction" and “repulsion" constraints are formulated as a similarity-based binary classification task. We have shown that SARE improves IBL performance, outperforming other state-of-the-art methods.

Acknowledgement {#acknowledgement .unnumbered}
===============

[ This research was supported in part by the Australian Research Council (ARC) grant CE140100016, the Australian Centre for Robotic Vision, and Natural Science Foundation of China grants (61871325, 61420106007, 61671387, 61603303). Hongdong Li is also funded in part by ARC-DP (190102261) and ARC-LE (190100080). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU. We thank all anonymous reviewers for their valuable comments.]{}

Appendix {#appendix .unnumbered}
========

In the appendix, we describe the gradients of the loss functions which jointly handle multiple negative images (Sec. \[sec::handle\_multi\_negs\]). Our implementation details are given in Sec. \[sec::impleDetails\]. Additional experimental results are given in Sec. \[sec::Additional\_Results\].

Handling Multiple Negatives {#sec::handle_multi_negs}
===========================

Given a query image $q$, a positive image $p$, and multiple negative images $\{n\}, n = 1,2,\ldots,N$, the Kullback-Leibler divergence loss over the multiple negatives is given by: $$\label{multiNegative_loss} L_\theta\left (q,p,n \right ) =-\log\left ( c^{\ast}_{p|q} \right ).$$ For Gaussian kernel SARE, $c^{\ast}_{p|q}$ is defined as: $$\label{cpq} \thinmuskip = 1mu \medmuskip = 1mu \thickmuskip = 1mu \footnotesize c^{\ast}_{p|q} = \frac{\exp\left ( -\left \| f_\theta(q)- f_\theta(p)\right \|^2 \right )}{\exp\left ( -\left \| f_\theta(q)- f_\theta(p)\right \|^2 \right )+ \sum_{n=1}^{N}\exp\left ( -\left \| f_\theta(q)- f_\theta(n)\right \|^2 \right )},$$ where $f_\theta(q),f_\theta(p),f_\theta(n)$ are the feature embeddings of the query, positive and negative images, respectively. Substituting Eq.  into Eq.  
gives: $$\begin{gathered} \label{multiNegative_loss_gauss} \thinmuskip = 1mu \medmuskip = 1mu \thickmuskip = 1mu \footnotesize L_\theta\left (q,p,n \right ) = \\ \log\left ( 1+ \sum_{n=1}^N \exp ({ \| f_\theta(q)- f_\theta(p) \|^2 - \| f_\theta(q)- f_\theta(n) \|^2 } ) \right )\end{gathered}$$ Denoting $ 1+ \sum_{n=1}^N\exp ({\left \| f_\theta(q)- f_\theta(p)\right \|^2 -\left \| f_\theta(q)- f_\theta(n)\right \|^2 })$ by $\eta$, the gradients of Eq.  with respect to the query, positive and negative images are given by: $$\begin{aligned} \frac{\partial L}{\partial f_\theta(p)} &=\sum_{n=1}^{N}-\frac{2}{\eta}\exp\left ({\left \| f_\theta(q)- f_\theta(p)\right \|^2 -\left \| f_\theta(q)- f_\theta(n)\right \|^2 }\right ) \nonumber \\ & \left [f_\theta(q)- f_\theta(p)\right ], \label{Eq:gauss:dldp} \\ \frac{\partial L}{\partial f_\theta(n)} &=\frac{2}{\eta}\exp\left ({\left \| f_\theta(q)- f_\theta(p)\right \|^2 -\left \| f_\theta(q)- f_\theta(n)\right \|^2 }\right )\nonumber \\ &\left [f_\theta(q)- f_\theta(n)\right ],\label{Eq:gauss:dldn} \\ \frac{\partial L}{\partial f_\theta(q)} &=- \frac{\partial L}{\partial f_\theta(p)} - \sum_{n=1}^{N}\frac{\partial L}{\partial f_\theta(n)}. \label{Eq:gauss:dldq}\end{aligned}$$ Similarly, for the Cauchy kernel, the loss function is given by: $$\label{multiNegative_loss_t} L_\theta\left (q,p,n \right ) =\log\left ( 1+ \sum_{n=1}^N\frac{1+\left \| f_\theta(q)- f_\theta(p)\right \|^2}{1+\left \| f_\theta(q)- f_\theta(n)\right \|^2} \right ).$$ Denoting $ 1+ \sum_{n=1}^N\frac{1+ \| f_\theta(q)- f_\theta(p) \|^2}{1+ \| f_\theta(q)- f_\theta(n) \|^2} $ by $\eta$, the gradients of Eq.  
with respect to the query, positive and negative images are given by: $$\begin{aligned} \frac{\partial L}{\partial f_\theta(p)} &=\sum_{n=1}^{N}\frac{-2}{\eta\left ( 1+\left \| f_\theta(q)-f_\theta(n) \right \|^2 \right )} \left [f_\theta(q)-f_\theta(p) \right ], \label{Eq:t:dldp} \\ \frac{\partial L}{\partial f_\theta(n)} &=\frac{2\left ( 1+ \left \| f_\theta(q)-f_\theta(p) \right \|^2 \right )}{\eta\left ( 1+\left \| f_\theta(q)-f_\theta(n) \right \|^2 \right )^2} \left [f_\theta(q)-f_\theta(n) \right ], \label{Eq:t:dldn} \\ \frac{\partial L}{\partial f_\theta(q)} &=- \frac{\partial L}{\partial f_\theta(p)} - \sum_{n=1}^{N}\frac{\partial L}{\partial f_\theta(n)}. \label{Eq:t:dldq}\end{aligned}$$ For the Exponential kernel, the loss function is given by: $$\label{multiNegative_loss_exp} \footnotesize L_\theta\left (q,p,n \right ) =\log\left ( 1+ \sum_{n=1}^N \exp\left ({\left \| f_\theta(q)- f_\theta(p)\right \| -\left \| f_\theta(q)- f_\theta(n)\right \| }\right ) \right ).$$ Denoting $1+\sum_{n=1}^N\exp ({ \| f_\theta(q)- f_\theta(p) \| - \| f_\theta(q)- f_\theta(n)\|})$ by $\eta$, the gradients of Eq.  with respect to the query, positive and negative images are given by: $$\begin{aligned} \frac{\partial L}{\partial f_\theta(p)} &=\sum_{n=1}^{N}-\frac{\exp\left ({\left \| f_\theta(q)- f_\theta(p)\right \| -\left \| f_\theta(q)- f_\theta(n)\right \| }\right )}{\eta\left \| f_\theta(q)- f_\theta(p)\right \|} \nonumber \\ & \left [f_\theta(q)- f_\theta(p)\right ], \label{Eq:exp:dldp} \\ \frac{\partial L}{\partial f_\theta(n)} &=\frac{\exp\left ({\left \| f_\theta(q)- f_\theta(p)\right \| -\left \| f_\theta(q)- f_\theta(n)\right \| }\right )}{\eta\left \| f_\theta(q)- f_\theta(n)\right \|} \nonumber \\ &\left [f_\theta(q)- f_\theta(n)\right ], \label{Eq:exp:dldn} \\ \frac{\partial L}{\partial f_\theta(q)} &=- \frac{\partial L}{\partial f_\theta(p)} - \sum_{n=1}^{N}\frac{\partial L}{\partial f_\theta(n)}. 
\label{Eq:exp:dldq}\end{aligned}$$ The gradients are back-propagated to train the CNN.

Implementation Details {#sec::impleDetails}
======================

We exactly follow the training method of [@arandjelovic2016netvlad], without fine-tuning any hyper-parameters. The VGG-16 [@simonyan2014very] net is cropped at the last convolutional layer (conv5), before ReLU. The learning rates for the Pitts30K-train and Pitts250K-train datasets are set to 0.001 and 0.0001, respectively, and are halved every 5 epochs; we use momentum 0.9, weight decay 0.001, and a batch size of 4 tuples. Each tuple consists of one query image, one positive image, and ten negative images. The CNN is trained for at most 30 epochs, but convergence usually occurs much faster (typically in fewer than 5 epochs). The network that yields the best recall@5 on the validation set is used for testing.

#### **Triplet ranking loss** {#triplet-ranking-loss}

For the triplet ranking loss [@arandjelovic2016netvlad], we set the margin $m = 0.1$, and triplet images producing a non-zero loss are used in the gradient computation, the same as [@arandjelovic2016netvlad].

#### **Contrastive loss** {#contrastive-loss}

For the contrastive loss [@radenovic2016cnn], we set the margin $\tau = 0.7$, and negative images producing a non-zero loss are used in the gradient computation. Note that positive images are always used in training since they are not pruned out.

#### **Geographic classification loss**

For the geographic classification method [@Vo_2017_ICCV], we use the Pitts250k-train dataset for training. We first partition the 2D geographic space into square cells, each of size $25m$. The cell size is chosen to match the evaluation metric, so that the correctly classified images are also the correctly localized images according to our evaluation metric. We remove the Geo-classes which do not contain any images, resulting in $1637$ Geo-classes.
We append a fully connected layer (randomly initialized, with weights at $0.01\times randn$) and a Softmax-log-loss layer after the NetVLAD pooling layer to predict which class the image belongs to.

#### **SARE loss**

Our two methods (*Our-Ind.* and *Our-Joint*) differ only in the loss function and gradient computation: *Our-Ind.* treats multiple negative images independently, while *Our-Joint* treats them jointly. For each method, the corresponding gradients are back-propagated to train the CNN.

#### **Triplet angular loss**

For the triplet angular loss [@Wang_2017_ICCV], we use the N-pair angular loss function (Eq. (8) in their paper) with $\alpha = 45^{\circ}$, as it achieves the best performance on the Stanford car dataset.

#### **N-pair loss**

For the N-pair loss [@sohn2016improved], we use the N-pair loss function (Eq. (3) in their paper).

#### **Lifted structured loss**

For the lifted structured loss [@oh2016deep], we use the smooth loss function (Eq. (4) in their paper). Note that training images producing a zero loss ($\tilde{J}_{i,j} < 0$) are pruned out.

#### **Ratio loss**

For the Ratio loss [@hoffer2015deep], we use the MSE loss function since it achieves the best performance in their paper.

  Dataset           \#database images   \#query images
  ----------------- ------------------- ----------------
  Pitts250k-train   91,464              7,824
  Pitts250k-val     78,648              7,608
  Pitts250k-test    83,952              8,280
  Pitts30k-train    10,000              7,416
  Pitts30k-val      10,000              7,608
  Pitts30k-test     10,000              6,816
  TokyoTM-val       49,056              7,186
  Tokyo 24/7        75,984              315
  Sf-0              610,773             803
  Oxford 5k         5,063               55
  Paris 6k          6,412               220
  Holidays          991                 500

  : Datasets used in experiments. The Pitts250k-train dataset is only used to train the Geographic classification CNN [@Vo_2017_ICCV]. For all the other CNNs, the Pitts30k-train dataset is used to enable fast training. 
[]{data-label="tab:datasets"}

Additional Results {#sec::Additional_Results}
==================

#### **Dataset.**

Table \[tab:datasets\] gives the details of the datasets used in our experiments.

#### **Visualization of feature embeddings.**

Fig. \[fig:embedding\_247Tokyo\_query\] and Fig. \[fig:embedding\_sf0\_query\] visualize the feature embeddings of the 24/7 Tokyo-query and Sf-0-query datasets computed by our method (*Our-Ind.*) in 2-D using t-SNE [@maaten2008visualizing]. Images are displayed exactly at their embedded locations. Note that images taken from the same place are mostly embedded to nearby 2D positions although they differ in lighting and perspective.

| Method | Pitts250k-test (r@1 / r@5 / r@10) | TokyoTM-val (r@1 / r@5 / r@10) | 24/7 Tokyo (r@1 / r@5 / r@10) | Sf-0 (r@1 / r@5 / r@10) |
|---|---|---|---|---|
| Our-Ind. | **88.97** / **95.50** / **96.79** | 94.49 / 96.73 / 97.30 | 79.68 / 86.67 / 90.48 | **80.60** / **86.70** / **89.01** |
| Our-Joint | 88.43 / 95.06 / 96.58 | 94.71 / 96.87 / **97.51** | 80.63 / 87.30 / 90.79 | 77.75 / 85.07 / 87.52 |
| Contrastive [@radenovic2016cnn] | 86.33 / 94.09 / 95.88 | 93.39 / 96.09 / 96.98 | 75.87 / 86.35 / 88.89 | 74.63 / 82.23 / 84.53 |
| N-pair [@sohn2016improved] | 87.56 / 94.57 / 96.21 | 94.42 / 96.73 / 97.41 | 80.00 / **89.52** / **91.11** | 76.66 / 83.85 / 87.11 |
| Angular [@Wang_2017_ICCV] | 88.60 / 94.86 / 96.44 | **94.84** / 96.83 / 97.45 | **80.95** / 87.62 / 90.16 | 79.51 / 86.57 / 88.06 |
| Liftstruct [@oh2016deep] | 87.40 / 94.52 / 96.28 | 94.48 / **96.90** / 97.47 | 77.14 / 86.03 / 89.21 | 78.15 / 84.67 / 87.11 |
| Geo-Classification [@Vo_2017_ICCV] | 83.19 / 92.67 / 94.59 | 93.54 / 96.80 / 97.50 | 71.43 / 82.22 / 85.71 | 67.84 / 78.15 / 81.41 |
| Ratio [@hoffer2015deep] | 87.28 / 94.25 / 96.07 | 94.24 / 96.84 / 97.41 | 80.32 / 87.30 / 88.89 | 76.80 / 85.62 / 87.38 |

: Comparison with metric-learning methods (Recall@N, %).[]{data-label="tab:metric-learning"}

![image](embedding_247_query.pdf){width="99.00000%" height="\textwidth"}

![image](embedding_sf0_query.pdf){width="99.00000%" height="\textwidth"}

#### **Metric learning methods**

Table \[tab:metric-learning\] gives the complete *Recall@N* performance of the different methods. Our method outperforms the contrastive loss [@radenovic2016cnn] and the Geo-classification loss [@Vo_2017_ICCV], while remaining comparable with the other state-of-the-art metric-learning methods.

#### **Image retrieval for varying dimensions.**

Table \[tab:retrieval\_netvlad\_full\] compares the image retrieval performance for different output dimensions.

| Method | Dim. | Oxford 5k (full) | Oxford 5k (crop) | Paris 6k (full) | Paris 6k (crop) | Holidays |
|---|---|---|---|---|---|---|
| Our-Ind. | 4096 | **71.66** | **75.51** | **82.03** | 81.07 | 80.71 |
| Our-Joint | 4096 | 70.26 | 73.33 | 81.32 | **81.39** | **84.33** |
| NetVLAD [@arandjelovic2016netvlad] | 4096 | 69.09 | 71.62 | 78.53 | 79.67 | 83.00 |
| CRN [@kim2017crn] | 4096 | 69.20 | - | - | - | - |
| Our-Ind. | 2048 | **71.11** | **73.93** | **80.90** | 79.91 | 79.09 |
| Our-Joint | 2048 | 69.82 | 72.37 | 80.48 | **80.49** | **83.17** |
| NetVLAD [@arandjelovic2016netvlad] | 2048 | 67.70 | 70.84 | 77.01 | 78.29 | 82.80 |
| CRN [@kim2017crn] | 2048 | 68.30 | - | - | - | - |
| Our-Ind. | 1024 | **70.31** | **72.20** | **79.29** | **78.54** | 78.76 |
| Our-Joint | 1024 | 68.46 | 70.72 | 78.49 | 78.47 | **83.15** |
| NetVLAD [@arandjelovic2016netvlad] | 1024 | 66.89 | 69.15 | 75.73 | 76.50 | 82.06 |
| CRN [@kim2017crn] | 1024 | 66.70 | - | - | - | - |
| Our-Ind. | 512 | **68.96** | **70.59** | **77.36** | 76.44 | 77.65 |
| Our-Joint | 512 | 67.17 | 69.19 | 76.80 | **77.20** | **81.83** |
| NetVLAD [@arandjelovic2016netvlad] | 512 | 65.56 | 67.56 | 73.44 | 74.91 | 81.43 |
| CRN [@kim2017crn] | 512 | 64.50 | - | - | - | - |
| Our-Ind. | 256 | **65.85** | 67.46 | **75.61** | 74.82 | 76.27 |
| Our-Joint | 256 | 65.30 | **67.51** | 74.50 | **75.32** | **80.57** |
| NetVLAD [@arandjelovic2016netvlad] | 256 | 62.49 | 63.53 | 72.04 | 73.47 | 80.30 |
| CRN [@kim2017crn] | 256 | 64.20 | - | - | - | - |
| Our-Ind. | 128 | **63.75** | **64.71** | **71.60** | **71.23** | 73.57 |
| Our-Joint | 128 | 62.92 | 63.63 | 69.53 | 70.24 | 77.81 |
| NetVLAD [@arandjelovic2016netvlad] | 128 | 60.43 | 61.40 | 68.74 | 69.49 | **78.65** |
| CRN [@kim2017crn] | 128 | 61.50 | - | - | - | - |

: Comparison of image retrieval performance (mAP, %) for different output dimensions.[]{data-label="tab:retrieval_netvlad_full"}
---
abstract: 'In this paper, we propose opportunistic interference alignment (OIA) schemes for three-transmitter multiple-input multiple-output (MIMO) interference channels (ICs). In the proposed OIA, each transmitter has its own user group and selects the single user whose interference signals are most aligned. The user dimensions provided by multiple users are exploited to align interfering signals. Contrary to conventional IA, perfect channel state information of all channel links is not required at the transmitter, and each user feeds back just one scalar value indicating how well the interfering channels are aligned. We prove that each transmitter can achieve the same degrees of freedom (DoF) as in the interference-free case via user selection in our system model, where the number of receive antennas is twice the number of transmit antennas. Using a geometric interpretation, we find the user scaling required to obtain an arbitrary non-zero DoF. Two OIA schemes are proposed and compared with various user selection schemes in terms of achievable rate/DoF and complexity.'
author:
- ', , ,  [^1]'
title: 'On the Achievable DoF and User Scaling Law of Opportunistic Interference Alignment in 3-Transmitter MIMO Interference Channels'
---

MIMO interference channel, interference alignment, opportunistic interference alignment, postprocessing

Introduction
============

Interference alignment (IA) has been touted as a key technology for handling interference in future wireless communications [@ETW2008; @CJ2008; @JS2008; @PH2011; @J2012; @KV2009; @GCJ2011; @GCJ2008; @ST2008; @GJ2008]. Contrary to conventional schemes, which orthogonalize interference signals, [@CJ2008] showed that IA can achieve a total of $N/2$ degrees of freedom (DoF) in an $N$-transmitter single-input single-output (SISO) interference channel (IC). The achievable DoF for the $N$-transmitter MIMO IC was found in [@GJ2008]. Despite the promising aspects of IA, its implementation faces many challenges.
IA generally requires perfect global knowledge of the desired and interfering channels at the transmitter, which involves excessive signaling overhead, although blind IA schemes [@J2012] that require no channel knowledge have recently been proposed for some specific environments. Imperfect channel state information significantly degrades the gain of IA [@KV2009]. The large computational complexity required is also regarded as a major challenge for practical implementation, and the sub-optimality of IA in the practical operating SNR region is another problem [@GCJ2011]. Recently, IA techniques that ameliorate these problems have been investigated. Iterative IA algorithms were proposed to optimize the precoding matrix and to reduce the global channel knowledge burden based on channel reciprocity [@GCJ2008; @PH2011]. To reduce computational complexity, Suh and Tse [@ST2008] proposed a subspace interference alignment technique for an uplink cellular network system. In [@PFLD2010], IA was performed opportunistically in MIMO cognitive radio networks, where secondary transmitters transmit their signals only on spatial dimensions not used by the primary transmitters. IA with imperfect channel state information (CSI) was shown to achieve the same DoF as IA with perfect CSI if the feedback size per user is scaled properly [@TB2009; @KV2009; @AH2010]. IA with imperfect CSI in correlated channels was also studied in [@MAH2011]. Although there have been significant efforts to overcome the practical challenges, the inherent shortcomings of IA strongly motivate the development of more practical techniques. It is desirable to attain the promised gain of IA with limited feedback and reduced computational complexity. In this context, interference management by user selection has attracted attention. The key idea behind this opportunistic interference management is to select and serve the user with the best channel or interference condition.
The selection criteria include the maximum signal-to-noise ratio (SNR), minimum interference-to-noise ratio (INR), maximum signal-to-interference-plus-noise ratio (SINR), and so on [@SH2005; @YG2006; @CA2008; @RRCY2010].

Opportunistic Interference Alignment (OIA)
------------------------------------------

In this paper, we propose opportunistic interference alignment (OIA) schemes by interpreting opportunistic interference management from the perspective of IA. In our proposed OIA, the user dimensions provided by multiple users are exploited to align interfering signals. Different forms of OIA have been proposed for the $K$-user SISO IC using random phase offsets [@NELL2010] and for a cognitive radio network [@SF2011]. There are three transmitters and three user groups associated with the transmitters. Each user feeds back one scalar value of an *interference alignment measure* to its own transmitter, which indicates how well the interfering channels are aligned. Based on the feedback information, each transmitter selects and serves only the single user whose interfering channels are most aligned, so that a three-transmitter MIMO IC is constructed opportunistically. Thus, interference alignment is achieved by user selection rather than by transmit beamforming. Collaboration and information sharing among transmitters are not required. The proposed OIA combines the concepts of opportunistic beamforming and IA. Contrary to opportunistic beamforming in a MIMO broadcast channel [@HS2007; @KGS2008], each user only considers the interfering channels rather than the desired channel; the interference from one transmitter helps the other transmitters' user selection. The basic concept of OIA was roughly introduced for the 3-transmitter $2\times 2$ MIMO IC [@LC2010] and the $M\times 2M$ MIMO IC [@LC2011]. However, the maximum DoF achievable by OIA and the relationship between the achievable DoF and the required user scaling were not found.
In this paper, we generalize our preliminary studies on OIA [@LC2010; @LC2011]. We consider the three-transmitter $N_T \times N_R$ MIMO IC with $(N_T, N_R) = (M, 2M)$ and show that each transmitter can obtain DoF up to $M$ via the proposed user selection. We also derive the user scaling required to obtain a given DoF. For implementation, we propose two OIA schemes. In the first OIA scheme (OIA1), each user directly minimizes the rate loss induced by the interfering channels, so each transmitter selects the user with the minimum rate loss. In the second OIA scheme (OIA2), the alignment level of the interfering channels is interpreted geometrically: the transmitter selects the user whose interfering channels span the closest subspaces. Through this geometric interpretation, the complexity of OIA2 is reduced compared to OIA1.

Contributions
-------------

We investigate the achievable DoF and user scaling law of the OIA schemes in a three-transmitter MIMO IC where $K$ users are associated with each transmitter, and the selected users together with their transmitters construct a three-transmitter MIMO IC. In our system model, each transmitter sends $M$ streams with $N_T (=M)$ antennas, and each receiver has $N_R(=2M)$ antennas.

- We prove that each transmitter can achieve DoF $M$ via the OIA schemes without symbol extension or transmitter cooperation. In this case, we show that the transmitter and the selected user act like an interference-free $M\times M$ point-to-point MIMO system. For the $M \times 2M$ MIMO IC composed of three transmitters and three users, $2M/3$ DoF per user is known to be achievable (with perfect CSIT and symbol extension) [@GJ2008]. Our result seems contradictory at first glance, but the spatial dimensions required for the $M$ data streams are secured through the user dimensions provided by the $K$ users. This means that multiuser DoF are translated into IA spatial dimensions.
- We show that it suffices to scale the number of users associated with each transmitter as $K \propto P^{mM}$ to achieve DoF $m\in[0,M]$ per transmitter. When $K$ is fixed, the DoF achievable by the OIA schemes is proved to be zero.

- Finally, we examine the practical advantages of the proposed OIA schemes; we show that the OIA scheme based on the geometric concept significantly reduces the computational complexity while achieving a notable rate improvement compared to conventional opportunistic user selection schemes.

Organization
------------

The rest of this paper is organized as follows. Our system model is described in Section II. Preliminaries on the angles between two subspaces are provided in Section III. The proposed OIA schemes are described in Section IV, and the achievable rate and DoF are analyzed in Section V. Several conventional opportunistic user selection schemes are summarized and compared with the OIA schemes in Section VI. Conclusions and comments on areas of future interest are given in Section VII.

Notations
---------

Throughout the paper, the notations $\mathbf{A}^{*}$, $\lambda_n(\mathbf{A})$, $\mathbf{v}_n(\mathbf{A})$, $tr(\mathbf{A})$ and $\Vert \mathbf{A} \Vert_F$ denote the conjugate transpose, $n$th largest eigenvalue, eigenvector corresponding to $\lambda_n(\mathbf{A})$, trace, and Frobenius norm of matrix $\mathbf{A}$, respectively. Also, the notations $\mathbf{I}_n$, $diag(\cdot)$, $\mathbb{C}^{n}$ and $\mathbb{C}^{m\times n}$ indicate the $n\times n$ identity matrix, a diagonal matrix whose diagonal elements are $(\cdot)$, the $n$-dimensional complex space, and the set of $m\times n$ complex matrices, respectively.

System Model
============

Our system model is depicted in Fig. \[fig:system\_model\]. There are three transmitters with $N_T (= M)$ antennas each, and each transmitter has its own user group consisting of $K$ users with $N_R (=2M)$ antennas each.
Each transmitter selects a single user in its own user group and sends $M$ data streams to the selected user. Consequently, the transmitters and their selected users construct a three-transmitter MIMO IC. For user selection, each transmitter uses only partial information fed back from each user, which is a single scalar value. No collaboration or information sharing is allowed among the transmitters. Our system operates in the following four steps:

- Step 1: Each transmitter broadcasts a reference signal.

- Step 2: Each user feeds one analog value back to its own transmitter.

- Step 3: Each transmitter selects one user in its user group.

- Step 4: Each transmitter serves the selected user with the random beams.

In Step 1, each transmitter broadcasts a reference signal, from which each user obtains the information of the desired channel and the two interfering channels. In Step 2, each user generates the feedback information from the channel information, which is one scalar value. Various forms of feedback information can be constructed according to the postprocessing and the user selection schemes. In Step 3, each transmitter selects a single user in its user group. Note that the user selections at the transmitters are independent of one another because there is no information sharing or collaboration among the transmitters. In Step 4, the transmitters serve the selected users with the random beams. Thus, the three-transmitter MIMO IC is opportunistically constructed. Since the user selection at each transmitter does not affect the performance of the other transmitters, without loss of generality, we only consider the user selection at the first transmitter. The other transmitters achieve the same average achievable rate under the identical setting.
At the $k$th user in the first user group, the received signal, denoted by $\mathbf{y}_k$, is given by $$\begin{aligned} \mathbf{y}_k &=\mathbf{H}_{k,1} \mathbf{x}_1 + \sum_{i=2}^{3} \mathbf{H}_{k,i}\mathbf{x}_i + \mathbf{n}_k, \label{eqn:y_k}\end{aligned}$$ where $\mathbf{H}_{k,i} \in \mathbb{C}^{N_R\times M}$ is the channel matrix from transmitter $i$ to user $k$ in the first user group. The term $\mathbf{x}_i \in \mathbb{C}^{M\times 1}$ is the transmit signal of the $i$th transmitter. Since the transmitters do not have channel state information, we assume equal power allocation among the $M$ data streams, i.e., $\mathbb{E}\{ \mathbf{x}_i \mathbf{x}_i ^{*}\} = (P/M) \mathbf{I}_M$. The random vector $\mathbf{n}_k \in \mathbb{C} ^{N_R\times 1}$ is Gaussian noise with zero mean and an identity covariance matrix, i.e., $\mathbf{n}_k \sim \mathcal{CN} (0,\mathbf{I}_{N_R})$. When $N_T>M$, the system model becomes statistically identical to the $M\times 2M$ MIMO IC if each transmitter uses an arbitrary precoding matrix $\mathbf{W} \in \mathbb{C} ^{N_T\times M}$ such that $\mathbf{W}^{*}\mathbf{W}=\mathbf{I}_M$. From , the capacity at the $k$th user is given by [@SPB2008] $$\begin{aligned} C_k &=\log_2 \bigg\vert \mathbf{I}_{N_R} +\frac{P}{M} \mathbf{H}_{k,1} \mathbf{H}^{*}_{k,1} \bigg( \mathbf{I}_{N_R} + \frac{P}{M}\sum_{i=2}^3 \mathbf{H}_{k,i} \mathbf{H}^{*}_{k,i} \bigg)^{-1} \bigg\vert, \label{eqn:optimal_capacity_ik}\end{aligned}$$ which requires joint decoding and non-linear receivers. In our system model, we assume that each user adopts linear postprocessing. Half of the receive antenna dimensions (i.e., $M$) are used for the desired $M$ data streams, and the remaining dimensions are used for interference suppression. The received signals are projected onto the $M$-dimensional subspace designated for the desired signals at each receiver.
The $k$th user uses the postprocessing matrix $\mathbf{F}_k \in \mathbb{C}^{M\times N_R}$ to project the received signals onto the row space of $\mathbf{F}_k$ which is $M$-dimensional subspace designated for the desired signals in $\mathbb{C}^{N_R}$. Therefore, $\mathbf{F}_k$ consists of the bases of the $M$-dimensional subspace designated for the desired signals and satisfies $\mathbf{F}_k \mathbf{F}_k^{*}= \mathbf{I}_M$. In this way, when each transmitter selects the user who has perfectly aligned interfering signals, the selected user can obtain DoF $M$ by the postprocessing. At the $k$th user, the received signal after postprocessing becomes $$\begin{aligned} \mathbf{F}_k\mathbf{y}_k =\mathbf{F}_k\mathbf{H}_{k,1}\mathbf{x}_1 +\sum_{i=2}^3 \mathbf{F}_k\mathbf{H}_{k,i}\mathbf{x}_i + \mathbf{F}_k\mathbf{n}_k{\nonumber}\end{aligned}$$ and the achievable rate at the user $k$ denoted by ${R}_k$ is given by $$\begin{aligned} {R}_k &=\log_2 \bigg\vert \mathbf{I}_M +\frac{P}{M}\mathbf{F}_k \mathbf{H}_{k,1} \mathbf{H}^{*}_{k,1} \mathbf{F}_k^{*}\bigg( \mathbf{I}_M + \frac{P}{M}\sum_{i=2}^3 \mathbf{F}_k\mathbf{H}_{k,i} \mathbf{H}^{*}_{k,i} \mathbf{F}_k^{*}\bigg)^{-1} \bigg\vert{\nonumber\\}&= \log_2\frac{ \left\vert \mathbf{I}_M +\frac{P}{M}\sum_{i=1}^3 \mathbf{F}_k\mathbf{H}_{k,i} \mathbf{H}^{*}_{k,i}\mathbf{F}_k^{*}\right\vert} {\left\vert \mathbf{I}_M + \frac{P}{M}\sum_{i=2}^3 \mathbf{F}_k\mathbf{H}_{k,i} \mathbf{H}^{*}_{k,i}\mathbf{F}_k^{*}\right\vert}. \label{eqn:capacity_ik}\end{aligned}$$ Let $k^\star$ be the index of the selected user at the first transmitter. Then, the achievable rate of the first transmitter becomes ${R}_{k^\star}$. 
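As a concrete illustration of the post-processed rate ${R}_k$, the following NumPy sketch (our own; the dimensions $M=2$, $N_R=4$, the power $P$, and the function names are arbitrary choices) evaluates ${R}_k$ for one random channel realization with an arbitrary semi-unitary postprocessing matrix $\mathbf{F}_k$:

```python
import numpy as np

def crandn(*shape, rng):
    """i.i.d. CN(0,1) entries (circularly symmetric complex Gaussian)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def achievable_rate(H, F, P):
    """R_k = log2 |I_M + (P/M) F H1 H1* F* (I_M + (P/M) sum_{i>=2} F Hi Hi* F*)^{-1}|.
    H: list [H_{k,1}, H_{k,2}, H_{k,3}] of N_R x M channels; F: M x N_R semi-unitary."""
    M = H[0].shape[1]
    S = F @ H[0] @ H[0].conj().T @ F.conj().T                    # desired-signal term
    I = sum(F @ Hi @ Hi.conj().T @ F.conj().T for Hi in H[1:])   # interference term
    num = np.eye(M) + (P / M) * (S + I)
    den = np.eye(M) + (P / M) * I
    return float(np.real(np.log2(np.linalg.det(num)) - np.log2(np.linalg.det(den))))

rng = np.random.default_rng(0)
M, N_R, P = 2, 4, 100.0
H = [crandn(N_R, M, rng=rng) for _ in range(3)]
# Arbitrary M x N_R semi-unitary F_k (rows orthonormal), here from a QR factorization.
F = np.linalg.qr(crandn(N_R, M, rng=rng))[0].conj().T
R_k = achievable_rate(H, F, P)
```

Any $\mathbf{F}_k$ with orthonormal rows fits the formula; the OIA-specific choice of $\mathbf{F}_k$ is discussed in the next section.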
When the transmitter supports one of the $K$ users, the average achievable rate at the first transmitter, denoted by $\mathcal{R}_{[K]}$, becomes $$\begin{aligned} \mathcal{R}_{[K]} &\triangleq \mathbb{E}_{\mathbf{H}}[{R}_{k^\star}].\end{aligned}$$ In this case, the achievable DoF of the first transmitter becomes $$\begin{aligned} \mathcal{D} \triangleq {\lim_{P\to\infty}\frac{\mathcal{R}_{[K]}}{\log_2 P}}.\end{aligned}$$ Note that the average achievable rate and DoF of the system with all transmitters become $3\mathcal{R}_{[K]}$ and $3\mathcal{D}$, respectively. Throughout the paper, we assume that all channel matrices (i.e., $\mathbf{H}_{k,i}$ for all $k$ and $i$) have independent and identically distributed (i.i.d.) elements so that the interfering subspaces formed by the interfering channels are isotropic and independent of each other.

Preliminaries – Angles between Two Subspaces
============================================

In our system, each user suffers from two interfering channels, each of which constructs an $M$-dimensional subspace in $\mathbb{C} ^{N_R}$. Because the distance between two subspaces can be measured in terms of the angles between them, we briefly review the angles between two subspaces. As a widely used geometric concept in wireless communications, the Grassmann manifold $\mathcal{G}_{N_R,M}(\mathbb{C})$ is defined as the set of all $M$-dimensional subspaces in an $N_R$-dimensional space, $\mathbb{C}^{N_R}$ [@ZT2002; @LHS2003; @DLR2008; @DLLR2009; @RJ2008]. Consider two $M$-dimensional subspaces $\mathcal{A}, \mathcal{B}$ in the $N_R$-dimensional space, i.e., $\mathcal{A}, \mathcal{B} \in \mathcal{G}_{N_R,M}(\mathbb{C})$. The angles between the subspaces can be measured with the *principal angles*, which are also called the canonical angles. Since both $\mathcal{A}$ and $\mathcal{B}$ are $M$-dimensional subspaces, there are $M$ principal angles between them.
Let $\theta_1, \ldots, \theta_M \in[0,\pi/2]$ be the $M$ principal angles such that $\theta_1\le \ldots\le \theta_M$; then we can find them recursively by searching for $N_R$-dimensional unit vectors $\{\mathbf{a}_m, \mathbf{b}_m\}_{m=1}^M$ such that [@GL1989 Chap. 12] $$\begin{aligned} \cos\theta_m =\max_{\mathbf{a}\in\mathcal{A} \atop \mathbf{b}\in\mathcal{B}}~ \vert\mathbf{a}^{*}\mathbf{b}\vert =\vert\mathbf{a}_m^{*}\mathbf{b}_m\vert {\nonumber}\end{aligned}$$ subject to $\Vert\mathbf{a}\Vert=1$, $\Vert\mathbf{b}\Vert=1$, $\mathbf{a}^{*}\mathbf{a}_n=0$, $\mathbf{b} ^{*}\mathbf{b}_n=0$ ($1\le n \le m-1$). The vectors $\{\mathbf{a}_m\}_{m=1}^M$ and $\{\mathbf{b}_m\}_{m=1}^M$ become the *principal vectors* of $\mathcal{A}$ and $\mathcal{B}$, respectively. From the principal angles, we can define various distances between the subspaces. Arguably, the *chordal distance* is the most widely used one among them. The chordal distance between the subspaces $\mathcal{A}$ and $\mathcal{B}$, denoted by $d_c(\mathcal{A}, \mathcal{B})$, is defined as $$\begin{aligned} d_c(\mathcal{A},\mathcal{B}) \triangleq \sqrt{\sum_{m=1}^M \sin^2\theta_m}. \label{eqn:chordal_distance1}\end{aligned}$$ Alternatively, we can use *generator matrices* to represent the chordal distance; a generator matrix of a subspace consists of orthonormal columns that span the subspace. For example, $\mathbf{A}, \mathbf{B}\in\mathbb{C}^{N_R\times M}$ are generator matrices of the subspaces $\mathcal{A}, \mathcal{B} \in \mathcal{G}_{N_R, M}(\mathbb{C})$ when $\mathbf{A}^{*}\mathbf{A}= \mathbf{B}^{*}\mathbf{B} =\mathbf{I}_M$, and their columns span the subspaces $\mathcal{A}$ and $\mathcal{B}$, respectively.
Although the generator matrices $\mathbf{A}$ and $\mathbf{B}$ are infinitely many, the chordal distance between two subspaces is uniquely obtained with any generator matrix pairs such that $$\begin{aligned} d_c(\mathcal{A},\mathcal{B}) &=\frac{1}{2}\Vert \mathbf{A}\mathbf{A}^{*}- \mathbf{B}\mathbf{B}^{*}\Vert_F {\nonumber\\}&=\sqrt{M-tr(\mathbf{A}^{*}\mathbf{B} \mathbf{B}^{*}\mathbf{A})}.\label{eqn:chordal_definition}\end{aligned}$$ Also, we can obtain the principal angles and the principal vectors from the generator matrices. Let the singular value decomposition (SVD) of $\mathbf{A}^{*}\mathbf{B}$ be [@GL1989 Chap. 12] $$\begin{aligned} \mathbf{A}^{*}\mathbf{B}=\mathbf{YDZ}^{*}, \label{eqn:YZ}\end{aligned}$$ where $\mathbf{Y},\mathbf{Z}\in\mathbb{C}^{M\times M}$ are unitary matrices and $\mathbf{D}=diag(\mu_1,\mu_2,\ldots,\mu_M)$ where $\mu_m$ is the $m$th largest singular value such that $\mu_1 \ge \mu_2 \ldots \ge \mu_M \ge 0$. Then, the $m$th largest singular value of $\mathbf{A}^{*}\mathbf{B}$ and the $m$th principal angle between $\mathcal{A}$ and $\mathcal{B}$ has the following relationship: $$\begin{aligned} \mu_m = \cos\theta_m. {\nonumber}\end{aligned}$$ Also, the corresponding principal vectors $\mathbf{a}_m$ and $\mathbf{b}_m$ can be obtained from $\mathbf{Y}$ and $\mathbf{Z}$ such that $$\begin{aligned} \mathbf{a}_m=\mathbf{A}\mathbf{y}_m,\quad \mathbf{b}_m=\mathbf{B}\mathbf{z}_m, {\nonumber}\end{aligned}$$ where $\mathbf{y}_m$ and $\mathbf{z}_m$ are the $m$th column vectors of $\mathbf{Y}$ and $\mathbf{Z}$, respectively. From the generator matrices and the principal angles, we obtain the following lemma needed to analyze the proposed OIA scheme. 
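A quick numerical check of these equivalences may help: the sketch below (our own; dimensions are arbitrary) computes the principal angles from the singular values of $\mathbf{A}^{*}\mathbf{B}$ and then evaluates the chordal distance via the $\sin^2$ definition, the trace form, and the Frobenius form (note the $1/\sqrt{2}$ normalization, which makes the Frobenius form match $\sqrt{\sum_m\sin^2\theta_m}$):

```python
import numpy as np

rng = np.random.default_rng(1)
N_R, M = 4, 2

def generator(X):
    """Generator matrix: orthonormal basis of the column space of X (reduced QR)."""
    return np.linalg.qr(X)[0]

A = generator(rng.standard_normal((N_R, M)) + 1j * rng.standard_normal((N_R, M)))
B = generator(rng.standard_normal((N_R, M)) + 1j * rng.standard_normal((N_R, M)))

# Principal angles from the singular values of A*B: mu_m = cos(theta_m).
mu = np.linalg.svd(A.conj().T @ B, compute_uv=False)
theta = np.arccos(np.clip(mu, 0.0, 1.0))

# Chordal distance, three equivalent forms.
d_angles = np.sqrt(np.sum(np.sin(theta) ** 2))                        # definition
d_trace = np.sqrt(M - np.real(np.trace(A.conj().T @ B @ B.conj().T @ A)))
d_frob = np.linalg.norm(A @ A.conj().T - B @ B.conj().T, 'fro') / np.sqrt(2)
```

All three quantities agree to machine precision for any generator matrices of the same pair of subspaces.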
\[lemma:p\_angles\] When $\mathbf{A}, \mathbf{B} \in\mathbb{C}^{N_R\times M}$ are the generator matrices of the subspaces $\mathcal{A}, \mathcal{B} \in\mathcal{G}_{N_R, M}(\mathbb{C})$, the eigenvalues of $\mathbf{A} \mathbf{A}^{*}+ \mathbf{B} \mathbf{B} ^{*}$ can be represented in descending order as $$\begin{aligned} \underbrace{1+\cos\theta_1,\ldots,1+\cos\theta_M}_{M}, \underbrace{1-\cos\theta_M,\ldots,1-\cos\theta_1}_{M} \label{eqn:AA_BB_eigen}\end{aligned}$$ where $\theta_m$ is the $m$th principal angle between $\mathcal{A}$ and $\mathcal{B}$. Using the unitary matrices $\mathbf{Y}$ and $\mathbf{Z}$ in , $\mathbf{A} \mathbf{A}^{*}+ \mathbf{B} \mathbf{B} ^{*}$ can be rewritten as $$\begin{aligned} \mathbf{A} \mathbf{A}^{*}+\mathbf{B} \mathbf{B}^{*}&=\mathbf{AY} (\mathbf{AY})^{*}+ \mathbf{BZ}(\mathbf{BZ})^{*}{\nonumber\\}&=\sum_{m=1}^{M}\left(\mathbf{a}_m\mathbf{a}_m^{*}+\mathbf{b}_m\mathbf{b}_m^{*}\right). \label{eqn:AA_BB}\end{aligned}$$ Also, we decompose $\mathbf{b}_m$ as $$\begin{aligned} \mathbf{b}_m= \cos\theta_m\mathbf{a}_m+\sin\theta_m\mathbf{e}_m, \label{eqn:bb}\end{aligned}$$ where $\theta_m$ is the $m$th principal angle, and $\mathbf{e}_m$ is a unit vector orthogonal to $\mathbf{a}_m$ such that $\Vert\mathbf{e}_m\Vert=1$ and $\mathbf{a}_m \perp \mathbf{e}_m$. From the property of principal vectors, $\mathbf{a}_i \perp \mathbf{a}_j$ and $\mathbf{b}_i \perp \mathbf{b}_j$ for $i\ne j$. Also, from the relationship between the principal angle and the principal vector given in , it is satisfied that $$\begin{aligned} (\mathbf{AY})^{*}\mathbf{BZ} =[\mathbf{a}_1,\ldots,\mathbf{a}_{M}]^{*}[\mathbf{b}_1,\ldots,\mathbf{b}_{M}] =\mathbf{D}{\nonumber}\end{aligned}$$ which implies $\mathbf{a}_i \perp \mathbf{b}_j$ for $i\ne j$ because $\mathbf{D}$ is a diagonal matrix defined in .
Since $\mathbf{a}_i \perp \{\mathbf{a}_j, \mathbf{b}_j\}$ and $\mathbf{b}_i \perp \{\mathbf{a}_j, \mathbf{b}_j\}$ for $i\ne j$, it is satisfied that $span(\mathbf{a}_i, \mathbf{b}_i) \perp span(\mathbf{a}_j, \mathbf{b}_j)$ for $i\ne j$, equivalently, $span(\mathbf{a}_i, \mathbf{e}_i) \perp span(\mathbf{a}_j, \mathbf{e}_j)$ for $i\ne j$. Also, from the fact that $\mathbf{a}_i \perp \mathbf{e}_i$, we can conclude that $\{\mathbf{a}_1, \ldots, \mathbf{a}_M, \mathbf{e}_1, \ldots, \mathbf{e}_M \}$ forms an orthonormal basis of $\mathbb{C}^{2M}$. Hence, each two-dimensional subspace $span(\mathbf{a}_m, \mathbf{e}_m)$ is invariant under $\mathbf{A} \mathbf{A}^{*}+\mathbf{B} \mathbf{B}^{*}$, and from and , a direct computation gives $$\begin{aligned} (\mathbf{A} \mathbf{A}^{*}+\mathbf{B} \mathbf{B}^{*})(\mathbf{a}_m \pm \mathbf{b}_m) &=\left(\mathbf{a}_m^{*}(\mathbf{a}_m\pm\mathbf{b}_m)\right)\mathbf{a}_m + \left(\mathbf{b}_m^{*}(\mathbf{a}_m\pm\mathbf{b}_m)\right)\mathbf{b}_m{\nonumber\\}&=(1\pm\cos\theta_m)(\mathbf{a}_m \pm \mathbf{b}_m).{\nonumber}\end{aligned}$$ Thus, $\mathbf{A} \mathbf{A}^{*}+\mathbf{B} \mathbf{B}^{*}$ has the (unnormalized) eigenvectors $\{\mathbf{a}_m+\mathbf{b}_m\}_{m=1}^M$ and $\{\mathbf{a}_m-\mathbf{b}_m\}_{m=1}^M$, with the ordered eigenvalues given in . (When $\theta_m=0$, we have $\mathbf{b}_m=\mathbf{a}_m$, and $\mathbf{e}_m$ serves as the eigenvector corresponding to the eigenvalue $1-\cos\theta_m=0$.)

\[lemma:p\_angles\_sum\] When $\mathbf{A}, \mathbf{B} \in\mathbb{C}^{N_R\times M}$ are the generator matrices of the subspaces $\mathcal{A}, \mathcal{B} \in\mathcal{G}_{N_R, M}(\mathbb{C})$, the sum of the $M$ smallest eigenvalues of $\mathbf{A} \mathbf{A}^{*}+ \mathbf{B} \mathbf{B} ^{*}$ is upper bounded by the squared chordal distance between $\mathcal{A}$ and $\mathcal{B}$.
From Lemma \[lemma:p\_angles\], we can find that $$\begin{aligned} \sum_{m=M+1}^{2M} \lambda_m \left(\mathbf{A}\mathbf{A}^{*}+ \mathbf{B} \mathbf{B} ^{*}\right) &=\sum_{m=1}^{M} (1 - \cos\theta_m) {\nonumber\\}&\stackrel{(a)}{\le}\sum_{m=1}^{M} (1 - \cos^2\theta_m) {\nonumber\\}&\stackrel{(b)}{=} d_c^2\left(\mathcal{A}, \mathcal{B}\right){\nonumber},\end{aligned}$$ where the inequality $(a)$ holds because $\cos^2\theta_m \le \cos\theta_m$ for $\theta_m \in [0,\pi/2]$, and the equality $(b)$ is from the definition of the chordal distance given in .

Opportunistic Interference Alignment
====================================

What is the Opportunistic Interference Alignment?
-------------------------------------------------

The basic concept of interference alignment is to minimize the dimensions occupied by interfering signals. Although the dimensions of *each interfering signal* are irreducible, the dimensions occupied by *all interfering signals* can be minimized by aligning them into the same subspace. When the number of users is finite, the two interfering channels at each user are almost surely not aligned, because the two interfering transmitters cannot access a common subspace at each receiver. However, as the number of users increases, we can find a user whose interfering channels overlap more closely with each other. In the proposed OIA, we exploit the multiuser dimensions to align the interfering signals. By opportunistic user selection, the two irreducible $M$-dimensional interfering signals can be nearly aligned within an $M$-dimensional subspace. In this section, we propose two different OIA schemes. In the first OIA scheme, the transmitter selects the user whose rate loss caused by interference is minimum. In the second OIA scheme, the transmitter selects the user with the minimum distance between the interfering signals. Now, we assume that the elements of all channel matrices are i.i.d. circularly symmetric complex Gaussian random variables with zero mean and unit variance.
We decompose the achievable rate at each user given in into the two terms ${R}_k^+$ and ${R}_k^-$ given by $$\begin{aligned} {R}_k^{+} &=\log_2 \bigg\vert \mathbf{I}_M +\frac{P}{M}\sum_{i=1}^{3} \mathbf{F}_k\mathbf{H}_{k,i} \mathbf{H}_{k,i}^{*}\mathbf{F}_k^{*}\bigg\vert \label{eqn:capacity_gain}\\ {R}_k^{-} &=\log_2\bigg\vert \mathbf{I}_M + \frac{P}{M}\sum_{i=2}^{3} \mathbf{F}_k\mathbf{H}_{k,i} \mathbf{H}_{k,i}^{*}\mathbf{F}_k^{*}\bigg\vert, \label{eqn:capacity_loss}\end{aligned}$$ respectively, so that ${R}_k = {R}_k^{+} - {R}_k^{-}$. We call ${R}_k^-$ the *rate loss term*. In the same way, we can rewrite the average achievable rate at the selected user among $K$ users as $$\begin{aligned} \mathcal{R}_{[K]} = \mathcal{R}_{[K]}^+ - \mathcal{R}_{[K]}^-,\end{aligned}$$ where $\mathcal{R}_{[K]}^+ = {\mathbb{E}\left[{R}_{k^\star}^+\right]}$ and $\mathcal{R}_{[K]}^- = {\mathbb{E}\left[{R}_{k^\star}^-\right]}$. Our proposed OIA schemes aim at minimizing the dimension occupied by the interfering signals and hence maximizing the achievable DoF at the transmitter. Since it is straightforward that $\lim_{P\to\infty} (\mathcal{R}_{[K]}^+/\log_2P)=M$, the achievable DoF of the first transmitter using OIA can be expressed as $$\begin{aligned} {\lim_{P\to\infty}\frac{\mathcal{R}_{[K]}}{\log_2 P}} &= M - {\lim_{P\to\infty}\frac{\mathcal{R}_{[K]}^-}{\log_2 P}}. \label{eqn:DoF_OIA}\end{aligned}$$ Thus, we minimize the DoF loss caused by interference, $\lim_{P\to\infty} (\mathcal{R}_{[K]}^-/\log_2P)$. In the next two subsections, we propose OIA schemes that reduce the DoF loss caused by the interference.

OIA via Rate Loss Minimization (OIA1)
-------------------------------------

Firstly, we directly minimize the average rate loss term at the selected user via the postprocessing matrix design and user selection.
In this case, the average rate loss term becomes $$\begin{aligned} \mathbb{E}_{\mathbf{H}}\big[ \min_{k, \mathbf{F}_k}~{R}_k^- \big] =\mathbb{E}_{\mathbf{H}}\bigg[\underset{k, \mathbf{F}_k}{\min} ~\log_2\bigg\vert \mathbf{I}_M + \frac{P}{M}\sum_{i=2}^{3} \mathbf{F}_k \mathbf{H}_{k,i} \mathbf{H}_{k,i}^{*}\mathbf{F}_k^{*}\bigg\vert \bigg]. \label{eqn:avr_rate_loss}\end{aligned}$$ For each channel realization, the user $k$ minimizes the rate loss term by using the postprocessing matrix given by $$\begin{aligned} \mathbf{F}_k ^{\mathsf{\scriptscriptstyle OIA}}&\triangleq {\underset{\mathbf{F}_k}{\arg\min}} ~\log_2\bigg\vert \mathbf{I}_M + \frac{P}{M}\sum_{i=2}^{3} \mathbf{F}_k \mathbf{H}_{k,i} \mathbf{H}_{k,i}^{*}\mathbf{F}_k^{*}\bigg\vert {\nonumber\\}&={\underset{\mathbf{F}_k}{\arg\min}}~ \big\vert\mathbf{F}_k(\mathbf{H}_{k,2} \mathbf{H}_{k,2} ^{*}+ \mathbf{H}_{k,3} \mathbf{H}_{k,3}^{*})\mathbf{F}_k^{*}\big\vert {\nonumber\\}&= \left[ \mathbf{v}_{M+1} (\mathbf{B}_k),\ldots, \mathbf{v}_{2M} (\mathbf{B}_k) \right] ^{*}, \label{eqn:OIA_FB_F}\end{aligned}$$ where $\mathbf{B}_k= \mathbf{H}_{k,2} \mathbf{H}_{k,2} ^{*}+ \mathbf{H}_{k,3} \mathbf{H}_{k,3} ^{*}$, and the corresponding rate loss term becomes $\log_2\prod_{m=M+1}^{2M}\big(1 + \frac{P}{M} \lambda_{m} \left(\mathbf{B}_k \right) \big)$. 
Thus, the required feedback information for the $k$th user becomes $$\begin{aligned} \prod_{m=M+1}^{2M}\left(1 + \frac{P}{M} \lambda_{m} \left(\mathbf{B}_k \right) \right), \label{eqn:feedback_OIA1}\end{aligned}$$ and the selected user at the transmitter denoted by $k_{{\mathsf{\scriptscriptstyle OIA1}}}^\star$ becomes $$\begin{aligned} k_{{\mathsf{\scriptscriptstyle OIA1}}}^\star &= {\underset{k}{\arg\min}} \prod_{m=M+1}^{2M}\bigg(1 + \frac{P}{M} \lambda_{m} \left(\mathbf{B}_k \right) \bigg).\end{aligned}$$ OIA via Chordal Distance Minimization (OIA2) -------------------------------------------- As an alternative implementation, the transmitter can select a user whose interfering channels are closest. The chordal distance is used to measure the distance between the interfering channels at each user. Firstly, we find the upper bound of in the following lemma. \[lemma:rate\_loss\_bound\] The minimized average rate loss term given in is upper bounded by $$\begin{aligned} \mathbb{E}_{\mathbf{H}}\big[ \min_{k, \mathbf{F}_k}~{R}_k^- \big] \le\mathbb{E}_{\tilde{\mathbf{H}}} \bigg\{\min_k~ M\log_2\bigg[ 1 + \frac{P}{M} d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}) \bigg] \bigg\}, \label{eqn:rate_loss_bound}\end{aligned}$$ where $\tilde{\mathbf{H}}_{k,i} \in \mathbb{C}^{N_R\times M}$ is an arbitrary generator matrix of the subspace spanned by $\mathbf{H}_{k,i}$. Since $\mathbf{H}_{k,i} \in \mathbb{C}^{N_R\times M}$, the matrix $\mathbf{H}_{k,i} \mathbf{H}_{k,i}^{*}$ has $M$ non-zero eigenvalues. 
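The OIA1 feedback and selection steps above can be sketched as follows (our own illustration; the parameter values $M=2$, $N_R=4$, $P=100$, $K=20$ are arbitrary): each user computes $\mathbf{B}_k$ and the product over its $M$ smallest eigenvalues, and the transmitter picks the minimizer.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N_R, P, K = 2, 4, 100.0, 20

def crandn(shape, rng):
    """i.i.d. CN(0,1) entries."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def oia1_feedback(H2, H3, P, M):
    """OIA1 metric: prod_{m=M+1}^{2M} (1 + (P/M) lambda_m(B_k)),
    i.e. the product over the M smallest eigenvalues of B_k = H2 H2* + H3 H3*."""
    B = H2 @ H2.conj().T + H3 @ H3.conj().T
    lam = np.linalg.eigvalsh(B)                      # Hermitian B -> ascending eigenvalues
    return float(np.prod(1.0 + (P / M) * lam[:M]))   # M smallest eigenvalues

# Interfering channels (H_{k,2}, H_{k,3}) for the K users of the first group.
users = [(crandn((N_R, M), rng), crandn((N_R, M), rng)) for _ in range(K)]
metrics = [oia1_feedback(H2, H3, P, M) for H2, H3 in users]
k_star = int(np.argmin(metrics))                     # selected user k*_OIA1
```

Each user feeds back only its scalar metric, matching the single-scalar feedback assumed in the system model.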
Thus, it can be decomposed by $$\begin{aligned} \mathbf{H}_{k,i} \mathbf{H}_{k,i}^{*}= \mathbf{U}_{k,i} \mathbf{\Lambda}_{k,i} \mathbf{U}_{k,i}^{*},\end{aligned}$$ where $\mathbf{\Lambda}_{k,i} \in \mathbb{C}^{M \times M}$ is a diagonal matrix whose diagonal elements are the non-zero eigenvalues of $\mathbf{H}_{k,i} \mathbf{H}_{k,i}^{*}$, and $\mathbf{U}_{k,i} \in \mathbb{C}^{N_R \times M}$ consists of the corresponding eigenvectors to the non-zero eigenvalues which becomes a semi-orthogonal matrix such that $\mathbf{U}_{k,i}^{*}\mathbf{U}_{k,i} =\mathbf{I}_M$ but $\mathbf{U}_{k,i} \mathbf{U}_{k,i}^{*}\ne \mathbf{I}_{N_R}$ [^2]. Using this decomposition, we can bound as follows: $$\begin{aligned} \mathbb{E}_{\mathbf{H}}\big[ \min_{k, \mathbf{F}_k}~{R}_k^- \big] &\stackrel{(a)}{=}\mathbb{E}_{\mathbf{U}}\Big\{~ \mathbb{E}_{\mathbf{\Lambda}}\big[ \min_{k, \mathbf{F}_k}~{R}_k^- \big]~\Big\}{\nonumber\\}&\stackrel{(b)}{\le}\mathbb{E}_{\mathbf{U}}\Big\{~ \min_{k, \mathbf{F}_k}~ \mathbb{E}_{\mathbf{\Lambda}}\big[{R}_k^- \big] ~\Big\}{\nonumber\\}&\stackrel{(c)}{\le}\mathbb{E}_{\mathbf{U}} \bigg\{\min_k~ M\log_2\bigg[ 1 + \frac{P}{M} d_c^2(\mathbf{U}_{k,2}, \mathbf{U}_{k,3}) \bigg]\bigg\},\label{eqn:rate_loss_bound2}\end{aligned}$$ where the equality $(a)$ holds from the fact that $\mathbf{U}_{k,i}$ and $\mathbf{\Lambda}_{k,i}$ are independent of each other [@RJ2008], and the inequality $(b)$ is because the average of the minimum values is smaller than the minimum of the average values. 
The inequality $(c)$ holds because $$\begin{aligned} \min_{k, \mathbf{F}_k}~ \mathbb{E}_{\mathbf{\Lambda}}\big[{R}_k^- \big] &= \min_{k, \mathbf{F}_k}~ \mathbb{E}_{\mathbf{\Lambda}} \log_2\bigg\vert\mathbf{I}_M + \frac{P}{M} \mathbf{F}_k \Big( \sum_{i=2}^{3} \mathbf{U}_{k,i}\mathbf{\Lambda}_{k,i} \mathbf{U}_{k,i}^{*}\Big) \mathbf{F}_k^{*}\bigg\vert {\nonumber\\}&\stackrel{(c_1)}{\le} \min_{k, \mathbf{F}_k}~ \log_2\bigg\vert\mathbf{I}_M + P \mathbf{F}_k \Big( \sum_{i=2}^{3} \mathbf{U}_{k,i} \mathbf{U}_{k,i}^{*}\Big) \mathbf{F}_k^{*}\bigg\vert {\nonumber\\}&\stackrel{(c_2)}{=}\min_k~ \log_2\prod_{m=M+1}^{2M}\bigg(1 + P \lambda_{m} \left(\mathbf{C}_k \right) \bigg){\nonumber\\}&\stackrel{(c_3)}{\le} \min_k~M\log_2\bigg[ 1 + \frac{P}{M} \sum_{m=M+1}^{2M}\lambda_{m}\left(\mathbf{C}_k \right) \bigg] {\nonumber\\}&\stackrel{(c_4)}{\le} \min_k~ M\log_2\bigg[ 1 + \frac{P}{M} d_c^2(\mathbf{U}_{k,2}, \mathbf{U}_{k,3}) \bigg],{\nonumber}\end{aligned}$$ where $\mathbf{C}_k = \sum_{i=2}^{3} \mathbf{U}_{k,i} \mathbf{U}_{k,i}^{*}$. The inequality $(c_1)$ is from Jensen's inequality and $\mathbb{E} [\mathbf{\Lambda}_{k,i}] = M\mathbf{I}_M$ [@RJ2008]. The equality $(c_2)$ is obtained by applying $\mathbf{F}_k = \left[ \mathbf{v}_{M+1} (\mathbf{C}_k),\ldots, \mathbf{v}_{2M} (\mathbf{C}_k) \right] ^{*}$. Also, the inequality $(c_3)$ is from the concavity of the logarithm together with Jensen's inequality. Finally, the inequality $(c_4)$ follows from Lemma \[lemma:p\_angles\_sum\]. Although $\mathbf{U}_{k,i}$ is one of the generator matrices of the subspace formed by $\mathbf{H}_{k,i}$, it can be replaced by an arbitrary generator matrix, denoted by $\tilde{\mathbf{H}}_{k,i}$, because the chordal distance is uniquely defined for any generator matrices. Thus, the bound can be equivalently rewritten as . In OIA2, we minimize instead of .
Thus, the feedback information at user $k$ becomes $d_c^2 (\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3})$ given by $$\begin{aligned} d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}) &=\frac{1}{2}\Vert \tilde{\mathbf{H}}_{k,2}\tilde{\mathbf{H}}_{k,2}^{*}- \tilde{\mathbf{H}}_{k,3}\tilde{\mathbf{H}}_{k,3}^{*}\Vert_F^2 {\nonumber\\}&= M-tr(\tilde{\mathbf{H}}_{k,2}^{*}\tilde{\mathbf{H}}_{k,3} \tilde{\mathbf{H}}_{k,3}^{*}\tilde{\mathbf{H}}_{k,2}), \label{eqn:feedback_OIA2}\end{aligned}$$ and the index of the selected user, denoted by $k_{{\mathsf{\scriptscriptstyle OIA2}}}^\star$, becomes $$\begin{aligned} k_{\mathsf{\scriptscriptstyle OIA2}}^\star = {\underset{k}{\arg\min}} ~ d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}). \label{eqn:k1_OIA}\end{aligned}$$ In OIA1, each user requires an SVD to find the feedback information , and the postprocessing matrix is obtained concurrently. In OIA2, however, each user only needs to find the generator matrices of the interfering channels for the feedback information given in . Although the generator matrix can be obtained in various ways, such as SVD and QR decomposition, each user adopts the QR decomposition to find the generator matrix since it is simpler than SVD. Thus, we can greatly reduce the computational complexity of OIA2 compared with OIA1. We describe the details in Section \[subsection:complexities\]. To quantify the rate loss at the selected user, we should find the relationship between the feedback value from the selected user and the total number of users, i.e., the relationship between $\mathbb{E}\Big[\underset{1\le k\le K}{\min} d_c^2( \tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3})\Big]$ and $K$. The following lemma helps us obtain the average feedback value from the selected user.
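The QR-based OIA2 feedback and selection can be sketched as follows (our own illustration; the values $M=2$, $N_R=4$, $K=20$ are arbitrary): each user orthonormalizes its two interfering channels by reduced QR and feeds back the squared chordal distance.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N_R, K = 2, 4, 20

def crandn(shape, rng):
    """i.i.d. CN(0,1) entries."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def oia2_feedback(H2, H3):
    """Squared chordal distance between the two interfering subspaces, computed
    from QR-based generator matrices: d^2 = M - tr(Q2* Q3 Q3* Q2)."""
    Q2 = np.linalg.qr(H2)[0]        # reduced QR: N_R x M generator matrix
    Q3 = np.linalg.qr(H3)[0]
    G = Q2.conj().T @ Q3
    return float(H2.shape[1] - np.real(np.trace(G @ G.conj().T)))

users = [(crandn((N_R, M), rng), crandn((N_R, M), rng)) for _ in range(K)]
d2 = [oia2_feedback(H2, H3) for H2, H3 in users]
k_star = int(np.argmin(d2))         # selected user k*_OIA2
```

Only the selected user then computes a postprocessing matrix, so the per-user work reduces to one QR pair and a small trace.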
\[lemma:distortion\_bound\] The average feedback value from the selected user is equivalent to the average of the minimum chordal distance when we quantize an arbitrary subspace $\mathcal{A} \in \mathcal{G}_{N_R,M}(\mathbb{C})$ with one of the $K$ random subspaces $\mathcal{C}_{\mathrm{rnd}} \subset \mathcal{G}_{N_R,M} (\mathbb{C})$ such that $$\begin{aligned} {\mathbb{E}\left[\min_k~d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3})\right]} = \mathbb{E}_{\mathcal{C}_{\mathrm{rnd}}}\left[ \min_{\mathbf{W} \in \mathcal{C}_{\mathrm{rnd}}} d_c^2(\mathbf{A}, \mathbf{W})\right]. \label{eqn:distortion}\end{aligned}$$ Consider an arbitrary subspace $\mathcal{A}\in\mathcal{G}_{N_R, M}(\mathbb{C})$ and its generator matrix $\mathbf{A}\in\mathbb{C}^{N_R\times M}$. Then, we define the rotation matrix $\mathbf{R}_k \in \mathbb{C}^{N_R\times N_R}$ at the $k$th user, which rotates $\tilde{\mathbf{H}}_{k,2}$ to $\mathbf{A}$ such that $\mathbf{R}_k \tilde{\mathbf{H}} _{k,2} = \mathbf{A}$. If we denote the generator matrix of the null space of $\mathcal{A}$ by $\mathbf{A} ^\perp \in \mathbb{C}^{N_R\times M}$, the matrix $\mathbf{R}_k$ can be represented by $$\begin{aligned} \mathbf{R}_k = \left[\mathbf{A}, \mathbf{A}^\perp\right] \left[\tilde{\mathbf{H}} _{k,2}, \tilde{\mathbf{H}} _{k,2} ^\perp\right]^{*},\end{aligned}$$ which becomes a unitary matrix, i.e., $\mathbf{R}_k^{*}\mathbf{R}_k = \mathbf{R}_k \mathbf{R}_k^{*}= \mathbf{I}_{N_R}$. 
Since the chordal distance is invariant under rotation, the chordal distance at the $k$th user satisfies $$\begin{aligned} d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}) = d_c^2(\mathbf{R}_k\tilde{\mathbf{H}}_{k,2}, \mathbf{R}_k\tilde{\mathbf{H}}_{k,3}) = d_c^2(\mathbf{A}, \mathbf{R}_k\tilde{\mathbf{H}}_{k,3}).\end{aligned}$$ The chordal distance at the selected user becomes $$\begin{aligned} \min_k~d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}) &=\min_k~d_c^2(\mathbf{R}_k\tilde{\mathbf{H}}_{k,2}, \mathbf{R}_k\tilde{\mathbf{H}}_{k,3}){\nonumber\\}&=\min_{\mathbf{W} \in \mathcal{C}_{\mathrm{rnd}}} ~d_c^2(\mathbf{A}, \mathbf{W})\end{aligned}$$ where $\mathcal{C}_{\mathrm{rnd}} \subset \mathcal{G}_{N_R, M}(\mathbb{C})$ is a set of $K$ random subspaces such that $\mathcal{C}_{\mathrm{rnd}} = \{\mathbf{R}_k \tilde{\mathbf{H}}_{k,3}\} _{k=1}^K$. Thus, the average chordal distance at the selected user equals the average minimum chordal distance between an arbitrary subspace and its quantization by one of the $K$ random subspaces, as in . It has been shown that the average quantization error when an arbitrary source on the Grassmann manifold $\mathcal{G}_{N_R, M} (\mathbb{C})$ is quantized with the random codebook $\mathcal{C}_{\mathrm{rnd}} \subset \mathcal{G}_{N_R, M} (\mathbb{C})$ of size $K$ is upper bounded by $D$ [@DLR2008], i.e., $$\begin{aligned} \mathbb{E} \Big[ \min_{ \mathbf{W} \in \mathcal{C}_{\mathrm{rnd}}} d_c^2 (\mathbf{H}, \mathbf{W}) \Big] \le D, \label{eqn:QE_bound}\end{aligned}$$ where $D$ is given by $$\begin{aligned} D= &\frac{\Gamma\left(\frac{1}{M^2}\right)}{M^2} ( \eta K )^{-\frac{1}{M^2}} +M\exp\left[ -\left(\eta K\right)^{1-a} \right] \label{eqn:D_bar}\end{aligned}$$ with $\eta = \frac{1}{\Gamma(M^2+1)} \prod_{i=1} ^{M} \frac{\Gamma(2M-i+1)} {\Gamma(M-i+1)}$, and $a\in (0,1)$ is a real number chosen to satisfy $(\eta K)^{\frac{-a}{M^2}}\le 1$.
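The bound $D$ is easy to evaluate numerically. The following stdlib-only sketch (our own; the choices $M=2$, $a=0.5$, and the sample values of $K$ are illustrative) computes $\eta$ and $D$ and shows that the bound decays as the user pool grows:

```python
import math

def eta(M):
    """eta = (1/Gamma(M^2+1)) * prod_{i=1}^{M} Gamma(2M-i+1)/Gamma(M-i+1)."""
    prod = 1.0
    for i in range(1, M + 1):
        prod *= math.gamma(2 * M - i + 1) / math.gamma(M - i + 1)
    return prod / math.gamma(M ** 2 + 1)

def distortion_bound(K, M, a=0.5):
    """D = Gamma(1/M^2)/M^2 * (eta K)^(-1/M^2) + M exp(-(eta K)^(1-a)),
    with a in (0,1) chosen so that (eta K)^(-a/M^2) <= 1."""
    eK = eta(M) * K
    return (math.gamma(1.0 / M ** 2) / M ** 2) * eK ** (-1.0 / M ** 2) \
           + M * math.exp(-eK ** (1.0 - a))

D_100 = distortion_bound(100, M=2)      # K = 100 users
D_10000 = distortion_bound(10000, M=2)  # K = 10000 users
```

For $M=2$ one gets $\eta = 1/2$, and the first (polynomial) term $\propto K^{-1/M^2}$ dominates for large $K$, consistent with the remark below that the exponential term is negligible.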
Thus, from Lemma \[lemma:distortion\_bound\] and , we can conclude that the average feedback value from the selected user is upper bounded as $$\begin{aligned} {\mathbb{E}\left[\min_k~d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3})\right]} \le D. \label{eqn:D_le_Dbar}\end{aligned}$$ Note that the second term in is negligible compared to the first term for large $K$ [@DLR2008], and the leading-order term of is sufficiently accurate [@DLR2008; @DLLR2009; @RJ2008]. Once a user is selected at the transmitter, only the selected user finds the postprocessing matrix that minimizes the rate loss term, which is given in , i.e., only user $k_{\mathsf{\scriptscriptstyle OIA2}}^\star$ finds the postprocessing matrix $\mathbf{F}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star}^{\mathsf{\scriptscriptstyle OIA}}$.

Achievable Rate and Degrees of Freedom (DoF)
============================================

This section analyzes the achievable rate of the proposed OIA schemes and their DoF. Without loss of generality, the average achievable rate and DoF at the first transmitter are derived, as in the previous section. We start with the following lemma.

\[lemma:M-alphaM\] When the number of users (i.e., $K$) is fixed and invariant to $P$, the achievable DoF of the proposed OIA schemes becomes zero such that $$\begin{aligned} \lim_{P\to\infty \atop \textrm{\normalfont Fixed } K } \frac{\mathcal{R}_{[K]}}{\log_2 P} = 0. {\nonumber}\end{aligned}$$ We can directly derive the achievable DoF from . At user $k$, the matrix $\sum_{i=2}^3\mathbf{H}_{k, i} \mathbf{H}_{k,i} ^{*}$ has $2M$ non-zero eigenvalues with probability one.
At the selected user $k^\star$ ($k^\star = k_{\mathsf{\scriptscriptstyle OIA1}}^\star$ or $k_{\mathsf{\scriptscriptstyle OIA2}}^\star$ for OIA1 or OIA2, respectively), the matrix $\sum_{i=2}^3 \mathbf{H}_{k^\star, i} \mathbf{H} _{k^\star,i} ^{*}$ also has $2M$ non-zero eigenvalues, so that $\sum_{i=2}^3 \mathbf{F}_{k^\star}^{\mathsf{\scriptscriptstyle OIA}}\mathbf{H}_{k^\star, i} \mathbf{H} _{k^\star,i} ^{*}\mathbf{F}_{k^\star} ^{{\mathsf{\scriptscriptstyle OIA}}{*}}$ becomes a full-rank matrix with $M$ non-zero eigenvalues. Thus, when $K$ is fixed (invariant to $P$), one can easily find that $\underset{P\to\infty}{\lim} \frac{\mathcal{R}_{[K]} ^-}{\log_2 P} = M$. Substituting this into , we complete the proof. Fig. \[fig:varying\_K\] shows the average achievable rates of each user with the proposed OIA2 scheme for $K=10$ and $K=50$, respectively, when $(N_T, M, N_R)=(2, 2, 4)$. As stated in Lemma \[lemma:M-alphaM\], the achievable DoF of each user is always zero when the number of users is fixed. On the other hand, by increasing the number of users, we can reduce the rate loss term so that positive DoF can be obtained at the first transmitter. In the next lemma, we find the upper bound of the rate loss term as a function of the number of users. \[lemma:capacity\_loss\_bound\] When the first user group has $K$ users, the average rate loss term at the selected user is bounded by $$\begin{aligned} \mathcal{R}_{[K]}^{-}\le M\log_2\left(1+\frac{P}{M}D \right),\label{eqn:R_loss_bound}\end{aligned}$$ where $D$ is given in .
The inequality in Lemma \[lemma:rate\_loss\_bound\] can be further bounded by $$\begin{aligned} \mathbb{E}_{\mathbf{H}}\big[ \min_{k, \mathbf{F}_k}~R_k^- \big] &\le\mathbb{E}_{\tilde{\mathbf{H}}} \bigg\{\min_k~ M\log_2\bigg[ 1 + \frac{P}{M} d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}) \bigg] \bigg\}{\nonumber\\}&=\mathbb{E}_{\tilde{\mathbf{H}}} \bigg\{ M\log_2\bigg[ 1 + \frac{P}{M}\Big[ \min_k d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}) \Big]\bigg] \bigg\}{\nonumber\\}&\stackrel{(a)}{\le} M\log_2\bigg[ 1 + \frac{P}{M} \mathbb{E}_{\tilde{\mathbf{H}}}\left[ \min_k~ d_c^2(\tilde{\mathbf{H}}_{k,2}, \tilde{\mathbf{H}}_{k,3}) \right] \bigg] {\nonumber\\}&\stackrel{(b)}{\le} M\log_2\left( 1 + \frac{P}{M} D \right){\nonumber}\end{aligned}$$ where inequality $(a)$ follows from Jensen's inequality (concavity of $\log_2$), and inequality $(b)$ follows from . \[theorem:OIA\_infinite\_K\] When the transmit power is fixed and the number of users goes to infinity, i.e., $K\to\infty$, the achievable rate at the selected user becomes the ergodic capacity of the $M\times M$ point-to-point MIMO system without interference. When the transmit power is fixed, the rate loss term vanishes as the number of users goes to infinity. This follows from Lemma \[lemma:capacity\_loss\_bound\] together with $\lim_{K\to\infty} D = 0$.
Thus, when the number of users increases and the transmit power is fixed, the achievable rate using OIA2 becomes $$\begin{aligned} \lim_{K\to\infty \atop \textrm{Fixed~} P} \mathcal{R}_{[K]} &= \mathbb{E}\log_2 \bigg\vert \mathbf{I}_M + \frac{P}{M} \mathbf{F}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star}\mathbf{H}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star,1} \mathbf{H}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star,1}^{*}\mathbf{F}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star}^{*}\bigg\vert{\nonumber\\}&= \mathbb{E}\log_2 \bigg\vert \mathbf{I}_M + \frac{P}{M} \hat{\mathbf{H}} \hat{\mathbf{H}}^{*}\bigg\vert, \label{eqn:MM_MIMO}\end{aligned}$$ where $\hat{\mathbf{H}} \triangleq \mathbf{F}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star} \mathbf{H}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star,1}$ becomes an $M\times M$ matrix whose elements are i.i.d. Gaussian random variables with zero mean and unit variance. This is because $\mathbf{F}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star} \in \mathbb{C}^{M\times N_R}$ is a semi-unitary matrix chosen independently of $\mathbf{H}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star,1}$ such that $\mathbf{F}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star} \mathbf{F}_{k_{\mathsf{\scriptscriptstyle OIA2}}^\star}^{*}= \mathbf{I}_M$. The result in implies that, as the number of users goes to infinity, each transmitter achieves the same ergodic rate as the ergodic capacity of an interference-free $M\times M$ point-to-point MIMO system. The proof for the OIA1 case follows similarly. In Lemma \[lemma:M-alphaM\], we showed that the achievable DoF of OIA is zero when the number of users is fixed. Theorem 1 implies that the achievable rate of the proposed OIA schemes becomes the same as that of an $M\times M$ point-to-point MIMO system when the number of users increases under a fixed power.
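The limiting rate in can be estimated numerically. The following sketch (not the paper's simulation code; a minimal Monte Carlo assuming i.i.d. $\mathcal{CN}(0,1)$ entries for $\hat{\mathbf{H}}$) estimates $\mathbb{E}\log_2\vert \mathbf{I}_M + \frac{P}{M}\hat{\mathbf{H}}\hat{\mathbf{H}}^{*}\vert$:

```python
import numpy as np

def ergodic_capacity(M, P, trials=2000, seed=0):
    """Monte Carlo estimate of E log2|I_M + (P/M) H H^*| for an M x M channel
    H with i.i.d. CN(0,1) entries (the limiting rate in the theorem above)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
        total += np.log2(np.linalg.det(np.eye(M) + (P / M) * H @ H.conj().T).real)
    return total / trials
```

For example, `ergodic_capacity(2, 10.0)` gives an estimate of the interference-free $2\times 2$ ergodic capacity at 10 dB SNR.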
Based on these results, we can conjecture that the achievable DoF of the proposed OIA schemes lies in $[0,M]$ if the number of users is sufficiently large, i.e., $$\begin{aligned} \lim_{P\to\infty} \left[ \lim_{K\to\infty} \frac{\mathcal{R}_{[K]}}{\log_2 P} \right] \in [0, M]. {\nonumber}\end{aligned}$$ The relative growth rates of $K$ and $P$ determine the achievable DoF, and Theorem 2 establishes the relationship between the achievable DoF and the required number of users. \[theorem:required\_K\] At each transmitter, a DoF of $m\in[0,M]$ is obtained when the number of users is scaled as $$\begin{aligned} K\propto P^{mM}.\nonumber\end{aligned}$$ Because the achievable DoF using OIA is given by $M - {\lim_{P\to\infty}\frac{\mathcal{R}_{[K]}^-}{\log_2 P}}$, the equivalent condition for a DoF of $m$ is $$\begin{aligned} {\lim_{P\to\infty}\frac{\mathcal{R}_{[K]}^-}{\log_2 P}} = M-m. \label{eqn:DoF_M-m}\end{aligned}$$ Using the upper bound given in , we obtain a sufficient scaling for such that $$\begin{aligned} {\lim_{P\to\infty}\frac{M\log_2\left(1+\frac{P}{M} D \right)}{\log_2 P}}=M-m.\end{aligned}$$ Substituting into the above equation, we obtain the required user scaling $K \propto P^{mM}$ to achieve a DoF of $m$ at each transmitter. In Theorem \[theorem:OIA\_infinite\_K\], we showed that each transmitter and its selected user communicate as in an interference-free $M\times M$ MIMO system as the number of users goes to infinity at fixed SNR. Theorem \[theorem:required\_K\] implies that the transmitter can asymptotically achieve the same rate as the capacity of an interference-free $M\times M$ MIMO system in the high-SNR regime when the number of users is scaled as $K \propto P^{M^2}$. In Fig. \[fig:user\_scaled\_alpha10\], the achievable rate per transmitter with the OIA2 scheme is plotted for $(N_T,M,N_R)=(1,1,2)$. With the user scaling $K \propto P^{M^2}$, the achievable DoF is maintained at $M$, as predicted by Theorem \[theorem:required\_K\].
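The scaling in Theorem \[theorem:required\_K\] can be illustrated numerically. The sketch below assumes the distortion decays as $D \approx K^{-1/M^2}$ (a hypothetical scaling consistent with the theorem; the constant is immaterial for the DoF) and checks that $K = P^{mM}$ drives the rate-loss pre-log toward $M-m$, i.e., the achievable DoF toward $m$:

```python
import numpy as np

def rate_loss_prelog(P, M=2, m=1):
    """Pre-log of the rate-loss bound M log2(1 + (P/M) D) when K = P^(mM),
    under the assumed distortion scaling D ~ K^(-1/M^2)."""
    K = P ** (m * M)
    D = K ** (-1.0 / M**2)           # hypothetical distortion scaling
    return M * np.log2(1.0 + (P / M) * D) / np.log2(P)
```

As $P$ grows, `rate_loss_prelog(P)` approaches $M - m$ (here $1$), so the achievable DoF $M$ minus the pre-log approaches $m$; convergence is slow, as is typical for pre-log limits.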
Comparison with Conventional Opportunistic User Selection ========================================================= In this section, we compare the proposed OIA schemes with conventional user selection schemes in terms of computational complexity and achievable rate. Maximum SNR User Selection (MAX-SNR) ------------------------------------ First, we consider the maximum SNR user selection scheme (MAX-SNR). In this scheme, each user maximizes the achievable rate while ignoring the effects of the interfering channels. At the $k$th user, the postprocessing matrix is designed as $$\begin{aligned} \mathbf{F}_k^{\mathsf{\scriptscriptstyle SNR}}&\triangleq {\underset{\mathbf{F}_k}{\arg\max}}~ \log_2 \left\vert \mathbf{I}_M + \frac{P}{M}\mathbf{F}_k \mathbf{H}_{k,1} \mathbf{H}_{k,1}^{*}\mathbf{F}_k^{*}\right\vert, \label{eqn:postprocessing_SNR}\end{aligned}$$ and thus $\mathbf{F}_{k}^{\mathsf{\scriptscriptstyle SNR}}=\left[\mathbf{v}_{1} (\mathbf{A}_k),\ldots, \mathbf{v}_{M}(\mathbf{A}_k)\right]^{*}$, where $\mathbf{A}_k = \mathbf{H}_{k,1} \mathbf{H}_{k,1}^{*}$. The corresponding achievable rate at the $k$th user is $\log_2 \prod_{m=1}^{M}\left( 1+\frac{P}{M} \lambda_m( \mathbf{H}_{k,1} \mathbf{H}_{k,1}^{*})\right)$. Thus, the feedback information from the $k$th user is $\prod_{m=1}^{M}\left( 1+\frac{P}{M} \lambda_m( \mathbf{H}_{k,1} \mathbf{H}_{k,1}^{*})\right)$, and the index of the selected user, denoted by $k_{\mathsf{\scriptscriptstyle SNR}}^\star$, is given by $$\begin{aligned} k_{\mathsf{\scriptscriptstyle SNR}}^\star = {\underset{k}{\arg\max}} ~ \prod_{m=1}^{M} \left( 1+\frac{P}{M} \lambda_m (\mathbf{H}_{k,1} \mathbf{H}_{k,1}^{*})\right).\end{aligned}$$ Time Division Multiplexing -------------------------- In this subsection, we consider two time division multiplexing schemes. In the first time division multiplexing scheme (TDM1), only one of the three transmitters serves its selected user at any time instant.
Therefore, the selected user does not receive any interference from the other transmitters. Each user finds the postprocessing matrix that maximizes the achievable rate, so the postprocessing matrix at each user is the same as that of the MAX-SNR scheme given in . Also, the feedback information from each user and the user selection criterion are exactly the same as those of the MAX-SNR scheme. Because only one selected user is served at a time in the TDM approach, the achievable DoF per transmitter is $M/3$. We also consider another time division multiplexing scheme (TDM2) in which two of the three transmitters serve their selected users. Since each user has $2M$ antennas, the three transmitters can obtain $2M$ DoF for each channel realization, i.e., each transmitter can achieve $2M/3$ DoF. In TDM2, each transmitter selects the user with the minimum rate loss term. When the first and the second transmitters transmit simultaneously, the rate loss term of the $k$th user of the first transmitter is minimized as $$\begin{aligned} \min_{\mathbf{F}_k}\log_2\left\vert \mathbf{I}_M + \frac{P}{M} \mathbf{F}_k \mathbf{H}_{k,2} \mathbf{H} _{k,2} ^{*}\mathbf{F}_k ^{*}\right\vert =\log_2\prod_{m=M+1}^{2M}\left(1 + \frac{P}{M} \lambda_{m} \big( \mathbf{H}_{k,2} \mathbf{H} _{k,2} ^{*}\big) \right).{\nonumber}\end{aligned}$$ Therefore, the required feedback information at the $k$th user is the eigenvalue product on the right-hand side, and the selected user at the first transmitter, denoted by $k_{{\mathsf{\scriptscriptstyle TDM2}}}^\star$, is given by $$\begin{aligned} k_{{\mathsf{\scriptscriptstyle TDM2}}}^\star &= {\underset{k}{\arg\min}} \prod_{m=M+1}^{2M}\bigg(1 + \frac{P}{M} \lambda_{m} \left(\mathbf{H}_{k,2} \mathbf{H} _{k,2} ^{*}\right) \bigg).{\nonumber}\end{aligned}$$ Complexity Analysis {#subsection:complexities} ------------------- In this subsection, the computational complexity of each scheme is measured by the number of floating point operations (flops) [@GL1989 Chap. 1].
An addition, multiplication, or division of real numbers is counted as one flop, so a complex addition and a complex multiplication are counted as two flops and six flops, respectively. For an $m\times n$ complex matrix $\mathbf{G}\in\mathbb{C}^{m\times n}$ $(m\ge n)$, the flops required for several matrix operations are summarized in Table \[tab:complexity\], where the operation $\otimes$ is defined as $\mathbf{G}\otimes\mathbf{G} = \mathbf{G}\mathbf{G}^{*}$. In the MAX-SNR scheme, each user requires one $\otimes$ operation, a single SVD, $2M$ real additions, and $M$ real multiplications to find the feedback information. Correspondingly, the total computational complexity becomes $K\times(8N_RM^2-2N_RM)+K\times (24N_R^3 + 48N_R^3 + 54N_R^3) + K \times 3M = K \times (128N_R^3-N_R^2+\tfrac{3}{2}N_R)$ flops. In the OIA1 scheme, two $\otimes$ operations, two matrix scalings, a single matrix addition, a single SVD, $2M$ real additions, and $M$ real multiplications are required at each user to find the feedback information, so the total computational complexity becomes $K\times 2 (8N_RM^2-2N_RM) + K\times 2N_R^2 + K\times 2N_R^2 + K\times (24N_R^3+48N_R^3+54N_R^3) + K \times 3M= K\times (130N_R^3 +3N_R^2 +\tfrac{3}{2}N_R)$ flops. Note that the postprocessing matrix must be calculated to obtain the feedback information in both the MAX-SNR and OIA1 schemes. On the other hand, the OIA2 scheme requires two Gram-Schmidt orthogonalizations, two $\otimes$ operations, one matrix addition, and a single $\Vert\cdot\Vert_F$ operation to construct the feedback information. The selected user needs $130N_R^3 + 3N_R^2$ additional flops to find the postprocessing matrix. Therefore, the total complexity of the OIA2 scheme becomes $K \times 4 (8N_RM^2-2N_RM) + K\times 2N_R^2 + K\times 4N_R^2 + (130N_R^3 + 3N_R^2) = K \times (8N_R^3+2N_R^2) + (130N_R^3 + 3N_R^2)$. The computational complexities of the various schemes are summarized in Table \[tab:comparison\].
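The flop totals above can be tabulated as simple functions of $K$ and $N_R$; a quick sketch confirms the asymptotic ratios in Table \[tab:comparison\] ($8/130 \approx 6.15\%$ for OIA2 vs. OIA1, and $128/130 \approx 98.4\%$ for MAX-SNR vs. OIA1):

```python
def flops_max_snr(K, NR):
    """Total flops of the MAX-SNR scheme."""
    return K * (128 * NR**3 - NR**2 + 1.5 * NR)

def flops_oia1(K, NR):
    """Total flops of the OIA1 scheme."""
    return K * (130 * NR**3 + 3 * NR**2 + 1.5 * NR)

def flops_oia2(K, NR):
    """Total flops of the OIA2 scheme; the selected user's extra work
    (finding the postprocessing matrix) does not scale with K."""
    return K * (8 * NR**3 + 2 * NR**2) + (130 * NR**3 + 3 * NR**2)
```

For large $K$ and $N_R$, `flops_oia2(K, NR) / flops_oia1(K, NR)` approaches $8/130$, the 6.15% figure quoted below.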
When $N_R=4$, the required computational complexities as a function of the number of users are plotted in Fig. \[fig:complexity\]. We can observe that the complexity of the OIA2 scheme is about 6.15% of the OIA1 scheme’s complexity when the number of receive antennas, $N_R$, and the number of users, $K$, are sufficiently large. Performance Comparison ---------------------- In Fig. \[fig:user50\_M1\_alpha10\_new\], the proposed OIA schemes are compared with other user selection schemes in terms of achievable rate per transmitter when $(N_T,M,N_R)=(1,1,2)$ and $K=50$. In this case, the optimal user selection scheme is to maximize the capacity based on . The proposed OIA schemes significantly outperform the conventional MAX-SNR scheme in the high SNR region, and the proposed OIA2 scheme achieves a rate similar to that of the OIA1 scheme while requiring much less computational complexity. For a finite number of users, the achievable rates of the optimal, OIA1, and OIA2 schemes saturate in the high SNR region. On the other hand, the TDM1 and TDM2 schemes achieve DoFs of $1/3$ and $2/3$, respectively, and outperform the OIA schemes above 50 dB and 30 dB SNR, respectively. To evaluate the practical gains of the proposed OIA schemes, Fig. \[fig:user50\_M1\_alpha10\_new\] also compares the achievable rate of the proposed OIA scheme with those of two well-known IA techniques – Gomadam’s MAX-SINR scheme [@GCJ2011] and the alternating minimization scheme [@PH2009]. The antenna configuration $(N_T,M,N_R)=(2,1,2)$ is used for both Gomadam’s MAX-SINR scheme and the alternating minimization scheme because a DoF of 1 cannot be achieved under the antenna configuration $(N_T,M,N_R)=(1,1,2)$ used in our system model.
In Gomadam’s MAX-SINR scheme [@GCJ2011], the precoding and postprocessing matrices are iteratively optimized assuming reciprocity of the uplink and downlink channels. In the alternating minimization scheme [@PH2009], perfect CSIT is assumed and information sharing among the transmitters is allowed. However, it should be noted that our OIA schemes require neither perfect CSIT nor transmitter cooperation, contrary to [@GCJ2011] and [@PH2009]. Conclusion ========== We have interpreted opportunistic interference management from an IA perspective, proposed a novel opportunistic interference alignment (OIA) scheme, and analyzed its achievable DoF and user scaling law in a three-transmitter $M\times 2M$ MIMO IC. The proposed OIA schemes have been shown to achieve $M$ DoF per transmitter by opportunistically selecting the user whose received interference signals are most aligned with each other. Thus, in the high SNR region (i.e., from a DoF perspective), each transmitter should select the user whose associated interfering channels are aligned as much as possible. Contrary to conventional IA, which is known to achieve $2M/3$ DoF per user in a three-transmitter $M \times 2M$ MIMO IC, the proposed OIA schemes do not sacrifice spatial dimensions in aligning interference signals and secure the full spatial DoF by exploiting the user dimensionality. Furthermore, the proposed OIA schemes do not require global channel knowledge at the transmitters but need only a scalar feedback value from each user for user selection. We have also proved that the full DoF of $M$ can be achieved when the number of users grows at an appropriate scale. Finally, we have compared our proposed scheme with conventional schemes. Our proposed OIA schemes have been shown to have advantages over conventional user selection schemes for interference mitigation in terms of both computational complexity and achievable rate. [10]{} V. R. Cadambe and S. A.
Jafar, “Interference alignment and degrees of freedom of the [$K$]{}-user interference channel,” *[IEEE]{} Trans. Inf. Theory*, vol. 54, no. 8, pp. 3425–3441, Aug. 2008. R. Etkin, D. Tse, and H. Wang, “Gaussian interference channel capacity to within one bit,” *[IEEE]{} Trans. Inf. Theory*, vol. 54, no. 12, pp. 5534–5562, Dec. 2008. S. A. Jafar and S. Shamai, “Degrees of freedom region for the [MIMO]{} [X]{} channel,” *[IEEE]{} Trans. Inf. Theory*, vol. 54, no. 1, pp. 151–170, Jan. 2008. T. Gou and S. A. Jafar, “Degrees of freedom of the [$K$]{} user [$M\times N$]{} [MIMO]{} interference channel,” *[IEEE]{} Trans. Inf. Theory*, vol. 56, no. 12, pp. 6040–6057, Dec. 2010. S. A. Jafar, “Blind interference alignment,” *IEEE J. Sel. Topics Signal Process.*, vol. 6, no. 3, pp. 216–227, June 2012. R. T. Krishnamachari and M. K. Varanasi, “Interference alignment under limited feedback for [MIMO]{} interference channels,” \[Online\]. Available: http://arxiv.org/abs/0911.5509 K. Gomadam, V. R. Cadambe, and S. A. Jafar, “A distributed numerical approach to interference alignment and applications to wireless interference networks,” *IEEE Trans. Inf. Theory*, vol. 57, no. 6, pp. 3309–3322, June 2011. S. W. Peters and R. W. Heath, Jr., “Cooperative algorithms for MIMO interference channels,” *IEEE Trans. Veh. Technol.*, vol. 60, no. 1, pp. 206–218, Jan. 2011. K. Gomadam, V. R. Cadambe, and S. A. Jafar, “Approaching the capacity of wireless networks through distributed interference alignment,” in *Proc. of [IEEE]{} Global Telecommunications Conference*, pp. 1–6, Dec. 2008. C. Suh and D. Tse, “Interference alignment for cellular networks,” in *Proc. of Allerton Conference on Communication, Control, and Computing*, pp. 1037–1044, Sep. 2008. S. M. Perlaza, N. Fawaz, S. Lasaulce, and M. Debbah, “From spectrum pooling to space pooling: opportunistic interference alignment in [MIMO]{} cognitive networks,” *[IEEE]{} Trans. Signal Process.*, vol. 58, no. 7, pp. 3728–3741, 2010. J.
Thukral and H. B[ö]{}lcskei, “Interference alignment with limited feedback,” in *Proc. of IEEE Int. Symposium on Inf. Theory*, 2009. O. E. Ayach and R. W. Heath, Jr., “Interference alignment with analog channel state feedback,” *[IEEE]{} Trans. Wireless Commun.*, vol. 11, no. 2, pp. 626–636, Feb. 2012. B. Nosrat-Makouei, J. G. Andrews, and R. W. Heath, Jr., “MIMO interference alignment over correlated channels with imperfect CSI,” *[IEEE]{} Trans. Signal Process.*, vol. 59, no. 6, pp. 2783–2794, June 2011. M. Sharif and B. Hassibi, “On the capacity of MIMO BC channel with partial side information,” *IEEE Trans. Inf. Theory*, vol. 51, no. 2, pp. 506–522, Feb. 2005. T. Yoo and A. Goldsmith, “On the optimality of multiantenna broadcast scheduling using zero-forcing beamforming,” *IEEE J. Sel. Areas Commun.*, vol. 24, no. 3, pp. 528–541, Mar. 2006. W. Choi and J. G. Andrews, “The capacity gain from intercell scheduling in multi-antenna systems,” *IEEE Trans. Wireless Commun.*, vol. 7, no. 2, pp. 714–725, Feb. 2008. A. Razi, D. J. Ryan, I. B. Collings, and J. Yuan, “Sum rates, rate allocation, and user scheduling for multi-user MIMO vector perturbation precoding,” *IEEE Trans. Wireless Commun.*, vol. 9, no. 1, pp. 356–365, Jan. 2010. H. Ning, M. Estela, C. Ling, and K. K. Leung, “Opportunistic interference alignment for $K$-user interference networks,” \[Online\]. Available: http://arxiv.org/abs/1009.5121 C. Shen and M. P. Fitz, “Opportunistic spatial orthogonalization and its application in fading cognitive radio networks,” *IEEE J. Sel. Areas Commun.*, vol. 5, no. 1, pp. 182–189, Feb. 2011. J. He and M. Salehi, “Low-complexity coordinated interference-aware beamforming for [MIMO]{} broadcast channels,” in *Proc. of IEEE Veh. Technol. Conf.*, pp. 685–689, Sep. 2007. M. Kountouris, D. Gesbert, and T. S[ä]{}lzer, “Enhanced multiuser random beamforming: Dealing with the not so large number of users case,” *[IEEE]{} J. Sel. Areas Commun.*, vol. 26, no. 8, pp.
1536–1545, Oct. 2008. J. H. Lee and W. Choi, “Opportunistic interference aligned user selection in multi-user [MIMO]{} interference channels,” in *Proc. of [IEEE]{} Global Telecommunications Conference*, Dec. 2010. J. H. Lee and W. Choi, “Interference alignment by opportunistic user selection in 3-user MIMO interference channels,” in *Proc. of [IEEE]{} Intl. Conf. on Commun.*, Kyoto, Japan, June 2011. G. Scutari, D. P. Palomar, and S. Barbarossa, “Competitive design of multiuser MIMO systems based on game theory: a unified view,” *IEEE J. Sel. Areas Commun.*, vol. 26, no. 7, pp. 1089–1103, Sep. 2008. L. Zheng and D. N. C. Tse, “Communication on the [G]{}rassmann manifold: a geometric approach to the noncoherent multiple-antenna channel,” *[IEEE]{} Trans. Inf. Theory*, vol. 48, no. 2, pp. 359–383, Feb. 2002. D. J. Love, R. W. Heath, Jr., and T. Strohmer, “Grassmannian beamforming for multiple-input multiple-output wireless systems,” *[IEEE]{} Trans. Inf. Theory*, vol. 49, no. 10, pp. 2735–2747, Oct. 2003. W. Dai, Y. Liu, and B. Rider, “Quantization bounds on [Grassmann]{} manifolds and applications to [MIMO]{} systems,” *[IEEE]{} Trans. Inf. Theory*, vol. 54, no. 3, pp. 1108–1123, Mar. 2008. W. Dai, Y. Liu, B. Rider, and V. K. N. Lau, “On the information rate of [MIMO]{} systems with finite rate channel state feedback using beamforming and power on/off strategy,” *[IEEE]{} Trans. Inf. Theory*, vol. 55, no. 11, pp. 5032–5047, Nov. 2009. N. Ravindran and N. Jindal, “Limited feedback-based block diagonalization for the [MIMO]{} broadcast channel,” *[IEEE]{} J. Sel. Areas Commun.*, vol. 26, no. 8, pp. 1473–1482, Oct. 2008. G. H. Golub and C. F. Van Loan, *Matrix Computations*. Johns Hopkins University Press, 1996. S. W. Peters and R. W. Heath, Jr., “Interference alignment via alternating minimization,” in *Proc. of IEEE Int. Conf. Acoustics, Speech Signal Processing*, Taipei, Taiwan, Apr. 2009, pp. 2445–2448. ![System model.
Each transmitter selects one user from its group. []{data-label="fig:system_model"}](3TX_IA_N.eps "fig:"){width=".5\columnwidth"}\ ![The achievable rate per transmitter using various schemes for $K=10, 50$ when $(N_T,M,N_R)=(2,2,4)$.[]{data-label="fig:varying_K"}](M2K1050_new.eps "fig:"){width=".60\columnwidth"}\ ![The achievable rate per transmitter of the OIA2 scheme with scaling $K\propto P$ when $(N_T,M,N_R)=(1,1,2)$.[]{data-label="fig:user_scaled_alpha10"}](NT1_M1_NR2_scaledK.eps "fig:"){width=".60\columnwidth"}\

  Operation                                                Complexity (flops)
  -------------------------------------------------------- -----------------------
  $\alpha\mathbf{G}$, $\mathbf{G}+\mathbf{G}$              $2mn$
  $\Vert\mathbf{G}\Vert_F$                                 $4mn$
  $\mathbf{G}\otimes\mathbf{G}=\mathbf{G}\mathbf{G}^{*}$   $8mn^2-2mn$
  Gram-Schmidt Ortho.                                      $8mn^2-2mn$
  Singular Value Decomp.                                   $24m^2n+48mn^2+54n^3$

  : The complexity of various operations for $\mathbf{G}\in\mathbb{C}^{m\times n}$

\[tab:complexity\]

  Scheme    Complexity                                         $\underset{(K, N_R\to \infty)}{\textrm{Ratio}}$
  --------- -------------------------------------------------- -------------------------------------------------
  MAX-SNR   $K \times (128N_R^3-N_R^2+\tfrac{3}{2}N_R)$        98.4%
  OIA1      $K\times (130N_R^3 +3N_R^2 +\tfrac{3}{2}N_R)$      100%
  OIA2      $K \times (8N_R^3+2N_R^2) + (130N_R^3 + 3N_R^2)$   6.15%

  : The complexity of various schemes

\[tab:comparison\]

![Complexities of various user selection schemes according to the number of users $K$ when $N_R=4$.[]{data-label="fig:complexity"}](computaional_complexities_new.eps "fig:"){width=".60\columnwidth"}\ ![The achievable rate per transmitter of various user selection schemes when $(N_T,M,N_R)=(1,1,2)$ and $K=50$.[]{data-label="fig:user50_M1_alpha10_new"}](NT1_M1_NR2_K50_newnew.eps "fig:"){width=".60\columnwidth"}\ [^1]: J. H. Lee and W. Choi are with Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 305-701, Korea (e-mail: tantheta@kaist.ac.kr, wchoi@ee.kaist.ac.kr).
[^2]: This decomposition is sometimes referred to as the compact SVD or thin SVD.
--- abstract: | Privacy has been frequently identified as a main concern for system developers while dealing with/managing personal information. Despite this, most existing work on privacy requirements deals with them as a special case of security requirements. Therefore, key aspects of privacy are usually overlooked. In this context, wrong design decisions might be made due to an insufficient understanding of privacy concerns. In this paper, we address this problem with a systematic literature review whose main purpose is to identify the main concepts/relations for capturing privacy requirements. In addition, the identified concepts/relations are further analyzed to propose a novel privacy ontology to be used by software engineers when dealing with privacy requirements. Keywords {#keywords .unnumbered} ======== Privacy Ontology, Privacy Requirements, Privacy by Design (PbD), Requirements Engineering author: - Mohamad Gharib - Paolo Giorgini - John Mylopoulos title: 'Ontologies for Privacy Requirements Engineering: A Systematic Literature Review' --- Introduction ============ Increasing numbers of today’s systems deal with personal information (e.g., information about citizens, customers, etc.), where such information is protected by privacy laws [@gharibre2016]. Therefore, privacy has become a main concern for system designers. In other words, dealing with privacy-related concerns is a must these days because privacy breaches may result in huge costs as well as long-term consequences [@acquisti2006there; @gellman2002privacy; @hong2004privacy; @camp2002designing; @campbell2003economic]. Privacy breaches might be due to a lack of appropriate security policies, bad security practices, attacks, data thefts, etc. [@acquisti2006there; @labda2014modeling].
However, most of these breaches could be avoided if the privacy requirements of the system-to-be were captured properly during system design (e.g., Privacy by Design (PbD)) [@cavoukian2009privacy; @cavoukian2011privacy; @labda2014modeling], where privacy requirements aim to capture the types and levels of protection necessary to meet the privacy needs of the users. Nevertheless, only a few works have focused on considering privacy during system design [@Gurses2011]. More specifically, most existing work on privacy requirements deals with them either as non-functional requirements (NFRs) with no specific criteria on how such requirements can be met [@anton2002analyzing; @yu2002designing; @mouratidis2007secure], or as a part of security [@zannone2006requirements; @kalloniatis2008addressing], i.e., focusing mainly on confidentiality and overlooking important privacy aspects such as anonymity, pseudonymity, unlinkability, unobservability, etc. On the other hand, privacy is an elusive and vague concept [@solove2002conceptualizing; @solove2006taxonomy; @kalloniatis2008addressing]. Although several efforts have been made to clarify the privacy concept by linking it to more refined concepts such as secrecy, personhood, control of personal information, etc., there is no consensus on the definition of these concepts or on which of them should be used to analyze privacy [@solove2006taxonomy]. This has resulted in considerable confusion among designers and stakeholders, which, in turn, has led to wrong design decisions. In this context, a well-defined privacy ontology that captures privacy-related concepts along with their interrelations would constitute a great step forward in designing privacy-aware systems.
Ontologies have been proven to be a key success factor for eliciting high-quality requirements, and they can facilitate and improve the job of requirements engineers [@souag2012towards; @kaiya2006using; @dzung2009ontology], since they reduce conceptual vagueness and terminological confusion by providing a shared understanding of the related concepts between designers and stakeholders [@uschold1996ontologies]. In addition, the ontology should capture privacy requirements in their social and organizational context, since most complex systems these days (e.g., healthcare systems, smart cities, etc.) are socio-technical systems [@emery1960socio], which consist not only of technical components but also of humans along with their interrelations. Focusing on the technical aspects and leaving the social and organizational aspects outside the system’s boundary leaves the system open to different kinds of vulnerabilities that might manifest themselves in the social interactions and/or the organizational structure of the system [@liu2003security; @gharibre2016]. The Flash Crash [@sommerville2012large] and the Allied Irish Bank scandal [@massacci2008detecting] are good examples, where the problems were not caused by mere technical failures, but also by several socio-technical vulnerabilities of the systems. This paper applies systematic review techniques to survey the available literature and identify the most mature studies that propose privacy ontologies/concepts. In addition, we further analyze the selected privacy-related concepts/relations to identify the main ones in order to propose a novel ontology that can be used to capture privacy requirements. This paper is therefore intended to be a starting point for addressing the problem of identifying a core privacy ontology.
The rest of the paper is organized as follows: Section (2) describes the review process and the protocol underlying this systematic review. We present and discuss the review results and findings in Section (3). In Section (4) we propose a novel ontology for privacy requirements engineering. We discuss threats to validity in Section (5). Related work is presented in Section (6), and we conclude and discuss future work in Section (7). Review Process ============== A systematic review is a systematic process for defining research questions, searching the literature for the best available resources to answer such questions, and collecting the available data from these resources to answer the research questions. Following [@kitchenham2004procedures; @keele2007guidelines], the review process (depicted in Figure \[fig:plan\]) consists of three main phases: 1. Planning the review, in which we formulate the research questions and define the review protocol. 2. Conducting the review, in which we conduct the search process after identifying the search terms and the literature sources, and then perform the study selection activity. 3. Reporting the results of the review, in which we collect detailed information from the selected studies and use the obtained data to answer the research questions, as discussed in the following section. Planning the review ------------------- This phase is very important for the success of the review, for it is here that we define the research objectives and the way in which the review will be carried out. This includes two main activities: (1) formulating the research questions that the systematic review will answer; and (2) defining the review protocol that specifies the main procedures to be followed during the review.
### Research questions Formulating the review questions is a critical activity since these questions drive the entire systematic review methodology [@kitchenham2004procedures]. Therefore, we formulate the following four Research Questions (RQs) to identify the main privacy concepts that have been presented in the literature: RQ1 : What are the privacy concepts/relations that have been used to capture privacy concerns? RQ2 : What are the main concepts/relations that have been used for capturing privacy requirements? RQ3 : Do existing privacy studies cover the main privacy concepts/relations? RQ4 : What are the limitations of existing privacy studies? ### Define the review protocol The review protocol specifies the methods to be followed while conducting the systematic review. Based on [@kitchenham2004procedures; @keele2007guidelines], a review protocol should specify the following: the search strategy that will be used to select primary studies; the study selection criteria; the study quality assessment criteria; and the data extraction and dissemination strategies. In the rest of this section, we discuss how we specify and perform each of these activities. Conducting the review --------------------- This phase is composed of two main activities: 1- the search strategy; and 2- study selection, each of which is composed of several sub-activities. In what follows, we discuss them. ### Search strategy The search strategy aims to find as many studies relating to the research questions as possible using an objective and repeatable procedure [@kitchenham2004procedures]. The search activity consists of three main sub-activities: 1- identify the search terms, 2- identify the literature resources, and 3- conduct the search process. **Identify the search terms.** Following [@kitchenham2004procedures; @keele2007guidelines], we derived the main search terms from the research questions.
In particular, we used the Boolean AND to link the major terms, and the Boolean OR to incorporate alternative synonyms of such terms. The resulting search terms are: (Privacy AND (ontology OR ontologies OR taxonomy OR taxonomies)) OR (Privacy requirements). **Identify the literature resources.** Six electronic database resources were used to primarily extract data for this research. These include: IEEE Xplore, ACM Digital Library, Springer, ACM library, Google Scholar, and CiteSeerX. **Conduct the search process.** The search process (shown in Figure \[fig:process\]) consists of two main stages: Search stage 1. : We used the search terms to search the six electronic database sources, and only papers with relevant titles were selected; Search stage 2. : The reference lists of all primary selected papers were carefully checked, and several relevant papers (25 papers) were identified and added to the list of the primary selected papers. **Study selection.** The selection process (shown in Figure \[fig:process\]) consists of two main stages. Selection stage 1 (primary selection). : Searching the electronic database sources returned 240 relevant papers, among which we identified and removed 33 duplicate papers. Next, we applied the primary selection criteria to the remaining 207 papers. In particular, we read the abstract and introduction, and then skimmed through the rest of each paper. We removed all papers not published in English, and we excluded all papers not related to any of our research questions. Moreover, when we were able to identify multiple versions of the same paper, only the most complete one was included. Finally, we excluded any paper published before 1996, since we were not able to find any concrete work related to our research before that year. The primary selection inclusion and exclusion criteria are shown in Table \[table:inexcriteria\].
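As an illustration, the search string can be applied mechanically to candidate titles. The sketch below is a literal reading of the Boolean query above (the function name and matching rules are our own, not part of the review protocol):

```python
import re

def matches_query(text):
    """Literal reading of the review's search string:
    (Privacy AND (ontology OR ontologies OR taxonomy OR taxonomies))
    OR (Privacy requirements)."""
    t = text.lower()
    has_privacy = 'privacy' in t
    has_onto_or_tax = re.search(r'ontolog(y|ies)|taxonom(y|ies)', t) is not None
    return (has_privacy and has_onto_or_tax) or 'privacy requirements' in t
```

For example, a title mentioning both privacy and an ontology matches, while one mentioning only a security taxonomy does not.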
The outcome of this selection stage was 107 papers, i.e., we excluded 100 papers. Selection stage 2 (Quality Assessment (QA)). : At this stage, the QA criteria have been applied to the papers resulting from the first selection stage (107 papers) along with the papers resulting from the second search stage (25 papers), for a total of 132 papers. In order to identify the most relevant studies that can be used to answer our research questions, we formulated five QA questions (shown in Table \[table:qualityassessment\]) to evaluate the relevance, completeness, and quality of the studies, where each question has only two answers: Yes = 1 or No = 0. The quality score for each study is computed by summing the scores of its QA questions, and a paper is selected only if it scored at least 4. As a result, 98 papers were excluded and 34 studies were selected. The result of the QA of the studies is presented in Table \[table:Quality\] in Appendix A. \[table:inexcriteria\] \[table:qualityassessment\] Reporting the results --------------------- The final phase of the systematic review involves summarizing the results, and it consists of two main activities: 1- data synthesis; and 2- results and discussion. ### Data synthesis In what follows, we describe how the data synthesis was carried out: Data related to *RQ1* can be extracted directly from the list of selected papers (shown in Table \[table:selected\]). To answer *RQ2*, the contents of the 34 selected studies were further analyzed to identify privacy-related concepts along with their interrelations, and list them in a comprehensive table (Table \[table:rq\]). Moreover, we identify the main concepts/relations for capturing privacy requirements based on Table \[table:rq\] & Table \[table:iteration\], which shows how frequently each concept/relation appears in the selected studies.
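The selection arithmetic and the QA inclusion rule described above can be sketched as follows. The stage counts are taken from the text, while the example studies and their QA answers are purely hypothetical.

```python
# Stage counts reported in the review:
after_dedup = 240 - 33               # 207 papers after removing duplicates
after_primary = after_dedup - 100    # 107 papers pass the primary criteria
qa_candidates = after_primary + 25   # plus 25 papers from reference lists -> 132
selected = qa_candidates - 98        # 34 studies survive the QA stage

# QA inclusion rule: five yes/no questions (Yes = 1, No = 0), summed,
# with a study selected only if it scores at least 4.
QA_THRESHOLD = 4

def quality_score(answers):
    """Sum of the five binary QA answers for one study."""
    assert len(answers) == 5 and all(a in (0, 1) for a in answers)
    return sum(answers)

# Hypothetical QA answers for three candidate studies (not from the review):
qa_table = {
    "Study_A": [1, 1, 1, 1, 1],  # score 5 -> kept
    "Study_B": [1, 1, 0, 1, 1],  # score 4 -> kept
    "Study_C": [1, 0, 1, 0, 1],  # score 3 -> excluded
}
kept = [s for s, a in qa_table.items() if quality_score(a) >= QA_THRESHOLD]

print(after_dedup, after_primary, qa_candidates, selected)  # 207 107 132 34
print(kept)  # ['Study_A', 'Study_B']
```

Encoding the counts this way makes the pipeline easy to audit: each stage's output must equal the next stage's input.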
To answer *RQ3*, data can be derived from Table \[table:limitation\], which summarizes the percentage of the main concept/relation categories that each selected study covers. *RQ4* can be answered by categorizing the studies into four groups based on the concept categories they do not cover. Review results and discussion ============================= This section presents and discusses the findings of this review. We start by presenting an overview of the selected studies, and then we present the findings of this review concerning the research questions. **Overview of selected studies[^1].** 34 studies were selected: 5 studies were from book chapters, 10 papers were published in journals, 11 papers appeared in conference proceedings, 6 papers came from workshops, and 2 papers were extracted from symposiums. The number of papers by year of publication is presented in Figure \[fig:pubyear\], while the percentages of the selected studies based on their publishing type are represented in Figure \[fig:pie\]. ![Number of papers by year of publication[]{data-label="fig:pubyear"}](pubyear.eps){width="70.00000%"} **RQ1:** *What are the privacy concepts/relations that have been used to capture privacy concerns?* The review has identified 34 studies that provide concepts and relations that can be used for capturing privacy requirements. The list of the selected studies that answer our first research question (*RQ1*) is presented in Table \[table:selected\], where each paper is described by its identifier, title, author(s), publication year and number of citations. In what follows, we present a summary of the contributions of each selected study. \[table:selected\] **ACM\_03 [@van2004elaborating],** : “*Elaborating Security Requirements by Construction of Intentional Anti-Models*”. Van Lamsweerde [@van2004elaborating] proposed a goal-oriented approach that extends the KAOS framework for modeling and analyzing security requirements.
The framework focuses on generating and resolving obstacles/anti-goals to goal satisfaction, i.e., it addresses malicious obstacles/anti-goals (threats) set up by attackers to threaten security goals, and new security requirements are obtained as countermeasures to resolve these obstacles/anti-goals (threats). The framework adopts several main concepts from KAOS (e.g., agents, goals, etc.) and proposes concepts for building intentional threat models (e.g., obstacles, anti-goals, anti-requirements, attacker, etc.). **ACM\_14 [@labda2014modeling],** : “*Modeling of Privacy-aware Business Processes in BPMN to Protect Personal Data*”. Labda et al. [@labda2014modeling] propose a privacy-aware Business Processes (BP) framework for modeling, reasoning about, and enforcing privacy constraints. They have identified several privacy-related concepts, including: *Data*, *User*, *Action*, *Purpose*, and *Permissions*. In addition, they identify five concepts that can be used for analyzing privacy in BP: (1) *Access control*, (2) *Separation of Tasks (SoT)*, (3) *Binding of Tasks (BoT)*, (4) *User consent*, and (5) *Necessity to know (NtK)*. **ACM\_16 [@braghin2008introducing],** : “*Introducing Privacy in a Hospital Information System*”. Braghin et al. [@braghin2008introducing] presented an approach that supports expressing and enforcing privacy-related policies. The approach extends the conceptual model of an open source hospital information system (Care2x) with concepts for role-based privacy management (e.g., subject, processor, and controller), and concepts for supporting the privacy enforcement mechanisms (actions). Such actions can be either inactive or declarative: the former includes actions that require accessing and processing data, while the latter includes simple statements representing activities that do not require interaction with the system. **ACM\_35 [@singhal2010ontologies],** : “*Ontologies for Modeling Enterprise Level Security Metrics*”.
Singhal and Wijesekera [@singhal2010ontologies] provide a security ontology that supports IT security risk analysis. The ontology identifies which threats endanger which assets and what countermeasures can reduce the probability of the occurrence of a related attack. The concepts of the ontology include: a *threat*, a potential violation of security; an *attack*, which exploits vulnerabilities to realize a threat, where *vulnerabilities* are characteristics of target assets that make them prone to attack; and a *risk*, an expectation of loss expressed as the probability that a particular threat will exploit a certain vulnerability with a harmful result. Finally, *security mechanisms* are designed to prevent threats from happening or to mitigate their impact when they occur. **ACM\_40 [@wang2009ovm],** : “*OVM: An Ontology for Vulnerability Management*”. Wang and Guo [@wang2009ovm] propose an ontology for vulnerability management (OVM) that captures the fundamental concepts in information security and their relationships, retrieves vulnerable assets (data), and reasons about the cause and impact of such vulnerabilities. The ontology has been built based on the Common Vulnerabilities and Exposures (CVE), Common Weakness Enumeration (CWE), Common Platform Enumeration (CPE), and Common Attack Pattern Enumeration and Classification (CAPEC). The top-level concepts of the ontology include a *Vulnerability* existing in an *IT\_Product* that can be exploited by an *Attacker* through an *Attack* that compromises the *IT\_Product* and causes a *Consequence*. Moreover, *Countermeasures* can be used to protect the *IT\_Product* by mitigating the *Vulnerability*. **CIT\_07 [@velasco2009modelling],** : “*Modeling Reusable Security Requirements Based on an Ontology Framework*”. Velasco et al. [@velasco2009modelling] propose an ontology-based framework for representing and reusing security requirements based on risk analysis.
The framework is based on two ontologies: 1- the risk analysis ontology, developed based on MAGERIT [@magerit2006methodology], which identifies five types of risk elements: *asset*, *threat*, *safeguard*, *valuation dimension*, and *valuation criteria*; and 2- the requirements ontology, which models reusable requirements along with their relationships. **CIT\_33 [@liu2003security],** : “*Security and Privacy Requirements Analysis within a Social Setting*”. Liu et al. [@liu2003security] propose a framework for dealing with security and privacy requirements within an agent-oriented modeling framework. They extend the *i*\* modeling language to deal with security and privacy requirements; the *i*\* language allows for analyzing security/privacy issues within their social context, which enables a systematic way of deriving vulnerabilities and threats. Moreover, *i*\* models make it possible to conduct different countermeasure analyses for addressing vulnerabilities and suggesting countermeasures for them. **IEEE\_12 [@souag2013using],** : “*Using Security and Domain ontologies for Security Requirements Analysis*”. Souag et al. [@souag2013using] introduce an ontology-based method for discovering Security Requirements (SR). The process that underlies this method has three main steps, and it starts with the elicitation step, which constructs an initial *i*\* requirements model from the stakeholders’ needs/concerns about security. The second step is the SR analysis, which relies on production rules to exploit the security-specific ontology to discover threats, vulnerabilities, countermeasures, and resources. These concepts are used to enrich the requirements model by adding new elements (malicious tasks, vulnerability points, etc.). Finally, in the domain-specific SR analysis step, another set of rules explores the domain ontology to improve the requirements model with resources, actors and other concepts that are more specific to the domain at hand.
**IEEE\_15 [@tsoumas2006towards],** : “*Towards an Ontology-based Security Management*”. Tsoumas and Gritzalis [@tsoumas2006towards] introduce a security management framework built around a Security Ontology (SO), which contains the following concepts: a *stakeholder* possesses an *asset*, which in turn can be compromised through a *vulnerability*. A *threat* initiated by a *threat agent* targets an *asset* and exploits a *vulnerability* of the asset in order to achieve its goal. Exploitation of a *vulnerability* leads to the realization of an unwanted *incident*, which has a certain *impact*. Furthermore, *countermeasures* reduce the impact of the *threat* with the use of *controls*. Finally, a *security policy* formulates the *controls* into a manageable security framework possessed by *stakeholders*. **IEEE\_50 [@Giorgini2005],** : “*Modeling Security Requirements through Ownership, Permission and Delegation*”. Giorgini et al. [@Giorgini2005] introduce Secure Tropos, a formal framework for modeling and analyzing security requirements in their social and organizational context. Secure Tropos proposes several concepts, including: an *actor* that covers two concepts (a *role* and an *agent*), a *goal* that can be refined through and/or-decompositions of a root *goal* into finer *sub-goals*, a *task*, and a *resource*. Secure Tropos adopts the notion of *delegation* to model the transfer of objectives (*goals* and *tasks*) from one actor to another, and it adopts *resource provision* among actors. Moreover, it introduces the *ownership* concept that captures the relation between *actors* and the *resources* they own. Finally, it provides the *trust* concept to capture the *actors’* expectations in one another concerning their social dependencies, and it introduces the *monitoring* concept to compensate for the lack of trust/distrust among *actors* concerning social dependencies. **IEEE\_57 [@kang2013security],** : “*A Security Ontology with MDA for Software Development*”.
Kang and Liang [@kang2013security] propose a security ontology for software development based on the Model Driven Architecture (MDA) paradigm. The ontology includes the most popular security concerns mentioned in the literature, such as *auditing*, *threats*, *accountability*, *non-repudiation*, *risk*, *attacks*, *availability*, *frauds*, *confidentiality*, *asset*, *integrity*, *prevention*, and *reputation*. **SCH\_18 [@kang2013security],** : “*Eliciting Security Requirements with Misuse Cases*”. Sindre and Opdahl [@kang2013security] present a systematic approach to eliciting security requirements based on *use cases*. They extend the traditional *use case* approach to also consider *misuse cases*, which represent unwanted behavior in the system to be developed. In particular, a *use case* diagram contains both *use cases* and *actors*, as well as *misuse cases* and *misusers*. In addition, *misuse cases* adopt the ordinary *use case* relationships such as *include*, *extend*, and *generalize*. A *use case* is related to a *misuse case* using a directed *association*, which means that the *misuse case* *threatens* the *use case*. Moreover, a use case diagram can contain *security use cases*, which are special *use cases* that can *mitigate* *misuse cases*. In summary, ordinary *use cases* represent requirements, *security use cases* represent security requirements, and *misuse cases* represent security *threats*. **SCH\_24 [@kalloniatis2008addressing],** : “*Addressing Privacy Requirements in System Design. the PriS Method*”. Kalloniatis et al. [@kalloniatis2008addressing] introduce PriS, a security requirements engineering method that considers users’ privacy requirements. PriS considers privacy requirements as business goals and provides a methodological approach for analysing their effect on the organizational processes.
The conceptual model of PriS is based on the Enterprise Knowledge Development (EKD) framework [@loucopoulos1999enterprise], and it includes a set of concepts for modeling privacy requirements, such as *stakeholders* and *goals*; *goals* can be either *strategic goals* or *operational goals*, and they can be *realized* by *processes*. On the other hand, *privacy requirements* are a special type of *goals* (*privacy goals*), which constrain the causal transformation of organizational goals into processes. *Privacy goals* may be decomposed into simpler goals, or may *support* or *conflict* with the achievement of other *goals*. Moreover, eight types of *privacy goals* have been identified, corresponding to the eight privacy concerns, namely: authentication, authorisation, identification, data protection, anonymity, pseudonymity, unlinkability, and unobservability. **SCH\_28 [@mouratidis2007secure],** : “*Secure Tropos: a Security-oriented Extension of the Tropos Methodology*”. Mouratidis and Giorgini [@mouratidis2007secure] introduce extensions to the Tropos methodology [@bresciani2004tropos] to model security concerns throughout the whole development process. Secure Tropos adopts from the Tropos methodology concepts for modeling *actors*, *goals*, and *resources*, along with their different relations and social dependencies. In addition, it introduces concepts for modeling security requirements, such as a *security constraint* (e.g., privacy, integrity, and availability), which can be decomposed into one or more security sub-constraints. *Security constraint* modeling is divided into *security constraint delegation*, *security constraint assignment*, and *security constraint analysis*. Secure Tropos also introduces a *secure entity*, *security features*, *security mechanisms*, a *secure capability*, a *secure dependency*, and the *threat* concept. **SCH\_41 [@solove2006taxonomy],** : “*A Taxonomy of Privacy*”.
Solove [@solove2006taxonomy] provides a taxonomy for understanding a wide range of privacy-related problems. The taxonomy specifies four main groups of possible harmful activities: **(i) information collection**: creates disruption based on the process of data gathering. Two sub-classifications of information collection have been identified: *surveillance* and *interrogation*. **(ii) information processing**: refers to the use, storage, and manipulation of data that has been collected. Five sub-classifications of information processing have been identified: *aggregation*, *identification*, *insecurity*, *secondary use*, and *exclusion*. **(iii) information dissemination**: in which the data holders transfer the information to others. Seven sub-classifications of information dissemination have been identified: *breach of confidentiality*, *disclosure*, *exposure*, *increased accessibility*, *blackmail*, *appropriation*, and *distortion*. **(iv) invasion**: involves impingements directly on the individual. Two sub-classifications of invasion have been identified: *intrusion* and *decisional interference*. **Spgr\_07 [@massacci2011extended],** : “*An Extended Ontology for Security Requirements*”. Massacci et al. [@massacci2011extended] propose an ontology for security requirements engineering; the ontology adopts concepts from the Secure Tropos methodology [@massacci2007computer], Problem Frames [@haley2008security], and several industrial case studies. The most general concept in the ontology is *Thing*. An *object* is a *thing* that persists, and an *event* is an instantaneous happening that changes some *objects*. The *object* concept can be specialized into *proposition*, *situation*, *entity* and *relationship*. A *proposition* is an *object* representing a true/false statement. A *situation* is a partial world described by a *proposition*.
An *entity* is an *object* that has a distinct, separate existence from all other *things*, though that existence need not be material. *Entity* is specialized into *Actor*, *Action*, *Process*, *Resource*, and *Asset*. *Relationship* can be specialized into *do-dependency*, *can-dependency*, *trust-dependency*, *and/or refinement*, *contributes*, *provides*, and *uses*. In addition, *damages* is a *relationship* between an *attack* and an *asset*, where the *attack* causes *harm* to the *asset*. *Exploits* is a *relationship* between an *attack* and a *vulnerability*. *Protects* relates a *security goal* to an *asset*, and *denies* relates an *anti-goal* to a *requirement*. Finally, a *specification* is an *entity* consisting of *actions*, quality propositions, and domain assumptions. *Vulnerability* is a specialization of *situation* and is adopted from the security domain, while a *threat* consists of a situation that includes an attacker and one or more vulnerabilities. **Spgr\_13 [@elahi2009modeling],** : “*A Modeling Ontology for Integrating Vulnerabilities into Security Requirements Conceptual Foundation*”. Elahi et al. [@elahi2009modeling] propose a vulnerability-centric modeling ontology, which integrates empirical knowledge of vulnerabilities into the system development process. They identify a set of core concepts for security requirements elicitation, and another set of concepts for capturing vulnerabilities and their effects on the system. The ontology contains several concepts, including a *concrete element*, a tangible entity (e.g., an *activity*, *task*, etc.) that may *bring* a *vulnerability* into the system. *Exploitation* of *vulnerabilities* can have effects on other elements (*affected elements*), where the *effect* relation is characterized by the *severity* attribute. An *attack* involves the execution of *malicious actions* that one or more *actors* perform to satisfy some *malicious goal*.
A *concrete element* may have a *security impact* on *attacks*, which can be interpreted as a *security countermeasure* that can be used to patch *vulnerabilities*. **Spgr\_02\_01 [@asnar2007trust],** : “*From Trust to Dependability Through Risk Analysis*”. Asnar et al. [@asnar2007trust] present an extension of the Tropos Goal-Risk framework. In particular, they introduce an approach that assesses risk on the basis of trust relations among actors, extending the risk assessment process with the notion of trust. Using this framework, an actor can assess the risk of delegating the fulfillment of his objectives and decide whether or not the risk is acceptable. They also introduce the notion of a trust level, proposing three levels: *Trust*, *Distrust*, and *NTrust* (i.e., neither trust nor distrust), where a low level of trust increases the risk perceived by the depender about the achievement of his objectives. **Spgr\_02\_02 [@asnar2006risk],** : “*Risk Modeling and Reasoning in Goal Models*”. Asnar et al. [@asnar2006risk] propose a goal-oriented approach for modeling and reasoning about risks at the requirements level, where risks are introduced and analyzed along with the stakeholders’ goals and countermeasures. Their framework is based on the Tropos methodology and extends it with new concepts and qualitative reasoning mechanisms to consider risks from the early phases of the requirements analysis. In their framework, a *risk* is an *event* that has a negative impact on the satisfaction of a *goal*, while a *treatment* is a *countermeasure* that can be adopted in order to mitigate the effects of the *risk*. Moreover, they consider *likelihood* as a property of the *event*, and they capture the *likelihood* by the level of evidence that supports and prevents the occurrence of the *event* (SAT and DEN).
On the other hand, *impact* is used to capture the influence of an event on the fulfillment of a *goal*, and they classify impact as *strong positive*, *positive*, *negative*, or *strong negative*. **Spgr\_03\_01 [@avizienis2004basic],** : “*Basic Concepts and Taxonomy of Dependable and Secure Computing*”. Avizienis et al. [@avizienis2004basic] propose a new taxonomy for dependable and secure computing based on an extensive analysis of the related literature. The authors provide precise definitions characterizing the various concepts that come into play when addressing the dependability and security of computing and communication systems. The three top-level dimensions of this taxonomy are: *attribute*, *threat*, and *means*. The concept of *attribute* is analyzed in terms of *availability*, *reliability*, *safety*, *confidentiality*, *integrity*, and *maintainability*. The concept of *threat* is further refined in terms of *fault*, *error*, and *failure*. The concept of *means* covers the techniques used to attain the various attributes of dependability and security; these means can be grouped into four main categories: *fault prevention*, *fault tolerance*, *fault removal*, and *fault forecasting*. **Spgr\_07\_02 [@zannone2006requirements],** : “*A Requirements Engineering Methodology for Trust, Security, and Privacy*”. Zannone [@zannone2006requirements] introduces the Secure i\* (SI\*) methodology, which adopts from Secure Tropos the concepts of *actors*, *goals*, and *resources*, along with their different relations and social dependencies, and proposes a new relation among roles, namely *supervision*.
In SI\*, an *actor* is defined along with a set of *objectives*, *capabilities*, and *entitlements*, which can be modeled through relations between actors and services (goals, tasks, and resources), namely: (1) *require* indicates that an actor intends to achieve a *service*, (2) *be entitled* indicates that an *actor* is the legitimate *owner* of a *service*, and (3) *provide* indicates that the *actor* has the capability to achieve a *service*. The delegation concept is refined in SI\* into: (1) *Delegation of execution (De)*, and (2) *Delegation of permission (Dp)*. In addition, the trust concept is refined to match this refinement of delegation into: (1) *Trust of execution (Te)*, and (2) *Trust of permission (Tp)*. **Spgr\_07\_03 [@lin2003introducing],** : “*Introducing Abuse Frames for Analysing Security Requirements*”. Lin et al. [@lin2003introducing] develop an approach using Problem Frames to analyze security problems in order to determine security vulnerabilities. In particular, they introduce the notion of an anti-requirement as the requirement of a malicious user that can subvert an existing requirement, and they incorporate anti-requirements into abuse frames to represent the notion of a security threat imposed by malicious users in a particular problem context. **Spgr\_08\_01 [@mayer2009model],** : “*Model-based Management of Information System Security Risk*”. Mayer [@mayer2009model] proposes ISSRM (Information System Security Risk Management), a security risk management model. The ISSRM reference model addresses risk management at three different levels, combining *asset*, *risk*, and *risk treatment* views, and it proposes concepts that are ordered in three main groups: **(i) Asset-related concepts** describe what assets are important to protect, and what criteria guarantee asset security; **(ii) Risk-related concepts** present how the risk itself is defined.
A *risk* is the combination of a *threat* with one or more *vulnerabilities* leading to a negative impact harming the *assets*; and **(iii) Risk treatment-related concepts** describe what decisions, requirements and controls should be defined and implemented in order to mitigate possible *risks*. **Spgr\_08\_03 [@dritsas2006knowledge],** : “*A Knowledge-based Approach to Security Requirements for E-health Applications*”. Dritsas et al. [@dritsas2006knowledge] propose an ontology that includes the main security-related concepts, and use the ontology for designing and developing a set of security patterns that address a subset of these requirements for applications that provide e-health services. The concepts used in the proposed ontology include: *stakeholder*, *objective*, *threat*, *countermeasure*, *asset*, *vulnerability*, *deliberate attack*, *security pattern* and *security pattern context*. A *security pattern* provides a specific set of *countermeasures*, and a *security pattern context* is defined as a set of *asset*, *vulnerability*, and *deliberate attack* triplets. Therefore, one can start from the generic *security objectives*, find the *security pattern contexts* that match them and choose a specific *security pattern*, which ensures that the high-level *security objectives* can be fulfilled by implementing the respective *countermeasures*. **Spgr\_13\_01 [@asnar2008risk],** : “*Risk as Dependability Metrics for the Evaluation of Business Solutions: a Model-driven Approach*”. Asnar et al. [@asnar2008risk] adopt and extend the Tropos Goal Model [@asnar2006risk; @asnar2007trust] by also considering the interdependencies among the actors within an organization. Through this extension, analysts can assess the risk perceived by each actor, taking into account the organizational environment where the actor acts.
Based on such analysis, they provide a method to assist analysts in determining the treatments to be introduced in order to make risks acceptable to all actors. **Spgr\_13\_02 [@den2003coras],** : “*The CORAS Methodology: Model-based Risk Assessment Using UML and UP*”. Braber [@den2003coras] introduces the CORAS methodology, in which the Unified Modeling Language (UML) and Unified Process (UP) are combined to support a model-based risk assessment of security-critical systems. The CORAS ontology proposes several concepts, such as a *context* that influences the *target*, which contains *assets* and has its own *security requirements*. *Security requirements* lead to *security policies*, which protect *assets* by reducing their related *vulnerabilities* that can be exploited by *threats*, which might reduce the *value* of the *asset*. A *risk* contains an *unwanted incident* that has a certain *consequence* and *frequency* of occurrence. **Spgr\_13\_03 [@elahi2010vulnerability],** : “*A Vulnerability-centric Requirements Engineering Framework: Analyzing Security Attacks, Countermeasures, and Requirements based on Vulnerabilities*”. Elahi et al. [@elahi2010vulnerability] adopt and extend their previous work [@elahi2009modeling] by proposing an agent- and goal-oriented framework for eliciting and analyzing security requirements. They refine the goal model evaluation method, which helps analysts verify whether top goals are satisfied given the risk of vulnerabilities and attacks, and assess the efficacy of security countermeasures against such risks. More specifically, the evaluation does not only specify if the goals are satisfied, but also makes it possible to understand why and how the goals are satisfied (or denied) by tracing the evaluation back to vulnerabilities, attacks, and countermeasures. **Spgr\_13\_04 [@jurjens2002umlsec],** : “*UMLsec: Extending UML for Secure Systems Development*”.
J[ü]{}rjens [@jurjens2002umlsec] proposes UMLsec, an extension of the UML modeling language that allows integrating security requirements modeling and analysis into the system development process. UMLsec is able to model security-related features such as secrecy, integrity, access control, etc. It represents security features on UML diagrams by providing several extension mechanisms, namely: (1) stereotypes, new types of modeling elements that extend the semantics of existing types in the UML meta-model; (2) tagged values, which are used to associate data with model elements; and (3) constraints, which are used to define criteria that determine whether requirements are met by the system design. In UMLsec, integrity is modeled as a constraint, which can restrict unwanted modification (e.g., insert), but information quality can be affected in several other ways that cannot be captured by this approach. **Spgr\_13\_05 [@matulevivcius2008adapting],** : “*Adapting Secure Tropos for Security Risk Management in the Early Phases of Information Systems Development*”. Matulevi[č]{}ius et al. [@matulevivcius2008adapting] have analyzed how Secure Tropos can be applied to analyze security risks in the early IS development phases. Their analysis suggested a number of improvements for Secure Tropos in order to deal better with security risk management activities. In particular, Secure Tropos could be improved with additional constructs adopted from existing security risk management models (e.g., ISSRM (Information System Security Risk Management)), such as risk, risk treatment, and control. More specifically, among the suggested risk-related concepts is *risk* itself, which captures how a risk is defined and which major principles should be taken into account when identifying possible risks.
A risk is described by its cause, while its impact captures the potential negative consequence, which can be represented by a negative contribution link between the attack and the related security constraint, i.e., the impact negates the security criteria. **Spgr\_13\_07 [@rostad2006extended],** : “*An Extended Misuse Case Notation: Including Vulnerabilities*”. R[ø]{}stad [@rostad2006extended] proposes an extended misuse case notation that includes the ability to represent vulnerabilities and the insider threat. In particular, besides the main concepts of the misuse case notation (e.g., *actors*, *use cases*, *misuse cases*, *misusers*, etc.), R[ø]{}stad introduces the *insider* concept to capture inside attackers, since the *misuser* concept in misuse cases was mainly proposed to address outside attackers. More specifically, an *insider* is a *misuser* that is also a member of an authorized group for the entity being attacked. In addition, she introduces the *vulnerability* concept, a weakness in the system that can be *exploited* by the *insider*. **Spgr\_18\_03 [@fenz2009formalizing],** : “*Formalizing Information Security Knowledge*”. Fenz and Ekelhart [@fenz2009formalizing] introduce a security ontology for information security domain knowledge. In their ontology, a *vulnerability* is the absence of a proper safeguard, which could be exploited by a *threat*. A *threat* might threaten an *asset* by exploiting a predefined *vulnerability*, and *mitigation* is achieved by the implementation of one or more *controls*. In addition, the *severity* of each *vulnerability* is rated on a three-point scale (high, medium, and low). A *threat* has a *source* and related *security objectives*. An *asset* is categorized as either a tangible or an intangible asset, while the *data* concept comprises meta-data on the knowledge of an organization.
The *person* concept is used to model physical *persons* in the ontology, and the *organization* concept comprises organizations in the broadest sense and assigns *roles* to them. A *role* is a physical person or organization relevant to the organization. Finally, a *location* is used to relate location and threat information in order to assign a priori threat probabilities. **SCH\_24\_02 [@hong2004privacy],** : “*Privacy Risk Models for Designing Privacy-sensitive Ubiquitous Computing Systems*”. Hong et al. [@hong2004privacy] propose a privacy risk model that captures privacy concerns at a high abstraction level, and then refines them into concrete issues for specific applications. The privacy risk model consists of two parts: (1) a *privacy risk analysis* that poses a series of questions to help designers think about the social and organizational context in which an application will be used, the technology used to implement that application, and the control and feedback mechanisms that end-users will use; and (2) *privacy risk management* that takes the unordered list of privacy risks from the privacy risk analysis, prioritizes them, and helps design teams identify solutions for helping end-users manage those issues. **SCH\_28\_01 [@paja2014sts],** : “*STS-Tool: Security Requirements Engineering for Socio-Technical Systems*”. Paja et al. [@paja2014sts] present the STS-Tool, a modeling and analysis support tool for STS-ml (Socio-Technical Security modeling language), a security requirements modeling language for socio-technical systems. STS-ml consists of three complementary views: (1) the social view, (2) the information view, and (3) the authorization view.
Through these views, STS-ml supports different types of security needs: (1) *interaction (security) needs* are security-related constraints on goal delegations and document provisions; (2) *authorisation needs* determine which information can be used, how, for which purpose, and by whom; and (3) *organisational constraints* constrain the adoption of roles and the uptake of responsibilities. In addition, STS-ml supports the following interaction security needs: 1. Over goal delegations: (a) *No-redelegation*, (b) *Non-repudiation*, (c) *Redundancy*, (d) *Trustworthiness*, and (e) *Availability*. 2. Over document provisions: (a) *Integrity of transmission*, (b) *Confidentiality of transmission*, and (c) *Availability*. 3. From organizational constraints: (a) *Separation of duties (SoD)* and (b) *Combination of duties (CoD)*. **SCH\_43\_01 [@van2003handbook],** : “*Handbook of Privacy and Privacy-enhancing Technologies*”. Van Blarkom et al. [@van2003handbook] investigate several active areas related to privacy, Privacy-Enhancing Technologies (PET), intelligent software agents, and the inter-relations among these areas. Furthermore, they discuss the concepts of privacy and data protection, the European Directives that rule the protection of personal data, and the relevant definitions. In particular, they investigate when personal data items become non-identifiable, the sensitivity of data, automated decisions, privacy preferences, and policies. In addition, they discuss existing technological solutions that offer agent users privacy protection, known under the name Privacy-Enhancing Technologies (PETs), the set of technologies/principles underlying PETs, and the legal basis for PETs. Moreover, they discuss how the Common Criteria for Information Technology Security Evaluation (CC) supplies important information for building privacy-secure agents.
**RQ2:** *What are the main concepts/relations that have been used for capturing privacy requirements?* Each of the 34 selected studies has been investigated in depth to identify any concept/relation that can be used for capturing privacy requirements in their social and organizational context. More specifically, we tried to identify any concept that is related to privacy, to social and organizational threats that might threaten privacy needs, or to treatments/countermeasures that can be used to mitigate threats concerning privacy needs. The result is shown in Table \[table:rq\], which presents the concepts/relations that have been identified in each selected study. In particular, 55 concepts and relations[^2] have been identified, which have been grouped into four main groups based on their types:

[Table \[table:rq\]: a matrix of the 55 identified concepts/relations (rows) against the 34 selected studies (columns); the 38 main concepts/relations appear in **bold**.]

\[table:rq\]

**Organizational.** : 27 concepts and relations have been identified for capturing the agentive entities of the system in terms of their objectives, entitlements, dependencies, and their expectations concerning such dependencies. The organizational concepts and relations are further grouped into four sub-categories: **Agentive entities.** : 8 concepts and relations have been identified for capturing the active entities of the system (e.g., actor, user, etc.). **Intentional entities.** : 5 concepts and relations have been identified for capturing objectives that active entities aim to achieve or want to perform (e.g., goal, task, activity, etc.). **Informational entities.** : 8 concepts and relations have been identified for capturing informational assets (e.g., data, asset, information, etc.). **Entities interactions.** : 6 concepts and relations have been identified for capturing the entities’ dependencies and expectations concerning such dependencies (e.g., delegation, dependency, provision, trust, etc.).
**Risk.** : 10 concepts and relations have been identified for capturing risk-related aspects (e.g., risk, threat, vulnerability, attack, etc.). **Treatment.** : 8 concepts and relations have been identified for capturing treatment-related aspects (e.g., treatment, countermeasure, mitigate, etc.). **Privacy.** : 9 concepts and relations have been identified for capturing privacy-related aspects (e.g., anonymity, confidentiality, etc.). Among the 55 identified concepts and relations, we have selected 38 main concepts and relations that can be used for capturing privacy requirements in their social and organizational context. In particular, these concepts and relations are 17 organizational, 9 risk, 5 treatment, and 7 privacy concepts, and they are shown in **bold** typeset in Table \[table:rq\]. Each of the selected concepts and relations has been chosen based on the following two criteria: (1) its importance for capturing privacy requirements; and (2) the frequency of its appearance in the selected studies, which is shown in Table \[table:iteration\]. **RQ3:** *Do existing privacy studies cover the main privacy concepts/relations?* We answer *RQ3* by comparing the privacy-related concepts/relations presented in each selected study with the main privacy concepts/relations identified while answering *RQ2*. In Table \[table:rq\], we use a check mark when a study presents a main privacy concept/relation, and (-) when it presents a non-main privacy concept/relation; a separate mark indicates when a study misses a main concept/relation. Table \[table:limitation\] summarizes the percentage of the main privacy concepts/relations identified in each selected study with respect to the four main categories (organizational, risk, treatment, and privacy).
Considering Table \[table:rq\] and Table \[table:limitation\], it is easy to note that most studies miss main privacy-related concepts/relations, i.e., none of them covers all the main privacy-related concepts/relations. In **RQ4**, we discuss the limitations of each selected study.

[Table \[table:iteration\]: frequency of appearance of the identified concepts/relations in the selected studies.]

\[table:iteration\]

[Table \[table:limitation\]: percentage of the main privacy concepts/relations identified in each selected study, per category.]

\[table:limitation\]

**RQ4:** *What are the limitations of existing privacy studies?* We answer this question by categorizing the studies into four groups (**Group 1-4**) [^3] based on the concept categories (i.e., organizational, risk, treatment, and privacy) that the studies do not appropriately cover: **Group 1,** : contains studies that do not appropriately cover the organizational concepts. In this group, we have identified 25 studies out of the 34 selected ones, including: ACM\_03 Lamsweerde [@van2004elaborating], ACM\_14 Labda et al. [@labda2014modeling], ACM\_16 Braghin et al. [@braghin2008introducing], ACM\_35 Singhal and Wijesekera [@singhal2010ontologies], ACM\_40 Wang and Guo [@wang2009ovm], CIT\_07 Lasheras et al. [@velasco2009modelling], IEEE\_12 Souag et al. [@souag2013using], IEEE\_15 Tsoumas and Gritzalis [@tsoumas2006towards], IEEE\_57 Kang and Liang [@kang2013security], Spgr\_7 Massacci et al. [@massacci2011extended], Spgr\_13 Elahi et al. [@elahi2009modeling], SCH\_18 Sindre and Opdahl [@sindre2005eliciting], SCH\_24 Kalloniatis et al. [@kalloniatis2008addressing], Spgr\_18\_03 Fenz and Ekelhart [@fenz2009formalizing], Spgr\_13\_01 Asnar et al. [@asnar2008risk], Spgr\_13\_02 Braber et al. [@den2003coras], Spgr\_13\_04 Jürjens [@jurjens2002umlsec], Spgr\_13\_07 R[ø]{}stad [@rostad2006extended], Spgr\_08\_01 Mayer [@mayer2009model], Spgr\_08\_03 Dritsas et al. [@dritsas2006knowledge], Spgr\_07\_03 Lin et al. [@lin2003introducing], Spgr\_03\_01 Avi[ž]{}ienis et al. [@avizienis2004basic], Spgr\_02\_01 Asnar et al. [@asnar2007trust], Spgr\_02\_02 Asnar et al. [@asnar2006risk], SCH\_24\_02 Hong et al. [@hong2004privacy], SCH\_43\_01 Blarkom et al.
[@van2003handbook]. **Group 2,** : contains studies that do not appropriately cover risk concepts. In this group, we have identified 22 studies out of the 34 selected ones, including: ACM\_14 Labda et al. [@labda2014modeling], ACM\_16 Braghin et al. [@braghin2008introducing], ACM\_35 Singhal and Wijesekera [@singhal2010ontologies], CIT\_07 Lasheras et al. [@velasco2009modelling], IEEE\_50 Giorgini et al. [@Giorgini2005], IEEE\_57 Kang and Liang [@kang2013security], SCH\_18 Sindre and Opdahl [@sindre2005eliciting], SCH\_24 Kalloniatis et al. [@kalloniatis2008addressing], SCH\_28 Mouratidis and Giorgini [@mouratidis2007secure], SCH\_41 Solove [@solove2006taxonomy], Spgr\_13\_01 Asnar et al. [@asnar2008risk], Spgr\_13\_02 Braber et al. [@den2003coras], Spgr\_13\_04 Jürjens [@jurjens2002umlsec], Spgr\_08\_03 Dritsas et al. [@dritsas2006knowledge], Spgr\_07\_02 Zannone [@zannone2006requirements], Spgr\_07\_03 Lin et al. [@lin2003introducing], Spgr\_03\_01 Avi[ž]{}ienis et al. [@avizienis2004basic], Spgr\_02\_01 Asnar et al. [@asnar2007trust], Spgr\_02\_02 Asnar et al. [@asnar2006risk], SCH\_24\_02 Hong et al. [@hong2004privacy], SCH\_28\_01 Paja et al. [@paja2014sts], SCH\_43\_01 Blarkom et al. [@van2003handbook]. **Group 3,** : contains studies that do not appropriately cover treatment concepts. In this group, we have identified 31 studies out of the 34 selected ones, including: ACM\_03 Lamsweerde [@van2004elaborating], ACM\_14 Labda et al. [@labda2014modeling], ACM\_16 Braghin et al. [@braghin2008introducing], ACM\_35 Singhal and Wijesekera [@singhal2010ontologies], ACM\_40 Wang and Guo [@wang2009ovm], CIT\_07 Lasheras et al. [@velasco2009modelling], IEEE\_12 Souag et al. [@souag2013using], IEEE\_50 Giorgini et al. [@Giorgini2005], IEEE\_57 Kang and Liang [@kang2013security], Spgr\_7 Massacci et al. [@massacci2011extended], Spgr\_13 Elahi et al. [@elahi2009modeling], SCH\_18 Sindre and Opdahl [@sindre2005eliciting], SCH\_24 Kalloniatis et al. 
[@kalloniatis2008addressing], SCH\_41 Solove [@solove2006taxonomy], Spgr\_18\_03 Fenz and Ekelhart [@fenz2009formalizing], Spgr\_13\_01 Asnar et al. [@asnar2008risk], Spgr\_13\_02 Braber et al. [@den2003coras], Spgr\_13\_03 Elahi et al. [@elahi2010vulnerability], Spgr\_13\_04 Jürjens [@jurjens2002umlsec], Spgr\_13\_05 Matulevi[č]{}ius et al. [@matulevivcius2008adapting], Spgr\_13\_07 R[ø]{}stad [@rostad2006extended], Spgr\_08\_01 Mayer [@mayer2009model], Spgr\_08\_03 Dritsas et al. [@dritsas2006knowledge], Spgr\_07\_02 Zannone [@zannone2006requirements], Spgr\_07\_03 Lin et al. [@lin2003introducing], Spgr\_03\_01 Avi[ž]{}ienis et al. [@avizienis2004basic], Spgr\_02\_01 Asnar et al. [@asnar2007trust], Spgr\_02\_02 Asnar et al. [@asnar2006risk], SCH\_24\_02 Hong et al. [@hong2004privacy], SCH\_28\_01 Paja et al. [@paja2014sts], SCH\_43\_01 Blarkom et al. [@van2003handbook]. **Group 4,** : contains studies that do not appropriately cover the privacy concepts. In this group, we have identified 31 studies out of the 34 selected ones, including: ACM\_03 Lamsweerde [@van2004elaborating], ACM\_14 Labda et al. [@labda2014modeling], ACM\_16 Braghin et al. [@braghin2008introducing], ACM\_35 Singhal and Wijesekera [@singhal2010ontologies], ACM\_40 Wang and Guo [@wang2009ovm], CIT\_07 Lasheras et al. [@velasco2009modelling], CIT\_33 Liu et al. [@liu2003security], IEEE\_12 Souag et al. [@souag2013using], IEEE\_15 Tsoumas and Gritzalis [@tsoumas2006towards], IEEE\_50 Giorgini et al. [@Giorgini2005], Spgr\_7 Massacci et al. [@massacci2011extended], Spgr\_13 Elahi et al. [@elahi2009modeling], SCH\_18 Sindre and Opdahl [@sindre2005eliciting], SCH\_24 Kalloniatis et al. [@kalloniatis2008addressing], SCH\_28 Mouratidis and Giorgini [@mouratidis2007secure], SCH\_41 Solove [@solove2006taxonomy], Spgr\_18\_03 Fenz and Ekelhart [@fenz2009formalizing], Spgr\_13\_01 Asnar et al. [@asnar2008risk], Spgr\_13\_02 Braber et al. [@den2003coras], Spgr\_13\_03 Elahi et al. 
[@elahi2010vulnerability], Spgr\_13\_04 Jürjens [@jurjens2002umlsec], Spgr\_13\_05 Matulevi[č]{}ius et al. [@matulevivcius2008adapting], Spgr\_13\_07 R[ø]{}stad [@rostad2006extended], Spgr\_08\_01 Mayer [@mayer2009model], Spgr\_08\_03 Dritsas et al. [@dritsas2006knowledge], Spgr\_07\_02 Zannone [@zannone2006requirements], Spgr\_07\_03 Lin et al. [@lin2003introducing], Spgr\_03\_01 Avi[ž]{}ienis et al. [@avizienis2004basic], Spgr\_02\_01 Asnar et al. [@asnar2007trust], Spgr\_02\_02 Asnar et al. [@asnar2006risk], SCH\_24\_02 Hong et al. [@hong2004privacy]. Based on the previous categories, we have 15 studies that do not appropriately cover all four concept categories, and 13 studies that do not appropriately cover three categories; 5 studies do not appropriately cover two categories, and one study does not appropriately cover only one category. A detailed description of the concepts and relations that each of these studies does not cover can be obtained from Table \[table:rq\]. Note that most of these studies have not been developed to address privacy-related issues; therefore, it is not a shortcoming when they do not cover privacy-related concepts. **RQ4** has been considered in this study to assist the authors of the selected studies, should they aim to extend their frameworks and approaches to cover privacy concerns.

A novel privacy ontology
========================

Several recent studies stress the need for addressing privacy concerns during system design (e.g., Privacy by Design (PbD) [@kalloniatis2008addressing; @labda2014modeling]). Nevertheless, based on the results of this review, it is easy to note that no existing study covers all the main privacy concepts/relations that have been identified in the review, i.e., no existing ontology enables capturing the main privacy aspects, and without such an ontology it is almost impossible to address privacy concerns during system design.
Therefore, proposing such an ontology would be a viable solution to this problem. To this end, we propose a novel privacy ontology based on the main privacy concepts/relations identified in Table \[table:rq\]. The meta-model of our ontology is depicted in Figure \[fig:onto\], and the concepts of the ontology are organized into four main dimensions: **Organizational dimension:** : proposes concepts to capture the social and technical components of the system in terms of their capabilities, objectives, and dependencies. **Risk dimension:** : proposes concepts to capture risks that might endanger privacy needs at the social and organizational levels. **Treatment dimension:** : proposes concepts to capture countermeasure techniques to mitigate risks to privacy needs. **Privacy dimension:** : proposes concepts to capture the stakeholders’ (actors’) privacy requirements/needs concerning their personal information. In what follows, we define each of these dimensions in terms of their concepts and relations. **(1) Organizational dimension.** Most current complex systems consist of several autonomous components that interact and depend on one another for achieving their objectives. Therefore, this dimension includes the organizational concepts of the system, which have been further organized into several categories: agentive entities, entities’ objectives, informational assets, entities’ interactions, and entities’ expectations concerning such interactions (social trust). In what follows, we define each of these categories along with their concepts and relations. To represent the active entities of the system, we have selected three concepts along with two relations: **Actor** : represents an autonomous entity that has intentionality and strategic goals within the system. An actor can be decomposed into sub-units: **Role** : is an abstract characterization of an actor in terms of a set of behaviors and functionalities within some specialized context.
A role can be a specialization (**is\_a**) of another role. **Agent** : is an autonomous entity that has a specific manifestation in the system. An agent can **play** one or more roles within the system, i.e., an agent inherits the properties of the roles it plays. The behavior of actors is usually determined by the objectives they aim to achieve. Therefore, we adopted the goal concept and the and/or-decomposition (refinement) relations to represent such objectives. **Goal** : is a state of affairs that an actor intends to achieve. When a goal is too coarse to be achieved, it can be refined through *and/or-decompositions* of a root goal into finer sub-goals. **And-decomposition** : implies that the achievement of the root goal requires the achievement of all its sub-goals. **Or-decomposition** : is used to provide different alternatives to achieve the root goal, and it implies that the achievement of the root goal requires the achievement of any of its sub-goals. Information is one of the most important concepts when we speak about privacy. Among the available concepts for capturing informational assets, e.g., data [@labda2014modeling], a resource (physical or informational) [@Giorgini2005; @zannone2006requirements; @mouratidis2007secure; @massacci2011extended], asset [@kang2013security; @elahi2010vulnerability], etc., we have adopted the following concepts and relations: **Information** : represents any informational entity without intentionality. Information can be atomic or composite (composed of several parts), and we rely on the *part\_of* relation to capture the relation between an information entity and its sub-parts.
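The and/or goal refinement described above can be sketched in code. The following is a minimal illustration only, not part of the proposed ontology; all class and goal names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    name: str
    decomposition: Optional[str] = None           # "and", "or", or None (leaf goal)
    subgoals: List["Goal"] = field(default_factory=list)

def achieved(goal: Goal, achieved_leaves: set) -> bool:
    """An and-root requires all sub-goals to be achieved; an or-root
    requires any one; a leaf goal is achieved iff it is in achieved_leaves."""
    if not goal.subgoals:
        return goal.name in achieved_leaves
    results = [achieved(g, achieved_leaves) for g in goal.subgoals]
    return all(results) if goal.decomposition == "and" else any(results)

# Example: a root goal or-decomposed into two alternative sub-goals.
root = Goal("obtain patient data", "or",
            [Goal("ask patient directly"), Goal("request hospital record")])
print(achieved(root, {"ask patient directly"}))  # True: one alternative suffices
```

The same recursive shape would apply to the *part\_of* relation between a composite information entity and its sub-parts.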
In the context of this work, we differentiate between two main types of information: **Personal information** : any information that can be *related* (directly or indirectly) to an identified or identifiable legal entity (e.g., names, addresses, medical records, etc.), who has the right to control how such information can be used by others [@braghin2008introducing; @van2003handbook]. **Public information** : any information that cannot be *related* (directly or indirectly) to an identified or identifiable legal entity, or personal information that has been made public by its legal entity [@labda2014modeling]. Actors may use information to achieve their goals. Our ontology adopts three relations between goals and information (i.e., *produce*, *read*, and *modify*), where each of these relations can be defined as follows: **Produce** : indicates that information can be created by achieving the goal that is responsible for its production; **Read** : indicates that the goal achievement depends on consuming such information; **Modify** : indicates that the goal achievement depends on modifying such information. As previously mentioned, we differentiate between personal and public information based on whether it can be *related* (directly or indirectly) to an identified or identifiable legal entity. In what follows, we define the *own* concept that relates personal information to its legal entity, and we specify how the information owner controls the usage of its personal information. **Own** : indicates that an actor is the legitimate owner of information, where the information owner has full control over the use of the information it owns.

[Figure \[fig:onto\]: the meta-model of the proposed privacy ontology, organized into the organizational, risk, treatment, and privacy dimensions.]

\[fig:onto\]

**A permission** : is consent for a particular use of a particular object in a system [@sandhu1996role], i.e., the holder of the permission is allowed to perform some action(s) in the system. The information owner has the authority to control the use of its own information, i.e., the owner can control the delegated permissions over the information it owns.
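As a minimal sketch, the *own* relation and owner-controlled produce/read/modify permissions described above could be operationalized as follows. This is an illustration under our own assumptions, not the paper's formalization; all names are hypothetical.

```python
PERMISSIONS = {"produce", "read", "modify"}

class Information:
    def __init__(self, name: str, owner: str):
        self.name, self.owner = name, owner

def grant(grants: dict, granter: str, actor: str, info: Information, perm: str) -> None:
    """Only the legitimate owner may delegate a permission over its information."""
    if perm not in PERMISSIONS or granter != info.owner:
        raise PermissionError("only the owner can delegate valid permissions")
    grants.setdefault((actor, info.name), set()).add(perm)

def may_use(grants: dict, actor: str, info: Information, perm: str) -> bool:
    # The owner has full control; any other actor needs a delegated permission.
    return actor == info.owner or perm in grants.get((actor, info.name), set())

# Example: a patient owns a medical record and delegates read-only access.
record = Information("medical record", owner="patient")
grants: dict = {}
grant(grants, "patient", "doctor", record, "read")
print(may_use(grants, "doctor", record, "read"))    # True
print(may_use(grants, "doctor", record, "modify"))  # False
```

The design choice here mirrors the text: ownership is the root of authority, and every other use must be traceable to a delegation by the owner.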
In our ontology, information permissions are classified into (P)roduce, (R)ead, and (M)odify permissions, which cover the three relations between goals and information that our ontology proposes. Actors may not have the required capabilities to achieve their objectives by themselves (e.g., achieve a goal, furnish information, etc.); therefore, they depend on one another for such objectives. In what follows, we discuss the concepts that are used for capturing the different actors’ social interactions and dependencies. **Information provision** : indicates that an actor has the capability to deliver information to another one, where the source of the provision relation is the provider and the destination is the requester. Information provision has one attribute that describes the provisioning type, which can be either *confidential* or *non-confidential*, where the former guarantees the confidentiality of the transmitted information while the latter does not. **Goal delegation** : indicates that one actor delegates the responsibility to achieve a goal to another actor, where the source of the delegation is called the delegator, the destination is called the delegatee, and the subject of the delegation is called the delegatum. **Permission delegation** : indicates that an actor delegates the permissions to produce, read and/or modify specific information to another actor. The need for trust arises when actors depend on one another for goals or permissions, since such dependencies might entail risk [@chopra2003trust; @gharib2015analyzing]. More specifically, a delegator has no warranty that the delegated goal will be achieved or that the delegated permissions will not be misused by the delegatee.
Therefore, our ontology adopts the notions of trust and distrust to capture the actors’ expectations of one another concerning their delegations: **Trust** : indicates the expectation of the trustor that the trustee will behave as expected considering the trustum (e.g., the trustee will achieve the delegated goal, or it will not misuse the delegated permission); **Distrust** : indicates the expectation of the trustor that the trustee will not behave as expected considering the trustum (e.g., the trustee will not achieve the delegated goal, or it will misuse the delegated permission). We rely on monitoring to compensate for the lack of trust, or for distrust, in the trustee concerning the trustum [@gans2001modeling; @zannone2006requirements]. **Monitoring** : can be defined as the process of observing and analyzing the performance of an actor in order to detect any undesirable performance [@guessoum2004monitoring], where the source of monitoring is called the monitor and the destination is called the monitoree. **(2) Risk dimension.** Risk can be defined as an event that has a negative impact on the system, i.e., it is the possibility that a particular threat will harm one or more assets of a system by exploiting a vulnerability [@kang2013security; @singhal2010ontologies; @mayer2009model; @elahi2009modeling]. In our ontology, risk is not a primitive concept and we do not include it in the ontology, since it can be captured by other concepts such as threat, vulnerability, attack, etc. In what follows, we define the risk dimension related concepts along with their interrelations: **A threat** : is a potential incident that *threatens* an asset (personal information) by *exploiting* a *vulnerability* concerning such asset [@mayer2009model; @singhal2010ontologies; @kang2013security]. A *threat* can be either natural (e.g. earthquake, etc.), accidental (e.g. hardware/software failure, etc.), or intentional (e.g.
theft of personal information, etc.) [@fenz2009formalizing; @velasco2009modelling; @souag2015security]. Therefore, the ontology differentiates between two types of threat: **Casual threat** : (natural or accidental): a threat that requires neither a *threat actor* nor an *attack method*. **Intentional threat** : a threat that requires a *threat actor* and a presumed *attack method* [@lin2003introducing; @massacci2011extended]. **Threat actor** : is an actor that aims to achieve the *intentional threat* [@rostad2006extended; @mayer2009model; @elahi2009modeling]. **Attack method** : is a standard means by which a *threat actor* carries out an *intentional threat* [@mayer2009model; @elahi2010vulnerability; @souag2015security]. **Impact** : is the consequence of the *threat* *over* the asset, and it can be characterized by a *severity* attribute that captures the level of the impact (e.g. high, medium or low) [@wang2009ovm; @souag2015security]. **A vulnerability** : is a weakness in the system, asset (personal information), etc. that can be *exploited* by a *threat* [@rostad2006extended; @mayer2009model; @singhal2010ontologies]. **(3) Treatment dimension.** This dimension introduces countermeasure concepts to mitigate risks. We adopted high-abstraction-level countermeasure concepts to capture the required protection/treatment level (e.g., privacy goal), which can be refined into concrete protection/treatment constraints (e.g., mechanisms or policies) that can be implemented. The concepts of the treatment dimension are: **A privacy goal** : is an aim to counter threats and prevent harm to personal information by satisfying privacy criteria concerning such information. **A privacy constraint** : is a restriction used to realize/satisfy a privacy goal; constraints can be either privacy policies or privacy mechanisms.
**A privacy policy** : is a privacy statement that defines the permitted and/or forbidden actions to be carried out by actors of the system toward information. **A privacy mechanism** : is a concrete technique to be implemented to help satisfy a privacy goal (attribute). **(4) Privacy dimension.** This dimension introduces concepts to capture the stakeholders’ (actors’) privacy requirements/needs concerning their personal information. The concepts of the privacy dimension are: **Privacy requirement** : is used to capture the actors’ (personal information owners’/subjects’) privacy needs at a high abstraction level, and it is specialized from the *privacy goal* concept. Moreover, the privacy requirement concept is further specialized into five more refined concepts. **Confidentiality,** : means personal information should be kept secure from any potential leaks and improper access [@solove2006taxonomy; @dritsas2006knowledge; @labda2014modeling]. We rely on the following principles to analyze confidentiality: **Non-disclosure,** : personal information can only be disclosed if the owner’s consent is provided, i.e., the disclosure of personal information should be under the control of its legitimate owner [@solove2006taxonomy; @dritsas2006knowledge; @braghin2008introducing; @labda2014modeling]. Note that *non-disclosure* also covers information transmission, which is why we differentiate between two types of information provision (confidential and non-confidential). **Need to know (NtK),** : an actor should only use information if it is strictly necessary for completing a certain task [@labda2014modeling; @paja2014sts]. **Purpose of use,** : personal information should only be used for specific, explicit, legitimate purposes and not further used in a way that is incompatible with those purposes [@van2003handbook; @solove2006taxonomy; @dritsas2006knowledge].
*Purpose of use* is able to address situations where an actor might be granted permission to use some personal information for a legitimate purpose, yet after accessing it, he/she might use the information for some other purpose. **Notice,** : the data subject (information owner) should be notified when his/her information is being collected [@van2003handbook; @solove2006taxonomy; @dritsas2006knowledge]. Notice is considered mainly to address situations where personal information *related* to a legitimate entity (data subject) is being collected without his/her knowledge. **Anonymity,** : the identity of the information owner should not be disclosed unless it is required [@dritsas2006knowledge; @solove2006taxonomy], i.e., the primary/secondary identifiers of the data subject (e.g., name, social security number, address, etc.) should be removed if they are not required, and the information can still be used for the same purpose after their removal. We rely on the *part\_of* relation to model the internal structure of personal information, i.e., we link the identifiers of the data subject with the rest of the information item by the *part\_of* relation. If the identifiers are not required for the task, they can be easily removed, and the information can be used without linking it back to its owner/data subject (unlinkability). **Transparency,** : the information owner should be able to know who is using his/her information and for what purposes [@van2003handbook; @dritsas2006knowledge; @kang2013security]. We rely on the following principles to analyze transparency: **Authentication,** : a mechanism that aims at verifying whether actors are who they claim they are [@paja2014sts]. **Authorization,** : a mechanism that aims at verifying whether actors can use information in accordance with their credentials [@dritsas2006knowledge].
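As a minimal sketch (with assumed names, not taken from the ontology itself), the *part\_of*-based anonymity analysis described above can be illustrated as follows: identifier parts are linked to an information item via *part\_of*, and any identifier not required by the task is removed, leaving the remaining information unlinkable to the data subject:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class InfoPart:
    name: str
    is_identifier: bool  # primary/secondary identifier of the data subject?

@dataclass
class PersonalInfo:
    owner: str
    parts: List[InfoPart] = field(default_factory=list)  # the part_of relation

def anonymize(info: PersonalInfo, required: Set[str]) -> PersonalInfo:
    """Remove identifier parts the task does not require (unlinkability)."""
    kept = [p for p in info.parts
            if not p.is_identifier or p.name in required]
    return PersonalInfo(owner=info.owner, parts=kept)

# Hypothetical record: two identifiers plus non-identifying content.
record = PersonalInfo(owner="patient-42", parts=[
    InfoPart("name", True),
    InfoPart("social_security_number", True),
    InfoPart("blood_pressure_readings", False),
])

# A statistics task needs no identifiers at all:
anon = anonymize(record, required=set())
assert [p.name for p in anon.parts] == ["blood_pressure_readings"]
```

The design choice mirrors the text: because identifiers are modelled as separable parts, dropping the ones a task does not need preserves the usable content while severing the link back to the owner.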
**Accountability,** : the information owner should have a mechanism available to them to hold information users accountable for their actions concerning information [@dritsas2006knowledge; @kang2013security]. We rely on the following principles to analyze accountability: **Non-repudiation,** : the delegator cannot repudiate that he/she delegated, and the delegatee cannot repudiate that he/she accepted the delegation [@kang2013security; @paja2014sts]. **Not-re-delegation,** : the delegatee is requested by the delegator not to re-delegate the delegatum, i.e., the re-delegation of a goal/permission is forbidden [@paja2014sts].

Threats to validity
===================

After presenting and discussing our systematic literature review, we discuss the threats to its validity in this section. Following Runeson et al. [@runeson2009guidelines], we classify threats to validity into four types: construct, internal, external and reliability. **1- Construct threats:** this type concerns the extent to which a test measures what it claims to measure [@runeson2009guidelines]. Construct validity is particularly important, since it might influence the internal validity as well [@mackenzie2003dangers]. We have identified the following threats: Poor conceptualization of the construct: : occurs when the predicted outcome of the study is defined too narrowly [@mackenzie2003dangers], i.e., when only one factor is used to analyze the subject of the study. To avoid this threat, the research objective was transformed into several research questions, and for each of these questions several factors were specified to evaluate whether they have been properly answered. In addition, we followed the best practices in the area to define the criteria used while searching for and selecting the related studies (e.g., inclusion and exclusion criteria, quality assessment criteria, etc.). Systematic error: : may occur while designing and conducting the review.
To avoid such a threat, the review protocol was carefully designed based on well-adopted methods, and it was strictly followed during the different phases of the review. **2- Internal threats:** this type concerns factors that have not been considered in the study but could have influenced the investigated factors [@trochim2006research; @runeson2009guidelines]. One internal threat has been identified: Publication bias: : publication bias is a common threat to the validity of systematic reviews, and it refers to a situation where positive research results are more likely to be reported than negative ones [@keele2007guidelines]. Our review focused on finding privacy-related concepts/relations by reviewing the related literature, and there are no positive or negative research results in such a case. Nevertheless, we specified very clear inclusion and exclusion criteria, and quality assessment criteria, while searching for and selecting the related studies. **3- External threats:** this type concerns the extent to which the results of the study can be generalized [@runeson2009guidelines]. One external threat has been identified: Completeness: : it is almost impossible to capture all related studies, yet our review protocol and search strategy were very carefully designed to cover as many of the related studies as possible. In addition, we might have excluded some relevant studies published in languages other than English, since we only considered English-language studies in this review. To mitigate this limitation, we performed a manual scan of the references of all the primary selected studies in order to identify those studies that were missed during the first search stage. However, we cannot guarantee that we have identified all the main available studies that can be used to answer our research questions.
**4- Reliability threats:** this type concerns the extent to which the study depends on the researcher(s), i.e., if other researchers conducted the same study, the results should be the same. The search terms, search sources, inclusion and exclusion criteria, quality assessment questions, etc. are all available, so any researcher can repeat the review and should get similar results. However, the researcher should take into consideration the time when the search process was performed, i.e., the search should be limited to studies published by March 2016.

Related work
============

There are few systematic reviews concerning privacy/security ontologies. For instance, Souag et al. [@souag2012ontologies] performed a systematic review that proposes an analysis and a typology of existing security ontologies, while Blanco et al. [@blanco2008systematic] conducted a systematic review with the main aim of identifying, extracting and analyzing the main proposals for security ontologies. Fabian et al. [@fabian2010comparison] present a conceptual framework for security requirements engineering by mapping the diverse terminologies of different security requirements engineering methods to that framework. Moreover, a security ontology for capturing security requirements has been presented in [@souag2015security]. However, the focus of all the previously mentioned studies was security ontologies. On the other hand, Blanco et al. [@blanco2011basis] conducted a systematic review to extract the key requirements that an integrated and unified security ontology should have, and Mellado et al. [@mellado2010systematic] carried out a systematic review of the existing literature concerning security requirements engineering in order to summarize the current contributions and to provide a road map for future research in this area. Iankoulova and Daneva [@iankoulova2012cloud] performed a systematic review concerning the security requirements of cloud computing.
In particular, they classified the main identified security requirements into nine sub-areas: access control, attack/harm detection, non-repudiation, integrity, security auditing, physical protection, privacy, recovery, and prosecution. Li [@li2011empirical] conducted a systematic review concerning online information privacy concerns, consequences, and moderating effects. Based on the review outcome, he proposed a framework to illustrate the relationships between the previously mentioned factors and to highlight opportunities for further improvement. Finally, Fernández-Alemán et al. [@fernandez2013security] performed a systematic literature review to identify and analyze critical privacy and security aspects of electronic health record systems.

Conclusions and Future Work
===========================

In this paper, we argued that many wrong design decisions might be made due to insufficient knowledge about privacy-related concepts, and we advocated that a well-defined privacy ontology capturing the privacy-related concepts along with their interrelations can solve this problem. Therefore, we conducted a systematic review of the existing privacy/security literature with the main purpose of identifying the main concepts, along with their interrelations, for capturing privacy requirements. The objectives of the research are considered to have been achieved, since the research questions posed have been answered. Moreover, we used the identified concepts/relations to propose a privacy ontology to be used by software engineers when dealing with privacy requirements. For future work, we aim to develop a core privacy ontology to be used by software/security engineers when dealing with privacy requirements. To achieve that, we are planning to contact the authors of the selected studies to get their feedback concerning the proposed privacy ontology.
In addition, we will conduct a controlled experiment with software/security engineers to evaluate the usability of the ontology. Finally, we plan to evaluate the completeness and validity of the ontology by deploying it to capture the privacy requirements for two real case studies that belong to different domains (e.g., the medical sector and public administration). Simont: A security information management ontology framework. In [*Secure and Trust Computing, Data Management and Applications*]{}. Springer, 2011, pp. 201–208. Is there a cost to privacy breaches? an event study. (2006), 94. : Context correlation for trust bootstrapping in pervasive environment. In [*4th International Conference on Intelligent Environments (IET)*]{} (2008), IET, pp. 1–8. Ontology of e-learning security. In [*Information Society (i-Society), 2010 International Conference on*]{} (2010), IEEE, pp. 652–655. Model driven security engineering for the realization of dynamic security requirements in collaborative systems. In [*Models in Software Engineering*]{}. Springer, 2006, pp. 278–287. Agent models and different user ontologies for an electronic market place. , 1 (2004), 1–41. Hipaa compliance and smart cards: Solutions to privacy and security requirements. (2003). A scheme for privacy-preserving ontology mapping. In [*Proceedings of the 18th International Database Engineering & Applications Symposium*]{} (2014), ACM, pp. 87–95. A framework for modular erdf ontologies. , 3-4 (2013), 189–249. Analyzing website privacy requirements using a privacy goal taxonomy. In [*Requirements Engineering, 2002. Proceedings. IEEE Joint International Conference on*]{} (2002), IEEE, pp. 23–31. Gene ontology: tool for the unification of biology. , 1 (2000), 25–29. An interoperable security framework for connected healthcare. In [*Consumer Communications and Networking Conference (CCNC), 2011 IEEE*]{} (2011), IEEE, pp. 116–120. From trust to dependability through risk analysis.
In [*The Second InternationalConference on Availability, Reliability and Security, ARES’07.*]{} (2007), IEEE, pp. 19–26. Risk modelling and reasoning in goal models, technical report [DIT]{}-06-008. Tech. rep., Universit[á]{} degli studi di Trento, 2006. Risk as dependability metrics for the evaluation of business solutions: a model-driven approach. In [*Third Conference on Availability, Reliability and Security, ARES08*]{} (2008), IEEE, pp. 1240–1247. Basic concepts and taxonomy of dependable and secure computing. , 1 (2004), 11–33. A framework for exploiting security expertise in application development. In [*Trust and Privacy in Digital Business*]{}. Springer, 2006, pp. 62–70. Privacy-preserving reasoning on the semanticweb. In [*International Conference on Web Intelligence, IEEE/WIC/ACM*]{} (2007), IEEE, pp. 791–797. Ontology-based identification of research gaps and immature research areas. In [*Multidisciplinary Research and Practice for Information Systems*]{}. Springer, 2012, pp. 1–16. Semantic matching of web services security policies. In [*Risk and Security of Internet and Systems (CRiSIS), 2012 7th International Conference on*]{} (2012), IEEE, pp. 1–8. What is computer security? , 1 (2003), 67–69. A security ontology for incident analysis. In [*Proceedings of the Sixth Annual Workshop on Cyber Security and Information Intelligence Research*]{} (2010), ACM, p. 46. Basis for an integrated security ontology according to a systematic review of existing proposals. , 4 (2011), 372–388. A systematic review and comparison of security ontologies. In [*3rd Conference on Availability, Reliability and Security, ARES ’08*]{} (2008), IEEE, pp. 813–820. Enhancing privacy and authorization control scalability in the grid through ontologies. , 1 (2009), 16–24. Intelligent security and privacy solutions for enabling personalized telepathology. , Suppl 1 (2011), S4. The image protector-a flexible security rule specification toolkit. 
In [*Security and Cryptography (SECRYPT), 2011 Proceedings of the International Conference on*]{} (2011), IEEE, pp. 345–350. Introducing privacy in a hospital information system. In [*Proceedings of the fourth international workshop on Software engineering for secure systems*]{} (2008), ACM, pp. 9–16. . School of Information Technologies, University of Sydney, 2004. Analyzing regulatory rules for privacy and security requirements. , 1 (2008), 5–20. Eddy, a formal language for specifying and analyzing data flow specifications for conflicting privacy requirements. , 3 (2014), 281–307. Tropos: An agent-oriented software development methodology. , 3 (2004), 203–236. Designing for trust. In [*Trust, reputation, and security: Theories and practice*]{}. Springer, 2002, pp. 15–29. The economic cost of publicly announced information security breaches: empirical evidence from the stock market. , 3 (2003), 431–448. Security conscious web service composition. In [*International Conference on Web Services ICWS’06*]{} (2006), IEEE, pp. 489–496. Privacy by design: Origins, meaning, and prospects. (2011), 170. Privacy by design: The 7 foundational principles. (2009). Managing identities via interactions between ontologies. In [*On The Move to Meaningful Internet Systems OTM Workshops*]{} (2003), Springer, pp. 732–740. Knowledge modeling for privacy-by-design in smart surveillance solution. In [*Advanced Video and Signal Based Surveillance (AVSS), 2013 10th IEEE International Conference on*]{} (2013), IEEE, pp. 171–176. Improving privacy and security in multi-authority attribute-based encryption. In [*Proceedings of the 16th ACM conference on Computer and communications security*]{} (2009), ACM, pp. 121–130. An ontology for context-aware pervasive computing environments. , 03 (2003), 197–207. The [SOUPA]{} ontology for pervasive computing. In [*Ontologies for agents: Theory and experiences*]{}. Springer, 2005, pp. 233–258. Intelligent agents meet the semantic web in smart spaces. 
, 6 (2004), 69–79. Soupa: Standard ontology for ubiquitous and pervasive applications. In [*Mobile and Ubiquitous Systems: Networking and Services, 2004. MOBIQUITOUS 2004. The First Annual International Conference on*]{} (2004), IEEE, pp. 258–267. An ontological study of data purpose for privacy policy enforcement. In [*Privacy, Security, Risk and Trust (PASSAT) and Third Inernational Conference on Social Computing (SocialCom)*]{} (2011), IEEE, pp. 1208–1213. The design of an ontology-based service-oriented architecture framework for traditional chinese medicine healthcare. In [*14th International Conference on e-Health Networking, Applications and Services (Healthcom)*]{} (2012), IEEE, pp. 353–356. Trust in electronic environments. In [*System Sciences, 2003. Proceedings of the 36th Annual Hawaii International Conference on*]{} (2003), Ieee, pp. 10–pp. Private information retrieval. , 6 (1998), 965–981. Capturing semantics for information security and privacy assurance. In [*Ubiquitous Intelligence and Computing*]{}. Springer, 2008, pp. 105–118. Enabling access control and privacy through ontology. In [*Innovations in Information Technology, 2007. IIT’07. 4th International Conference on*]{} (2007), IEEE, pp. 168–172. Ontology-based matching of security attributes for personal data access in e-health. In [*On the Move to Meaningful Internet Systems: OTM 2011*]{}. Springer, 2011, pp. 605–616. Context ontology for secure interoperability. In [*Availability, Reliability and Security, 2008. ARES 08. Third International Conference on*]{} (2008), IEEE, pp. 821–827. How to capture, model, and verify the knowledge of legal, security, and privacy experts: a pattern-based approach. In [*Proceedings of the 11th international conference on Artificial intelligence and law*]{} (2007), ACM, pp. 149–153. Dealing with the formal analysis of information security policies through ontologies: A case study. 
In [*Proceedings of the Third Australasian Workshop on Advances in Ontologies-Volume 85*]{} (2007), Australian Computer Society, Inc., pp. 55–60. An ontology for run-time verification of security certificates for soa. In [*7th International Conference on Availability, Reliability and Security (ARES)*]{} (2012), IEEE, pp. 525–533. Pattern-based security requirements specification using ontologies and boilerplates. In [*Requirements Patterns (RePa), 2012 IEEE Second International Workshop on*]{} (2012), IEEE, pp. 54–59. Coresec: an ontology of security aplied to the business process of management. In [*Proceedings of the 2008 Euro American Conference on Telematics and Information Systems*]{} (2008), ACM, p. 13. Regulatory ontologies: An intellectual property rights approach. In [*On The Move to Meaningful Internet Systems 2003: OTM 2003 Workshops*]{} (2003), Springer, pp. 621–634. The [CORAS]{} methodology: model-based risk assessment using [UML]{} and up. (2003), 332–357. Security for daml web services: Annotation and matchmaking. In [*The Semantic Web-ISWC*]{}. Springer, 2003, pp. 335–350. . Springer, 2006. Toward a security ontology. , 3 (2003), 0006–7. A knowledge-based approach to security requirements for e-health applications. (2006). Security requirements for a semantic service-oriented architecture. In [*Availability, Reliability and Security, 2007. ARES 2007. The Second International Conference on*]{} (2007), IEEE, pp. 366–373. Ontology-based reasoning in requirements elicitation. In [*Software Engineering and Formal Methods, 2009 Seventh IEEE International Conference on*]{} (2009), IEEE, pp. 263–272. Reasoning with rules and ontologies. In [*Reasoning web*]{}. Springer, 2006, pp. 93–127. Ontological mapping of common criteria’s security assurance requirements. In [*New Approaches for Security, Privacy and Trust in Complex Environments*]{}. Springer, 2007, pp. 85–95. Security ontologies: Improving quantitative risk analysis. In [*System Sciences, 2007. 
HICSS 2007. 40th Annual Hawaii International Conference on*]{} (2007), IEEE, pp. 156a–156a. Security ontology: Simulating threats to corporate assets. In [*International Conference on Information Systems Security*]{} (2006), Springer, pp. 249–259. Privacy and security in e-learning. , 4 (2003), 1–19. A modeling ontology for integrating vulnerabilities into security requirements conceptual foundations. In [*ER 2009*]{}. Springer, 2009, pp. 99–114. A vulnerability-centric requirements engineering framework: analyzing security attacks, countermeasures, and requirements based on vulnerabilities. , 1 (2010), 41–62. Semantic access control in web based communities. In [*Computing in the Global Information Technology, 2008. ICCGI’08. The Third International Multi-Conference on*]{} (2008), IEEE, pp. 131–136. Isn’t the time ripe for a standard ontology on security of information and networks? In [*Proceedings of the 7th International Conference on Security of Information and Networks*]{} (2014), ACM, p. 1. Socio-technical systems. management sciences, models and techniques. churchman cw et al, 1960. Ontology-based security adaptation at run-time. In [*4th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO)*]{} (2010), IEEE, pp. 204–212. Limiting privacy breaches in privacy preserving data mining. In [*Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems*]{} (2003), ACM, pp. 211–222. A comparison of security requirements engineering methods. , 1 (2010), 7–40. Ontologies: A silver bullet for knowledge management and electronic-commerce (2000). . Ontology-based generation of it-security metrics. In [*Proceedings of the 2010 ACM Symposium on Applied Computing*]{} (2010), ACM, pp. 1833–1839. Formalizing information security knowledge. In [*Proceedings of the 4th international Symposium on information, Computer, and Communications Security*]{} (2009), ACM, pp. 183–194. 
Information security fortification by ontological mapping of the iso/iec 27001 standard. In [*Dependable Computing, 2007. PRDC 2007. 13th Pacific Rim International Symposium on*]{} (2007), IEEE, pp. 381–388. Security and privacy in electronic health records: A systematic literature review. , 3 (2013), 541–562. Surveillance ontology for legal, ethical and privacy protection based on skos. In [*Digital Signal Processing (DSP), 2013 18th International Conference on*]{} (2013), IEEE, pp. 1–5. Security and privacy for web databases and services. In [*Advances in Database Technology-EDBT 2004*]{}. Springer, 2004, pp. 17–28. Engineering safety and security related requirements for software intensive systems. In [*ICSE Companion*]{} (2007), p. 169. Security use cases. , 3 (2003). A taxonomy of security-related requirements. In [*International Workshop on High Assurance Systems (RHAS’05)*]{} (2005), Citeseer. The ontological interpretation of informational privacy. , 4 (2005), 185–200. A security architecture for computational grids. In [*Proceedings of the 5th ACM conference on Computer and communications security*]{} (1998), ACM, pp. 83–92. Ontology guided risk analysis: From informal specifications to formal metrics. In [*Advances in Information and Intelligent Systems*]{}. Springer, 2009, pp. 227–249. Discovering multidimensional correlations among regulatory requirements to understand risk. , 4 (2011), 16. Semantic web technologies to reconcile privacy and context awareness. , 3 (2004), 241–260. Modeling the impact of trust and distrust in agent networks. In [*Proc. of AOIS’01*]{} (2001), pp. 45–58. An approach for privacy protection based-on ontology. In [*Networks Security Wireless Communications and Trusted Computing (NSWCTC), 2010 Second International Conference on*]{} (2010), vol. 2, IEEE, pp. 397–400. Towards a base ontology for privacy protection in service-oriented architecture. 
In [*Service-Oriented Computing and Applications (SOCA), 2009 IEEE International Conference on*]{} (2009), IEEE, pp. 1–8. Privacy, consumers, and costs: How the lack of privacy costs consumers and why business studies of privacy costs are biased and incomplete. In [*Ford Foundation*]{} (2002). Analyzing trust requirements in socio-technical systems: A belief-based approach. In [*IFIP Working Conference on The Practice of Enterprise Modeling*]{} (2015), Springer, pp. 254–270. Privacy requirements: Findings and lessons learned in developing a privacy platform. In [*24nd International Requirements Engineering Conference (RE), to appear*]{} (2016), IEEE. Modeling security requirements through ownership, permission and delegation. In [*13th International Conference on Requirements Engineering*]{} (2005), IEEE, pp. 167–176. A framework for security driven software evolution. In [*Automation and Computing (ICAC), 2014 20th International Conference on*]{} (2014), IEEE, pp. 194–199. , vol. 46. IOS press, 1998. Monitoring and organizational-level adaptation of multi-agent systems. In [*Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems-Volume 2*]{} (2004), IEEE Computer Society, pp. 514–521. Engineering privacy by design. (2011). Use of ontology technology for standardization of medical records and dealing with associated privacy issues. In [*Industrial Technology, 2006. ICIT 2006. IEEE International Conference on*]{} (2006), IEEE, pp. 2839–2845. Case study [I]{}: Ontology-based multi-agent system for human disease studies. In [*Ontology-Based Multi-Agent Systems*]{}. Springer, 2009, pp. 179–216. Security requirements engineering: A framework for representation and analysis. , 1 (2008), 133–153. A framework for modeling privacy requirements in role engineering. In [*Proc. of REFSQ*]{} (2003), vol. 3, pp. 137–146. Privacy support and evaluation on an ontological basis. 
In [*23rd International Conference on Data Engineering Workshop*]{} (2007), IEEE, pp. 221–227. Privacy ontology support for e-commerce. , 2 (2008), 54–61. Multi-agent security service architecture for mobile learning. In [*Information Technology: Research and Education, 2004. ITRE 2004. 2nd International Conference on*]{} (2004), IEEE, pp. 91–95. Ontology-enabled access control and privacy recommendations. In [*Mining, Modeling, and Recommending’Things’ in Social Media*]{}. Springer, 2015, pp. 35–54. Event-based applications and enabling technologies. In [*Proceedings of the Third ACM International Conference on Distributed Event-Based Systems*]{} (2009), ACM, p. 1. Privacy risk models for designing privacy-sensitive ubiquitous computing systems. In [*Proceedings of the 5th conference on Designing interactive systems: processes, practices, methods, and techniques*]{} (2004), ACM, pp. 91–100. Towards combining ontologies and model weaving for the evolution of requirements models. In [*Innovations for requirement analysis. From stakeholders’ needs to formal designs*]{}. Springer, 2007, pp. 85–102. A light-weight ranger intrusion detection system on wireless sensor networks. In [*Fifth International Conference on Genetic and Evolutionary Computing (ICGEC)*]{} (2011), IEEE, pp. 49–52. Cloud computing security requirements: A systematic review. In [*2012 Sixth International Conference on Research Challenges in Information Science (RCIS)*]{} (2012), IEEE, pp. 1–7. Specifying an access control model for ontologies for the semantic web. In [*Secure Data Management*]{}. Springer, 2005, pp. 73–85. Intrusion correlation using ontologies and multi-agent systems. In [*Information Security and Assurance*]{}. Springer, 2010, pp. 51–63. Risk evaluation for personal identity management based on privacy attribute ontology. In [*Conceptual Modeling-ER 2008*]{}. Springer, 2008, pp. 183–198. Guidelines on security and privacy in public cloud computing. (2011), 144. 
Decision support for partially moving applications to the cloud: the example of business intelligence. In [*Proceedings of the 2013 international workshop on Hot topics in cloud services*]{} (2013), ACM, pp. 35–42. Umlsec: Extending [UML]{} for secure systems development. In [*UML The Unified Modeling Language*]{}. Springer, 2002, pp. 412–425. Introducing the common non-functional ontology. In [*Enterprise Interoperability II*]{}. Springer, 2007, pp. 633–645. User-centric social context information management: an ontology-based approach and platform. , 5 (2014), 1061–1083. Security and privacy challenges in open and dynamic environments. , 6 (2006), 89–91. Authorization and privacy for semantic web services. , 4 (2004), 50–56. Using domain ontology as domain knowledge for requirements elicitation. In [*Requirements Engineering, 14th IEEE International Conference*]{} (2006), IEEE, pp. 189–198. Dealing with privacy issues during the system design process. In [*Signal Processing and Information Technology, 2005. Proceedings of the Fifth IEEE International Symposium on*]{} (2005), IEEE, pp. 546–551. Addressing privacy requirements in system design: the [P]{}ri[S]{} method. , 3 (2008), 241–255. Ontology alignment in rfid privacy protection. In [*Complex, Intelligent and Software Intensive Systems, 2009. CISIS’09. International Conference on*]{} (2009), IEEE, pp. 718–723. A security ontology with mda for software development. In [*Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2013 International Conference on*]{} (2013), IEEE, pp. 67–74. An ontology for secure e-government applications. In [*Availability, Reliability and Security, 2006. ARES 2006. The First International Conference on*]{} (2006), IEEE, pp. 5–pp. An ontology-based approach to context-aware access control for software services. In [*Web Information Systems Engineering–WISE 2013*]{}. Springer, 2013, pp. 410–420. 
A semantic policy framework for context-aware access control applications. In [*12th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)*]{} (2013), IEEE, pp. 753–762. . Guidelines for performing systematic literature reviews in software engineering. Tech. rep., Keele University, 2007. Security oriented service composition: A framework. In [*Innovations in Information Technology (IIT), 2012 International Conference on*]{} (2012), IEEE, pp. 48–53. Security ontology for annotating resources. In [*OTM Confederated International Conferences“ On the Move to Meaningful Internet Systems”*]{} (2005), Springer, pp. 1483–1499. Procedures for performing systematic reviews. , 2004 (2004), 1–26. Contrology-an ontology-based cloud assurance approach. In [*4th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)*]{} (2015), IEEE, pp. 105–107. Privacy analysis using ontologies. In [*Proceedings of the second ACM conference on Data and Application Security and Privacy*]{} (2012), ACM, pp. 205–216. Privacy verification using ontologies. In [*Availability, Reliability and Security (ARES), 2011 Sixth International Conference on*]{} (2011), IEEE, pp. 627–632. Deriving implementation-level policies for usage control enforcement. In [*Proceedings of the second ACM conference on Data and Application Security and Privacy*]{} (2012), ACM, pp. 83–94. Modeling of privacy-aware business processes in bpmn to protect personal data. In [*Proceedings of the 29th Annual ACM Symposium on Applied Computing*]{} (2014), ACM, pp. 1399–1405. A conceptual meta-model for secured information systems. In [*Proceedings of the 7th International Workshop on Software Engineering for Secure Systems*]{} (2011), ACM, pp. 22–28. Privacy by design—principles of privacy-aware ubiquitous systems. In [*Ubicomp 2001: Ubiquitous Computing*]{} (2001), Springer, pp. 273–291. Ontology of secure service level agreement. 
In [*High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on*]{} (2015), IEEE, pp. 166–172. Building problem domain ontology from security requirements in regulatory documents. In [*Proceedings of the 2006 international workshop on Software engineering for secure systems*]{} (2006), ACM, pp. 43–50. Data security and privacy in wireless body area networks. , 1 (2010), 51–58. Empirical studies on online information privacy concerns: literature review and an integrative framework. , 1 (2011), 453–496. Ontology-based negotiation of security requirements in cloud. In [*Computational Aspects of Social Networks (CASoN), 2012 Fourth International Conference on*]{} (2012), IEEE, pp. 192–197. Introducing abuse frames for analysing security requirements. In [*11th Requirements Engineering International Conference*]{} (2003), IEEE, pp. 371–372. A proxy for privacy: the discreet box. In [*The International Conference on Computer as a Tool, EUROCON*]{} (2007), IEEE, pp. 966–973. Ontology-based requirements conflicts analysis in activity diagrams. In [*Computational Science and Its Applications–ICCSA 2009*]{}. Springer, 2009, pp. 1–12. Security and privacy requirements analysis within a social setting. In [*11th International Requirements Engineering Conference*]{} (2003), IEEE, pp. 151–161. Enterprise knowledge management and conceptual modelling. In [*Conceptual Modeling*]{}. Springer, 1999, pp. 123–143. A collaborative ontology development tool for information security managers. In [*Proceedings of the 4th Symposium on Computer Human Interaction for the Management of Information Technology*]{} (2010), ACM, p. 5. The dangers of poor construct conceptualization. , 3 (2003), 323–326. Methodology for information systems risk analysis and management, 2006. Detecting privacy in attention aware system. In [*Intelligent Environments, 2006. IE 06. 2nd IET International Conference on*]{} (2006), vol. 2, IET, pp. 231–239. 
Retracted: shared ontology for pervasive computing. In [*Advances in Computer Science–ASIAN 2005. Data Management on the Web*]{}. Springer, 2005, pp. 64–78. An extended ontology for security requirements. In [*Advanced Information Systems Engineering Workshops*]{} (2011), Springer, pp. 622–636. Computer-aided support for Secure Tropos. , 3 (2007), 341–364. An ontology for secure socio-technical systems. (2007), 469. Using a security requirements engineering methodology in practice: the compliance with the Italian data protection legislation. , 5 (2005), 445–455. Detecting conflicts between functional and security requirements with Secure Tropos: John Rusnak and the Allied Irish Bank. (2008). Adapting Secure Tropos for security risk management in the early phases of information systems development. In [*Advanced Information Systems Engineering*]{} (2008), Springer, pp. 541–555. The production rule framework: developing a canonical set of software requirements for compliance with law. In [*Proceedings of the 1st ACM International Health Informatics Symposium*]{} (2010), ACM, pp. 629–636. . PhD thesis, University of Namur, 2009. Towards a risk-based security requirements engineering framework. In [*Workshop on Requirements Engineering for Software Quality. In Proc. of REFSQ*]{} (2005), vol. 5. Use of ontologies in pervasive computing environments. (2003). A systematic review of security requirements engineering. , 4 (2010), 153–165. Development of an ontology-based smart card system reference architecture. In [*Ontologies*]{}. Springer, 2007, pp. 841–863. Ontology-based evaluation of ISO 27001. In [*I3E*]{} (2010), Springer, pp. 93–102. Privacy-preserving ontology matching. In [*AAAI Workshop on Context and Ontologies*]{} (2005). Privacy-preserving semantic interoperation and access control of heterogeneous databases. In [*Proceedings of the 2006 ACM Symposium on Information, computer and communications security*]{} (2006), ACM, pp. 66–77.
A legal ontology to support privacy preservation in location-based services. In [*On the Move to Meaningful Internet Systems: OTM Workshops*]{} (2006), Springer, pp. 1755–1764. Semantic annotations for security policy matching in WS-Policy. In [*Security and Cryptography (SECRYPT), 2011 Proceedings of the International Conference on*]{} (2011), IEEE, pp. 443–449. Secure Tropos: A security-oriented extension of the Tropos methodology. , 2 (2007), 285–309. An ontology for modelling security: The Tropos approach. In [*Knowledge-Based Intelligent Information and Engineering Systems*]{} (2003), Springer, pp. 1387–1394. Surprise: user-controlled granular privacy and security for personal data in SmarterContext. In [*Proceedings of the 2012 Conference of the Center for Advanced Studies on Collaborative Research*]{} (2012), IBM Corp., pp. 131–145. Accounting for social, spatial, and textual interconnections. In [*Computer Applications for Handling Legal Evidence, Police Investigation and Case Argumentation*]{}. Springer, 2012, pp. 483–765. Cryptographic approach to “privacy-friendly” tags. In [*RFID privacy workshop*]{} (2003), vol. 82, Cambridge, USA. Managing security and privacy in ubiquitous eHealth information interchange. In [*Proceedings of the 5th International Conference on Ubiquitous Information Management and Communication*]{} (2011), ACM, p. 26. Database privacy: balancing confidentiality, integrity and availability. , 2 (2002), 20–27. STS-tool: Security requirements engineering for socio-technical systems. In [*Engineering Secure Future Internet Services and Systems*]{}. Springer, 2014, pp. 65–96. Efficient projection of ontologies. Leveraging ontologies upon a holistic privacy-aware access control model. In [*Foundations and Practice of Security*]{}. Springer, 2014, pp. 209–226. An information security ontology incorporating human-behavioural implications.
In [*Proceedings of the 2nd International Conference on Security of Information and Networks*]{} (2009), ACM, pp. 46–55. An ontology based approach to information security. In [*Metadata and Semantic Research*]{}. Springer, 2009, pp. 183–192. SPINS: Security protocols for sensor networks. , 5 (2002), 521–534. Property attestation—scalable and privacy-friendly security assessment of peer computers. Privacy compliance in European healthgrid domains: An ontology-based approach. In [*Computer-Based Medical Systems, 2009. CBMS 2009. 22nd IEEE International Symposium on*]{} (2009), IEEE, pp. 1–8. Ontology views: a theoretical perspective. In [*On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops*]{} (2006), Springer, pp. 1814–1824. Ontologies in a pervasive computing environment. In [*Workshop on Ontologies in Distributed Systems at IJCAI, Acapulco, Mexico*]{} (2003), Citeseer. Ontology in information security: a useful theoretical foundation and methodological tool. In [*Proceedings of the 2001 workshop on New security paradigms*]{} (2001), ACM, pp. 53–59. Preserving privacy in web services. In [*Proceedings of the 4th international workshop on Web information and data management*]{} (2002), ACM, pp. 56–62. A survey on ontologies for human behavior recognition. , 4 (2014), 43. An extended misuse case notation: Including vulnerabilities and the insider threat. In [*International Working Conference on Requirements Engineering: Foundation for Software Quality*]{} (2006), Springer, pp. 33–34. Guidelines for conducting and reporting case study research in software engineering. , 2 (2009), 131–164. Ontology-based platform for trusted regulatory compliance services. In [*On The Move to Meaningful Internet Systems Workshops OTM*]{} (2003), Springer, pp. 675–689. A privacy preference ontology (PPO) for linked data. In [*LDOW*]{} (2011), Citeseer. Role-based access control models. , 2 (1996), 38–47.
Secure enterprise interoperability ontology for semantic integration of business to business applications. In [*P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2013 Eighth International Conference on*]{} (2013), IEEE, pp. 68–75. Towards knowledge level privacy and security using RDF/RDFS and RBAC. In [*Semantic Computing (ICSC), 2015 IEEE International Conference on*]{} (2015), IEEE, pp. 264–267. The epistemology of computer security. , 6 (2009), 8–10. What are information security ontologies useful for? In [*Metadata and Semantics Research*]{}. Springer, 2015, pp. 51–61. Eliciting security requirements with misuse cases. , 1 (2005), 34–44. A comparative study of cloud security ontologies. In [*Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2014 3rd International Conference on*]{} (2014), IEEE, pp. 1–6. Revisiting security ontologies. (2014). Ontologies for modeling enterprise level security metrics. In [*Proceedings of the Sixth Annual Workshop on Cyber Security and Information Intelligence Research*]{} (2010), ACM, p. 58. An information privacy taxonomy for collaborative environments. , 4 (2006), 382–394. Conceptualizing privacy. (2002), 1087–1155. A taxonomy of privacy. (2006), 477–564. Large-scale complex IT systems. , 7 (2012), 71–77. ICT tools and systems supporting innovation in product/process development. (2009), 113–152. Towards a new generation of security requirements definition methodology using ontologies. In [*24th International Conference on Advanced Information Systems Engineering (CAiSE’12)*]{} (2012), pp. 1–8. Reusable knowledge in security requirements engineering: a systematic mapping study. (2015), 1–33. Ontologies for security requirements: A literature survey and classification. In [*Advanced Information Systems Engineering Workshops*]{} (2012), Springer, pp. 61–69. A security ontology for security requirements elicitation. In [*Engineering Secure Software and Systems*]{}.
Springer, 2015, pp. 157–177. Using security and domain ontologies for security requirements analysis. In [*Computer Software and Applications Conference Workshops (COMPSACW), 2013 IEEE 37th Annual*]{} (2013), IEEE, pp. 101–107. Evaluating automatically a text miner for ontologies: a catch-22 situation? In [*On the Move to Meaningful Internet Systems: OTM 2008*]{}. Springer, 2008, pp. 1404–1422. Achieving privacy in trust negotiations with an ontology-based approach. , 1 (2006), 13–30. SAFE: Secure and big data-adaptive framework for efficient cross-domain communication. In [*Proceedings of the First International Workshop on Privacy and Security of Big Data*]{} (2014), ACM, pp. 19–28. Ontology guided XML security engine. , 3 (2004), 209–223. Privacy preserving modules for ontologies. In [*Perspectives of Systems Informatics*]{}. Springer, 2009, pp. 380–387. Trust-terms ontology for defining security requirements and metrics. In [*Proceedings of the Fourth European Conference on Software Architecture: Companion Volume*]{} (2010), ACM, pp. 175–180. Towards cross-domain security properties supported by ontologies. In [*Web Information Systems (WISE) Workshops*]{} (2004), Springer, pp. 58–69. Security and privacy challenges in cloud computing environments. , 6 (2010), 24–31. Security issues in a SOA-based provenance system. In [*Provenance and Annotation of Data*]{}. Springer, 2006, pp. 203–211. A framework for multi-agent system engineering using ontology domain modelling for security architecture risk assessment in e-commerce security services. In [*3rd IEEE International Symposium on Network Computing and Applications (NCA)*]{} (2004), IEEE, pp. 409–412. HIT considerations: Informatics and technology needs and considerations. In [*Integration of Medical and Dental Care and Patient Data*]{}. Springer, 2012, pp. 25–137. . Cengage Learning, 2006. Introducing privacy awareness in network monitoring ontologies. In [*Trustworthy Internet*]{}. Springer, 2011, pp. 317–331.
Towards an ontology-based security management. In [*20th International Conference on Advanced Information Networking and Applications (AINA)*]{} (2006), vol. 1, IEEE, pp. 985–992. Security-by-ontology: A knowledge-centric approach. In [*Security and Privacy in Dynamic Environments*]{}. Springer, 2006, pp. 99–110. Modeling computer attacks: An ontology for intrusion detection. In [*Recent Advances in Intrusion Detection*]{} (2003), Springer, pp. 113–135. Ontologies: Principles, methods and applications. , 02 (1996), 93–136. Handbook of privacy and privacy-enhancing technologies. (2003). Elaborating security requirements by construction of intentional anti-models. In [*Proceedings of the 26th International Conference on Software Engineering*]{} (2004), IEEE Computer Society, pp. 148–157. Modelling reusable security requirements based on an ontology framework. , 2 (2009), 119. Privacy protection for smartphones: an ontology-based firewall. In [*Information Security Theory and Practice. Security and Privacy of Mobile Devices in Wireless Communication*]{}. Springer, 2011, pp. 371–380. An ontological approach applied to information security and trust. (2007), 114. Security attack ontology for web services. In [*Semantics, Knowledge and Grid, 2006. SKG’06. Second International Conference on*]{} (2006), IEEE, pp. 42–42. Specifying dynamic security properties of web service based systems. In [*Semantics, Knowledge and Grid, 2006. SKG’06. Second International Conference on*]{} (2006), IEEE, pp. 34–34. An ontology framework for managing security attacks and defences in component based software systems. In [*Software Engineering, 2008. ASWEC 2008. 19th Australian Conference on*]{} (2008), IEEE, pp. 552–561. Ontology-based analysis of information security standards and capabilities for their harmonization. In [*Proceedings of the 3rd international conference on Security of information and networks*]{} (2010), ACM, pp. 137–141. OVM: an ontology for vulnerability management.
In [*Proceedings of the 5th Annual Workshop on Cyber Security and Information Intelligence Research*]{} (2009), ACM, p. 34. Environmental metrics for software security based on a vulnerability ontology. In [*Secure Software Integration and Reliability Improvement, 2009. SSIRI 2009. Third IEEE International Conference on*]{} (2009), IEEE, pp. 159–168. Using ontologies to perform threat analysis and develop defensive strategies for mobile security. , 1–25. A taxonomy for privacy. Tech. rep., DTIC Document, 1981. Internet of things - new security and privacy challenges. , 1 (2010), 23–30. Mining and analysing security goal models in health information systems. In [*Software Engineering in Health Care, 2009. SEHC’09. ICSE Workshop on*]{} (2009), IEEE, pp. 42–52. Research on semantic-based security services model of SOA. In [*E-Business and Information System Security, 2009. EBISS’09. International Conference on*]{} (2009), IEEE, pp. 1–4. The design and enforcement of a rule-based constraint policy language for service composition. In [*Social Computing (SocialCom), 2010 IEEE Second International Conference on*]{} (2010), IEEE, pp. 873–880. . Springer, 2004. Ontology-based information content security analysis. In [*Fuzzy Systems and Knowledge Discovery, 2008. FSKD’08. Fifth International Conference on*]{} (2008), vol. 5, IEEE, pp. 479–483. A framework for specifying and managing security requirements in collaborative systems. In [*Autonomic and Trusted Computing*]{}. Springer, 2006, pp. 500–510. Hierarchical situation modeling and reasoning for pervasive computing. In [*Software Technologies for Future Embedded and Ubiquitous Systems, 2006 and the 2006 Second International Workshop on Collaborative Computing, Integration, and Assurance. SEUS 2006/WCCIA 2006. The Fourth IEEE Workshop on*]{} (2006), IEEE, 6 pp. An adaptable security framework for service-based systems.
In [*10th International Workshop on Object-Oriented Real-Time Dependable Systems (WORDS)*]{} (2005), IEEE, pp. 28–35. Designing for privacy and other competing requirements. In [*2nd Symposium on Requirements Engineering for Information Security (SREIS’02), Raleigh, North Carolina*]{} (2002), Citeseer, pp. 15–16. Enforcing a security pattern in stakeholder goal models. In [*Proceedings of the 4th ACM Workshop on Quality of Protection*]{} (2008), ACM, pp. 9–14. . PhD thesis, University of Trento, 2006. Developing a privacy ontology for privacy control in context-aware systems. Tech. rep., 2006.

Appendix A: Quality assessment application {#appendix-a-quality-assessment-application .unnumbered}
==========================================

N & ID & Q1 & Q2 & Q3 & Q4 & Q5 & S\
1 & ACM\_02 [@chase2009improving] & - & - & - & 1 & 1 & 2\
2 & ACM\_03 [@van2004elaborating] & 1 & 1 & 1 & 1 & 1 & 5\
3 & ACM\_04 [@rezgui2002preserving] & - & - & - & 1 & 1 & 2\
4 & ACM\_05 [@kost2012privacy] & 1 & 1 & - & - & - & 2\
5 & ACM\_06 [@gandhi2011discovering] & 1 & - & - & - & 1 & 2\
6 & ACM\_07 [@hinze2009event] & 1 & - & - & 1 & 1 & 3\
7 & ACM\_08 [@oladimeji2011managing] & 1 & - & - & - & 1 & 2\
8 & ACM\_10 [@srinivasan2014safe] & 1 & - & - & - & - & 1\
9 & ACM\_11 [@weber2009mining] & - & - & - & - & 1 & 1\
10 & ACM\_13 [@munoz2012surprise] & 1 & - & - & 1 & - & 2\
11 & ACM\_14 [@labda2014modeling] & 1 & 1 & 1 & 1 & - & 4\
12 & ACM\_16 [@braghin2008introducing] & 1 & 1 & 1 & 1 & - & 4\
13 & ACM\_17 [@compagna2007capture] & 1 & - & - & - & - & 1\
14 & ACM\_18 [@yu2008enforcing] & 1 & - & - & - & - & 1\
15 & ACM\_19 [@maxwell2010production] & 1 & - & - & - & - & 1\
16 & ACM\_22 [@studer2009privacy] & 1 & 1 & - & 1 & - & 3\
17 & ACM\_23 [@mitra2006privacy] & 1 & - & - & 1 & 1 & 3\
18 & ACM\_24 [@sullivan2010trust] & - & - & - & - & - & 0\
19 & ACM\_26 [@mace2010collaborative] & 1 & - & - & - & - & 1\
20 & ACM\_28 [@schaefer2009epistemology] & - & - & - & - & - & 0\
21 & ACM\_30 [@yau2006framework] & 1 & 1 & - & 1 & - & 3\
22 & ACM\_32 [@alam2006model] & 1 & 1 & - & - & - & 2\
23 & ACM\_34 [@fenz2010ontology] & - & - & - & - & - & -\
24 & ACM\_35 [@singhal2010ontologies] & 1 & 1 & 1 & 1 & - & 4\
25 & ACM\_36 [@blackwell2010security] & - & - & 1 & - & - & 1\
26 & ACM\_37 [@da2007dealing] & - & - & - & - & - & 0\
27 & ACM\_40 [@wang2009ovm] & 1 & 1 & 1 & - & 1 & 4\
28 & IEEE\_03 [@kost2011privacy] & 1 & 1 & - & - & - & 2\
29 & IEEE\_09 [@rahmouni2009privacy] & 1 & - & - & 1 & - & 2\
30 & IEEE\_11 [@liccardo2012ontology] & 1 & - & - & - & - & 1\
31 & IEEE\_12 [@souag2013using] & 1 & 1 & 1 & 1 & 1 & 5\
32 & IEEE\_13 [@daramola2012pattern] & - & 1 & - & - & - & 1\
33 & IEEE\_14 [@yau2006hierarchical] & 1 & - & - & - & 1 & 2\
34 & IEEE\_15 [@tsoumas2006towards] & 1 & 1 & 1 & - & 1 & 4\
35 & IEEE\_18 [@squicciarini2006achieving] & 1 & 1 & - & 1 & - & 3\
36 & IEEE\_19 [@firesmith2007engineering] & 1 & 1 & - & - & - & 2\
37 & IEEE\_21 [@wang2009environmental] & 1 & - & 1 & 1 & - & 3\
38 & IEEE\_25 [@chen2004soupa] & 1 & - & - & 1 & 1 & 3\
39 & IEEE\_26 [@akmayeva2010ontology] & - & - & - & - & - & 0\
40 & IEEE\_28 [@coma2008context] & 1 & - & - & - & - & 1\
41 & IEEE\_30 [@maisonnasse2006detecting] & 1 & - & - & 1 & - & 2\
42 & IEEE\_33 [@gao2010approach] & - & - & - & - & - & 0\
43 & IEEE\_35 [@chandramouli2013knowledge] & 1 & 1 & - & 1 & - & 3\
44 & IEEE\_36 [@singh2014comparative] & 1 & - & - & - & - & 1\
45 & IEEE\_38 [@lee15ontology] & 1 & - & - & 1 & - & 2\
46 & IEEE\_41 [@garcia2009towards] & 1 & - & - & - & - & 1\
47 & IEEE\_42 [@fenz2007information] & 1 & - & - & 1 & 1 & 3\
48 & IEEE\_48 [@chowdhury2007enabling] & 1 & - & - & 1 & - & 3\
49 & IEEE\_49 [@bishop2003computer] & 1 & - & - & - & 1 & 2\
50 & IEEE\_50 [@Giorgini2005] & 1 & 1 & 1 & 1 & 1 & 5\
51 & IEEE\_51 [@hecker2008privacy] & 1 & 1 & 1 & - & - & 3\
52 & IEEE\_52 [@hadzic2006use] & 1 & - & - & - & - & 1\
53 & IEEE\_54 [@blanquer2009enhancing] & 1 & - & - & 1 & 1 & 3\
54 & IEEE\_56 [@torrellas2004framework] & 1 & - & - & 1 & 1 & 3\
55 & IEEE\_57 [@kang2013security] & 1 & 1 & 1 & 1 & - & 4\
56 & IEEE\_58 [@vorobiev2006security] & 1 & - & - & 1 & 1 & 3\
57 & IEEE\_59 [@yan2008ontology] & 1 & - & - & 1 & - & 2\
58 & IEEE\_60 [@ekelhart2007security] & 1 & 1 & - & 1 & 1 & 4\
59 & CIT\_01 [@poritz2004property] & 1 & - & - & 1 & 1 & 3\
60 & CIT\_07 [@velasco2009modelling] & 1 & 1 & 1 & 1 & 1 & 5\
61 & CIT\_09 [@breaux2008analyzing] & 1 & - & - & 1 & 1 & 3\
62 & CIT\_12 [@chor1998private] & 1 & - & - & 1 & 1 & 3\
63 & CIT\_13 [@souag2012towards] & - & - & - & - & - & -\
64 & CIT\_15 [@ekclhart2007ontological] & 1 & - & - & 1 & - & 2\
65 & CIT\_18 [@Massacci2007] & 1 & 1 & 1 & 1 & 1 & 5\
66 & CIT\_23 [@zhangdeveloping] & 1 & - & - & - & - & 1\
67 & CIT\_26 [@langheinrich2001privacy] & 1 & - & - & 1 & 1 & 3\
68 & CIT\_29 [@vorobiev2008ontology] & 1 & - & - & 1 & - & 2\
69 & CIT\_31 [@ranganathan2003ontologies] & 1 & - & - & - & 1 & 2\
70 & CIT\_33 [@liu2003security] & 1 & 1 & 1 & 1 & 1 & 5\
71 & Spgr\_01 [@kim2005security] & 1 & - & - & - & 1 & 2\
72 & Spgr\_02 [@fabian2010comparison] & - & - & - & - & - & -\
73 & Spgr\_03 [@souag2012ontologies] & - & - & - & - & - & -\
74 & Spgr\_07 [@massacci2011extended] & 1 & 1 & 1 & 1 & - & 4\
75 & Spgr\_08 [@souag2015security] & - & - & - & - & - & -\
76 & Spgr\_13 [@elahi2009modeling] & 1 & 1 & 1 & 1 & - & 4\
77 & Spgr\_14 [@tsoumas2006security] & 1 & - & - & 1 & - & 2\
78 & Spgr\_18 [@milicevic2010ontology] & 1 & - & - & - & - & 1\
79 & Spgr\_19 [@dhiah2006ontology] & 1 & - & - & - & - & 1\
80 & Spgr\_20 [@vincent2011privacy] & - & - & - & - & - & 1\
81 & Spgr\_22 [@gandhi2009ontology] & 1 & - & - & 1 & - & 2\
82 & Spgr\_28 [@heupel2015ontology] & 1 & - & - & 1 & - & 2\
83 & Spgr\_31 [@chen2005soupa] & 1 & - & - & - & 1 & 2\
84 & Spgr\_32 [@mouratidis2003ontology] & 1 & - & - & - & 1 & 2\
85 & Spgr\_34 [@mitre2006legal] & 1 & - & - & - & - & 1\
86 & Spgr\_35 [@chowdhury2008capturing] & 1 & - & - & - & - & 1\
87 & Spgr\_36 [@pereira2009ontology] & 1 & - & - & - & - & 1\
88 & Spgr\_38 [@delgado2003regulatory] & 1 & - & - & - & - & 1\
89 & Spgr\_41 [@iwaihara2008risk] & 1 & - & - & 1 & - & 2\
90 & Spgr\_55 [@ekelhart2006security] & 1 & - & - & - & - & 1\
91 & Spgr\_56 [@abulaish2011simont] & 1 & - & - & - & 1 & 2\
92 & Spgr\_58 [@ciuciu2011ontology] & 1 & - & - & - & - & 1\
93 & Spgr\_60 [@breaux2014eddy] & 1 & - & - & 1 & 1 & 3\
94 & SCH\_02 [@chen2003ontology] & 1 & 1 & - & - & 1 & 3\
95 & SCH\_03 [@massacci2005using] & 1 & 1 & 1 & 1 & 1 & 5\
96 & SCH\_06 [@sacco2011privacy] & 1 & - & - & - & 1 & 2\
97 & SCH\_16 [@firesmith2003security] & 1 & - & - & 1 & 1 & 3\
98 & SCH\_18 [@sindre2005eliciting] & 1 & - & 1 & 1 & 1 & 4\
99 & SCH\_20 [@anton2002analyzing] & 1 & 1 & - & - & 1 & 3\
100 & SCH\_24 [@kalloniatis2008addressing] & 1 & 1 & 1 & 1 & 1 & 5\
101 & SCH\_26 [@haley2008security] & 1 & - & - & 1 & 1 & 3\
102 & SCH\_27 [@he2003framework] & 1 & - & - & 1 & 1 & 3\
103 & SCH\_28 [@mouratidis2007secure] & 1 & 1 & 1 & 1 & 1 & 5\
104 & SCH\_32 [@donner2003toward] & - & - & - & - & - & 0\
105 & SCH\_36 [@alliance2003hipaa] & 1 & - & - & 1 & - & 2\
106 & SCH\_41 [@solove2006taxonomy] & 1 & 1 & 1 & 1 & 1 & 5\
107 & SCH\_43 [@skinner2006information] & 1 & - & - & 1 & - & 2\
108 & Spgr\_18\_01 [@karyda2006ontology] & 1 & - & - & 1 & - & 3\
109 & Spgr\_18\_02 [@raskin2001ontology] & - & - & - & - & 1 & 1\
110 & Spgr\_18\_03 [@fenz2009formalizing] & 1 & 1 & 1 & - & 1 & 4\
111 & Spgr\_13\_01 [@asnar2008risk] & 1 & 1 & 1 & 1 & 1 & 5\
112 & Spgr\_13\_02 [@den2003coras] & 1 & 1 & - & 1 & 1 & 4\
113 & Spgr\_13\_03 [@elahi2010vulnerability] & 1 & 1 & 1 & 1 & 1 & 5\
114 & Spgr\_13\_04 [@jurjens2002umlsec] & 1 & 1 & - & 1 & 1 & 4\
115 & Spgr\_13\_05 [@matulevivcius2008adapting] & 1 & 1 & 1 & 1 & 1 & 5\
116 & Spgr\_13\_06 [@mayer2005towards] & 1 & 1 & 1 & 1 & 1 & 5\
117 & Spgr\_13\_07 [@rostad2006extended] & 1 & - & - & 1 & 1 & 3\
118 & Spgr\_13\_08 [@singh2014revisiting] & 1 & 1 & - & - & 1 & 3\
119 & Spgr\_08\_01 [@mayer2009model] & 1 & 1 & 1 & 1 & 1 & 5\
120 & Spgr\_08\_02 [@velasco2009modelling] & 1 & 1 & 1 & 1 & - & 4\
121 & Spgr\_08\_03 [@dritsas2006knowledge] & 1 & 1 & 1 & 1 & - & 4\
122 & Spgr\_07\_01 [@blanco2008systematic] & - & - & - & - & - & -\
123 & Spgr\_07\_02 [@zannone2006requirements] & 1 & 1 & 1 & 1 & - & 4\
124 & Spgr\_07\_03 [@lin2003introducing] & 1 & - & - & - & 1 & 2\
125 & Spgr\_03\_01 [@avizienis2004basic] & 1 & 1 & - & 1 & 1 & 4\
126 & Spgr\_03\_02 [@firesmith2005taxonomy] & 1 & - & - & - & 1 & 2\
127 & Spgr\_02\_01 [@asnar2007trust] & 1 & 1 & - & 1 & 1 & 4\
128 & Spgr\_02\_02 [@asnar2006risk] & 1 & 1 & - & 1 & 1 & 4\
129 & SCH\_24\_01 [@kalloniatis2005dealing] & 1 & - & - & 1 & - & 2\
130 & SCH\_24\_02 [@hong2004privacy] & 1 & 1 & 1 & 1 & 1 & 5\
131 & SCH\_28\_01 [@paja2014sts] & 1 & 1 & 1 & 1 & - & 4\
132 & SCH\_43\_01 [@van2003handbook] & 1 & 1 & 1 & 1 & - & 4\

Appendix B: Overview of all the considered studies {#appendix-b-overview-of-all-the-considered-studies .unnumbered}
==================================================

N & ID & Title & Author(s) & Pub Year & \# Cited & Decision\
001 & ACM\_01 [@olivier2002database] & Database Privacy, Balancing Confidentiality, Integrity and Availability & Martin S Olivier & 2002 & 30 & Excluded stage 1\
002 & ACM\_02 [@chase2009improving] & Improving privacy and security in multi-authority attribute-based encryption & Melissa Chase, Sherman S.M. Chow & 2009 & 375 & Excluded stage 2\
003 & ACM\_03 [@van2004elaborating] & Elaborating Security Requirements by Construction of Intentional Anti-Models & Axel van Lamsweerde & 2004 & 337 & Selected\
004 & ACM\_04 [@rezgui2002preserving] & Preserving Privacy in Web Services & Abdelmounaam Rezgui, Mourad Ouzzani, Athman Bouguettaya, Medjahed Brahim & 2002 & 102 & Excluded stage 2\
005 & ACM\_05 [@kost2012privacy] & Privacy analysis using ontologies & Martin Kost, Johann Christoph Freytag & 2012 & 9 & Excluded stage 2\
006 & ACM\_06 [@gandhi2011discovering] & Discovering Multidimensional Correlations among Regulatory Requirements to Understand Risk & Robin A. Gandhi, Seok Won Lee & 2011 & 7 & Excluded stage 2\
007 & ACM\_07 [@hinze2009event] & Event-based applications and enabling technologies & Annika Hinze, Kai Sachs, Alejandro Buchmann & 2009 & 110 & Excluded stage 2\
008 & ACM\_08 [@oladimeji2011managing] & Managing security and privacy in ubiquitous eHealth information interchange & Ebenezer A. Oladimeji, Lawrence Chung, Hyo Taeg Jung, Jaehyoun Kim & 2011 & 12 & Excluded stage 2\
009 & ACM\_09 [@kumari2012deriving] & Deriving implementation-level policies for usage control enforcement & Prachi Kumari, Alexander Pretschner & 2012 & 18 & Excluded stage 1\
010 & ACM\_10 [@srinivasan2014safe] & SAFE: Secure and Big Data-Adaptive Framework for Efficient Cross-Domain Communication & Avinash Srinivasan, Jie Wu, Wen Zhu & 2014 & 1 & Excluded stage 2\
011 & ACM\_11 [@weber2009mining] & Mining and Analysing Security Goal Models in Health Information Systems & Jens H. Weber-Jahnke, Adeniyi Onabajo & 2009 & 6 & Excluded stage 2\
012 & ACM\_12 [@juan2013decision] & Decision support for partially moving applications to the cloud: the example of business intelligence & Adrian Juan-Verdejo, Henning Baars & 2013 & 10 & Excluded stage 2\
013 & ACM\_13 [@munoz2012surprise] & Surprise: user-controlled granular privacy and security for personal data in SmarterContext & Juan C. Muñoz, Gabriel Tamura, Norha M. Villegas, Hausi A. Müller & 2012 & 5 & Excluded stage 2\
014 & ACM\_14 [@labda2014modeling] & Modeling of privacy-aware business processes in BPMN to protect personal data & Wadha Labda, Nikolay Mehandjiev, Pedro Sampaio & 2014 & 0 & Selected\
015 & ACM\_15 [@lammari2011conceptual] & A conceptual meta-model for secured information systems & Nadira Lammari, Jean-Sylvain Bucumi, Jacky Akoka, Isabelle Comyn-Wattiau & 2011 & 2 & Excluded stage 1\
016 & ACM\_16 [@braghin2008introducing] & Introducing privacy in a hospital information system & Stefano Braghin, Alberto Coen-Porisini, Pietro Colombo, Sabrina Sicari, Alberto Trombetta & 2008 & 9 & Selected\
017 & ACM\_17 [@compagna2007capture] & How to capture, model, and verify the knowledge of legal, security, and privacy experts: a pattern-based approach & Luca Compagna, Paul El Khoury, Fabio Massacci, Reshma Thomas, Nicola Zannone & 2007 & 29 & Excluded stage 2\
018 & ACM\_18 [@yu2008enforcing] & Enforcing a Security Pattern in Stakeholder Goal Models & Yijun Yu, Haruhiko Kaiya, Hironori Washizaki, Yingfei Xiong, Zhenjiang Hu, Nobukazu Yoshioka & 2008 & 17 & Excluded stage 2\
019 & ACM\_19 [@maxwell2010production] & The production rule framework: developing a canonical set of software requirements for compliance with law & Jeremy C. Maxwell, Annie I.
Antón & 2010 & 15 & Excluded stage 2\ 020 & ACM\_20 [@tan2006security] & Security issues in a SOA-Based provenance system & Victor Tan, Paul Groth, Simon Miles, Sheng Jiang, Steve Munroe, Sofia Tsasakou, Luc Moreau & 2006 & 73 & Excluded stage 1\ 021 & ACM\_21 [@amagasa2014scheme] & A scheme for privacy-preserving ontology mapping & Toshiyuki Amagasa,Fan Zhang, Jun Sakuma, Hiroyuki Kitagawa & 2014 & 0 & Excluded stage 1\ 022 & ACM\_22 [@studer2009privacy] & Privacy preserving modules for ontologies & Thomas Studer & 2010 & 4 & Excluded stage 2\ 023 & ACM\_23 [@mitra2006privacy] & Privacy-preserving semantic interoperation and access control of heterogeneous databases & Prasenjit Mitra, Chi-Chun Pan, Peng Liu, and Vijayalakshmi Atluri & 2006 & 35 & Excluded stage 2\ 024 & ACM\_24 [@sullivan2010trust] & Trust-terms ontology for defining security requirements and metrics & Kieran Sullivan, Jim Clarke, Barry P. Mulcahy & 2010 & 3 & Excluded stage 2\ 025 & ACM\_25 [@kabir2014user] & User-centric social context information management: an ontology-based approach and platform & Muhammad Ashad Kabir, Jun Han, Jian Yu, Alan Colman & 2014 & 9 & Excluded stage 1\ 026 & ACM\_26 [@mace2010collaborative] & A collaborative ontology development tool for information security managers & John C. Mace, Simon Parkin, Aad van Moorsel & 2010 & 6 & Excluded stage 2\ 027 & ACM\_27 [@bao2007privacy] & Privacy-Preserving Reasoning on the Semantic Web & Jie Bao, Giora Slutzki, Vasant Honavar & 2007 & 37 & Duplicated\ 028 & ACM\_28 [@schaefer2009epistemology] & The Epistemology of Computer Security & Robert Schaefer & 2009 & 6 & Excluded stage 2\ 029 & ACM\_29 [@rodriguez2014survey] & A Survey on Ontologies for Human Behavior Recognition & Rodríguez, Natalia Díaz and Cuéllar, Manuel P Lilius, Johan Calvo-Flores, Miguel Delgado & 2014 & 17 & Excluded stage 1\ 030 & ACM\_30 [@yau2006framework] & A framework for specifying and managing security requirements in collaborative systems & Stephen S. 
Yau, Chen Zhaoji & 2006 & 17 & Excluded stage 2\ 031 & ACM\_31 [@elcci2014isn] & Isn’t the Time Ripe for a Standard Ontology on Security of Information and Networks? & Atilla El[ç]{}i & 2014 & 3 & Excluded stage 1\ 032 & ACM\_32 [@alam2006model] & Model driven security engineering for the realization of dynamic security requirements in collaborative systems & Muhammad Alam & 2007 & 14 & Excluded stage 1\ 033 & ACM\_33 [@de2008coresec] & CoreSec: an ontology of security applied to the business process of management & Ryan Ribeiro de Azevedo, Fred Freitas, Silas Cardoso de Almeida, Marcelo José SC Almeida, Edson C. de Barros C Filho, Wendell Campos Veras & 2008 & 1 & Excluded stage 1\ 034 & ACM\_34 [@fenz2010ontology] & Ontology-based generation of IT-security metrics & Stefan Fenz & 2010 & 21 & Excluded stage 2\ 035 & ACM\_35 [@singhal2010ontologies] & Ontologies for Modeling Enterprise Level Security Metrics & Singhal Anoop, Wijesekera Duminda & 2010 & 7 & Selected\ 036 & ACM\_36 [@blackwell2010security] & A Security Ontology for Incident Analysis & Clive Blackwell & 2007 & 7 & Excluded stage 2\ 037 & ACM\_37 [@da2007dealing] & Dealing with the formal analysis of Information Security policies through ontologies: a case study & Da Silva, G. M. H., Rademaker, A., Vasconcelos, D. R., Amaral, F. N., Bazílio, C., Costa, V. G., Haeusler, E. H & 2007 & 3 & Excluded stage 2\ 038 & ACM\_38 [@lee2006building] & Building problem domain ontology from security requirements in regulatory documents & Lee, Seok-Won, Robin Gandhi, Divya Muthurajan, Yavagal Deepak. 
Ahn Gail-Joon & 2006 & 42 & Excluded stage 1\ 039 & ACM\_39 [@vorobiev2010ontology] & Ontology-based analysis of information security standards and capabilities for their harmonization & Vladimir I Vorobiev, Ludmila Fedorchenko, Vadim P Zabolotsky, Alexander V Lyubimov & 2010 & 2 & Excluded stage 1\ 040 & ACM\_40 [@wang2009ovm]& OVM: an ontology for vulnerability management & Ju An Wang, Guo Minzhe & 2009 & 40 & Selected\ 041 & IEEE\_01 [@kagal2004authorization] & Authorization and privacy for semantic Web services & Lalana Kagal, Tim Finin, Massimo Paolucci, Naveen Srinivasan, Katia Sycara, Grit Denker & 2004 & 242 & Excluded stage 1\ 042 & IEEE\_02 [@fernandez2013surveillance] & Surveillance ontology for legal, ethical and privacy protection based on SKOS & Virginia Fernandez Arguedas, Ebroul Izquierdo, Krishna Chandramouli & 2013 & 1 & Excluded stage 1\ 043 & IEEE\_03 [@kost2011privacy] & Privacy Verification Using Ontologies & Martin Kost, Johann-Christoph Freytag, Frank Kargl, Antonio Kung & 2011 & 10 & Excluded stage 2\ 044 & IEEE\_04 [@kayes2013semantic] & A Semantic Policy Framework for Context-Aware Access Control Applications & ASM Kayes, Jun Han, Alan Colman & 2013 & 1 & Excluded stage 1\ 045 & IEEE\_05 [@lioudakis2007proxy] & A Proxy for Privacy: the Discreet Box & Georgios V. Lioudakis, Eleftherios A. Koutsoloukas, Nikolaos Dellas, Sofia Kapellaki, George N. Prezerakos, Dimitra I. Kaklamani, Iakovos S. 
Venieris & 2007 & 9 & Excluded stage 1\ 046 & IEEE\_06 [@modica2011semantic] & Semantic annotations for security policy matching in WS-Policy & Giuseppe Di Modica, Orazio Tomarchio & 2011 & 1 & Excluded stage 1\ 047 & IEEE\_07 [@modica2011semantic] & Semantic Security Policy Matching in Service Oriented Architectures & Giuseppe Di Modica, Orazio Tomarchio & 2011 & 4 & Excluded stage 1\ 048 & IEEE\_08 [@durbeck2007security] & Security Requirements for a Semantic Service-oriented Architecture & Stefan D[ü]{}rbeck, Rolf Schillinger, Jan Kolter & 2007 & 14 & Excluded stage 1\ 049 & IEEE\_09 [@rahmouni2009privacy] & Privacy compliance in European health grid domains: An ontology-based approach & Hanene Boussi Rahmouni, Tony Solomonides, Marco Casassa Mont, Simon Shiu & 2009 & 9 & Excluded stage 2\ 050 & IEEE\_10 [@chen2011ontological] & An Ontological Study of Data Purpose for Privacy Policy Enforcement & Shan Chen, Mary-Anne Williams & 2011 & 0 & Excluded stage 1\ 051 & IEEE\_11 [@liccardo2012ontology] & Ontology-based Negotiation of Security Requirements in Cloud & Loredana Liccardo, Massimiliano Rak, Giuseppe Di Modica, Orazio Tomarchio & 2012 & 2 & Excluded stage 2\ 052 & IEEE\_12 [@souag2013using] & Using Security and Domain ontologies for Security Requirements Analysis & Amina Souag, Camille Salinesi, Isabelle Wattiau, Haris Mouratidis & 2013 & 4 & Selected\ 053 & IEEE\_13 [@daramola2012pattern] & Pattern-based security requirements specification using ontologies and boilerplates & Olawande Daramola, Guttorm Sindre, Tor Stalhane & 2012 & 4 & Excluded stage 2\ 054 & IEEE\_14 [@yau2006hierarchical] & Hierarchical Situation Modeling and Reasoning for Pervasive Computing & Stephen S Yau, Junwei Liu & 2006 & 83 & Excluded stage 2\ 055 & IEEE\_15 [@tsoumas2006towards] & Towards an Ontology-based Security Management & TSOUMAS Bill, GRITZALIS Dimitris & 2006 & 88 & Selected\ 056 & IEEE\_16 [@khan2012security] & Security oriented service composition: A framework & Khaled M 
Khan, Abdelkarim Erradi, Saleh Alhazbi, Jun Han & 2012 & 3 & Excluded stage 1\ 057 & IEEE\_17 [@hentea2004multi] & Multi-agent security service architecture for mobile learning & Manana Hentea & 2004 & 6 & Excluded stage 1\ 058 & IEEE\_18 [@squicciarini2006achieving] & Achieving privacy in trust negotiations with an ontology-based approach & A. C. Squicciarini, E. Bertino, E. Ferrari, I. Ray & 2006 & 50 & Excluded stage 2\ 059 & IEEE\_19 [@firesmith2007engineering] & Engineering Safety and Security Related Requirements for Software Intensive Systems & Donald G. Firesmith & 2007 & 30 & Excluded stage 2\ 060 & IEEE\_20 [@chen2004intelligent] & Intelligent agents meet the semantic Web in smart spaces & Harry Chen, Tim Finin, Anupam Joshi, Lalana Kagal, Filip Perich, Dipanjan Chakraborty & 2004 & 277 & Excluded stage 1\ 061 & IEEE\_21 [@wang2009environmental] & Environmental Metrics for Software Security Based on a Vulnerability Ontology & Ju An Wang, Minzhe Guo, Hao Wang, Min Xia, Linfeng Zhou & 2009 & 4 & Excluded stage 2\ 062 & IEEE\_22 [@bouna2011image] & The image protector - A flexible security rule specification toolkit & Bechara Al Bouna, Richard Chbeir, Alban Gabillon & 2011 & 9 & Excluded stage 1\ 063 & IEEE\_23 [@ben2012semantic] & Semantic matching of web services security policies & Monia Ben Brahim, Tarak Chaari, Maher Ben Jemaa, Mohamed Jmaiel & 2012 & 1 & Excluded stage 1\ 064 & IEEE\_24 [@chen2012design] & The design of an ontology-based service-oriented architecture framework for traditional Chinese medicine healthcare & Shih-Wei Chen, Yu-Ting Tseng, Tsai-Ya Lai & 2012 & 0 & Excluded stage 1\ 065 & IEEE\_25 [@chen2004soupa] & Soupa: Standard ontology for ubiquitous and pervasive applications & Harry Chen, Filip Perich, Tim Finin, Anupam Joshi & 2004 & 634 & Excluded stage 2\ 066 & IEEE\_26 [@akmayeva2010ontology] & Ontology of e-Learning security & Galyna Akmayeva, Charles Shoniregun & 2010 & 2 & Excluded stage 2\ 067 & IEEE\_27 [@wei2010design] & The 
Design and Enforcement of a Rule-based Constraint Policy Language for Service Composition & Wei Wei, Ting Yu & 2010 & 1 & Excluded stage 1\ 068 & IEEE\_28 [@coma2008context] & Context Ontology for Secure Interoperability & Celine Coma, Nora Cuppens-Boulahia1, Frederic Cuppens, Ana Rosa Cavalli & 2008 & 17 & Excluded stage 2\ 069 & IEEE\_29 [@saripalle2015towards] & Towards knowledge level privacy and security using RDF/RDFS and RBAC & Rishi Kanth Saripalle, Alberto De la Rosa Algarin, Timoteus B. Ziminski & 2015 & 0 & Excluded stage 1\ 070 & IEEE\_30 [@maisonnasse2006detecting] & Detecting privacy in attention aware system & Maisonnasse, Jéróme, Nicolas Gourier, Oliver Brdiczka, Patrick Reignier, James L. Crowley & 2006 & 4 & Excluded stage 2\ 071 & IEEE\_31 [@vorobiev2006specifying] & Specifying Dynamic Security Properties of Web Service Based Systems & Artem Vorobiev, Jun Han & 2006 & 17 & Excluded stage 1\ 072 & IEEE\_32 [@elahi2008semantic] & Semantic Access Control in Web Based Communities & Najeeb Elahi, Mohammad MR Chowdhury, Josef Noll & 2008 & 34 & Excluded stage 1\ 073 & IEEE\_33 [@gao2010approach] & An Approach for Privacy Protection Based-On Ontology & Feng Gao, Jingsha He, Shufen Peng, Xu Wu, Xiu Liu & 2010 & 9 & Excluded stage 2\ 074 & IEEE\_34 [@hsieh2011light] & A Light-Weight Ranger Intrusion Detection System on Wireless Sensor Networks & Chia-Fen Hsieh, Yung-Fa Huang, Rung-Ching Chen & 2011 & 8 & Excluded stage 1\ 075 & IEEE\_35 [@chandramouli2013knowledge] & Knowledge modeling for privacy-by-design in smart surveillance solution & Krishna Chandramouli, Virginia Fernandez Arguedas, Ebroul Izquierdo & 2013 & 0 & Excluded stage 2\ 076 & IEEE\_36 [@singh2014comparative] & A comparative study of Cloud Security Ontologies & Vaishali Singh, S.K. 
Pandey & 2014 & 0 & Excluded stage 2\ 077 & IEEE\_37 [@bao2007privacy] & Privacy-Preserving Reasoning on the Semantic Web & Jie Bao, Giora Slutzki, Vasant Honavar & 2007 & 37 & Excluded stage 1\ 078 & IEEE\_38 [@lee15ontology] & Ontology of Secure Service Level Agreement & Chen-Yu Lee, Krishna M. Kavi, Paul Raymond, Gomathisankaran Mahadevan & 2015 & 0 & Excluded stage 2\ 079 & IEEE\_39 [@koinig2015contrology] & Contrology - An Ontology-Based Cloud Assurance Approach & Ulrich Koinig, Simon Tjoa, Jungwoo Ryoo & 2015 & 0 & Excluded stage 1\ 080 & IEEE\_40 [@sardis2013secure] & Secure Enterprise Interoperability Ontology for Semantic Integration of Business to Business Applications & Emmanuel Sardis, Spyridon V Gogouvitis, Thanassis Bouras, Panagiotis Gouvas, Theodora Varvarigou & 2015 & 0 & Excluded stage 1\ 081 & IEEE\_41 [@garcia2009towards] & Towards a base ontology for privacy protection in service-oriented architecture & Diego Garcia, M. Beatriz F. Toledo, Miriam A. M. Capretz, David S. Allison, Gordon S. 
Blair, Paul Grace, Carlos Flores & 2009 & 1 & Excluded stage 2\ 082 & IEEE\_42 [@fenz2007information] & Information Security Fortification by Ontological Mapping of the ISO/IEC 27001 Standard & Fenz, Stefan, Gernot Goluch, Andreas Ekelhart, Bernhard Riedl, Edgar Weippl & 2007 & 45 & Excluded stage 2\ 083 & IEEE\_43 [@ahamed2008cctb] & CCTB: Context Correlation for Trust Bootstrapping in Pervasive Environment & Ahamed, Sheikh Monjur, Mehrab, Mohammad Saiful Islam & 2008 & 9 & Excluded stage 1\ 084 & IEEE\_44 [@asim2011interoperable] & An interoperable security framework for connected healthcare & Muhammad Asim, Milan Petkovi/’c, Mike Qu, Changjie Wang & 2011 & 2 & Excluded stage 1\ 085 & IEEE\_45 [@guan2014framework] & A framework for security driven software evolution & Hui Guan, Xuan Wang, Hongj Yang & 2014 & 0 & Excluded stage 1\ 086 & IEEE\_46 [@wei2009research] & Research on Semantic-Based Security Services Model of SOA & Cuncun Wei, Guanghua Chen, Qianqian Ge & 2009 & 0 & Excluded stage 1\ 087 & IEEE\_47 [@ahamed2008cctb] & CCTB: Context Correlation for Trust Bootstrapping in Pervasive Environment & Ahamed, Sheikh I and Monjur, Mehrab and Islam, Mohammad Saiful & 2008 & 9 & Duplicated\ 088 & IEEE\_48 [@chowdhury2007enabling] & Enabling Access Control and Privacy through Ontology & Mohammad M. R. Chowdhury, JosefNoll’ and Juan Miguel Gomez & 2007 & 8 & Excluded stage 2\ 089 & IEEE\_49 [@bishop2003computer] & What is computer security? & Matt Bishop & 2003 & 1916 & Excluded stage 2\ 090 & IEEE\_50 [@Giorgini2005] & Modeling security requirements through ownership, permission and delegation & Paolo Giorgini, Fabio Massacci, John Mylopoulos and Nicola Zannone & 2005 & 198 & Selected\ 091 & IEEE\_51 [@hecker2008privacy] & Privacy Ontology Support for E-Commerce & Michael Hecker, Tharam S. 
Dillon, and Elizabeth Chang & 2008 & 31 & Excluded stage 2\ 092 & IEEE\_52 [@hadzic2006use] & Use of Ontology Technology for Standardization of Medical Records and Dealing with Associated Privacy Issues & Maja Hadzic, Dillon Tharam, Elizabeth Chang & 2006 & 4 & Excluded stage 2\ 093 & IEEE\_53 [@kanbe2009ontology] & Ontology Alignment in RFID Privacy Protection & Masakazu Kanbe, Shuichiro Yamamoto & 2009 & 2 & Excluded stage 1\ 094 & IEEE\_54 [@blanquer2009enhancing] & Enhancing Privacy and Authorization Control Scalability in the Grid Through Ontologies & Ignacio Blanquer, Hernández Vicente, Segrelles Damiá, Erik Torres & 2009 & 25 & Excluded stage 2\ 095 & IEEE\_55 [@evesti2010ontology] & Ontology-Based Security Adaptation at Run-Time & Antti Evesti, Eila Ovaska & 2010 & 12 & Excluded stage 1\ 096 & IEEE\_56 [@torrellas2004framework] & A framework for multi-agent system engineering using ontology domain modelling for security architecture risk assessment in e-commerce security services & Gustavo A. 
Santana Torrellas & 2004 & 10 & Excluded stage 2\ 097 & IEEE\_57 [@kang2013security] & A Security Ontology with MDA for Software Development & Wentao Kang, Liang Ying & 2013 & 1 & Selected\ 098 & IEEE\_58 [@vorobiev2006security] & Security Attack Ontology for Web Services & Artem Vorobiev and Jun Han & 2006 & 64 & Excluded stage 2\ 099 & IEEE\_59 [@yan2008ontology] & Ontology-Based Information Content Security Analysis & Pan Yan, Zhao Yanping, Sanxing Cao & 2008 & 7 & Excluded stage 2\ 100 & IEEE\_60 [@ekelhart2007security] & Security Ontologies: Improving Quantitative Risk Analysis & Ekelhart, Andreas, Stefan Fenz, Markus Klemen, Edgar Weippl & 2007 & 88 & Excluded stage 2\ 101 & IEEE\_61 [@d2012ontology]& An Ontology for Run-Time Verification of Security Certificates for SOA & Stefania D’Agostini, Valeria Di Giacomo, Claudia Pandolfo, Domenico Presenza & 2012 & 4 & Excluded stage 1\ 102 & IEEE\_62 [@yau2005adaptable] & An adaptable security framework for service-based systems & Stephen S Yau, Yisheng Yao, Zhaoji Chen, Luping Zhu & 2005 & 13 & Excluded stage 1\ 103 & CIT\_01 [@poritz2004property] & Property attestation—scalable and privacy-friendly security assessment of peer computers & Jonathan Poritz, Matthias Schunter, Els Van Herreweghen, and Michael Waidner & 2004 & 137 & Excluded stage 2\ 104 & CIT\_02 [@guarino1998formal] & Formal Ontology and Information Systems & Nicola Guarino & 1998 & 4406 & Excluded stage 1\ 105 & CIT\_03 [@souag2013using] & Using Security and Domain ontologies for Security Requirements Analysis & Amina Souag, Camille Salinesi, Isabelle Wattiau, Haris Mouratidis & 2013 & 4 & Duplicated\ 106 & CIT\_04 [@kost2012privacy] & Privacy analysis using ontologies & Martin Kost, Johann-Christoph Freytag & 2012 & 9 & Duplicated\ 107 & CIT\_05 [@fenselontologies] & Ontologies: A Silver Bullet for Knowledge Management and Electronic & Dieter Fensel & 2000 & 23 & Excluded stage 1\ 108 & CIT\_06 [@kost2011privacy] & Privacy Verification using 
Ontologies & Martin Kost, Johann-Christoph Freytag & 2011 & 10 & Duplicated\ 109 & CIT\_07 [@velasco2009modelling] & Modeling Reusable Security Requirements Based on an Ontology Framework & Joaquín Lasheras, Rafael Valencia-García, Jesualdo Tomás Fernández-Breis & 2009 & 30 & Selected\ 110 & CIT\_08 [@foster1998security] & A Security Architecture for Computational Grids & Ian Foster, Carl Kesselman, Gene Tsudik, Steven Tuecke & 1998 & 1765 & Excluded stage 1\ 111 & CIT\_09 [@breaux2008analyzing] & Analyzing regulatory rules for privacy and security requirements & Travis D. Breaux, Annie Antón & 2008 & 251 & Excluded stage 2\ 112 & CIT\_10 [@kim2005security] & Security Ontology for Annotating Resources & Kim Anya, Jim Luo, Myong Kang & 2005 & 151 & Duplicated\ 113 & CIT\_11 [@perrig2002spins]& SPINS: Security Protocols for Sensor Networks & Adrian Perrig, Robert Szewczyk, Justin Douglas Tygar, Victor Wen, David E Culler & 2002 & 4493 & Excluded stage 1\ 114 & CIT\_12 [@chor1998private]& Private Information Retrieval & Benny Chor, Kushilevitz Eyal, Oded Goldreich, Madhu Sudan & 1998 & 1535 & Excluded stage 2\ 115 & CIT\_13 [@souag2012towards] & Towards a new generation of security requirements definition methodology using ontologies & Amina Souag & 2012 & 4 & Excluded stage 2 - Survey paper\ 116 & CIT\_14 [@floridi2005ontological] & The ontological interpretation of informational privacy & Luciano Floridi & 2005 & 109 & Excluded stage 1\ 117 & CIT\_15 [@ekclhart2007ontological] & Ontological mapping of common criteria’s security assurance requirements & Andreas Ekclhart, Stefan Fenz, Gernot Goluch, Edgar Weippl & 2007 & 21 & Excluded stage 2\ 118 & CIT\_16 [@vorobiev2006security] & Security Attack Ontology for Web Services & Artem Vorobiev, Jun Han & 2006 & 57 & Duplicated\ 119 & CIT\_17 [@velasco2009modelling] & An Ontology for Modelling Security: The Tropos Approach & Haralambos Mouratidis, Paolo Giorgini, Gordon Manson & 2003 & 52 & Duplicated\ 120 & CIT\_18 
[@Massacci2007] & An Ontology for Secure Socio-Technical Systems & Fabio Massacci, John Mylopoulos, Nicola Zannone & 2007 & 45 & Excluded stage 2 - better version Spgr\_07\_02\ 121 & CIT\_19 [@chen2003ontology] & An Ontology for Context-Aware Pervasive Computing Environments & Harry Chen, Tim Finin, Anupam Joshi & 2003 & 1023 & Duplicated\ 122 & CIT\_20 [@squicciarini2006achieving] & Achieving Privacy in Trust Negotiations with an Ontology-Based Approach & Anna C. Squicciarini, Elisa Bertino, Elena Ferrari, Indrakshi Ray & 2006 & 50 & Duplicated\ 123 & CIT\_21 [@parkin2009information] & An Information Security Ontology Incorporating Human-Behavioral Implications & Simon E Parkin, Aad van Moorsel, Robert Coles & 2009 & 33 & Excluded stage 1\ 124 & CIT\_22 [@vorobiev2007ontological] & An Ontological Approach Applied to Information Security and Trust & Artem Vorobiev, Bekmamedova Nargiza & 2007 & 12 & Excluded stage 1\ 125 & CIT\_23 [@zhangdeveloping] & Developing a privacy ontology for privacy control in context-aware systems & Ni Zhang, Chris Todd & 2005 & 0 & Excluded stage 2\ 126 & CIT\_24 [@mitra2005privacy] & Privacy-preserving ontology matching & Prasenjit Mitra, Peng Liu, Chi-Chun Pan & 2005 & 10 & Excluded stage 1\ 127 & CIT\_25 [@chen2004soupa] & SOUPA: Standard Ontology for Ubiquitous and Pervasive Applications & Harry Chen, Filip Perich, Tim Finin, Anupam Joshi & 2004 & 634 & Duplicated\ 128 & CIT\_26 [@langheinrich2001privacy] & Privacy by Design - Principles of Privacy-Aware Ubiquitous Systems & Marc Langheinrich & 2001 & 769 & Excluded stage 2\ 129 & CIT\_27 [@chen2005soupa] & The SOUPA Ontology for Pervasive Computing & Chen, Harry, Tim Finin, Anupam Joshi & 2005 & 179 & Duplicated\ 130 & CIT\_28 [@hecker2007privacy] & Privacy support and evaluation on an ontological basis & Michael Hecker, Dillon Tharam & 2007 & 5 & Excluded stage 1\ 131 & CIT\_29 [@vorobiev2008ontology] & An ontology framework for managing security attacks and defenses in component 
based software systems & Artem Vorobiev, Jun Han, Nargiza Bekmamedova & 2008 & 7 & Excluded stage 2\ 132 & CIT\_30 [@gandhi2009ontology] & Ontology Guided Risk Analysis: From Informal Specifications to Formal Metrics & Robin Gandhi, Seok-Won Lee & 2009 & 1 & Duplicated\ 133 & CIT\_31 [@ranganathan2003ontologies] & Ontologies in a pervasive computing environment & Anand Ranganathan, Robert E. McGrath, Roy H. Campbell, Mickunas M. Dennis & 2003 & 65 & Excluded stage 2\ 134 & CIT\_32 [@eiter2006reasoning] & Reasoning with rules and ontologies & Thomas Eiter, Giovambattista Ianni, Axel Polleres, Roman Schindlauer, Hans Tompits & 2006 & 75 & Excluded stage 1\ 135 & CIT\_33 [@liu2003security] & Security and Privacy Requirements Analysis within a Social Setting & Lin Liu, Eric Yu, John Mylopoulos & 2006 & 75 & Selected\ 136 & CIT\_34 [@evfimievski2003limiting] & Limiting Privacy Breaches in Privacy Preserving Data Mining & Alexandre Evfimievski, Johannes Gehrke, Srikant Ramakrishnan & 2003 & 642 & Excluded stage 1\ 137 & CIT\_35 [@mcgrath2003use] & Use of Ontologies in Pervasive Computing Environments & Robert E McGrath, Anand Ranganathan, Roy H Campbell, Mickunas M Dennis & 2003 & 33 & Excluded stage 1\ 138 & Spgr\_01 [@kim2005security] & Security ontology for annotating resources & Kim Anya, Jim Luo, Myong Kang & 2005 & 151 & Excluded stage 2\ 139 & Spgr\_02 [@fabian2010comparison] & A comparison of security requirements engineering methods & Fabian Benjamin, Seda Gurses, Maritta Heisel, Thomas Santen, Holger Schmidt & 2010 & 129 & Excluded stage 2 - Survey paper\ 140 & Spgr\_03 [@souag2012ontologies] & Ontologies for security requirements: A literature survey and classification & Amina Souag, Camille Salinesi, Isabelle Wattiau & 2012 & 41 & Excluded stage 2 - Survey paper\ 141 & Spgr\_04 [@studer2009privacy] & Privacy preserving modules for ontologies & Thomas Studer & 2010 & 4 & Duplicated\ 142 & Spgr\_05 [@yau2006framework] & A framework for specifying and managing 
security requirements in collaborative systems & Stephen S. Yau, Chen Zhaoji & 2006 & 17 & Duplicated\ 143 & Spgr\_06 [@alam2006model] & Model driven security engineering for the realization of dynamic security requirements in collaborative systems & Muhammad Alam & 2007 & 14 & Duplicated\ 144 & Spgr\_07 [@massacci2011extended] & An Extended Ontology for Security Requirements & Fabio Massacci, John Mylopoulos, Federica Paci, Thein Thun Tun, Yijun Yu & 2011 & 16 & Selected\ 145 & Spgr\_08 [@souag2015security] & A Security Ontology for Security Requirements Elicitation & Amina Souag, Camille Salinesi, Raúl Mazo, Isabelle Comyn-Wattiau & 2015 & 4 & Excluded stage 2 - Survey paper\ 146 & Spgr\_09 [@souag2012ontologies] & Ontologies for Security Requirements: A Literature Survey and Classification & Amina Souag, Camille Salinesi, Isabelle Wattiau & 2012 & 29 & Duplicated\ 147 & Spgr\_10 [@souag2015reusable]& Reusable knowledge in security requirements engineering: a systematic mapping study & Amina Souag, Raúl Mazo, Camille Salinesi, Isabelle Comyn-Wattiau & 2015 & 1 & Excluded stage 1\ 148 & Spgr\_11 [@kim2005security]& Security Ontology for Annotating Resources & Kim Anya, Jim Luo, Myong Kang & 2005 & 151 & Duplicated\ 149 & Spgr\_12 [@tropea2011introducing]& Introducing Privacy Awareness in Network Monitoring Ontologies & Giuseppe Tropea, Georgios V Lioudakis, Nicola Blefari-Melazzi, Dimitra I Kaklamani, Iakovos S Venieris & 2011 & 1 & Excluded stage 1\ 150 & Spgr\_13 [@elahi2009modeling] & A Modeling Ontology for Integrating Vulnerabilities into Security Requirements Conceptual Foundation & Golnaz Elahi, Eric Yu, Nicola Zannone & 2009 & 21 & Selected\ 151 & Spgr\_14 [@tsoumas2006security] & Security-by-Ontology: A Knowledge-Centric Approach & Bill Tsoumas, Panagiotis Papagiannakopoulos, Stelios Dritsas, Dimitris Gritzalis & 2006 & 12 & Excluded stage 2\ 152 & Spgr\_15 [@sicilia2015information] & What are Information Security Ontologies Useful for? 
& Bill Sicilia, Miguel-Angel García-Barriocanal, Elena Javier Bermejo-Higuera, Salvador Sánchez-Alonso & 2015 & 0 & Excluded stage 1\ 153 & Spgr\_16 [@papagiannakopoulou2014leveraging] & Leveraging Ontologies upon a Holistic Privacy-Aware Access Control Model & Eugenia I Papagiannakopoulou, Maria N Koukovini, Georgios V Lioudakis, Nikolaos Dellas, Joaquin Garcia-Alfaro, Dimitra I Kaklamani, Iakovos S Venieris, Nora Cuppens-Boulahia, Frédéric Cuppens & 2014 & 4 & Excluded stage 1\ 154 & Spgr\_17 [@sure2004towards] & Towards Cross-Domain Security Properties Supported by Ontologies & York Sure, Jochen Haller & 2004 & 5 & Excluded stage 1\ 155 & Spgr\_18 [@milicevic2010ontology] & Ontology-Based Evaluation of ISO 27001. & Danijel Milicevic, Matthias Goeken & 2010 & 3 & Excluded stage 2\ 156 & Spgr\_19 [@dhiah2006ontology] & An Ontology-Based Approach for Managing and Maintaining Privacy in Information Systems & Dhiah el Diehn, Abou-Tair I. Stefan Berlik & 2006 & 5 & Excluded stage 2\ 157 & Spgr\_20 [@vincent2011privacy] & Privacy Protection for Smartphones: An Ontology-Based Firewall & Johann Vincent, Christine Porquet, Maroua Borsali, Harold Leboulanger & 2011 & 8 & Excluded stage 2\ 158 & Spgr\_21 [@ionita2005specifying] & Specifying an Access Control Model for Ontologies for the Semantic Web & Cecilia Ionita, Osborn M, Sylvia L & 2005 & 9 & Excluded stage 1\ 159 & Spgr\_22 [@gandhi2009ontology] & Ontology Guided Risk Analysis: From Informal Specifications to Formal Metrics & Robin Gandhi, Lee Seok-Won & 2009 & 1 & Excluded stage 2\ 160 & Spgr\_23 [@torres2012hit] & HIT Considerations: Informatics and Technology Needs and Considerations & Miguel Humberto Torres-Urquidy, Valerie J. H. Powell, Franklin M. Din, Mark Diehl, Valerie Bertaud-Gounot, W. 
Ted Klein, Sushma Mishra, Shin-Mey Rose Yin Geist, Monica Chaudhari, Mureen Allen & 2012 & 0 & Excluded stage 1\ 161 & Spgr\_24 [@hoss2007towards] & Towards Combining Ontologies and Model Weaving for the Evolution of Requirements Models & Allyson M Hoss, Doris L Carver & 2008 & 5 & Excluded stage 1\ 162 & Spgr\_25 [@stoica2004ontology] & Ontology Guided XML Security Engine & Andrei Stoica, Csilla Farkas & 2004 & 27 & Excluded stage 1\ 163 & Spgr\_26 [@wangusing] & Using ontologies to perform threat analysis and develop defensive strategies for mobile security & Ping Wang , Kuo-Ming Chao, Chi-Chun Lo, Yu-Shih Wang & 2015 & 1 & Excluded stage 1\ 164 & Spgr\_27 [@weippl2004semanticlife] & SemanticLIFE Collaboration: Security Requirements and Solutions – Security Aspects of Semantic Knowledge Management & Edgar R. Weippl, Alexander Schatten, Shuaib Karim, A. Min Tjoa & 2004 & 13 & Excluded stage 1\ 165 & Spgr\_28 [@heupel2015ontology] & Ontology-Enabled Access Control and Privacy Recommendations & Robin Gandhi, Lee Seok-Won & 2009 & 1 & Excluded stage 2\ 166 & Spgr\_29 [@nissan2012accounting] & Accounting for Social, Spatial, and Textual Interconnections & Ephraim Nissan & 2012 & 0 & Excluded stage 1\ 167 & Spgr\_30 [@hadzic2009case] & Case Study I: Ontology-Based Multi-Agent System for Human Disease Studies & Maja Hadzic, Pornpit Wongthongtham, Tharam Dillon, Elizabeth Chang & 2009 & 0 & Excluded stage 1\ 168 & Spgr\_31 [@chen2005soupa] & The SOUPA Ontology for Pervasive Computing & Harry Chen, Tim Finin, Anupam Joshi & 2015 & 191 & Excluded stage 2\ 169 & Spgr\_32 [@mouratidis2003ontology] & An Ontology for Modelling Security: The Tropos Approach & Haralambos Mouratidis, Paolo Giorgini, Gordon Manson & 2003 & 54 & Excluded stage 2\ 170 & Spgr\_33 [@spyns2008evaluating] & Evaluating Automatically a Text Miner for Ontologies: A Catch-22 Situation? 
& Peter Spyns & 2008 & 2 & Excluded stage 1\ 171 & Spgr\_34 [@mitre2006legal] & EA Legal Ontology to Support Privacy Preservation in Location-Based Services & Hugo A. Mitre, Ana Isabel González-Tablas, Benjamín Ramos, Arturo Ribagorda & 2006 & 6 & Excluded stage 2\ 172 & Spgr\_35 [@chowdhury2008capturing] & Capturing Semantics for Information Security and Privacy Assurance & Mohammad M. R. Chowdhury, Javier Chamizo, Josef Noll, Juan Miguel Gómez & 2008 & 6 & Excluded stage 2\ 173 & Spgr\_36 [@pereira2009ontology] & An Ontology Based Approach to Information Security & Teresa Pereira, Henrique Santos & 2009 & 10 & Excluded stage 2\ 174 & Spgr\_37 [@panettoefficient] & Efficient Projection of Ontologies & Julius Köpke, Johann Eder, Michaela Schicho & 2013 & 1 & Excluded stage 1\ 175 & Spgr\_38 [@delgado2003regulatory] & Regulatory Ontologies: An Intellectual Property Rights Approach & Haojun Yu, Sun Yuqing, Jinyan Hu & 2012 & 0 & Excluded stage 2\ 176 & Spgr\_39 [@sorli2009ict] & ICT Tools and Systems Supporting Innovation in Product/Process Development & Mikel Sorli, Dragan Stokic & 2009 & 0 & Excluded stage 1\ 177 & Spgr\_40 [@balopoulos2006framework] & A Framework for Exploiting Security Expertise in Application Development & Theodoros Balopoulos, Lazaros Gymnopoulos, Maria Karyda, Spyros Kokolakis, Stefanos Gritzalis, Sokratis Katsikas & 2006 & 0 & Excluded stage 1\ 178 & Spgr\_41 [@iwaihara2008risk] & Risk Evaluation for Personal Identity Management Based on Privacy Attribute Ontology & Mizuho Iwaihara, Murakami Kohei, Ahn Gail-Joon , Masatoshi Yoshikawa & 2008 & 12 & Excluded stage 2\ 179 & Spgr\_42 [@analyti2013framework] & A framework for modular ERDF ontologies & Analyti, Antoniou Anastasia, Grigoris and Damásio, Carlos Viegas, Ioannis Pachoulakis & 2013 & 3 & Excluded stage 1\ 180 & Spgr\_43 [@kayes2013ontology] & An Ontology-Based Approach to Context-Aware Access Control for Software Services & Asm Kayes, Jun Han, Alan Colman & 2013 & 8 & Excluded stage 1\ 
181 & Spgr\_44 [@mezgar2007development] & Development of an Ontology-Based Smart Card System Reference Architecture & István Mezgár, Zoltán Kincses & 2007 & 0 & Excluded stage 1\ 182 & Spgr\_45 [@kabilan2007introducing] & Introducing the Common Non-Functional Ontology & Vandana Kabilan, Paul Johannesson, Sini Ruohomaa, Pirjo Moen, Andrea Herrmann, Rose-Mharie Ahlfeldt, Hans Weigand & 2007 & 4 & Excluded stage 1\ 183 & Spgr\_46 [@ryan2003ontology] & Ontology-Based Platform for Trusted Regulatory Compliance Services & Henry Ryan, Peter Spyns, Pieter De Leenheer, Richard Leary & 2003 & 7 & Excluded stage 1\ 184 & Spgr\_47 [@isaza2010intrusion] & Intrusion Correlation Using Ontologies and Multi-agent Systems & Gustavo Isaza, Andrés Castillo, Marcelo López, Luis Castillo, Manuel López & 2010 & 6 & Excluded stage 1\ 185 & Spgr\_48 [@albers2004agent] & Agent Models and Different User Ontologies for an Electronic Market Place & Marcel Albers, Catholijn M Jonker, Mehrzad Karami, Jan Treur, & 2004 & 27 & Excluded stage 1\ 186 & Spgr\_49 [@rajugan2006ontology] & Ontology Views: A Theoretical Perspective & Rajagopal Rajugan, Elizabeth Chang, Tharam S Dillon, & 2006 & 11 & Excluded stage 1\ 187 & Spgr\_50 [@undercoffer2003modeling] & Modeling Computer Attacks: An Ontology for Intrusion Detection & Jeffrey Undercoffer, Anupam Joshi, John Pinkston & 2003 & 136 & Excluded stage 1\ 188 & Spgr\_51 [@denker2003security] & Security for DAML Web Services: Annotation and Matchmaking & Grit Denker, Lalana Kagal, Tim Finin, Massimo Paolucci, Katia Sycara & 2003 & 183 & Excluded stage 1\ 189 & Spgr\_52 [@liu2009ontology] & Ontology-Based Requirements Conflicts Analysis in Activity Diagrams & Chi-Lun Liu & 2009 & 6 & Excluded stage 1\ 190 & Spgr\_53 [@beckers2012ontology] & Ontology-Based Identification of Research Gaps and Immature Research Areas & Kristian Beckers, Stefan Eicker, Stephan Fa[ß]{}bender, Maritta Heisel, Holger Schmidt, Widura Schwittek & 2012 & 4 & Excluded stage 1\ 191 & 
Spgr\_54 [@ceravolo2003managing] & Managing Identities via Interactions between Ontologies & Paolo Ceravolo & 2003 & 4 & Excluded stage 1\ 192 & Spgr\_55 [@ekelhart2006security] & Security Ontology: Simulating Threats to Corporate Assets & Andreas Ekelhart, Stefan Fenz, Markus D. Klemen, Edgar R. Weippl & 2006 & 31 & Excluded stage 2\ 193 & Spgr\_56 [@abulaish2011simont] & SIMOnt: A Security Information Management Ontology Framework & Muhammad Abulaish, Syed Irfan Nabi, Khaled Alghathbar, Azeddine Chikh & 2011 & 3 & Excluded stage 2\ 194 & Spgr\_57 [@man2005retracted] & Retracted: Shared Ontology for Pervasive Computing & Junfeng Man, Aimin Yang, Xingming Sun & 2005 & 3 & Excluded stage 1\ 195 & Spgr\_58 [@ciuciu2011ontology] & Ontology-Based Matching of Security Attributes for Personal Data Access in e-Health & Ioana Ciuciu, Brecht Claerhout, Louis Schilders, Robert Meersman & 2011 & 5 & Excluded stage 2\ 196 & Spgr\_59 [@blobel2011intelligent] & Intelligent security and privacy solutions for enabling personalized telepathology & Bernd Blobel & 2011 & 5 & Excluded stage 1\ 197 & Spgr\_60 [@breaux2014eddy] & Eddy, a formal language for specifying and analyzing data flow specifications for conflicting privacy & Travis D. Breaux, Hibshi Hanan, Rao Ashwini & 2014 & 16 & Excluded stage 2\ 198 & SCH\_01 [@breaux2008analyzing] & Analyzing regulatory rules for privacy and security requirements & Travis D. 
Breaux, Annie Antón & 2008 & 230 & Duplicated\ 199 & SCH\_02 [@chen2003ontology] & An Ontology for Context-Aware Pervasive Computing Environments & Harry Chen, Tim Finin, Anupam Joshi & 2003 & 1086 & Excluded stage 2\ 200 & SCH\_03 [@massacci2005using] & Using a security requirements engineering methodology in practice: the compliance with the Italian data protection legislation & Fabio Massacci, Marco Prest, Nicola Zannone & 2005 & 82 & Excluded stage 2 - better version Spgr\_07\_02\ 201 & SCH\_04 [@takabi2010security] & Security and privacy challenges in cloud computing environments & Hassan Takabi, James BD Joshi, Ahn Gail-Joon & 2010 & 660 & Excluded stage 1\ 202 & SCH\_05 [@ashburner2000gene] & Gene Ontology: tool for the unification of biology & Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT, Harris MA, Hill DP, Issel-Tarver L, Kasarskis A, Lewis S, Matese JC, Richardson JE, Ringwald M, Rubin GM, Sherlock G. & 2000 & 15501 & Excluded stage 1\ 203 & SCH\_06 [@sacco2011privacy] & A Privacy Preference Ontology (PPO) for Linked Data & Owen Sacco, Alexandre Passant & 2011 & 52 & Excluded stage 2\ 204 & SCH\_07 [@poritz2004property] & Property attestation—scalable and privacy-friendly security assessment of peer computers & Jonathan Poritz, Matthias Schunter, Els Van Herreweghen, and Michael Waidner & 2004 & 132 & Duplicated\ 205 & SCH\_08 [@chase2009improving] & Improving privacy and security in multi-authority attribute-based encryption & Melissa Chase, Sherman S.M. 
Chow & 2009 & 375 & Duplicated\ 206 & SCH\_09 [@li2010data] & Data security and privacy in wireless body area networks & Ming Li, Wenjing Lou, Kui Ren & 2010 & 297 & Excluded stage 1\ 207 & SCH\_10 [@ohkubo2003cryptographic] & Cryptographic approach to “privacy-friendly” tags & Miyako Ohkubo, Koutarou Suzuki, Shingo Kinoshita, and others & 2003 & 793 & Excluded stage 1\ 208 & SCH\_11 [@chen2004soupa] & Soupa: Standard ontology for ubiquitous and pervasive applications & Harry Chen, Filip Perich, Tim Finin, Anupam Joshi & 2004 & 634 & Duplicated\ 209 & SCH\_12 [@ferrari2004security] & Security and privacy for web databases and services & Elena Ferrari, Bhavani Thuraisingham & 2004 & 94 & Excluded stage 1\ 210 & SCH\_13 [@weber2010internet] & Internet of Things – New security and privacy challenges & Rolf H Weber & 2010 & 297 & Excluded stage 1\ 211 & SCH\_14 [@el2003privacy] & Privacy and Security in E-Learning & Khalil El-Khatib, Larry Korba, Yuefei Xu, George Yee & 2003 & 86 & Excluded stage 1\ 212 & SCH\_15 [@ferrari2004security] & Security and privacy for web databases and services & Elena Ferrari, Bhavani Thuraisingham & 2004 & 94 & Excluded stage 1\ 213 & SCH\_16 [@firesmith2003security] & Security use cases & Donald G. Firesmith & 2003 & 353 & Excluded stage 2\ 214 & SCH\_17 [@squicciarini2006achieving] & Achieving privacy in trust negotiations with an ontology-based approach & Anna C. Squicciarini, Elisa Bertino, Elena Ferrari, Indrakshi Ray & 2006 & 50 & Duplicated\ 215 & SCH\_18 [@sindre2005eliciting]& Eliciting security requirements with misuse cases & Guttorm Sindre, Andreas L. Opdahl & 2005 & 830 & Selected\ 216 & SCH\_19 [@chen2005soupa] & The SOUPA ontology for pervasive computing & Chen, Harry, Tim Finin, Anupam Joshi & 2005 & 179 & Duplicated\ 217 & SCH\_20 [@anton2002analyzing] & Analyzing website privacy requirements using a privacy goal taxonomy & Annie Antón, Julia B. 
Earp, Angela Reese & 2002 & 105 & Excluded stage 2\ 218 & SCH\_21 [@jansen2011guidelines] & Guidelines on security and privacy in public cloud computing & Wayne Jansen, Timothy Grance, and others & 2011 & 502 & Duplicated\ 219 & SCH\_22 [@van2004elaborating] & Elaborating security requirements by construction of intentional anti-models & Axel Van Lamsweerde & 2004 & 337 & Duplicated\ 220 & SCH\_23 [@massacci2005using] & Using a security requirements engineering methodology in practice: the compliance with the Italian data protection legislation & Fabio Massacci, Marco Prest, Nicola Zannone & 2005 & 82 & Excluded stage 1\ 221 & SCH\_24 [@kalloniatis2008addressing] & Addressing privacy requirements in system design: the PriS method & Christos Kalloniatis, Evangelia Kavakli, Stefanos Gritzalis & 2008 & 76 & Selected\ 222 & SCH\_25 [@carminati2006security] & Security conscious web service composition & Barbara Carminati, Elena Ferrari, Patrick CK Hung & 2006 & 93 & Excluded stage 1\ 223 & SCH\_26 [@haley2008security] & Security requirements engineering: A framework for representation and analysis & Charles B. Haley, Robin Laney, Jonathan D. Moffett, Bashar Nuseibeh & 2008 & 281 & Excluded stage 2\ 224 & SCH\_27 [@he2003framework] & A framework for modeling privacy requirements in role engineering & Qingfeng He, Annie I. Antón & 2003 & 85 & Excluded stage 2\ 225 & SCH\_28 [@mouratidis2007secure] & Secure Tropos: a security-oriented extension of the Tropos methodology & Haralambos Mouratidis, Paolo Giorgini & 2007 & 193 & Selected\ 226 & SCH\_29 [@rezgui2002preserving] & Preserving privacy in web services & Abdelmounaam Rezgui, Mourad Ouzzani, Athman Bouguettaya, Brahim Medjahed & 2002 & 102 & Duplicated\ 227 & SCH\_30 [@gandon2004semantic] & Semantic web technologies to reconcile privacy and context awareness & Fabien L Gandon, Norman M Sadeh & 2004 & 215 & Excluded stage 1\ 228 & SCH\_31 [@hecker2008privacy] & Privacy ontology support for e-commerce & Michael Hecker,
Tharam S. Dillon, Elizabeth Chang & 2008 & 29 & Duplicated\ 229 & SCH\_32 [@donner2003toward] & Toward a security ontology & Marc Donner & 2003 & 56 & Excluded stage 2\ 230 & SCH\_33 [@gao2010approach] & An approach for privacy protection based-on ontology & Gao Feng, Jingsha He, Shufen Peng, Xu Wu, Xiu Liu & 2010 & 9 & Duplicated\ 231 & SCH\_34 [@kim2005security] & Security ontology for annotating resources & Kim Anya, Jim Luo, Myong Kang & 2005 & 151 & Duplicated\ 232 & SCH\_35 [@brar2004privacy] & Privacy and security in ubiquitous personalized applications & Ajay Brar, Judy Kay & 2004 & 38 & Excluded stage 1\ 233 & SCH\_36 [@alliance2003hipaa] & HIPAA compliance and smart cards: Solutions to privacy and security requirements & Smart Card Alliance & 2003 & 17 & Excluded stage 2\ 234 & SCH\_37 [@mitra2005privacy] & Privacy-preserving ontology matching & Prasenjit Mitra, Peng Liu, Chi-Chun Pan & 2005 & 10 & Duplicated\ 235 & SCH\_38 [@iwaihara2008risk] & Risk evaluation for personal identity management based on privacy attribute ontology & Mizuho Iwaihara, Murakami Kohei, Gail-Joon Ahn, Masatoshi Yoshikawa & 2008 & 12 & Duplicated\ 236 & SCH\_39 [@kagal2006security] & Security and privacy challenges in open and dynamic environments & Lalana Kagal, Tim Finin, Anupam Joshi, Sol Greenspan & 2006 & 27 & Excluded stage 1\ 237 & SCH\_40 [@fabian2010comparison] & A comparison of security requirements engineering methods & Fabian Benjamin, Seda Gurses, Maritta Heisel, Thomas Santen, Holger Schmidt & 2010 & 121 & Duplicated\ 238 & SCH\_41 [@solove2006taxonomy] & A taxonomy of privacy & Daniel J Solove & 2006 & 967 & Selected\ 239 & SCH\_42 [@ware1981taxonomy] & A taxonomy for privacy & Willis H Ware & 1981 & 5 & Excluded stage 1\ 240 & SCH\_43 [@skinner2006information] & An information privacy taxonomy for collaborative environments & Geoff Skinner, Song Han, Elizabeth Chang & 2006 & 24 & Excluded stage 2\ \ 241 & Spgr\_18\_01 [@karyda2006ontology] & An ontology for secure
e-government applications & M. Karyda, T. Balopoulos, S. Dritsas, L. Gymnopoulos, S. Kokolakis, C. Lambrinoudakis, S. Gritzalis & 2006 & 37 & Excluded stage 2\ 242 & Spgr\_18\_02 [@raskin2001ontology] & Ontology in information security: a useful theoretical foundation and methodological tool & Victor Raskin, Christian F Hempelmann, Katrina E Triezenberg, Sergei Nirenburg & 2001 & 133 & Excluded stage 2\ 243 & Spgr\_18\_03 [@fenz2009formalizing] & Formalizing information security knowledge & Stefan RaskinFenz, Andreas Ekelhart & 2009 & 144 & Selected\ 244 & Spgr\_13\_01 [@asnar2008risk] & Risk as dependability metrics for the evaluation of business solutions: a model-driven approach & Yudistira Asnar, Rocco Moretti, Maurizio Sebastianis, Nicola Zannone & 2008 & 30 & Selected\ 245 & Spgr\_13\_02 [@den2003coras] & The CORAS methodology. model-based risk assessment using UML and UP & Folker den Braber, Theo Dimitrakos, Bjorn A. Gran, Mass S. Lund, Ketil Stolen, Jan O. Aagedal & 2003 & 66 & Selected\ 246 & Spgr\_13\_03 [@elahi2010vulnerability] & A vulnerability-centric requirements engineering framework. 
analyzing security attacks, countermeasures, and requirements based on vulnerabilities & Golnaz Elahi, Eric Yu, Nicola Zannone & 2010 & 73 & Selected\ 247 & Spgr\_13\_04 [@jurjens2002umlsec] & UMLsec: Extending UML for secure systems development & Jan Jürjens & 2002 & 583 & Selected\ 248 & Spgr\_13\_05 [@matulevivcius2008adapting] & Adapting secure tropos for security risk management in the early phases of information systems development & Raimundas Matulevi[č]{}ius, Nicolas Mayer, Haralambos Mouratidis, Eric Dubois, Patrick Heymans, Nicolas Genon & 2008 & 60 & Selected\ 249 & Spgr\_13\_06 [@mayer2005towards] & Towards a risk-based security requirements engineering framework & Nicolas Mayer, André Rifaut, Eric Dubois, and others & 2005 & 52 & Excluded stage 2 - better version Spgr\_08\_01\ 250 & Spgr\_13\_07 [@rostad2006extended] & An extended misuse case notation: Including vulnerabilities and the insider threat & Lillian R[ø]{}stad & 2006 & 46 & Selected\ 251 & Spgr\_13\_08 [@singh2014revisiting] & Revisiting Security Ontologies & Vaishali Singh, SK Pandey & 2014 & 2 & Excluded stage 2\ 252 & Spgr\_08\_01 [@mayer2009model] & Model-based management of information system security risk & Nicolas Mayer & 2009 & 70 & Selected\ 253 & Spgr\_08\_02 [@velasco2009modelling] & Modelling reusable security requirements based on an ontology framework & Joaquin Lasheras Velasco, Rafael Valencia-García, Jesualdo Tomás Fernández-Breis, Ambrosio Toval, and others & 2009 & 31 & Excluded stage 2\ 254 & Spgr\_08\_03 [@dritsas2006knowledge] & A knowledge-based approach to security requirements for e-health applications & S. Dritsas, L. Gymnopoulos, M. Karyda, T. Balopoulos, S. Kokolakis, C. Lambrinoudakis, S.
Katsikas & 2006 & 17 & Selected\ 255 & Spgr\_07\_01 [@blanco2008systematic] & A systematic review and comparison of security ontologies & Carlos Blanco, Joaquin Lasheras, Rafael Valencia-García, Eduardo Fernández-Medina, Ambrosio Toval, Mario Piattini & 2008 & 79 & Excluded stage 2 - Survey paper\ 256 & Spgr\_07\_02 [@zannone2006requirements] & A requirements engineering methodology for trust, security, and privacy & Nicola Zannone & 2007 & 17 & Selected\ 257 & Spgr\_07\_03 [@lin2003introducing] & Introducing abuse frames for analysing security requirements & Luncheng Lin, Bashar Nuseibeh, Darrel Ince, Michael Jackson, Jonathan Moffett & 2003 & 73 & Excluded stage 2\ 258 & Spgr\_03\_01 [@avizienis2004basic] & Basic Concepts and Taxonomy of Dependable and Secure Computing & Algirdas Avi[ž]{}ienis, Jean-Claude Laprie, Brian Randell, Carl Landwehr & 2004 & 3703 & Selected\ 259 & Spgr\_03\_02 [@firesmith2005taxonomy] & A taxonomy of security-related requirements & Donald G Firesmith & 2005 & 44 & Excluded stage 2\ 260 & Spgr\_02\_01 [@asnar2007trust] & From trust to dependability through risk analysis & Yudistira Asnar, Paolo Giorgini, Fabio Massacci, Nicola Zannone & 2007 & 57 & Selected\ 261 & Spgr\_02\_02 [@asnar2006risk] & Risk modelling and reasoning in goal models & Yudistira Asnar, Paolo Giorgini, John Mylopoulos & 2006 & 17 & Selected\ 262 & SCH\_24\_01 [@kalloniatis2005dealing] & Dealing with privacy issues during the system design process & Christos Kalloniatis, Evangelia Kavakli, Stefanos Gritzalis & 2005 & 15 & Excluded stage 2\ 263 & SCH\_24\_02 [@hong2004privacy] & Privacy risk models for designing privacy-sensitive ubiquitous computing systems & Jason I Hong, Jennifer Ng, Scott D Lederer, James A Landay & 2004 & 218 & Selected\ 264 & SCH\_28\_01 [@paja2014sts] & STS-Tool: Security Requirements Engineering for Socio-Technical Systems & Elda Paja, Fabiano Dalpiaz, Paolo Giorgini & 2014 & 2 & Selected\ 265 & SCH\_43\_01 [@van2003handbook] &
Handbook of privacy and privacy-enhancing technologies & GW Van Blarkom, JJ Borking, JGE Olk & 2003 & 69 & Selected\

[^1]: An overview of all considered studies is shown in Table \[table:papers\] in Appendix B

[^2]: When there is more than one concept with a very close meaning, we have chosen the most appropriate one to represent all of them

[^3]: These groups are not mutually exclusive, i.e., a study may belong to all of them
--- abstract: | A microscopic theory for the electronic spectrum of the CuO$_2$ plane within an effective $p$-$d$ Hubbard model is proposed. A Dyson equation for the single-electron Green function in terms of the Hubbard operators is derived and solved self-consistently, with the self-energy evaluated in the noncrossing approximation. Electron scattering on spin fluctuations induced by the kinematic interaction is described by a dynamical spin susceptibility with a continuous spectrum. The doping and temperature dependence of the electron dispersions, spectral functions, the Fermi surface and the coupling constant $\lambda$ are studied in the hole-doped case. At low doping, an arc-type Fermi surface and a pseudogap in the spectral function are observed. PACS numbers: 74.20.Mn, 71.27.+a, 71.10.Fd, 74.72.-h author: - 'N. M. Plakida$^{a,b}$ and V. S. Oudovenko$^{a,c}$' title: 'Electronic spectrum in high-temperature cuprate superconductors' --- Introduction ============ Recent high-resolution angle-resolved photoemission spectroscopy (ARPES) studies have revealed a complicated character of the electronic structure and quasiparticle (QP) spectra in copper oxide superconductors. In particular, a pseudogap in the electronic spectrum and an arc-type Fermi surface (FS) at low hole concentrations were revealed, and a substantial wave-vector- and energy-dependent renormalization of the Fermi velocity of the QPs (“kinks” in the dispersion) was observed (see, e.g., [@Damascelli03; @Sadovskii01; @Eschrig05] and references therein). As was originally pointed out by Anderson [@Anderson87], strong electron correlations in cuprates play an essential role in explaining their normal and superconducting properties. A conventional approach to describing strong electron correlations is based on the Hubbard model [@Hubbard63]. The model has some advantages in comparison with the $t$-$J$ model, which can be derived from the Hubbard model in the limit of strong correlations.
Namely, the Hubbard model makes it possible to study the moderate correlation limit observed experimentally in cuprates, and it more consistently takes into account the two-subband character of the electronic structure, in particular the weight transfer between subbands with doping. Various methods have been proposed to study the electronic structure within the Hubbard model. Unbiased methods based on numerical simulations for finite clusters (for a review see, e.g., [@Bulut02]) cannot, however, resolve subtle features of the QP spectra, owing to the poor energy and wave-vector resolution of small clusters. Analytical calculations of spectra often employ mean-field-type approximations (for a review see [@Ovchinnikov04; @Mancini04]), which cannot reproduce the above-mentioned effects caused by self-energy contributions. In the dynamical mean-field theory (DMFT) (for a review see [@Georges96; @Kotliar05]) the self-energy is treated in the single-site approximation, which is also unable to describe wave-vector-dependent phenomena. To overcome this flaw of DMFT, various types of dynamical cluster theory were developed (for a review see [@Maier04; @Tremblay06]). In these methods only a restricted wave-vector and energy resolution can be achieved, depending on the size of the clusters, while the physical interpretation of the origin of the anomalous electronic structure in numerical methods is not straightforward. To elucidate the mechanism of pseudogap formation, the scattering of charge carriers by short-range (static) antiferromagnetic (AF) spin fluctuations was considered in several analytical semi-phenomenological studies (for a review see [@Sadovskii01]). More recently, by including in the DMFT scheme an additional momentum-dependent component of the self-energy originating from short-range AF (or charge) correlations, the spin-fluctuation scenario of pseudogap formation [@Sadovskii05] and the arc-type FS [@Kuchinskii05] have been supported (for a review see [@Kuchinskii06]).
At the same time, it is important to study the effects of charge-carrier scattering by [*dynamical*]{} spin fluctuations, which are believed to be responsible for the kink phenomenon [@Eschrig05]. This can be done by considering the Dyson equation for the single-particle Green function (GF) within the Hubbard model in the limit of strong correlations. For instance, a calculation of the electronic spectrum within first-order perturbation theory for the self-energy has reproduced quantum Monte Carlo results quite accurately [@Krivenko05], while application of an incremental cluster expansion for the self-energy has made it possible to observe a kink structure in the QP spectrum [@Kakehashi04]. The aim of the present paper is to develop a microscopic theory for the electronic spectrum in strongly correlated systems, such as cuprates, which consistently takes into account the effects of electron scattering by dynamical spin fluctuations. For this, we have considered an effective Hubbard model reduced from the $p$-$d$ model for the CuO$_2$ plane in cuprates. By applying the Mori-type projection technique for the thermodynamic GF [@Zubarev60] in terms of the Hubbard operators, we derived an [*exact*]{} Dyson equation, as was elaborated in our previous publications [@Plakida95; @Plakida99; @Plakida03]. A self-consistent solution of the Dyson equation with the self-energy evaluated in the noncrossing approximation (NCA), beyond a perturbative approach, was performed. This enabled us to calculate the dispersion and spectral functions of single-particle excitations, the FS, and the electron occupation numbers. In particular, we studied the hole-doped case at various hole concentrations. At low doping, the FS reveals an arc-type shape with a pseudogap in the $(\pi, 0)$ region of the Brillouin zone (BZ).
Strong renormalization effects of the dispersion close to the Fermi energy (“kinks”) are observed due to electron scattering by dynamical AF spin fluctuations induced by the kinematic interaction generic to the Hubbard operators. The electron occupation numbers show only a small drop at the Fermi energy. For high temperatures or large hole concentrations, AF correlations become weak and a crossover to a Fermi-liquid-like behavior is observed. In the next Section we briefly discuss the model, the derivation of the Dyson equation, and the self-energy calculation in the NCA. The results of the numerical solution of the self-consistent system of equations for various hole concentrations are presented and discussed in Sect. 3. Conclusions are given in Sect. 4. General formulation =================== Effective Hubbard model and Dyson equation ------------------------------------------ Following the cell-cluster perturbation theory (e.g., [@Plakida95; @Feiner96; @Yushankhai97]) based on a consideration of the original two-band $p$-$d$ model for the CuO$_2$ layer [@Emery87], we consider an effective two-dimensional Hubbard model for holes $$\begin{aligned} H &= & \varepsilon_1\sum_{i,\sigma}X_{i}^{\sigma \sigma} + \varepsilon_2\sum_{i}X_{i}^{22} + \sum_{i\neq j,\sigma}\bigl\{t_{ij}^{11}X_{i}^{\sigma 0}X_{j}^{0\sigma} \nonumber \\ & + & t_{ij}^{22}X_{i}^{2 \sigma}X_{j}^{\sigma 2} +2\sigma t_{ij}^{12}(X_{i}^{2\bar\sigma}X_{j}^{0 \sigma} + {\rm H.c.})\bigr\}, \label{m1}\end{aligned}$$ where $X_{i}^{nm} = |in\rangle\langle im|$ are the Hubbard operators (HOs) for the four states $\,n, m=|0\rangle ,\,|\sigma\rangle, |2\rangle =|\uparrow \downarrow \rangle $, $\sigma = \pm 1/2 = (\uparrow,\downarrow)$, $\bar\sigma=-\sigma$. Here $\varepsilon_1=\varepsilon_d-\mu$ and $\varepsilon_2=2\varepsilon_1+ U_{eff} $, where $\mu$ is the chemical potential. The effective Coulomb energy in the Hubbard model (\[m1\]) is the charge-transfer energy $ U_{eff} = \Delta = \epsilon_p-\epsilon_d$.
The superscripts $2$ and $1$ refer to the two-hole $p$-$d$ singlet subband and the one-hole subband, respectively. According to the cell-cluster perturbation theory, we can take equal values for the hopping parameters in (\[m1\]): $\, t^{22}_{ij} = t^{11}_{ij} = t^{12}_{ij} = t_{ij}$. We determine the bare electron dispersion defined by the hopping parameter $ t_{ij}$ by the conventional equation $$t({\bf k}) = 4 t \, \gamma({\bf k}) + 4 t' \,\gamma'({\bf k}) + 4 t''\, \gamma''({\bf k}) , \label{m1a}$$ where $t, \, t', \, t''\,$ are the hopping parameters for the nearest-neighbor (n.n.) $ ( \pm a_{x}, \pm a_{y})$, the next-nearest-neighbor (n.n.n.) $\pm (a_x \pm a_y)$ and the $\, \pm 2 a_{x}, \pm 2 a_{y}$ sites, respectively, and $\gamma({\bf k})= (1/2)(\cos k_x +\cos k_y), \, \gamma'({\bf k}) = \,\cos k_x \cos k_y\, $ and $\gamma '' ({\bf k})= (1/2)(\cos 2 k_x +\cos 2 k_y) $ (the lattice constants $ a_{x}= a_{y}$ are set equal to unity). To get a physically reasonable value for the charge-transfer gap at the conventional value of $ t \simeq 0.4$ eV, we take $\;\Delta = U_{eff} = 8\, t \simeq 3.2$ eV. The bare bandwidth is $W = 8 t \simeq U_{eff}$, which shows that the effective $p$-$d$ Hubbard model (\[m1\]) corresponds to the strong correlation limit. In what follows, the energy will be measured in units of $\,t \,$ with $\varepsilon_d = 0$ in $\varepsilon_1$. The chemical potential $\mu$ depends on the average [*hole*]{} occupation number $$n = 1 + \delta = \langle\, \sum\sb{\sigma} X\sb{i}\sp{\sigma \sigma} + 2 X\sb{i}\sp{22} \rangle . \label{m2}$$ The HOs entering (\[m1\]) obey the completeness relation $\, X_{i}^{00} + X_{i}^{\uparrow \uparrow} + X_{i}^{\downarrow \downarrow} + X_{i}^{22} = 1 \,$, which rigorously preserves the constraint of no double occupancy of any quantum state $|in\rangle$ at each lattice site $i$.
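As a quick numerical check of the dispersion (\[m1a\]) and the quoted bandwidth $W = 8t$, the band can be evaluated directly. The following Python sketch is illustrative only; the function and parameter names (`t_k`, `tp`, `tpp`) are ours, not from the paper:

```python
import numpy as np

def t_k(kx, ky, t=1.0, tp=0.0, tpp=0.0):
    """Bare dispersion t(k) of Eq. (m1a): n.n. (t), n.n.n. (t') and
    third-neighbor (t'') hopping on a square lattice, a_x = a_y = 1."""
    gamma = 0.5 * (np.cos(kx) + np.cos(ky))             # gamma(k)
    gamma_p = np.cos(kx) * np.cos(ky)                   # gamma'(k)
    gamma_pp = 0.5 * (np.cos(2 * kx) + np.cos(2 * ky))  # gamma''(k)
    return 4 * t * gamma + 4 * tp * gamma_p + 4 * tpp * gamma_pp

# With n.n. hopping only, the band runs from -4t at (pi, pi) to +4t at (0, 0),
# i.e. the bare bandwidth is W = 8t, as stated in the text.
assert np.isclose(t_k(0.0, 0.0), 4.0)
assert np.isclose(t_k(np.pi, np.pi), -4.0)
assert np.isclose(t_k(np.pi, 0.0), 0.0)
```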
Due to the projected character of the HOs, they have complicated commutation relations $\,\left[X_{i}^{\alpha\beta}, X_{j}^{\gamma\delta}\right]_{\pm}= \delta_{ij}\left(\delta_{\beta\gamma}X_{i}^{\alpha\delta}\pm \delta_{\delta\alpha}X_{i}^{\gamma\beta}\right)$, which results in the so-called [*kinematic interaction*]{}. The upper sign here refers to the Fermi-like HOs, such as $X_{i}^{0\sigma}$, and the lower sign to the Bose-like ones, such as the spin or number operators. To discuss the electronic structure within the model (\[m1\]), we introduce a thermodynamic matrix Green function (GF) [@Zubarev60] $$\begin{aligned} {\hat G}_{ij\sigma}(t-t')&=& \langle\langle \hat X_{i\sigma}(t)\! \mid \! \hat X_{j\sigma}^{\dagger}(t')\rangle\rangle \nonumber \\ &=& -i\theta(t-t')\langle \{ \hat X_{i\sigma}(t)\, , \, \hat X_{j\sigma}^{\dagger}(t')\}\rangle , \label{m4}\end{aligned}$$ in terms of the two-component operators $\, \hat X_{i\sigma} = \left( \begin{array}{c} X_i^{\sigma 2} \\ X_i^{0\bar\sigma} \end{array} \right)$ and $\, \hat X_{i\sigma}^{\dagger} = (X_{i}^{2\sigma}\,\, X_{i}^{\bar\sigma 0}) $. To calculate the GF (\[m4\]), we apply the Mori-type projection technique by writing the equations of motion for the Heisenberg operators in the form: $${\hat Z}_{i \sigma} = [\hat X_{i\sigma},\, H]= \sum_{j}\,\hat{\varepsilon}_{ij\sigma} \hat X_{j\sigma} + {\hat Z}_{i \sigma}^{(ir)} , \label{m5}$$ where the [*irreducible*]{} $\hat Z $-operator is determined by the orthogonality condition: $$\langle\{{\hat Z}_{i \sigma}^{(ir)},\, {\hat X}_{j\sigma}^{\dagger}\}\rangle = \langle {\hat Z}_{i\sigma}^{(ir)}\,{\hat X}_{j\sigma}^{\dagger} + {\hat X}_{j\sigma}^{\dagger}\, {\hat Z}_{i\sigma}^{(ir)} \rangle = 0\, .
\label{m5a}$$ This defines the frequency matrix $$\hat{\varepsilon}\sb{ij} = \langle\{[\hat X_{i\sigma},H],\hat X_{j\sigma}^{\dagger}\}\rangle \; \hat{Q}^{-1} , \label{m6}$$ where $ \hat{Q} =\langle\{\hat X_{i\sigma},\hat X_{i\sigma}^{\dagger}\}\rangle = \left( \begin{array}{cc} Q\sb{2} & 0 \\ 0 & Q\sb{1} \end{array} \right) $. The weight factors $\, Q\sb{2} = \langle X\sb{i}\sp{22} + X\sb{i}\sp{\sigma\sigma}\rangle = n/2 \,$ and $\, Q\sb{1} = \langle X\sb{i}\sp{00} + X\sb{i}\sp{\bar\sigma \bar\sigma} \rangle = 1-Q\sb{2}\, $ in a paramagnetic state depend only on the hole occupation number (\[m2\]). The frequency matrix (\[m6\]) determines the QP spectra within the generalized mean-field approximation (MFA). The corresponding zero-order GF in the MFA reads: $${\hat G}^{\, 0}_\sigma({\bf k},\omega) = \Bigl(\omega \hat{\tau}_{0} - \hat{\varepsilon}({\bf k}) \Bigr)^{-1}\hat{Q}, \label{m7}$$ where $\,\hat\tau_{0}$ is the unit matrix and we introduced the frequency matrix (\[m6\]) in the ${\bf k}$-representation, $\hat{\varepsilon}({\bf k})$. By differentiating the many-particle GF $\,\langle\langle \hat Z_{i\sigma}^{irr}(t)\! \mid \! \hat X_{j\sigma}^{\dagger}(t')\rangle\rangle\, $ with respect to the second time $t'$ and applying the same projection procedure as in (\[m5\]), we derive the Dyson equation in the form [@Plakida95] $${\hat G}_\sigma({\bf k},\omega)^{-1}= {\hat G}_{\sigma}^{\, 0}({\bf k}, \omega)^{-1} - {\hat \Sigma}_{\sigma}({\bf k},\omega). \label{m8}$$ Here the self-energy matrix $\,{\hat \Sigma}_{\sigma}({\bf k},\omega)$ is determined by the [*proper*]{} part (which contains no single zero-order GF) of the many-particle GF in the form $${\hat \Sigma}_{\sigma}({\bf k}, \omega) = {\hat{Q}}^{-1} \langle\!\langle {\hat Z}_{\sigma}^{(ir)} \!\mid\! {\hat Z}_{\sigma}^{(ir)\dagger} \rangle\!\rangle^{(prop)}_{{\bf k}, \omega}\;{\hat{Q}}^{-1}. \label{m9}$$ Equations (\[m7\])–(\[m9\]) provide an exact representation for the GF (\[m4\]).
However, to calculate it one has to use an approximation for the self-energy matrix (\[m9\]), which describes inelastic scattering of electrons on spin and charge fluctuations. It is important to point out that in the Hubbard model (\[m1\]) the electron interaction with spin or charge fluctuations is induced by the kinematic interaction, with coupling constants equal to the original hopping parameters, as was already pointed out by Hubbard [@Hubbard63]. For instance, the equation of motion for the operator $X\sb{i}\sp{\sigma 2} $ reads $$\begin{aligned} id \, X\sb{i}\sp{\sigma 2}/dt &= &[X\sb{i}\sp{\sigma 2}, H] = (\varepsilon_1 + \Delta) X\sb{i}\sp{\sigma2} \nonumber \\ &+& \sum\sb{l\ne i,\sigma '}\! \left( t\sb{il}\sp{22} B\sb{i\sigma\sigma '}\sp{22} X\sb{l}\sp{\sigma ' 2} - 2 \sigma t\sp{21}\sb{il} B\sb{i\sigma\sigma '}\sp{21} X\sb{l}\sp{0\bar\sigma '} \right) \nonumber \\ &-& \sum\sb{l\ne i} X\sb{i}\sp{02} \left( t\sp{11}\sb{il} X\sb{l}\sp{\sigma0} + 2 \sigma t\sp{21}\sb{il} X\sb{l}\sp{2 \bar\sigma} \right), \label{m10}\end{aligned}$$ where $B\sb{i\sigma\sigma'}\sp{\alpha\beta}$ are Bose-like operators describing the number (charge) and spin fluctuations: $$\begin{aligned} B\sb{i\sigma\sigma'}\sp{22} & = & (X\sb{i}\sp{22} + X\sb{i}\sp{\sigma\sigma}) \delta\sb{\sigma'\sigma} + X\sb{i}\sp{\sigma\bar\sigma} \delta\sb{\sigma'\bar\sigma} \nonumber\\ & = & ( N\sb{i}/2 + S\sb{i}\sp{z}) \delta\sb{\sigma'\sigma} + S\sb{i}\sp{\sigma}\delta\sb{\sigma'\bar\sigma}, \label{m11}\\ B\sb{i\sigma\sigma'}\sp{21} & = & ( N\sb{i}/2 + S\sb{i}\sp{z}) \delta\sb{\sigma'\sigma} - S\sb{i}\sp{\sigma} \delta\sb{\sigma'\bar\sigma}\, . \nonumber\end{aligned}$$ Therefore, in the Hubbard model (\[m1\]), contrary to spin-fermion models where the electron interaction with spin or charge fluctuations is specified by fitted coupling constants [@Eschrig05], this interaction is fixed by the hopping parameters.
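The single-site Hubbard-operator algebra used above (the completeness relation, the anticommutation rule for the Fermi-like HOs, and the decomposition (\[m11\])) can be verified directly by representing $X^{nm} = |n\rangle\langle m|$ as $4\times 4$ matrices. The following Python sketch is ours and not part of the paper's derivation:

```python
import numpy as np

# Single-site Hubbard operators X^{nm} = |n><m| in the basis
# {|0>, |up>, |down>, |2>}, indexed 0..3.
def X(n, m):
    op = np.zeros((4, 4))
    op[n, m] = 1.0
    return op

O, UP, DN, D2 = 0, 1, 2, 3

# Completeness relation X^{00} + X^{uu} + X^{dd} + X^{22} = 1.
assert np.allclose(X(O, O) + X(UP, UP) + X(DN, DN) + X(D2, D2), np.eye(4))

# Same-site anticommutator of Fermi-like HOs (upper sign of the rule):
# {X^{0s}, X^{s0}} = X^{00} + X^{ss}.
anti = X(O, UP) @ X(UP, O) + X(UP, O) @ X(O, UP)
assert np.allclose(anti, X(O, O) + X(UP, UP))

# Decomposition (m11) for sigma = sigma' = up:
# B^{22}_{up,up} = X^{22} + X^{upup} = N/2 + S^z.
N = X(UP, UP) + X(DN, DN) + 2 * X(D2, D2)   # hole number operator
Sz = 0.5 * (X(UP, UP) - X(DN, DN))
assert np.allclose(X(D2, D2) + X(UP, UP), 0.5 * N + Sz)
```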
Mean-Field Approximation {#MFA} ------------------------ The single-particle excitations in the MFA are defined by the frequency matrix (\[m6\]). By using equations of motion like (\[m10\]), we get the following energy spectrum for holes in the two subbands $$\begin{aligned} {\varepsilon}_{1, 2} ({\bf k})& = & ({1}/{2}) [\omega_{2} ({\bf k}) + \omega_1 ({\bf k})] \mp({1}/{2}) \Lambda({\bf k}), \nonumber\\ \Lambda({\bf k}) &= & \{[\omega_{2} ({\bf k}) - \omega_1 ({\bf k})]^2 + 4 W({\bf k})^2 \}^{1/2}, \label{n1}\end{aligned}$$ where the original excitation spectra in the Hubbard subbands and the hybridization parameter are $$\begin{aligned} {\omega}_1({\bf k})& = & 4 t\,\alpha_{1} \gamma({\bf k}) + 4 t'\,\beta_{1}\gamma'({\bf k}) - \mu, \nonumber \\ {\omega}_2({\bf k})& = & 4 t\,\alpha_{2} \gamma({\bf k}) + 4 t'\,\beta_{2} \gamma'({\bf k}) + \Delta - \mu, \nonumber \\ W({\bf k}) & = & 4 t\,\alpha_{12} \gamma({\bf k}) + 4 t' \,\beta_{12} \gamma'({\bf k}), \label{n2}\end{aligned}$$ where we omitted the $t''$ contribution in (\[m1a\]) and introduced the renormalization parameters $\, \alpha_{1(2)}= Q_{1(2)}[ 1 + {C_{1}}/{Q^2_{1(2)}}], \, \beta_{1(2)} = Q_{1(2)}[ 1 + {C_{2}}/{Q^2_{1(2)}}]\,$, $\, \alpha_{12}= \sqrt{Q_{1}Q_{2}}[ 1 - {C_{1}}/{Q_{1}Q_{2}}] ,\, \beta_{12} = \sqrt{Q_{1}Q_{2}}[ 1 -{C_{2}}/{Q_{1}Q_{2}}]\,$. As in the Hubbard I approximation, we neglect number fluctuations $\langle \delta N_i \delta N_j \rangle_{(i \neq j)}\,$ but take into account contributions from the spin correlation functions for the n.n. and the n.n.n. sites: $$C_{1} = \langle {\bf S}_i{\bf S}_{i\pm a_{x}/a_{y}} \rangle, \quad C_{2} = \langle {\bf S}_i{\bf S}_{i\pm a_{x}\pm a_{y}} \rangle . \label{n3}$$ The renormalization of the QP spectra (\[n1\]), (\[n2\]) caused by strong spin correlations in the underdoped region results in suppression of the n.n. hopping, which changes the shape of the spectra and reduces the bandwidth.
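The mean-field spectrum (\[n1\]), (\[n2\]) is straightforward to evaluate numerically. The sketch below uses illustrative parameter values of our own choosing (they are not fitted to the paper) and checks two properties: the trace identity $\varepsilon_1 + \varepsilon_2 = \omega_1 + \omega_2$, and the vanishing of the n.n. renormalization $\alpha_{1(2)}$ in the Néel limit $C_1 = -1/4$, $Q_1 = Q_2 = 1/2$:

```python
import numpy as np

def mfa_bands(kx, ky, Q2=0.5, C1=-0.05, C2=0.02, t=1.0, tp=-0.2,
              Delta=8.0, mu=0.0):
    """Two-subband MFA spectrum, Eqs. (n1)-(n2); energies in units of t.
    Parameter values here are illustrative only."""
    Q1 = 1.0 - Q2
    g, gp = 0.5 * (np.cos(kx) + np.cos(ky)), np.cos(kx) * np.cos(ky)
    # Renormalization parameters alpha, beta of the text.
    a1, a2 = Q1 * (1 + C1 / Q1**2), Q2 * (1 + C1 / Q2**2)
    b1, b2 = Q1 * (1 + C2 / Q1**2), Q2 * (1 + C2 / Q2**2)
    a12 = np.sqrt(Q1 * Q2) * (1 - C1 / (Q1 * Q2))
    b12 = np.sqrt(Q1 * Q2) * (1 - C2 / (Q1 * Q2))
    w1 = 4 * t * a1 * g + 4 * tp * b1 * gp - mu
    w2 = 4 * t * a2 * g + 4 * tp * b2 * gp + Delta - mu
    W = 4 * t * a12 * g + 4 * tp * b12 * gp
    Lam = np.hypot(w2 - w1, 2 * W)          # Lambda(k) of Eq. (n1)
    return 0.5 * (w1 + w2) - 0.5 * Lam, 0.5 * (w1 + w2) + 0.5 * Lam, w1, w2

e1, e2, w1, w2 = mfa_bands(np.pi / 3, np.pi / 5)
assert e1 <= e2 and np.isclose(e1 + e2, w1 + w2)  # trace is preserved

# Neel-limit check: C1 = -1/4 at half filling (Q1 = Q2 = 1/2) gives alpha = 0,
# i.e. complete suppression of the n.n. hopping, as stated in the text.
assert 0.5 * (1 + (-0.25) / 0.5**2) == 0.0
```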
For instance, if we consider the limiting case of the long-range AF Néel state with the n.n. correlation function $C_{1} \simeq - 1/4 $ at half-filling, $Q_{1} = Q_{2} = 1/2$, we obtain $\alpha_{1(2)} = 0$. This results in complete suppression of the n.n. hopping and transformation of the spectra (\[n2\]) to the n.n.n. hopping $ \propto t' \gamma'({\bf k})$, as was discussed in [@Plakida95]. For the diagonal components of the zero-order GF (\[m7\]) we have $$G_{11(22)}^{\, 0}({\bf k},\omega)= \frac{Q_{1(2)}\,[1 - b({\bf k})]}{\omega - {\varepsilon}_{1(2)}({\bf k})} + \frac{Q_{1(2)}\,b({\bf k})}{\omega - {\varepsilon}_{2(1)}({\bf k})} , \label{n4}$$ where the parameter $$b({\bf k}) = \frac{{\varepsilon}_{2} ({\bf k}) - \omega_{2}({\bf k})} {{\varepsilon}_{2} ({\bf k}) - {\varepsilon}_{1} ({\bf k})}= \frac{1}{2} - \frac{\omega_{2}({\bf k}) - \omega_1({\bf k})}{ 2 \Lambda({\bf k})} \label{n5}$$ determines the contribution due to the hybridization. Self-energy Corrections ----------------------- It is convenient to write the Dyson equation (\[m8\]) for the GF in the form $${\hat G}_\sigma({\bf k},\omega)= \left(\omega \hat{\tau}_{0} - \hat {\varepsilon}({\bf k}) - \tilde{\Sigma}_{\sigma}({\bf k}, \omega)\right)^{-1} \hat {Q}, \label{s1}$$ where the self-energy reads $$\tilde{\Sigma}_{\sigma}({\bf k}, \omega) =\langle\!\langle {\hat Z}_{\sigma}^{(ir)} \!\mid\! {\hat Z}_{\sigma}^{(ir)\dagger} \rangle\!\rangle^{(prop)}_{{\bf k}, \omega}\;{\hat{Q}}^{-1} . \label{s2}$$ To make the problem tractable, we can neglect in the self-energy matrix (\[s2\]) the off-diagonal components $\tilde{\Sigma}_{12,\sigma}({\bf k},\omega)$ in comparison with the hybridization parameters $\, W({\bf k})\, $ in (\[n2\]).
This enables us to write the diagonal components of the full GF (\[s1\]) in a form similar to (\[n4\]): $$\begin{aligned} {\hat G}_{11(22)}({\bf k},\omega) = \frac{Q_{1(2)} \, [1 - b({\bf k})]} {\omega - {\varepsilon}_{1(2)}({\bf k}) - \tilde{\Sigma}_{11(22)}({\bf k},\omega)} \nonumber \\ + \frac{Q_{1(2)} \, b({\bf k})} {\omega - {\varepsilon}_{2(1)}({\bf k}) - \tilde{\Sigma}_{22(11)}({\bf k}, \omega)}\, . \label{s3}\end{aligned}$$ Here the hybridization parameters $b({\bf k})$ are determined by a formula similar to (\[n5\]), which gives an accurate approximation at small doping, $n \sim 1 $. Now we calculate the self-energy (\[s2\]) in the non-crossing approximation (NCA), or self-consistent Born approximation (SCBA), by neglecting vertex renormalization. As follows from the equation of motion (\[m10\]), the ${\hat Z}_{\sigma}^{(ir)} $ operators determined by (\[m5\]) are essentially a product of Fermi-like $ X_{j}(t)$ and Bose-like $ B_{i}(t)$ operators. In the SCBA, the propagation of these two types of excitations in the many-particle GF in (\[s2\]) is assumed to be independent of each other. Therefore, they can be decoupled in the time-dependent correlation functions for lattice sites $\,(i \neq j,\, l \neq m)$ as follows $$\langle B_{i}(t) X_{j}(t) B_{l} X_{m} \rangle \simeq \langle X_{j}(t) X_{m} \rangle \langle B_{i}(t) B_{l} \rangle.
\label{s4}$$ Using the spectral representation for these correlation functions, we obtain the following formula for the diagonal self-energy components $\,\tilde{\Sigma}_{11(22)}({\bf k},\omega)= {\Sigma}({\bf k},\omega)\,$ which are the same for two subbands: $$\begin{aligned} {\Sigma}({\bf k},\omega) &= &\frac{1}{ N } \,\sum\sb{\bf q} \int\limits\sb{-\infty}\sp{+\infty} \!\!{\rm d}z K(\omega,z|{\bf q},{\bf k - q}) \nonumber \\ & \times & (- {1}/{ \pi })\,\mbox{Im}\, [{G}_{1}({\bf q},z) + {G}_{2}({\bf q},z) ] , \label{s8}\end{aligned}$$ where the corresponding subband GFs are: $${G}_{1 (2)}({\bf q},\omega) = \frac{1} {\omega - {\varepsilon}_{1 (2)} ({\bf q})- {\Sigma}({\bf q},\omega)} \, . \label{s9}$$ The kernel of the integral equation (\[s8\]) has the following form: $$\begin{aligned} && K(\omega,z|{\bf q},{\bf k - q })= | t({\bf q})|\sp{2}\; \frac{1}{2\pi} \int\limits\sb{-\infty}\sp{+\infty}\; \frac{{\rm d}\Omega}{\omega - z - \Omega} \nonumber\\ &\times&[ \tanh ({z}/{2T}) + \coth ({\Omega}/{2T})] \, \mbox{Im} \, \chi\sb{sc}({\bf k -q},\Omega), \quad \label{s6}\end{aligned}$$ where the interaction is defined by the hopping parameter $t({\bf q}) $ (\[m1a\]). The spectral density of bosonic excitations is determined by the dynamic susceptibility of the Bose-like operators $ B_{i}(t)$ in  (\[s4\]) – the spin and number (charge) fluctuations: $$\begin{aligned} \chi\sb{sc}({\bf q},\omega) = - [ \langle\!\langle {\bf S\sb{q} | S\sb{-q}} \rangle\!\rangle\sb{\omega} + ({1}/{4}) \langle\!\langle \delta N\sb{\bf q} | \delta N\sb{-\bf q} \rangle\!\rangle\sb{\omega} ] \,, \label{s7}\end{aligned}$$ where we introduced the commutator GF for the spin ${\bf S \sb{q}} $ and the number $\delta N_{\bf q} = N_{\bf q} - \langle N_{\bf q} \rangle$ operators. Thus we obtain a self-consistent system of equations for the GFs  (\[s9\]) and the self-energy (\[s8\]). A similar system of equations was obtained within the composite operator method [@Krivenko05]. 
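A quick numerical check clarifies the role of the combined thermal factor $[\tanh(z/2T) + \coth(\Omega/2T)]$ in the kernel (\[s6\]): at low temperature it tends to ${\rm sign}(z) + {\rm sign}(\Omega)$, so only scattering processes with the fermion energy $z$ and the boson energy $\Omega$ on the same side of the Fermi level contribute. A sketch with illustrative numbers:

```python
import numpy as np

def thermal_factor(z, Omega, T):
    # tanh(z/2T) + coth(Omega/2T), the combined Fermi/Bose factor of Eq. (s6)
    return np.tanh(z/(2*T)) + 1.0/np.tanh(Omega/(2*T))

# low-temperature limit: ~ sign(z) + sign(Omega)
f_same = thermal_factor(0.5, 0.5, 1e-3)    # z, Omega on the same side -> ~2
f_opp  = thermal_factor(0.5, -0.5, 1e-3)   # opposite sides -> ~0
```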
In comparison with the $t$-$J$ model studied by us in [@Plakida99], for the Hubbard model (\[m1\]) we have two contributions to the self-energy  (\[s8\]) determined by the two Hubbard subbands, while in the $t$-$J$ model only one subband is considered. However, depending on the position of the chemical potential, a substantial contribution to the self-energy comes only from the GF of the subband that is close to the Fermi energy. The contribution from the GF of the other subband, which is far from the Fermi energy, is suppressed due to the large charge-transfer energy $ \Delta $ in the denominator of that GF. Neglecting the latter contribution, we obtain a self-consistent system of equations for one GF close to the Fermi energy and the corresponding self-energy function, similar to the $t$-$J$ model  [@Plakida99]. Results and discussion ====================== Self-consistent system of equations {#system} ----------------------------------- To solve the system of equations for the self-energy (\[s8\]) and the GFs (\[s9\]) we should specify a model for the spin-charge susceptibility (\[s7\]). Below we take into account only the spin-fluctuation contribution $\, \chi_{s}({\bf q},\omega) = - \langle \langle {\bf S}_{q}\mid {\bf S}_{-q} \rangle \rangle _{\omega}$, for which we adopt a model suggested in numerical studies [@Jaklic95] $$\begin{aligned} && {\rm Im}\, \chi_{s}({\bf q},\omega+i0^+) = \chi_{s}({\bf q}) \; \chi_{s}^{''}(\omega) \nonumber\\ & = & \frac {\chi_0}{1+ \xi^2 (1+ \gamma({\bf q}))} \; \tanh \frac{\omega}{2T} \frac{1}{1+(\omega/\omega_{s})^2}\, . \label{r1}\end{aligned}$$ The ${\bf q}$-dependence of $\chi_{s}({\bf q})$ is determined by the AF correlation length $\xi$, whose doping dependence is defined below.
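The model susceptibility (\[r1\]) can be tabulated directly. The sketch below assumes the square-lattice form factor $\gamma({\bf q}) = (\cos q_x + \cos q_y)/2$ and uses illustrative values of $\xi$, $\omega_s$ and $\chi_0$ (the actual $\chi_0$ is fixed by the normalization condition discussed next):

```python
import numpy as np

def im_chi_s(qx, qy, w, T=0.03, xi=3.4, omega_s=0.4, chi0=1.0):
    """Im chi_s(q, w) of Eq. (r1): a static AF-peaked q-factor times
    an odd, overdamped frequency profile. Parameters are illustrative."""
    gam = 0.5*(np.cos(qx) + np.cos(qy))          # assumed lattice factor
    chi_q = chi0/(1.0 + xi**2*(1.0 + gam))
    return chi_q*np.tanh(w/(2*T))/(1.0 + (w/omega_s)**2)

peak = im_chi_s(np.pi, np.pi, 0.1)   # maximal at the AF wave vector Q = (pi, pi)
away = im_chi_s(0.0, 0.0, 0.1)       # strongly suppressed at the zone center
```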
The static susceptibility $\chi_0$ at the AF wave vector ${\bf Q = (\pi,\pi)}$ is fixed by the normalization condition: $$\begin{aligned} && \langle {\bf S}_{i}^2\rangle = \frac{1}{N}\sum_{i} \langle {\bf S}_{i}{\bf S}_{i} \rangle \nonumber \\ &=& \frac{1}{\pi} \, \int\limits_{-\infty}^{+\infty } \frac{dz}{\exp{(z/T)} - 1} \chi_{s}^{''}(z)\; \frac{1}{N} \sum_{\bf q} \chi_{s}({\bf q}) , \label{r2}\end{aligned}$$ which gives the following value for this constant: $$\chi_{0}= \frac{2}{\omega_{s}}\,\langle {\bf S}_{i}^2\rangle \left \{\frac{1}{N} \sum_{\bf q} \frac{1} {1+\xi^2[1+\gamma({\bf q})]} \right\}^{-1} . \label{r3}$$ In (\[r2\]) we introduced $\langle {\bf S}_{i}^2\rangle = 3 \langle S^z_{i}\,S^z_{i}\rangle= ({3}/{4}) \langle (1 - X_i^{00} - X_i^{22})\rangle \simeq ({3}/{4})(1- |\delta|)$ where at the hole doping $ \delta \simeq \langle X_i^{22}\rangle$, while at the electron doping $ \delta \simeq - \langle X_i^{00}\rangle $. The spin correlation functions (\[n3\]) in the single-particle excitation spectra (\[n1\]) in MFA are defined by equations $$C_1 = \frac{1}{N} \sum_{\bf q}\, C_{\bf q}\, \gamma({\bf q}), \quad C_2 = \frac{1}{N} \sum_{\bf q} \, C_{\bf q}\, \gamma'({\bf q}). \label{r4}$$ The static correlation function $\, C_{\bf q}\, $ can be calculated from the same model (\[r1\]) as follows $$C_{\bf q} = \langle {\bf S}_{\bf q}{\bf S}_{-\bf q} \rangle = \frac{C(\xi)} {1+\xi^2[1+ \gamma({\bf q})]} \, , \label{r5}$$ where the factor $C(\xi) = \chi_{0}\,( {\omega_{s}}/{2})$. To specify the doping dependence of the AF correlation length $\xi (\delta)$ at low temperature, we fit the correlation function $\, C_1 \,$ calculated from (\[r4\]) to the numerical results of an exact diagonalization for finite clusters  [@Bonca89]. 
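The normalization (\[r3\]) and the consistency of (\[r5\]) with the sum rule can be checked on a finite grid. This is only a sketch: $\gamma({\bf q}) = (\cos q_x + \cos q_y)/2$ is assumed, and $\xi = 3.4$, $\delta = 0.05$ are taken from Table \[Table1\]; by construction the BZ average of $C_{\bf q}$ then reproduces $\langle {\bf S}_i^2\rangle$:

```python
import numpy as np

# Finite-grid check of the sum rule (r2)-(r3); grid size is illustrative.
L_grid = 64
q = 2*np.pi*np.arange(L_grid)/L_grid
qx, qy = np.meshgrid(q, q)
gam = 0.5*(np.cos(qx) + np.cos(qy))          # assumed lattice factor

xi, omega_s, delta = 3.4, 0.4, 0.05
S2 = 0.75*(1.0 - abs(delta))                 # <S_i^2> ~ (3/4)(1 - |delta|)
bz_avg = np.mean(1.0/(1.0 + xi**2*(1.0 + gam)))
chi0 = (2.0/omega_s)*S2/bz_avg               # Eq. (r3)

C_xi = chi0*omega_s/2.0                      # factor C(xi) in Eq. (r5)
C_q = C_xi/(1.0 + xi**2*(1.0 + gam))         # static correlation function
C1 = np.mean(C_q*gam)                        # n.n. correlator, Eq. (r4)
```

The BZ average of $C_{\bf q}$ equals $\langle {\bf S}_i^2\rangle$ exactly, and the resulting $C_1$ comes out negative, as expected for AF short-range order.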
The values of the AF correlation length, the calculated values of $C_2$, and the correlation function $\, C(\xi) = \langle {\bf S}_{\bf q}{\bf S}_{-\bf q} \rangle\,$ at the AF wave-vector ${\bf q = Q} = (\pi, \pi)\,$ are given in Table \[Table1\]:

  $\delta$      0.03    0.05    0.10    0.15    0.20    0.30
  $C_1$        -0.36   -0.26   -0.21   -0.18   -0.14   -0.10
  $C_2$         0.27    0.16    0.11    0.09    0.06    0.04
  $C(\xi)$     22.0     5.91    3.58    2.67    1.93    1.40
  $\xi$         8.0     3.40    2.50    2.10    1.70    1.40

To perform numerical calculations, we introduce the imaginary-frequency representation for the GF (\[s9\]): $${G}_{1 (2)}({\bf q},i\omega_n) = \frac{1} {i\omega_n - {\varepsilon}_{1 (2)} ({\bf q})- {\Sigma}({\bf q},i\omega_n)} \, , \label{r6}$$ where $ i\omega_{n}=i\pi T(2n+1), \; n = 0,\pm 1, \pm 2,$ ... . For the self-energy (\[s8\]) we obtain the following representation: $$\begin{aligned} {\Sigma}({\bf k}, i\omega_{n}) &=& - \frac{T}{N}\sum_{\bf q} \sum_{m} [{G}_{1}({\bf q}, i\omega_{m}) + {G}_{2}({\bf q}, i\omega_{m})] \nonumber \\ &\times & \lambda({\bf q, k-q} \mid i\omega_{n}-i\omega_{m})\, . \label{r7}\end{aligned}$$ The interaction function is given here by the equation $$\begin{aligned} \lambda({\bf q, k-q} \mid i\omega_{\nu}) = - |t({\bf q})|^{2} \, \chi_{s}({\bf k-q}) \; F_{s}(i\omega_{\nu}), \label{r8}\end{aligned}$$ where the spectral function is $$F_s(i\omega_\nu)=\frac{1}{\pi} \; \int_{0}^{\infty}\frac {2x dx}{x^2 + (\omega_\nu/\omega_s)^2} \, \frac{1}{1+x^2} \, \tanh \frac {x\,\omega_{s} }{2T}. \label{r9}$$ Let us compare the self-consistent system of equations for the GF (\[r6\]) and the self-energy (\[r7\]) with the results of other theoretical approaches. In our theory, based on the HO technique, we start from the two-subband representation for the GF (\[m4\]) which rigorously takes into account strong electron correlations determined by the Coulomb energy $U_{eff}$. This results in the Mott gap at large $U_{eff}$ (see below), as in the DMFT.
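The boson spectral integral (\[r9\]) that enters the interaction (\[r8\]) is a smooth, positive, even function of the Matsubara frequency that decreases with $|\omega_\nu|$, as direct quadrature shows. A sketch using a simple midpoint rule (grid size and parameter values are illustrative):

```python
import numpy as np

def F_s(omega_nu, omega_s=0.4, T=0.03, x_max=300.0, n=300000):
    """Midpoint-rule evaluation of the integral in Eq. (r9)."""
    dx = x_max/n
    x = (np.arange(n) + 0.5)*dx
    f = (2*x/(x**2 + (omega_nu/omega_s)**2)) \
        * np.tanh(x*omega_s/(2*T))/(1.0 + x**2)
    return np.sum(f)*dx/np.pi

f0 = F_s(0.0)            # static value, enters the vertex |g|^2 below
f1 = F_s(2*np.pi*0.03)   # first bosonic Matsubara frequency at T = 0.03
```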
On the other hand, the kinematic interaction, generic to HOs, induces electron scattering by dynamical spin (charge) fluctuations (\[s7\]) which are responsible for the pseudogap formation, as in the two-particle self-consistent approach (TPSC)  [@Vilk95; @Tremblay06] or the model of short-range static spin (charge) fluctuations – the $\Sigma_{\bf k}$-model [@Sadovskii01]. To prove this, let us consider the classical limit for the self-energy (\[r7\]) by taking into account only the zero Matsubara frequency $i\omega_{\nu} =0$ in the interaction (\[r8\]), which gives $i\omega_{m}= i\omega_{n}$ in (\[r7\]). In the limit of a large AF correlation length $\xi \gg 1 $ the static spin susceptibility $\chi_{s}({\bf q})$ in (\[r1\]) shows a sharp peak close to the AF wave-vector ${\bf Q} = (\pi,\pi)$ and can be expanded in the small wave-vector ${\bf p = q - Q}$: $$\chi_{s}({\bf q})\simeq \, \frac {\chi_0 } {1 + \xi^2 \,{\bf p}^2} \simeq \frac {A}{\kappa^{2}+{\bf p}^2} \, , \label{ar1}$$ where we introduced $\kappa = \xi^{-1} $ and took into account that the constant (\[r3\]) $\chi_0 \simeq A\,\xi^2$ with $\, A= ({8\pi}/{\omega_{s}}) \langle {\bf S}_{i}^2 \rangle [\ln(1 + 4\pi \, \xi^2)]^{-1}$ for the square lattice. In this limit we get the following equation for the self-energy (\[r7\]): $$\begin{aligned} &&{\Sigma}({\bf k}, i\omega_{n}) \simeq |g({\bf k-Q})|^{2} \, \frac{T}{N}\sum_{\bf p} \,\frac {1} {{\kappa^{2}+ p^2}} \nonumber \\ && \times [{G}_{1}({\bf k - Q -p}, i\omega_{n}) + {G}_{2}({\bf k-Q -p}, i\omega_{n})], \quad \label{ar7}\end{aligned}$$ where the effective interaction $$|g({\bf q})|^{2} = A\,|t({\bf q})|^{2}\, F_s(0) .
\label{ar8}$$ Expanding the QP energy $\,\varepsilon_{1 (2)}({\bf k-Q -p}) \simeq \varepsilon_{1 (2)}({\bf k-Q}) - {\bf p \cdot v}_{1 (2), \bf k-Q}\,$ we obtain for the GFs in (\[ar7\]) the following representation: $$\begin{aligned} &&{G}_{1(2)}({\bf k - Q -p}, i\omega_{n}) \simeq \{i\omega_n - {\varepsilon}_{1 (2)} ({\bf k- Q}) \nonumber \\ &+& {\bf p \cdot v}_{1 (2), \bf k-Q}- \Sigma({\bf k -Q},i\omega_n)\}^{-1} . \label{ar6}\end{aligned}$$ The system of equations for the GFs (\[ar6\]) and the self-energy (\[ar7\]) is similar to the ones derived in the TPSC approach [@Vilk95] and the $\Sigma_{\bf k}$-model [@Sadovskii01], apart from the interaction function and the two-subband structure of the equations. In our approach the vertex (\[ar8\]) is determined by the hopping parameter $|t({\bf k-Q})|^2 $, while in the TPSC and the $\Sigma_{\bf k}$-model the coupling constant is induced by the Coulomb scattering, e.g., in [@Kuchinskii06] $g^2 = U^2 ( \langle n_{i\uparrow} n_{i \downarrow}\rangle / n^2 )\langle {\bf S}_{i}^2\rangle /3$. However, the values of these vertices are close: the value $ \langle\sqrt{|t({\bf k})|^2}\rangle_{{\bf k}} \sim 2t$ averaged over the BZ is comparable with the coupling constant $ g \leq 2t$ used in [@Sadovskii05]. In the spin-fermion model the self-energy is also determined by spin fluctuations (see, e.g., [@Eschrig05]) with a coupling constant of the same order, $g \sim 0.7$ eV $\sim 2t$, fitted from ARPES experiments. As in the TPSC theory, in the limit $\xi \rightarrow \infty $ the AF gap $\Delta_{AF}({\bf k}) \propto |t({\bf k-Q})|^{2} $ in the QP spectra emerges in the subband located at the Fermi energy. This result readily follows from the self-consistent equations for the GF (\[r6\]) with the self-energy (\[ar7\]), where on the right-hand side the GF (\[ar6\]) is taken at ${\bf p} =0$.
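The emergence of the gap just mentioned can be made explicit in a toy limit: taking the self-energy (\[ar7\]) at ${\bf p}=0$ for a single subband gives $\Sigma({\bf k},\omega) \simeq g^2/(\omega - \varepsilon({\bf k-Q}))$, so the QP equation $\omega - \varepsilon({\bf k}) - \Sigma({\bf k},\omega) = 0$ is a quadratic with two roots split by $\sqrt{(\varepsilon_{\bf k}-\varepsilon_{\bf k-Q})^2 + 4g^2}$, i.e. a gap $2g$ at the band crossing. A sketch with illustrative numbers:

```python
import numpy as np

def af_split_levels(eps_k, eps_kQ, g):
    """Roots of w^2 - (eps_k + eps_kQ) w + eps_k*eps_kQ - g^2 = 0,
    i.e. the QP equation with Sigma = g^2/(w - eps_kQ)."""
    s, d = eps_k + eps_kQ, eps_k - eps_kQ
    split = np.sqrt(d**2 + 4*g**2)
    return 0.5*(s - split), 0.5*(s + split)

# at the band crossing eps_k = eps_{k-Q} the two roots are separated by 2g
lo, hi = af_split_levels(0.0, 0.0, 0.5)
```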
Thus, in our approach the pseudogap formation is mediated by the AF short-range order, similar to the TPSC theory and to the model of short-range static spin fluctuations in the generalized DMFT  [@Kuchinskii06]. In the next sections we consider the results of self-consistent calculations of the GFs (\[r6\]) and the self-energy (\[r7\]) in the hole-doped case for various hole concentrations $\delta = n - 1 > 0$. In Sects. \[DA\] – \[SE\] the calculations were performed at temperatures $T = 0.03 t \simeq 140$ K and $T = 0.3 t$ for $\Delta =8 t,\, t \simeq 0.4$ eV and $t' = - 0.3t $. Several results are reported for $\Delta =4 t,\, t' = - 0.13t, \, t'' = 0.16t $ in Sect. \[D4\]. For the spin-fluctuation energy in (\[r1\]) we take $\omega_s = 0.4t$. The AF correlation length $\xi(\delta)$ and the static correlation functions $C_1, C_2$ in (\[n3\]) are defined in Table \[Table1\]. Dispersion and spectral functions {#DA} --------------------------------- In ARPES measurements and QMC simulations the spectrum of single-electron excitations is determined by the spectral function $\, {A}_{(el)}({\bf k}, \omega) = {A}_{(h)}({\bf k}, -\omega) \,$. The spectral function for holes can be written as follows: $$\begin{aligned} &&{A}_{(h)}({\bf k}, \omega)= - \frac{1}{\pi}\,{\rm Im}\, \langle\langle a_{{\bf k}\sigma}\, | \, a_{{\bf k}\sigma}^{\dag} \rangle\rangle_{\omega + i0^+} \nonumber \\ & = & [Q_1 + P({\bf k})]{A}_{1}({\bf k}, \omega) +[Q_2 - P({\bf k})]{A}_{2}({\bf k}, \omega). \label{r10}\end{aligned}$$ Here we introduced for the hole annihilation $ a_{{\bf k}\sigma}$ and creation $ a_{{\bf k}\sigma}^{\dag}$ operators the definitions in terms of the Hubbard operators, $\, a_{{\bf k}\sigma} = X\sb{i}\sp{0\sigma} + 2\sigma X\sb{i}\sp{\bar\sigma 2}, \quad a_{{\bf k}\sigma}^{\dag} = X\sb{i}\sp{\sigma 0} + 2\sigma X\sb{i}\sp{2\bar\sigma } \,$, and used all four components of the matrix GF (\[s1\]) $\,{\hat G}_{\alpha\beta}({\bf k},\omega)\,$ with the diagonal components given by (\[s3\]).
In (\[r10\]) we also introduced the one-band spectral functions determined by the GFs (\[s9\]): $\,A_{1 (2)}({\bf k}, \omega) =- (1/\pi)\,{\rm Im}\, {G}_{1 (2)}({\bf k},\omega) \,$. The hybridization effects are allowed for by the parameter $\,P({\bf k})= (n-1) b({\bf k}) - 2 \sqrt{Q_1 Q_2}\, W({\bf k})/\Lambda({\bf k})\,$. The dispersion curves given by the maxima of the spectral functions (\[r10\]) were calculated for hole dopings $\delta = 0.05 - 0.3$. At low hole doping, $\delta = 0.05, \, 0.1$, the dispersion reveals a rather flat hole-doped band at the Fermi energy (FE) ($\omega =0$), as shown in the upper panel in Fig. \[figDA1-05\]. The corresponding spectral function (the bottom panel) demonstrates weak QP peaks at the Fermi energy. ![Dispersion curves (upper panel) and spectral functions (bottom panel) in units of $t$ along the symmetry directions $\Gamma(0, 0)\rightarrow M(\pi,\pi) \rightarrow X (\pi, 0) \rightarrow \Gamma(0, 0)$ for $\delta = 0.05$.[]{data-label="figDA1-05"}](fig1a.eps "fig:") ![Dispersion curves (upper panel) and spectral functions (bottom panel) in units of $t$ along the symmetry directions $\Gamma(0, 0)\rightarrow M(\pi,\pi) \rightarrow X (\pi, 0) \rightarrow \Gamma(0, 0)$ for $\delta = 0.05$.[]{data-label="figDA1-05"}](fig1bcopy.eps "fig:") With doping, the dispersion and the intensity of the QP peaks at the Fermi energy substantially increase, as demonstrated in Fig. \[figDA1-3\], though a flat band in the $X (\pi, 0) \rightarrow \Gamma(0, 0)$ direction is still observed, in accordance with ARPES measurements on the overdoped La$_{1.78}$Sr$_{0.22}$CuO$_4$ [@Yoshida01]. To study the influence of AF spin correlations on the spectra, we calculate the spectral functions at a high temperature $\,T = 0.3t\,$ for $\delta = 0.1\,$ by neglecting the spin correlation functions (\[n3\]) in the single-particle excitation spectra (\[n1\]) in MFA and taking a small AF correlation length $(\xi = 1.0)$ in the spin susceptibility (\[r1\]).
Figure \[figDA1-1T\] shows a strong increase of the dispersion and of the intensity of the QP peaks at the Fermi energy, as in the overdoped region, $\delta = 0.3 $, which proves a strong influence of AF spin correlations on the spectra. ![The same as Fig. \[figDA1-05\] for hole concentration $\delta = 0.3$.[]{data-label="figDA1-3"}](fig2a.eps "fig:") ![The same as Fig. \[figDA1-05\] for hole concentration $\delta = 0.3$.[]{data-label="figDA1-3"}](fig2bcopy.eps "fig:") A crude estimation of the Fermi velocity from the dispersion curve in the $\Gamma(0, 0)\rightarrow M(\pi,\pi)$ direction in Fig. \[figDA1-3\] for the overdoped case gives the value $V_{F} \simeq 7.5 t $ Å $\simeq 3$ (eV$\cdot$Å) for the hopping parameter $t = 0.4$ eV, which can be compared with the experimental results $V_{F} \simeq 2.2$ (eV$\cdot$Å) for overdoped La$_{1.78}$Sr$_{0.22}$CuO$_4$ [@Yoshida01] and $V_{F} \simeq 3.9$ (eV$\cdot$Å) for overdoped Bi-2212 [@Kordyuk05]. ![The same as Fig. \[figDA1-05\] but for the hole concentration $\delta = 0.1$ and at high temperature $T=0.3t$.[]{data-label="figDA1-1T"}](fig3a.eps "fig:") ![The same as Fig. \[figDA1-05\] but for the hole concentration $\delta = 0.1$ and at high temperature $T=0.3t$.[]{data-label="figDA1-1T"}](fig3bcopy.eps "fig:") With doping, the electronic density of states (DOS) shows a weight transfer from the upper one-hole subband to the lower two-hole singlet subband, as shown in Fig. \[figDOS\]. However, even in the overdoped case a noticeable part of the DOS remains in the upper one-hole subband. ![(Color online) Doping dependence of the electronic density of states.[]{data-label="figDOS"}](fig4.eps) It is interesting to compare our results with those obtained in the generalized DMFT [@Sadovskii05], which should be close to ours, as discussed at the end of Sect. \[system\]. In fact, the spectral function shown in Fig.
8 in [@Sadovskii05] for $t' = -0.4$ demonstrates similar flat QP bands in the $\Gamma(0, 0)\rightarrow X (\pi, 0)$ and $\Gamma(0, 0)\rightarrow M(\pi,\pi) $ directions, as in our Fig. \[figDA1-05\] and Fig. \[figDA1-3\], a strong intensity transfer from the lower electronic Hubbard band (LHB) to the upper Hubbard band (UHB) at the $M(\pi,\pi) $ point of the BZ, and a splitting of the LHB close to the $X (\pi, 0)$ point. An analogous temperature and doping ($ \xi$) behavior of the spectral functions and of the pseudogap revealed in both theories supports the spin-fluctuation scenario of the pseudogap formation. A similar behavior was also observed in the cluster perturbation theory [@Tremblay06] (see Fig. 2 (a) in [@Senechal04]). Fermi surface and occupation numbers {#FS} ------------------------------------ The Fermi surface for the two-hole subband was determined by the conventional equation $$\varepsilon_{2}({\bf k_{\rm F}}) + {\rm Re}\,\Sigma({\bf k}_{\rm F}, \omega=0) = 0 , \label{r11}$$ as shown in Fig. \[figFS\], and then compared with the one obtained from the maxima of the spectral function $A_{el}({\bf k}, \omega = 0) $ in the $(k_x, k_y)$-plane for $\delta = 0.1, \, 0.2$, shown in Fig. \[figF1-2\]. The FS changes from a hole arc-type at $\delta = 0.1$ to an electron-like one at $\delta =0.3$. Experimentally, an electron-like FS was observed in the overdoped La$_{1.78}$Sr$_{0.22}$CuO$_4$ [@Yoshida01]. The doping-dependent FS transformation can also be observed by studying the electron occupation numbers.
![(Color online) Doping dependence of the FS for $\delta = 0.1$ (full line at $T=0.03t$ and dotted line at $T=0.3t$), $\delta = 0.2$ (dashed line), and $\delta = 0.3$ (dot-dashed line).[]{data-label="figFS"}](fig5.eps) ![$A({\bf k},\omega =0)$ on the FS for $\delta = 0.1$ (left panel), $\delta = 0.2$ (right panel).[]{data-label="figF1-2"}](fig6a.eps "fig:") ![$A({\bf k},\omega =0)$ on the FS for $\delta = 0.1$ (left panel), $\delta = 0.2$ (right panel).[]{data-label="figF1-2"}](fig6b.eps "fig:") The electron occupation numbers in ${\bf k}$-space for one spin direction are equal to $N_{(el)}({\sigma},{\bf k})= 1- N_{(h)}(\sigma,{\bf k})$, where the hole occupation numbers $N_{(h)}(\sigma,{\bf k}) \equiv N_{(h)}({\bf k})$ according to (\[m2\]) are determined only by the diagonal GFs (\[s3\]). From the latter equation and (\[s9\]) we get: $$\begin{aligned} N_{(h)}({\bf k}) &=& [Q_1 + (n-1)b({\bf k})]\, {N}_{1}({\bf k}) \nonumber\\ & + & [ Q_2 - (n-1)b({\bf k})]\, {N}_{2}({\bf k}), \nonumber\\ {N}_{1(2)}({\bf k})&=& -\frac{1}{\pi}\,\int^{\infty}_{-\infty} \frac{d\omega}{e^{\omega/T} +1}\, \mbox{Im}\, { G}_{1(2)}({\bf k},\omega) \nonumber\\ & = & \frac{1}{2}+ T \sum_{m=-\infty}^{\infty } \, G_{1(2)}({\bf k}, i\omega_{m}). \label{r12}\end{aligned}$$ ![(Color online) The electronic occupation numbers $N_{\bf k}$ for $\delta = 0.1$ at $T=0.03t$ (upper panel) and at $T=0.3t$ (bottom panel).[]{data-label="figNk1-1"}](fig7acopy.eps "fig:") ![(Color online) The electronic occupation numbers $N_{\bf k}$ for $\delta = 0.1$ at $T=0.03t$ (upper panel) and at $T=0.3t$ (bottom panel).[]{data-label="figNk1-1"}](fig7bcopy.eps "fig:")\ ![(Color online) The electronic occupation numbers $N_{\bf k}$ at $T=0.03t$ for $\delta = 0.3$.[]{data-label="figNk3"}](fig8copy.eps "fig:")\ The electron occupation numbers in a quarter of the BZ $\,(0 < k_x, k_y < \pi )\, $ are shown in Fig. \[figNk1-1\] for $\delta = 0.1$ at low temperature $T = 0.03t$ and at high temperature $T = 0.3t$.
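The Matsubara sum entering the occupation numbers can be checked numerically in the free limit $\Sigma = 0$, where $G_{1(2)} = 1/(i\omega_m - \varepsilon)$ and the standard identity $n_F(\varepsilon) = 1/2 + T\sum_m (i\omega_m - \varepsilon)^{-1}$ (symmetric sum over $\omega_m = \pi T(2m+1)$) holds; a truncated sum reproduces the Fermi function to high accuracy, since the tail falls off as $1/\omega_m^2$. A hedged sketch with illustrative truncation and parameters:

```python
import numpy as np

def occupation(eps, T=0.03, M=20000):
    """Truncated symmetric Matsubara sum for the free GF G = 1/(i w_m - eps)."""
    m = np.arange(-M, M)
    w = np.pi*T*(2*m + 1)
    G = 1.0/(1j*w - eps)
    return 0.5 + T*np.real(np.sum(G))

n_occ = occupation(0.1, T=0.03)             # Matsubara-sum result
fermi = 1.0/(np.exp(0.1/0.03) + 1.0)        # exact Fermi function
```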
With doping, the shape of $N_{\bf k}$ changes, revealing a transition from the hole-like FS to an electron-like one in the overdoped case $\delta = 0.3$, as plotted in Fig. \[figNk3\]. While in the underdoped case at $\delta = 0.1$ the drop of the occupation numbers at the Fermi-level crossing is rather small, $\Delta N_{(el)} \simeq 0.15$, at high temperature $T = 0.3t$ or in the overdoped case at $\delta = 0.3$, when the AF spin correlations are suppressed, the occupation-number drops are substantially larger: $\Delta N_{(el)} \simeq 0.45, \, 0.55$, respectively. Thus, the arc formation and the small change of the electron occupation numbers at the FS crossing at low doping further prove a large contribution of the spin correlations to the renormalization of the QP spectra. The obtained result concerning the “destruction” of the FS caused by the arc formation shown in Fig. \[figF1-2\] and Fig. \[figFA4\] for low doping, which corresponds to large $ \xi $, correlates well with the studies within the generalized DMFT [@Kuchinskii05]. As shown in Fig. 2 in [@Kuchinskii05], the spectral-density intensity plots clearly demonstrate the arc formation on the FS for a large coupling constant $\lambda_{sf} = \Delta = 2t$ and $ \xi = 10$, while the FS determined from (\[r11\]) gives several solutions, as in our Fig. \[figFS4\] for $U_{eff} = 4\, t \,$ in Sect. \[D4\]. Self-energy and kinks {#SE} --------------------- The energy dependence of the real and imaginary parts of the self-energy $\Sigma({\bf k}, \omega)$ for $\delta = 0.1,\, 0.3$ at the $\Gamma(0,0)$, $ S(\pi/2,\pi/2)$ and $ M(\pi,\pi)$ points is shown in Fig. \[figSE-1-3\]. These plots demonstrate a strong dependence of the self-energy on the wave-vector and on the hole concentration.
With doping, the coupling constant substantially decreases, as seen from the decrease of the imaginary part and of the slope of the real part at the FS crossing, which determines the coupling constant $\lambda = - (\partial\,{\rm Re}\tilde{\Sigma}({\bf k}, \omega)/\partial \omega)_{\omega =0}\,$. As shown in Fig. \[figSEn\], the coupling constant in the $\Gamma(0, 0)\rightarrow M(\pi,\pi)$ direction decreases from $\lambda \simeq 7.86$ at $\delta = 0.1\,$ to $\lambda \simeq 3.3$ at $\delta = 0.3$. ![(Color online) Energy dependence of the real and imaginary parts of the self-energy $\Sigma({\bf k}, \omega)$ at the $\,\Gamma(0,0)$, $\, S(\pi/2,\pi/2)\,$ and $\, M(\pi,\pi)\,$ points at $\delta = 0.1$ (upper panel) and $\delta = 0.3$ (bottom panel).[]{data-label="figSE-1-3"}](fig9a.eps "fig:") ![(Color online) Energy dependence of the real and imaginary parts of the self-energy $\Sigma({\bf k}, \omega)$ at the $\,\Gamma(0,0)$, $\, S(\pi/2,\pi/2)\,$ and $\, M(\pi,\pi)\,$ points at $\delta = 0.1$ (upper panel) and $\delta = 0.3$ (bottom panel).[]{data-label="figSE-1-3"}](fig9b.eps "fig:") ![(Color online) ${\rm Re}\Sigma({\bf k}, \omega)$ in the $\Gamma(0, 0)\rightarrow M(\pi,\pi)$ direction at the FS.[]{data-label="figSEn"}](fig10.eps) At large binding energies (greater than the boson energy responsible for the interaction) the self-energy effects vanish and the electron dispersion should return to the bare value, giving a sharp bend, the so-called “kink”, in the electron dispersion. The amplitude of the kink and the energy scale where it occurs are related to the strength of the electron-boson interaction and to the boson energy, respectively. In ARPES the kink is observed as a change of the slope of the intensity plot of the spectral function $ {A}({\bf k}, \omega)$ along a particular ${\bf k}$ direction below the Fermi level, $\omega \leq 0$ (for electrons).
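The extraction of $\lambda$ from the FS slope of ${\rm Re}\,\Sigma$ amounts to a one-point numerical derivative. A sketch using a central finite difference on a toy self-energy ${\rm Re}\,\Sigma(\omega) = -g^2\omega/(\omega^2+\omega_c^2)$, which is not the computed one but merely has the right odd, low-frequency shape ($g$ and $\omega_c$ are illustrative):

```python
def lam_from_slope(re_sigma, dw=1e-4):
    """Coupling constant lambda = -(d ReSigma / d w) at w = 0,
    estimated by a central finite difference."""
    return -(re_sigma(dw) - re_sigma(-dw))/(2*dw)

g, w_c = 1.0, 0.5
re_sigma = lambda w: -g**2*w/(w**2 + w_c**2)   # toy model, odd in w
lam = lam_from_slope(re_sigma)                 # exact slope is g^2/w_c^2
```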
Usually two directions are studied: the nodal $(\Gamma \rightarrow M)$ and the antinodal $(X \rightarrow M)$ ones. Intensity plots for the spectral function $ {A}({\bf k}, \omega)$ at $\delta = 0.1$ are shown in Fig. \[figK1-1\] in the nodal direction (left panel) and the antinodal one (right panel). The same plots at $\delta = 0.3$ are shown in Fig. \[figK1-3\] in the nodal direction (left panel) and $X (\pi, 0) \rightarrow \Gamma(0,0)$ direction (right panel). ![(Color online) Dispersion curves along the symmetry directions $M(\pi,\pi) \rightarrow \Gamma(0, 0) $ (left panel) and $M(\pi,\pi) \rightarrow X (\pi, 0) $ (right panel) in units of $t$ for $\delta = 0.1,\,T = 0.03 t$. Fermi level crossing is shown by vertical dotted line.[]{data-label="figK1-1"}](fig11a.eps "fig:") ![(Color online) Dispersion curves along the symmetry directions $M(\pi,\pi) \rightarrow \Gamma(0, 0) $ (left panel) and $M(\pi,\pi) \rightarrow X (\pi, 0) $ (right panel) in units of $t$ for $\delta = 0.1,\,T = 0.03 t$. Fermi level crossing is shown by vertical dotted line.[]{data-label="figK1-1"}](fig11b.eps "fig:") ![(Color online) The same as Fig. \[figK1-1\] but for $\delta = 0.3$ along the symmetry directions $M(\pi,\pi) \rightarrow \Gamma(0, 0) $(left panel) and $X (\pi, 0) \rightarrow \Gamma(0,0)$ (right panel).[]{data-label="figK1-3"}](fig12a.eps "fig:") ![(Color online) The same as Fig. \[figK1-1\] but for $\delta = 0.3$ along the symmetry directions $M(\pi,\pi) \rightarrow \Gamma(0, 0) $(left panel) and $X (\pi, 0) \rightarrow \Gamma(0,0)$ (right panel).[]{data-label="figK1-3"}](fig12b.eps "fig:") A change of dispersion is clearly seen with increasing binding energy below the FS shown by dotted line. For the underdoped case the kink is larger than for the overdoped one. 
A crude estimation of the strength of the kink from the ratio of the dispersion slope $V_{\rm F }$ close to the FS $(\omega = 0)$ to the slope $V_{\rm F }^{0}$ at large binding energy $(\omega \sim 0.2 t)$, $V_{\rm F }^{0}/V_{\rm F } = (1 +\lambda )$, gives the following values: $(1 +\lambda ) \sim 7.6,\, 3.5$ at $\delta = 0.1$ for the nodal and antinodal directions, respectively. In the overdoped case the nodal value is much smaller, while in the antinodal $X (\pi, 0) \rightarrow \Gamma(0,0)$ direction it is still quite large: $(1 +\lambda ) \sim 2.5$. These estimations are in accord with the evaluation of the coupling constant $ \lambda $ from the slope of the real part of the self-energy discussed above. It is important to stress that in our theory the self-energy effects and the corresponding kinks are induced by the spin-fluctuation spectrum in the form of the continuum (\[r1\]), which at low temperature $T \sim 0.03t \ll \omega_s = 0.4t$ has a large intensity already at small energy $\omega \sim 0.03t $ and decreases slowly up to a high energy $\omega \sim t$. In the spin-fermion model the kink phenomenon is usually explained by the electron interaction with the spin-resonance mode $\Omega_{\rm res} \simeq 40$ meV observed in the superconducting state. This results in a break of the electron dispersion (“kink”) at a certain energy $\omega \sim \Omega_{\rm res} + \Delta_{0}$, where $\Delta_{0}$ is the superconducting gap (see, e.g. [@Eschrig05]). In the normal state considered in our theory the spin-resonance mode is inessential. Its contribution, amounting to only a few percent of the total spin-fluctuation spectrum (\[r2\]), should not change our results, which reveal a rather strong interaction with a smooth energy variation without any specific kink energy.
Dispersion and FS at $\bf U_{eff} = \Delta = 4t$ {#D4} ------------------------------------------------ The effective Coulomb energy in the Hubbard model (\[m1\]) $ U_{eff} = 8t$ results in a large charge-transfer gap $\Delta \simeq 3$ eV for $t = 0.4$ eV even in the overdoped case, Fig. \[figDA1-3\], while experiments point to a smaller value of the order of $1.5 - 2$ eV. To correct this inconsistency, we present in this section the results obtained for a smaller value of $ U_{eff} = \Delta = 4t$. We also take into account the hopping parameter for the n.n.n. $\, \pm 2 a_{x}, \pm 2 a_{y}$ sites and fix the hopping parameters in the model dispersion  (\[m1a\]), as suggested for the effective Hubbard model based on the tight-binding fit to the LDA calculations for La$_2$CuO$_4$ [@Korshunov05], as follows: $\, t' = - 0.13t, \, t'' = 0.16 t$ with $t \simeq 0.7$ eV. ![(Color online) Dispersion curves for $\Delta = 4t$ along the symmetry directions $\Gamma(0, 0)\rightarrow M(\pi,\pi) \rightarrow X (\pi, 0) \rightarrow \Gamma(0, 0)$ at $\delta = 0.05$ (upper panel) and $\delta = 0.3$ (bottom panel).[]{data-label="figD4-05-3"}](fig13a.eps "fig:") ![(Color online) Dispersion curves for $\Delta = 4t$ along the symmetry directions $\Gamma(0, 0)\rightarrow M(\pi,\pi) \rightarrow X (\pi, 0) \rightarrow \Gamma(0, 0)$ at $\delta = 0.05$ (upper panel) and $\delta = 0.3$ (bottom panel).[]{data-label="figD4-05-3"}](fig13b.eps "fig:") The main results for the dispersion and the spectral functions are not changed much in comparison with the previous ones, as shown in Fig. \[figD4-05-3\]. The larger hybridization between the subbands at the smaller value of $ U_{eff}$ results in an increase of the dispersion and of the intensity of the upper one-hole subband. This trend is also seen in the DOS in Fig. \[figDOS4\]. At weak doping the Mott gap between the subbands is observed despite the intermediate Coulomb energy $U_{eff} = 4t$, only half of the bare bandwidth $W \simeq 8t$.
This can be explained by a reduction of the bandwidth caused by strong spin correlations in the underdoped region, down to $\tilde{W} \sim 8| t'| \,$ as discussed in Sect. \[MFA\], below equation (\[n3\]). In the overdoped case at $\delta = 0.3$, when the spin correlations become weak, the gap between the subbands vanishes. ![(Color online) Doping dependence of the DOS for $\Delta = 4t$.[]{data-label="figDOS4"}](fig14.eps) Noticeable changes are observed for the FS shown in Fig. \[figFS4\] and in Fig. \[figFA4\]. In the first plot, where the FS was determined by equation (\[r11\]), we see a large pocket at small doping, $\, \delta = 0.1$, which opens up with increasing doping or temperature. At overdoping, $\delta = 0.3$, the FS transforms into an electron-like one, as in the previous calculations. This transformation is confirmed by calculations of the electron occupation numbers shown in Fig. \[figNk4\]. ![(Color online) Doping dependence of the FS for $\delta = 0.1$ (full line at $T=0.03t$ and dotted line at $T=0.3t$), $\delta = 0.2$ (dashed line), and $\delta = 0.3$ (dot-dashed line) for $\Delta = 4t$.[]{data-label="figFS4"}](fig15.eps) ![(Color online) $A({\bf k},\omega =0)$ on the FS at $\delta = 0.05$ (left panel) and $\delta = 0.1$ (right panel) at $T=0.03t$ for $\Delta = 4t$.[]{data-label="figFA4"}](fig16a.eps "fig:") ![(Color online) $A({\bf k},\omega =0)$ on the FS at $\delta = 0.05$ (left panel) and $\delta = 0.1$ (right panel) at $T=0.03t$ for $\Delta = 4t$.[]{data-label="figFA4"}](fig16b.eps "fig:") It should be noted that the pronounced hole pocket in the new set of model parameters is caused by the $t''$ contribution, which results in a large dispersion in the $(\pi,0) \rightarrow (0, \pi)$ direction $(\propto t''\,(\cos 2 k_x +\cos 2 k_y))$ and was disregarded in the previous set of parameters. A remarkable feature of these results is that the part of the FS close to the $\Gamma(0, 0)$ point in the nodal direction in Fig. 
\[figFS4\] does not shift much with doping (or temperature), being pinned to a large FS as observed in ARPES experiments (see, e.g., [@Kordyuk05]). In fact, only this part of the FS was detected in ARPES experiments where the spectral function $A_{el}({\bf k}, \omega = 0) $ shown in Fig. \[figFA4\] was measured. ![(Color online) The electronic occupation numbers $N_{\bf k}$ at $T=0.03t$ for $\delta = 0.05$ (upper panel) and at $\delta = 0.3$ (bottom panel) for $\Delta = 4t$.[]{data-label="figNk4"}](fig17acopy.eps "fig:") ![(Color online) The electronic occupation numbers $N_{\bf k}$ at $T=0.03t$ for $\delta = 0.05$ (upper panel) and at $\delta = 0.3$ (bottom panel) for $\Delta = 4t$.[]{data-label="figNk4"}](fig17bcopy.eps "fig:") The self-energy effects and kinks are similar to those in the case $\Delta = 8t$ and confirm the strong influence of spin correlations on the QP spectrum renormalization. As shown in Fig. \[figSE4\], the coupling constant $\lambda = - (\partial\,{\rm Re}\tilde{\Sigma}({\bf k}, \omega)/\partial \omega)_{\omega =0}$, which is large at small doping, distinctly decreases at overdoping, $\delta = 0.3$, accompanied by a suppression of the imaginary part of the self-energy. 
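The coupling constant used here is just the zero-frequency slope of ${\rm Re}\,\tilde{\Sigma}$. As a sketch, it can be extracted from tabulated self-energy data by a central finite difference (the sample values below are illustrative, not the computed self-energy):

```python
def coupling_constant(omega, re_sigma):
    """lambda = -(d Re Sigma / d omega) at omega = 0, by central difference.

    omega    -- sorted frequency grid containing the point 0.0
    re_sigma -- Re Sigma tabulated on that grid
    """
    i = omega.index(0.0)
    return -(re_sigma[i + 1] - re_sigma[i - 1]) / (omega[i + 1] - omega[i - 1])

# A linear Re Sigma = -3 * omega near omega = 0 gives lambda = 3.
print(coupling_constant([-0.1, 0.0, 0.1], [0.3, 0.0, -0.3]))  # 3.0
```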
![(Color online) Energy dependence of the real and imaginary parts of the self-energy $\Sigma({\bf k}, \omega)$ for $\,\Delta = 4t$ at the $\Gamma(0,0)$, $\,S(\pi/2,\pi/2)\,$ and $\, M(\pi,\pi)$ points at $\delta = 0.1$ (upper panel) and $\delta = 0.3$ (bottom panel).[]{data-label="figSE4"}](fig18a.eps "fig:") ![(Color online) Energy dependence of the real and imaginary parts of the self-energy $\Sigma({\bf k}, \omega)$ for $\,\Delta = 4t$ at the $\Gamma(0,0)$, $\,S(\pi/2,\pi/2)\,$ and $\, M(\pi,\pi)$ points at $\delta = 0.1$ (upper panel) and $\delta = 0.3$ (bottom panel).[]{data-label="figSE4"}](fig18b.eps "fig:") In conclusion, the alternative set of parameters with a moderate effective Coulomb energy $ U_{eff} = 4t$ in the Hubbard model  (\[m1\]) confirms the important role of AF correlations in the electronic structure of systems with a large single-site Coulomb interaction. Conclusion ========== In the present paper the theory of electronic spectra in the strong correlation limit for the Hubbard model (\[m1\]) in a paramagnetic state has been formulated. By employing the Mori-type projection technique for the thermodynamic GFs in terms of the Hubbard operators, we consistently took into account charge carrier scattering by dynamical spin fluctuations and derived the self-consistent system of equations for the GF (\[s9\]) and the self-energy (\[s8\]) evaluated in the NCA, which neglects the vertex corrections. Though the electron coupling to spin fluctuations in the Hubbard model (\[m1\]) is not weak (it is of the order of the hopping parameter), the vertex corrections should not be as important in this case due to the kinematic restrictions imposed on the spin-fluctuation scattering. As was shown for the $t$-$J$ model [@Liu92], the leading two-loop crossing diagram identically vanishes, while the next three-loop crossing diagram gives a small contribution to the self-energy. 
In any case, the NCA for the self-energy can be considered a starting approximation for a model with strong coupling. As we discussed at the end of Sect. \[system\], the self-consistent system of equations for the self-energy in the classical limit in our approach is similar to the two-particle self-consistent approach (TPSC) [@Vilk95] or the model of short-range static spin (charge) fluctuations [@Kuchinskii06]. Numerical results for the spectral density and the FS in the NCA for the self-energy are quite similar to the studies within the generalized DMFT [@Sadovskii05; @Kuchinskii06] where all diagrams for electron scattering by spin (or charge) fluctuations in the static approximation were taken into account. Our results are also in accord with calculations based on the cluster approximation [@Tremblay06] and the TPSC [@Vilk95]. In the present paper we have not presented a fully self-consistent theory for the single-electron GF and the dynamical spin and charge susceptibilities. This demands rather involved calculations of the collective spin and charge excitation spectra, which are beyond the scope of the present paper. Instead, we have used a model for the dynamical spin susceptibility (\[r1\]) which is usually employed in phenomenological approaches. However, a variation of the electron (hole) interaction with spin fluctuations in our theory is strongly restricted, since the vertex of the interaction is given by the hopping parameters (\[m1a\]) in the Hubbard model, while the intensity of spin fluctuations at the AF wave-vector $\,{\bf Q}\,$ ($\, C(\xi)\,$ in Table \[Table1\]), determined by the AF correlation length $\xi$, is fixed by the sum rule (\[r2\]). As we have checked, a variation of the cut-off energy $\omega_s$ does not noticeably affect the numerical results. The resulting coupling constant $\lambda$ obtained in our calculations (see Sect. \[SE\]) seems to be too large in comparison with ARPES results. 
This discrepancy can be caused by disregarding scattering on charge fluctuations in the dynamical susceptibility model (\[s7\]) and the electron-phonon interaction, which may reduce the contribution from the electron-spin interaction. The main conclusion of the present study is that a decisive role in the renormalization of the electronic spectrum in strongly correlated systems such as cuprate superconductors is played by the electron interaction with spin fluctuations, which is in accord with other studies (e.g., [@Eschrig05; @Tremblay06; @Kuchinskii06]). The numerical results for the electron dispersion in Sect. \[DA\], the FS and the occupation numbers in Sect. \[FS\], and the self-energy in Sect. \[SE\] unambiguously support this conclusion. With increasing doping or temperature, spin correlations are suppressed, which results in a transition from the strong- to the weak-correlation limit. These observations were also confirmed by a consideration of the model with intermediate Coulomb correlations in Sect. \[D4\]. A theory of the superconducting transition within the present approach will be considered elsewhere. One of the authors (N.P.) is grateful to Prof. P. Fulde for the hospitality extended to him during his stay at MPIPKS, Dresden, where a major part of the present work has been done. [99]{} A. Damascelli, Z. Hussain, and Z.-X. Shen, Rev. Mod. Phys. [**75**]{}, 473 (2003). M.V. Sadovskii, Usp. Phys. Nauk [**171**]{}, 539 (2001) \[Physics-Uspekhi [**44**]{}, 515 (2001)\]. M. Eschrig, Advances in Physics [**55**]{}, 47 (2006). P.W. Anderson, Science **235**, 1196 (1987); P.W. Anderson, [*The theory of superconductivity in the high-$T\sb{c}$ cuprates*]{}. Princeton University Press, Princeton (1997). J. Hubbard, Proc. Roy. Soc., [**A 276**]{}, 238 (1963); ibid, [**A 284**]{}, 401 (1964). N. Bulut, [ Advances in Physics]{} [**51**]{}, 1587 (2002). S.G. Ovchinnikov and V.V. Val’kov, [*Hubbard Operators in the Theory of Strongly Correlated Electrons*]{}, Imperial College Press, London, (2004). 
F. Mancini and A. Avella, Advances in Physics [**53**]{}, 537 (2004). A. Georges, G. Kotliar, W. Krauth, and M. Rozenberg, [ Rev. Mod. Phys.]{} [**68**]{}, 13 (1996). G. Kotliar, S. Y.  Savrasov, K. Haule, V.S. Oudovenko, O. Parcollet, and C.A. Marianetti, [ Rev. Mod. Phys.]{} (to be published), cond-mat/0511085. Th. Maier, M. Jarrel, Th. Pruschke, and M.H. Hettler, [ Rev. Mod. Phys.]{} [**77**]{}, 1027 (2005). A-M.S. Tremblay, B. Kyung and D. Sénéchal, Fizika Nizkikh Temperatur (J. Low Temp. Phys.) [**32**]{}, 561 (2006). M.V. Sadovskii, I.A. Nekrasov, E.Z. Kuchinskii, Th. Pruschke, and V.I. Anisimov, Phys. Rev. B [**72**]{}, 155105 (2005). E.Z. Kuchinskii, I.A. Nekrasov, M.V. Sadovskii, Pis’ma v Zh. Exp. Teor. Fiz. [**82**]{}, 217 (2005) \[JETP Letters [**82**]{}, 198 (2005)\]. E.Z. Kuchinskii, I.A. Nekrasov, M.V. Sadovskii, Fizika Nizkikh Temperatur (J. Low Temp. Phys.) [**32**]{}, 528 (2006). S. Krivenko, A. Avella, F. Mancini, and N. Plakida, Physica B [**359-361**]{}, 666 (2005). Y. Kakehashi and P. Fulde, Phys. Rev. B [**70**]{}, 195102 (2004); J. Phys. Soc. Jpn. [**74**]{}, 2397 (2005). D.N. Zubarev, Usp. Fiz. Nauk, **71**, 71 (1960) \[Sov. Phys. Uspekhi **3**, 320 (1960)\]. N.M. Plakida, R. Hayn and J.-L. Richard, [Phys. Rev. B]{} **51**, 16599 (1995). N.M. Plakida and V.S. Oudovenko, [ Phys. Rev. B]{} **59**, 11949 (1999). N.M. Plakida, L. Anton, S. Adam, and Gh. Adam, [ Zh. Exp. Theor. Fyz]{} **124**, 367 (2003), (JETP **97**, 331 (2003)). L.F. Feiner, J.H. Jefferson, and R. Raimondi, [ Phys. Rev. B]{} **53**, 8751 (1996). V.Yu. Yushankhai, V.S. Oudovenko, and R. Hayn, Phys. Rev. B, **55**, 15562 (1997). V.J. Emery, [ Phys. Rev. Lett.,]{} **58**, 2794 (1987); C.M. Varma, S. Schmitt-Rink, and E. Abrahams, Solid State Commun. **62**, 681 (1987). J. Jaklič and P. Prelovśek, Phys. Rev. Lett. [**74**]{}, 3411 (1995); [*ibid.*]{} [**75**]{}, 1340 (1995). J. Bonca, P. Prelovśek, and I. Sega, Europhys. Lett. [**10**]{}, 87 (1989). Y. Vilk and A.-M. Tremblay, J. 
Phys. Chem. Solids (UK) [**56**]{}, 1769 (1995). T. Yoshida, X.J. Zhou, M. Nakamura, et al. Phys. Rev. B [**63**]{} 220501, (2001). A.A. Kordyuk, S.V. Borisenko, A. Koitzsch, J. Fink, M. Knupfer, and H. Berger, Phys. Rev. B [**71**]{}, 214513 (2005). M.M. Korshunov, V.A. Gavrichkov, S.G. Ovchinnikov, I.A.  Nekrasov, Z.V. Pchelkina, and V.I. Anisimov, Phys. Rev. B, **72**, 165104 (2005). D. Sénéchal and A.-M. S. Tremblay, Phys. Rev. Lett. [**92**]{}, 126401 (2004). Z. Liu and E. Manousakis, Phys. Rev. B [**45**]{}, 2425 (1992).
--- abstract: 'We used a one-zone chemical evolution model to address the question of how many masses and metallicities are required in grids of massive stellar models in order to ensure reliable galactic chemical evolution predictions. We used a set of yields that includes seven masses between 13 and $30\,$M$_\odot$, 15 metallicities between 0 and 0.03 in mass fraction, and two different remnant mass prescriptions. We ran several simulations where we sampled subsets of stellar models to explore the impact of different grid resolutions. Stellar yields from low- and intermediate-mass stars and from Type Ia supernovae have been included in our simulations, but with a fixed grid resolution. We compared our results with the stellar abundances observed in the Milky Way for O, Na, Mg, Si, Ca, Ti, and Mn. Our results suggest that the range of metallicity considered is more important than the number of metallicities within that range, which only affects our numerical predictions by about 0.1 dex. We found that our predictions at \[Fe/H\] $\lesssim-2$ are very sensitive to the metallicity range and the mass sampling used for the lowest metallicity included in the set of yields. Variations between results can be as high as 0.8 dex. At higher \[Fe/H\], we found that the required number of masses depends on the element of interest and on the remnant mass prescription. With a monotonic remnant mass prescription where every model explodes as a core-collapse supernova, the mass resolution induces variations of 0.2 dex on average. But with a remnant mass prescription that includes islands of non-explodability, the mass resolution can cause variations of about 0.2 to 0.7 dex depending on the choice of the lower limit of the metallicity range. With such a remnant mass prescription, explosive or non-explosive models can be missed if not enough masses are selected, resulting in over- or under-estimations of the mass ejected by massive stars.' 
author: - | Benoit Côté,$^{1,2,3,10,11}$ Christopher West,$^{4,10}$ Alexander Heger,$^{5,6,10,11}$Christian Ritter,$^{1,10,11}$ Brian W. O’Shea,$^{2,3,7,10}$ Falk Herwig,$^{1,10,11}$Claudia Travaglio$^{8,9,11}$ and Sara Bisterzo$^{8,10,11}$\ $^{1}$Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8W 2Y2, Canada\ $^{2}$Department of Physics and Astronomy, Michigan State University, East Lansing, MI, 48824, USA\ $^{3}$National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI, 48824, USA\ $^{4}$Center for Academic Excellence, Metropolitan State University, St Paul, MN, 55106, USA\ $^{5}$Monash Centre for Astrophysics, Monash University, Melbourne, Victoria, 3800, Australia\ $^{6}$Center for Nuclear Astrophysics, Department of Physics and Astronomy,\ Shanghai Jiao-Tong University, Shanghai, 200240, P. R. China\ $^{7}$Department of Computational Mathematics, Science and Engineering, Michigan State University, East Lansing, MI, 48824, USA\ $^{8}$INAF, Astrophysical Observatory Turin, Strada Osservatorio 20, I-10025, Pino Torinese (Turin), Italy\ $^{9}$B2FH Association, Turin, Italy\ $^{10}$Joint Institute for Nuclear Astrophysics - Center for the Evolution of the Elements, USA\ $^{11}$NuGrid Collaboration, <http://nugridstars.org> date: 'Accepted XXX. Received XXX; in original form XXX' title: Mass and Metallicity Requirement in Stellar Models for Galactic Chemical Evolution Applications --- \[firstpage\] Galaxy: chemical evolution – Stars: supernovae – Stars: yields Introduction ============ Stellar yields are fundamental ingredients in chemical evolution models and simulations. To reproduce the chemical enrichment of galaxies over their entire lifetime, those yields need to include low-mass, intermediate-mass, and massive stars along with a wide range of metallicities, ideally from zero metallicity up to solar composition. 
Several grids of stellar models are available in the literature with different numbers of masses and metallicities. In general, for massive stars, those grids either offer a limited number of masses within a certain range of metallicities (e.g., @ww95 [@pcb98; @cl04; @k06; @p13]) or a large number of masses for one specific metallicity (e.g., @lc06 [@wh07; @hw10; @e12]).\ In this paper we address the question of how many masses and metallicities for massive stars are required in a grid of stellar models to ensure convergence in galactic chemical evolution studies. To do so, we conduct an experiment with a set of yields which has seven masses from 13 to $30\,$M$_\odot$ and 15 metallicities (see Section \[sect\_yields\]), where we sample subsets of stellar models to create new sets of yields with lower mass and metallicity resolutions. We then fold those yields into simple stellar populations (SSPs) and include them in a one-zone chemical evolution model to quantify the impact of the stellar grid resolution on our predictions. For now, we do not consider the impact of massive binary systems, which could significantly alter the evolution and the ejecta of massive stars (e.g., @ddv04 [@s12; @dm13]). Although our sensitivity study only focuses on massive stars, we included the contribution of low- and intermediate-mass stars and Type Ia supernovae (SNe Ia) in our calculations (see Section \[sect\_yields\]).\ It is generally believed that the most massive stars are more likely to form a black hole and lock away most of the heavy elements synthesized during their evolution (e.g., @whw02 [@h03; @zwh08]). This also seems to be supported by the lack of observed high-mass progenitors for common supernovae (e.g., @s09 [@w14]). This sustains the idea that there must be a transition mass above which massive stars stop contributing to the chemical evolution of galaxies (but see discussion in Section \[sect\_yields\]). 
Recent studies with high mass resolutions, however, suggest that such a transition may not exist and that black hole formation is in fact sparsely distributed across the stellar initial mass spectrum, forming islands of non-explodability (e.g., @u12 [@e15; @s15]).\ The yields used in this work for massive stars have been calculated with four different remnant mass and black hole formation prescriptions, which enables us to study the impact of such prescriptions on our grid resolution study. As two extreme cases, we consider the prescription of [@e15] that generates islands of non-explodability, and the *no-cutoff* prescription, which is a monotonic remnant mass distribution where all models explode with minimum fallback (see Figures \[fig\_remnant\_high\] and \[fig\_remnant\_low\]). The remnant mass is the baryonic final mass of a star (i.e., not its gravitational mass) and refers to its initial mass minus the total mass ejected during its lifetime, which includes the explosive ejecta, if any, and stellar winds. In our specific case, any remnant mass larger than $\sim3\,$M$_\odot$ implies black hole formation instead of an explosion. We also assumed that, if a star does not explode, the entire star disappears as a black hole and no supernova yield is produced. Whereas this may be a simplification in some cases, e.g., the formation of long-duration gamma-ray bursts or some types of hypernovae, it should be reasonable to assume that in this case the bulk, and in particular the inner parts of the star, are not ejected. Throughout this paper, we compare our numerical predictions with observations to provide a visual reference to evaluate the importance of the stellar grid resolution in our simulations. We chose the Milky Way because of the large amount of stellar abundances data and because of the wide metallicity range covered by those data. 
Although one-zone models do not capture the complexity of the formation of massive systems such as the Milky Way, we believe they are sufficient, at least as a first order approximation, to address the specific question of the impact of the stellar grid resolution and the remnant mass prescription in the context of galactic chemical evolution. Our results may differ from the ones generated by two-zone and three-zone models (e.g., @f92 [@pfm95]), since all of our stellar ejecta is returned and recycled in a single gas reservoir instead of being distributed within the different galactic structures, such as the halo and the thick and thin discs. It is not our goal to produce the most realistic model of the Milky Way. More sophisticated simulations for our Galaxy can be found in the literature (e.g., @cmr01 [@t04; @kn11; @mmr13; @m13; @mcgg15; @shen15; @vv15; @w15]). ![Remnant mass as a function of stellar initial mass for the eight highest metallicities available in the yields described in Section \[sect\_yields\], using the [@e15] (upper and middle panels) and the no-cutoff (lower panel) prescriptions. Remnant masses larger than $\sim3\,$M$_\odot$ imply the formation of a black hole. $[Z]=\mathrm{log}_{10}(Z/Z_\odot)$ where $Z_\odot=0.0153$.[]{data-label="fig_remnant_high"}](fig_1.eps){width="3.6in"} This paper is organized as follows. In Section \[sect\_gal\_model\], we describe our chemical evolution code and input physics. We describe our stellar abundances data selection in Section \[sect\_sad\]. The impact of the mass and metallicity resolutions is presented in Section \[sec\_y\_mrm\] for stellar yields with monotonic remnant masses, and in Section \[sect\_y\_ine\] for stellar yields with islands of non-explodability. We summarize our results and give our conclusions in Section \[sect\_s\_c\]. 
![Same as Figure \[fig\_remnant\_high\] but for the 7 lowest metallicities available in the yields described in Section \[sect\_yields\].[]{data-label="fig_remnant_low"}](fig_2.eps){width="3.5in"} Galaxy Model {#sect_gal_model} ============ We use OMEGA, our One-zone Model for the Evolution of GAlaxies code (@c16), to follow the chemical evolution of the Milky Way. The input parameters of the closed-box version and the treatment of SSPs using SYGMA, which stands for Stellar Yields for Galactic Modeling Applications (C. Ritter et al., in prep.), are described in detail in [@c15]. For the present study, we consider an open box model that includes inflows of primordial gas and galactic outflows. All of our codes are available online with the NuGrid NuPyCEE package[^1]. Open Box Model {#sect_obm} -------------- OMEGA uses the classical equations of single-zone chemical evolution models (@p09). At a certain time, $t$, the mass of the gas reservoir, $M_{\mathrm{gas}}$, is calculated by $$M_{\mathrm{gas}}(t+\Delta t)=M_{\mathrm{gas}}(t) + \Big[\dot{M}_{\mathrm{in}}(t) - \dot{M}_{\mathrm{out}}(t) +\dot{M}_{\mathrm{ej}}(t) - \dot{M}_{\mathrm{\star}}(t)\Big]\Delta t, \label{eq_main}$$ where $\Delta t$ is the length of the timestep and the four terms in brackets are, from left to right, the inflow rate, the outflow rate, the stellar mass loss rate of all the SSPs, and the star formation rate. The mass ejected by each SSP is calculated using the initial mass function of [@c03] and the stellar yields described in Section \[sect\_yields\]. We use a star formation history that has a shape similar to the one derived from the two-infall Milky Way model of [@cmg97; @cmr01], which we normalized to produce a current stellar mass of $5\times10^{10}\,$M$_\odot$ (@f06 [@mcm11; @bv13; @ln15]) at the end of our simulations. 
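One timestep of equation (\[eq\_main\]) is a single explicit update of the gas reservoir; a minimal sketch (the rates below are placeholders of roughly Milky Way order, not OMEGA's internal values):

```python
def step_gas_mass(m_gas, mdot_in, mdot_out, mdot_ej, mdot_star, dt):
    """Advance the one-zone gas reservoir by one timestep.

    Masses in M_sun, rates in M_sun/yr, dt in yr: inflows and stellar
    ejecta add gas; outflows and star formation remove it.
    """
    return m_gas + (mdot_in - mdot_out + mdot_ej - mdot_star) * dt

# Placeholder rates of roughly Milky Way order (see the final-properties table).
m_gas = step_gas_mass(1.0e10, mdot_in=1.4, mdot_out=0.6,
                      mdot_ej=1.0, mdot_star=2.55, dt=1.0e6)
print(m_gas)  # 9999250000.0
```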
Outflow Rate ------------ The evolution of the galactic outflow rate is defined by (@mqt05) $$\dot{M}_{\mathrm{out}}(t) = \eta(t) \dot{M}_{\mathrm{\star}}(t).$$ To calculate the time evolution of $\eta$, the mass-loading factor, we use the *MA* prescription described in [@c16], $$\label{eq_eta_z} \eta(z)\propto M_\mathrm{vir}(z)^{-\gamma/3}(1+z)^{-\gamma/2},$$ where $M_\mathrm{vir}$ is the total virial mass of the system. The redshift, $z$, is converted into time using the cosmological parameters measured in [@d09], assuming the end of our simulations represents $z=0$. We set $\gamma$ to unity to consider outflows driven by radiative pressure (see @mqt05). We assume the dark matter halo mass of the Milky Way follows the equations derived by [@fak10], which represent the average dark matter accretion rates extracted from the Millennium simulations (@s05 [@bk09]). In each series of simulations, we tune the final value of the mass-loading factor to ensure that the peak of the predicted metallicity distribution function occurs at \[Fe/H\] $\sim0$. Mass of Gas and Inflow Rate --------------------------- At every time $t$, we assume the star formation follows the Kennicutt-Schmidt law (@s59 [@k98]) in the adapted form of (@baugh06 [@sd15]) $$\label{eq_SF_law} \dot{M}_\mathrm\star(t)=f_\star\frac{M_\mathrm{gas}(t)}{\tau_\mathrm\star(t)},$$ where $f_\mathrm\star$ and $\tau_\mathrm\star$ are respectively the star formation efficiency and the star formation timescale. Because $\dot{M}_\star$ is a known quantity in our code, we can invert equation (\[eq\_SF\_law\]) and derive the evolution of the mass of gas as a function of time. The inflow rate, the only unknown in equation (\[eq\_main\]), can then be isolated and calculated for each timestep. This approach has been used in previous works to calculate the chemical evolution of local dwarf spheroidal galaxies (@fggl06 [@g07; @h15]). 
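The bookkeeping described above can be sketched in two steps: invert the star formation law to get the gas mass, then isolate the inflow rate from the mass balance of equation (\[eq\_main\]) (the numbers are illustrative, not the code's internals):

```python
def gas_mass_from_sfr(sfr, f_star, tau_star):
    """Invert the Kennicutt-Schmidt form: M_gas = SFR * tau_star / f_star."""
    return sfr * tau_star / f_star

def inflow_rate(m_gas_now, m_gas_next, mdot_out, mdot_ej, mdot_star, dt):
    """Isolate Mdot_in from the one-zone mass balance over one timestep."""
    return (m_gas_next - m_gas_now) / dt + mdot_out - mdot_ej + mdot_star

# A constant gas mass between two steps leaves the inflow balancing
# outflows plus net star formation (all rates in M_sun/yr).
m_gas = gas_mass_from_sfr(sfr=2.0, f_star=0.04, tau_star=2.0e8)  # 1e10 M_sun
print(inflow_rate(m_gas, m_gas, mdot_out=0.6, mdot_ej=1.0,
                  mdot_star=2.0, dt=1.0e6))  # 1.6
```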
Star Formation Efficiency and Timescale {#sect_sfet} --------------------------------------- We assume that the star formation timescale is proportional to the dynamical timescale, $\tau_\mathrm{dyn}$, of the virialized system hosting the galaxy (e.g. @kcdw99 [@clbf00; @swtk01]), and is defined by $\tau_\star = f_\mathrm{dyn} \tau_\mathrm{dyn}\approx~f_\mathrm{dyn}R_\mathrm{vir}/V_\mathrm{vir}$, where $f_\mathrm{dyn}$ is the proportionality constant and $R_\mathrm{vir}$ and $V_\mathrm{vir}$ are respectively the virial radius and the circular velocity of the system. With the relation for $R_\mathrm{vir}$ defined in [@wf91], $$R_\mathrm{vir}=0.1H_0^{-1}(1+z)^{-3/2}V_\mathrm{vir},$$ where $H_0$ is the current Hubble constant, the dynamical timescale is then given by $$\label{eq_tau_z} \tau_\mathrm{dyn}=0.1H_0^{-1}(1+z)^{-3/2}\;.$$ With our set of equations, the $f_\mathrm\star/f_\mathrm{dyn}$ ratio is used to control the initial and final mass of gas in our simulations. The initial mass of gas sets the speed and the concentration of the early enrichment and therefore the metallicity at which SNe Ia start to contribute to the chemical evolution, whereas the final mass of gas sets the final metallicity and the fraction of gas converted into stars. We fixed $f_\star/f_\mathrm{dyn}$ to $0.4$ so that the final mass of gas in our simulated galaxy is $\sim10^{10}$M$_\odot$, consistent with the current state of the Milky Way (see Table 1 in @kpa15). This represents a star formation efficiency of 0.04 when $f_\mathrm{dyn}=0.1$. This choice, however, implies a relatively low initial mass of gas and generates a fast early enrichment that pushes the appearance of SNe Ia up to \[Fe/H\] of $\sim-0.5$, which is too high compared to the canonical value of $-1.0$ constrained by observations (see @mg86 [@cmr01]). 
To solve this issue, we introduced a free parameter, $\mu$, in the exponent of the redshift dependency term of equation (\[eq\_tau\_z\]) so that the star formation timescale is now described by $$\label{eq_t_star_mu} \tau_\star=0.1f_\mathrm{dyn}H_0^{-1}(1+z)^{-3\mu/2}\;.$$ This assumes that the gas fraction in our galaxy model does not necessarily scale linearly with the dynamical timescale of the virialized system. It allows us to control the growth of the gas content and to tune the initial mass of gas independently of the final mass of gas to make sure that SNe Ia occur at \[Fe/H\] $\sim-1$. The value of $\mu$ depends on the choice of stellar yields and the amount of Fe ejected by massive stars. We recall that our one-zone model is mostly designed to mimic the evolution of known galaxies rather than to study how that evolution is driven. Although the $\mu$ parameter has been introduced for fine-tuning, we believe it is necessary in order to recover the global properties of the Milky Way with our simple model. It allows us to apply our chemical evolution calculations on top of a reasonable gas evolution pattern. ![Remnant mass as a function of stellar initial mass and metallicity for the [@e15] prescription. Yellow stars and black dots represent explosive and non-explosive models, respectively.[]{data-label="fig_2D_remnant"}](fig_3.png){width="3.7in"} Stellar Yields {#sect_yields} -------------- Using the KEPLER stellar evolution, nucleosynthesis, and supernova code (@wzw78 [@r02]), we computed a set of non-rotating stellar models and their nucleosynthesis yields with seven different initial masses (see Table \[tab\_sample\]) and for 15 different metallicities of \[$Z$\] $=$ 0.3, 0.2, 0.1, 0.0, $-0.2$, $-0.4$, $-0.6$, $-0.8$, $-1.0$, $-1.5$, $-2.0$, $-2.5$, $-3.0$, $-4.0$, and $Z=0$, where $[Z]=\mathrm{log}_{10}(Z/Z_\odot)$ and $Z_\odot=0.0153$. 
In this work, however, we do not include \[$Z$\] $=$ 0.3 since our numerical predictions do not reach such a high metallicity. Initial abundances were taken from the galactic chemical history model of [@wh13]. During the hydrostatic burning phases, mass loss is treated using the [@ndj90] rate, taking into account the metallicity dependence with a power law of exponent 0.5. All models were exploded using a flat explosion energy of 1.2 B (1 B $= 10^{51}\,$erg) for the final kinetic energy of the ejecta, owing to the lack of predictive power of the current best supernova explosion models for the explosion energy in the mass range studied here. Supernova fallback was obtained self-consistently using the 1D hydrodynamics of KEPLER. We also assume a *standard* amount of mixing during the supernova explosion, which was adjusted to match supernova light curves (@r02). Details of this grid will be published in West & Heger (in prep.); the $Z=0$ models are from [@hw10].

  Nomenclature             Sample from the original grid
  ------------------------ -------------------------------------------------------------------------------------
  **Mass \[M$_\odot$\]**   
  7 M                      All masses (13, 15, 17, 20, 22, 25, 30)
  4 M A                    13, 15, 20, 25
  4 M B                    13, 17, 22, 30
  **Metallicity**          
  14 Z                     All $[Z]$ except 0.3 (see Figures \[fig\_remnant\_high\] and \[fig\_remnant\_low\])
  6 Z                      $Z=0.0$, $[Z]=-2$, $-1$, $-0.4$, $-0.2$, $0.1$
  5 Z                      $[Z]=-2$, $-1$, $-0.4$, $-0.2$, $0.1$

  : Mass and metallicity samples extracted from the original set of yields described in Section \[sect\_yields\]. The nomenclature is used in all the figures presented in this study. \[tab\_sample\]
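The bracket metallicities quoted above convert to absolute mass fractions through $Z = Z_\odot\,10^{[Z]}$; a small sketch of the grid (the variable names are ours, not the KEPLER setup's):

```python
Z_SUN = 0.0153  # solar metallicity mass fraction adopted in this work

def z_from_bracket(z_bracket):
    """Convert [Z] = log10(Z / Z_sun) to a mass fraction Z."""
    return Z_SUN * 10.0 ** z_bracket

# The 13 non-zero grid points used here ([Z] = 0.3 is excluded); Z = 0 is
# appended separately since it has no bracket representation.
brackets = [0.2, 0.1, 0.0, -0.2, -0.4, -0.6, -0.8,
            -1.0, -1.5, -2.0, -2.5, -3.0, -4.0]
grid = [z_from_bracket(b) for b in brackets] + [0.0]
print(z_from_bracket(0.0))  # 0.0153
```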
  ----------------------------------------------------------------------------------------- ------------- ----------- --------
                                                                                             Milky Way     No cutoff   [@e15]
                                                                                             [@kpa15]                  
  **Parameters**                                                                                                       
  $\eta$, mass-loading factor (see equation \[eq\_eta\_z\])                                  —             0.25        0.0
  $f_\mathrm\star/f_\mathrm{dyn}$, star formation efficiency (see Section \[sect\_sfet\])    —             0.4         0.4
  $\mu$, growth of gas content (see equation \[eq\_t\_star\_mu\])                            —             0.3         0.7
  **Final properties**                                                                                                 
  Stellar mass$^a$ \[$10^{10}$ M$_\odot$\]                                                   3.0 - 4.0     5.0         5.0
  Gas mass \[$10^9$ M$_\odot$\]                                                              $8.1\pm4.5$   9.3         9.3
  Star formation rate \[M$_\odot$ yr$^{-1}$\]                                                0.65 - 3      2.55        2.55
  Infall rate \[M$_\odot$ yr$^{-1}$\]                                                        0.6 - 1.6     1.4         1.2
  Core-collapse SN rate \[per 100 yr\]                                                       $2\pm1$       2.6         2.6
  Type Ia SN rate \[per 100 yr\]                                                             $0.4\pm0.2$   0.4         0.4
  ----------------------------------------------------------------------------------------- ------------- ----------- --------

  \[tab\_final\_prop\]

We then employed different criteria to determine whether a successful explosion would actually occur, based on different criteria in the literature for the *explodability* given the pre-supernova structure of the star at the onset of core collapse. The first simple case was to assume all stars explode. In this case, the entire non-fallback mass of all stars, including winds, contributes to the yields. Next we explored different prescriptions for explodability based on formulae readily available in the literature. We used the compactness parameter of [@oo11], $$\xi_M=\left.\frac{M/\mathrm{M}_\odot}{R(M_\mathrm{bary}=M)/1000\,\mathrm{km}}\right|_{t=\mathrm{bounce}}\;,$$ with $M=2.5\,\mathrm{M}_\odot$ and cut-off values of $0.25$ as suggested by [@oo11] and $0.45$ as suggested by [@s14], and the prescription by [@e15] with the normalization to Model s19.8. When the criterion for black hole formation was fulfilled, we assumed the entire star would collapse to a black hole instead of producing supernova nucleosynthesis. 
Only the contribution from mass loss due to winds prior to collapse would be present in this case. In cases where no black hole is formed, the full yields as described at the beginning of this paragraph would be used.\
As we found that the prescription by [@e15] is the most extreme in the sense that it makes the most black holes (see Figure \[fig\_2D\_remnant\]), we only use it here for comparison. In the following, we skip the models using the more dated prescription of [@oo11] for the sake of clarity of the discussion. This prescription produced only intermediate results compared to the two extreme cases considered in the present study. We combined these massive star yields with the low- and intermediate-mass models calculated by NuGrid (@p13; C. Ritter et al. in prep.), which are available online[^2]. Although we do not focus on elements significantly produced by those lower-mass models (e.g., carbon), we decided to include them to account for their contribution to the amount of hydrogen returned to the gas reservoir. The ejecta coming from SNe Ia are calculated with the yields of [@tny86], assuming a delay-time distribution function in the form of a power law with an index of $-1$ (see @mmn14). We refer to [@c15] for more information about the treatment of stellar yields in our chemical evolution code. We deliberately do not consider the mass ejected by stars more massive than $30\,$M$_\odot$. Some of the yields used in our work possess islands of non-explodability, and we did not want to complement our set of yields with other models that do not show this feature, as they could bias and hide the importance of those islands in our analysis. However, such massive stars can eject a significant amount of light elements during their pre-supernova evolution in the form of winds or eruptions (e.g., @h07 [@cl13]) and can therefore contribute to the chemical evolution, in spite of the slope of the stellar initial mass function.
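The $t^{-1}$ delay-time distribution mentioned above can be sketched as follows. This is an assumed toy form, not the paper's implementation; the onset delay, cutoff time, and SN Ia number per unit stellar mass are invented illustration values.

```python
# Toy SN Ia delay-time distribution DTD(t) ∝ t^(-1), normalized so that the
# integral between T_MIN and T_MAX equals an assumed number of SNe Ia per
# solar mass of stars formed. All numbers are hypothetical.
import math

T_MIN, T_MAX = 4.0e7, 1.3e10       # yr: assumed onset delay and upper cutoff
N_IA_PER_MSUN = 1.0e-3             # assumed SNe Ia per Msun of stars formed

# For DTD(t) = A / t, the integral is A * ln(T_MAX / T_MIN), so:
A = N_IA_PER_MSUN / math.log(T_MAX / T_MIN)

def dtd(t):
    """SNe Ia per yr per Msun formed, at delay t after a burst of star formation."""
    return A / t if T_MIN <= t <= T_MAX else 0.0

# Numerical check: integrating over the full delay range recovers N_IA_PER_MSUN.
steps = 10000
dlog = (math.log(T_MAX) - math.log(T_MIN)) / steps
total = sum(dtd(math.exp(math.log(T_MIN) + i * dlog))
            * math.exp(math.log(T_MIN) + i * dlog) for i in range(steps)) * dlog
assert abs(total - N_IA_PER_MSUN) / N_IA_PER_MSUN < 1e-3
```

Convolving this DTD with the star formation history then yields the SN Ia rate at any time.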
The predictions for the mass loss of the most massive stars are, however, rather uncertain (e.g., @h03) and possibly significantly affected by binary star evolution. For that reason, our numerical predictions probably underestimate the abundances of certain elements such as O and Mg. For the purpose of this paper, however, this is not a limitation, because we are only interested in the differential changes due to assumptions in the mass and metallicity resolution of the yield grid.

Stellar Abundances Data {#sect_sad}
=======================

To provide a visual reference for the impact of the grid resolution of stellar yields, we compare our results with the stellar abundances observed in the Milky Way for O, Na, Mg, Si, Ca, Ti, and Mn. The data have been plotted using the STELLAB module, which stands for STELLar ABundances. This python code is also available online with the NuGrid NuPyCEE package. It uses a stellar abundances database and can plot any abundance ratio for the Milky Way, Sculptor, Carina, Fornax, and the Large Magellanic Cloud. It should be stressed that the database is not curated and for now consists of a collection of data that has been blindly taken from the literature. For this work, however, we only took a sample of the entire STELLAB Milky Way database to provide a cleaner and more representative view of the global chemical evolution trends. Although our data selection includes the Galactic halo and the thick and thin discs, we remind the reader that we use a one-zone model and do not consider those three components independently. In our sample, there is no star duplication at \[Fe/H\] above $-2$, as all data in this metallicity range come from only one source, which is either [@bfo14] for O, Na, Mg, Si, Ca, and Ti, or [@bb15] for Mn. At lower \[Fe/H\], the stellar abundances mostly come from [@c04] and [@c13] and may contain star duplications, except for O, for which the data only come from [@c04].
There is a lack of data around \[Fe/H\] $\sim-2$ in our selection. We could have complemented our selection with other studies like [@i12; @i13], but we decided to limit the amount of data to improve the clarity of our figures and to make the reading of our numerical predictions easier. This data gap should then be considered as a selection bias. At low \[Fe/H\], O abundances have been derived using the \[O I\] $\lambda$6300 line along with a correction for 3D effects, while at high \[Fe/H\], O abundances have been derived using the O I $\lambda$7774 triplet along with a correction for NLTE effects. We excluded carbon-enhanced metal-poor stars from the [@c13] dataset to better isolate the global chemical evolution trends, which represent the best target for one-zone models. Na abundances can be significantly affected by departures from LTE, especially at low \[Fe/H\] (e.g., @j15). Because [@c13] did not include NLTE effects for Na, we replaced those data with the NLTE-corrected Na abundances of [@r14]. All data and numerical predictions presented in the following figures are normalized to the solar abundances found in [@a09].

![Predicted metallicity distribution function generated using stellar yields with the no-cutoff (upper panel) and the [@e15] (lower panel) remnant mass prescriptions. The solid line represents the raw output extracted from our one-zone model, while the dashed and dot-dashed lines represent a convolution between the raw output and Gaussian functions with a standard deviation of 0.1 and 0.2, respectively. The blue histogram has been extracted from the APOGEE R12 dataset for the Milky Way.[]{data-label="MDF"}](fig_4.png){width="3.54in"}

![image](fig_5.png){width="6.9in"}

![Impact of the metallicity resolution for different mass resolutions on the predicted evolution of O and Na, relative to Mg, as a function of \[Mg/H\] using the no-cutoff prescription for the remnant mass of massive stars.
The lines and the observational data are the same as in Figure \[X\_Fe\_no\_Z\_res\].[]{data-label="O_Na_Mg_no_Z_res"}](fig_6.png){width="3.54in"}

Yields with Monotonic Remnant Masses {#sec_y_mrm}
====================================

In this section, we explore the impact of different mass and metallicity resolutions on galactic chemical evolution predictions, using the no-cutoff prescription for the remnant mass of massive stars. Throughout the following figures, we use the nomenclature defined in Table \[tab\_sample\] to label which stellar models were sampled from the original grid. As mentioned in Section \[sect\_yields\], the stellar yields at $[Z]=0.3$ are not considered, since the final metallicity in our numerical predictions typically does not exceed \[Fe/H\]$\sim0.2$. With the *5 Z* and *6 Z* samples, we use the yields at $[Z]=0.1$ when the composition of the galactic gas exceeds this metallicity. The adopted values for some of our key parameters, as well as the final properties of our Milky Way models, are given in Table \[tab\_final\_prop\]. Our predicted metallicity distribution function (MDF), for the present choice of yields, is shown in the upper panel of Figure \[MDF\] and compared with the MDF we extracted from the APOGEE R12 dataset (@h_ap15 [@s_ap15; @g_ap16]). We chose APOGEE to maximize the statistics without having star duplication. To broaden our MDF, we convolved it with Gaussian functions using different values for the standard deviation parameter, $\sigma$. In our case, this serves to mimic non-uniform mixing and stochastic processes that are not yet included in our one-zone model. This convolution process has been used before (e.g., @pg12 [@cote13; @pilk13]) and shows that, with additional scatter, our predicted MDF could be in reasonable agreement with observations.
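The broadening step described above can be sketched in a few lines. This is an assumed approach, not the authors' code; the toy MDF and grid below are invented for illustration, and only the two $\sigma$ values (0.1 and 0.2 dex) come from the text.

```python
# Minimal sketch: broadening a raw metallicity distribution function (MDF)
# by convolving it with a normalized Gaussian kernel, mimicking unresolved
# mixing and stochastic scatter in a one-zone model.
import numpy as np

def broaden_mdf(feh_grid, mdf, sigma):
    """Convolve an MDF sampled on a uniform [Fe/H] grid with a Gaussian of
    standard deviation sigma (dex); the normalized kernel preserves the
    total number of stars as long as the MDF sits well inside the grid."""
    d = feh_grid[1] - feh_grid[0]
    half = int(np.ceil(4 * sigma / d))          # kernel support out to 4 sigma
    x = np.arange(-half, half + 1) * d
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(mdf, kernel, mode="same")

feh = np.linspace(-4.0, 0.5, 451)                     # uniform 0.01 dex grid
raw = np.where(np.abs(feh + 2.0) < 0.05, 1.0, 0.0)    # invented narrow toy MDF
for sigma in (0.1, 0.2):                              # the two sigmas of Fig. [MDF]
    smooth = broaden_mdf(feh, raw, sigma)
    assert np.isclose(smooth.sum(), raw.sum())        # normalization preserved
```

Larger $\sigma$ spreads the same stellar mass over a wider \[Fe/H\] range, lowering the peak without changing the normalization.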
But our narrow raw MDF (solid line in Figure \[MDF\]) implies that our model currently does not capture the full complexity of the formation of the Milky Way, even if it reproduces its current global properties (see Table \[tab\_final\_prop\]). In particular, our model is probably not suited to reproduce the early evolution of the Galactic halo. But given the purpose of the present study, we still start our simulations with primordial gas in order to cover the metallicity range included in our stellar yields.

![image](fig_7.png){width="6.9in"}

Mass and Metallicity Resolutions
--------------------------------

Figure \[X\_Fe\_no\_Z\_res\] presents our predictions for six elements, relative to Fe, against the stellar abundances observed in the Milky Way. For a given colour, the different line styles illustrate the impact of using different mass samplings and resolutions. For a given line style, the red and blue lines illustrate the impact of using different metallicity resolutions. With all 14 metallicities sampled (blue lines), this figure shows that different mass samplings produce different results when the number of masses is reduced to four (dashed and dot-dashed lines). At \[Fe/H\] $\lesssim-2$, with the stellar yields used in this work, different selections of masses generate variations of about $0.2-0.3$ dex for O, Mg, and Ti, and $0.5-0.7$ dex for Si and Ca. At higher \[Fe/H\], our predictions are less sensitive to the selection of masses, as variations are generally found within 0.1 dex. Within this \[Fe/H\] range, Ti and Mn are relatively insensitive to the mass resolution. The variations seen at low \[Fe/H\] also highlight the impact of the mass range considered in the yields. Mass Samplings A and B (see Table \[tab\_sample\]) include a maximum stellar mass of 25 and 30M$_\odot$, respectively. Those massive star models, at the lowest metallicity included in the yields, are the first to enrich the galactic gas at early times.
Below \[Fe/H\] $\sim-3$, the predictions generated by Mass Samplings A and B (dashed and dot-dashed lines in Figure \[X\_Fe\_no\_Z\_res\]) therefore represent, respectively, the ejecta of the 25 and the 30M$_\odot$ models. In fact, we made a test where we modified Mass Sampling A to include both the 25 and the 30M$_\odot$ models. In that case, for \[Fe/H\] $\lesssim-3$, the dashed lines in Figure \[X\_Fe\_no\_Z\_res\] became similar to the solid lines where all seven masses are sampled. However, above \[Fe/H\] $\sim-3$, the variations between the different mass resolutions and samplings remained unchanged. This shows how sensitive our numerical predictions at early times are to the selection of the first stellar models that participate in the enrichment process.

![Impact of the metallicity range for different mass resolutions on the predicted evolution of O and Na, relative to Mg, as a function of \[Mg/H\] using the no-cutoff prescription for the remnant mass of massive stars. The lines and the observational data are the same as in Figure \[X\_Fe\_no\_Z\_res\].[]{data-label="fig_O_Na_Mg_no"}](fig_8.png){width="3.54in"}

Reducing the number of metallicities does not produce any significant change in our numerical predictions, as shown by the red and blue lines in Figure \[X\_Fe\_no\_Z\_res\]. Figure \[O\_Na\_Mg\_no\_Z\_res\] presents an analogue of Figure \[X\_Fe\_no\_Z\_res\], but for the evolution of O and Na relative to Mg, as these three elements are all mainly produced during the pre-supernova phases. These predictions also converge toward the idea that the mass resolution and sampling are more important than the metallicity resolution. Indeed, in the case of Na, the different mass samplings generate more than 0.5 dex of variations at almost every \[Mg/H\] value, whereas reducing the number of metallicities only produces variations of about 0.1 dex.
We note that the variations seen for elements that do not match observations, such as Na and Ti, may not be representative, since we know stellar yields need to be examined in more detail. However, we still decided to show these elements to highlight where improvements are needed in our stellar models.

![image](fig_9.png){width="6.9in"}

Mass Resolution and Metallicity Range
-------------------------------------

Figure \[X\_Fe\_no\] shows the impact of using $[Z]=-2$ instead of $Z=0$ as the lower boundary of the metallicity range covered by our set of yields. Because of the wide range of considered metallicities, our interpolation of yields between the sampled metallicities is done in log space. This, unfortunately, prevents us from including $Z=0$ in the interpolation, as its logarithm is not finite. Therefore, when the zero-metallicity yields are included, we only use them for stars formed in primordial gas, and switch to $[Z]=-2$ as soon as the gas gets enriched by the first stellar ejecta. In the case without the zero-metallicity yields, we use the $[Z]=-2$ yields all the way until the metallicity of the gas actually reaches $[Z]=-2$, above which we interpolate between metallicities.\
When excluding the zero-metallicity yields (black lines), the choice of the mass sampling generally has the biggest impact at \[Fe/H\] $\sim-2$, as opposed to \[Fe/H\] $\lesssim-3.5$ in the case where the zero-metallicity yields are included (red lines). This suggests that the lowest metallicity available in the yields plays a dominant role in the numerical predictions at low \[Fe/H\]. As a matter of fact, depending on the mass sampling, including or not the zero-metallicity yields generally produces different results that do not converge before reaching \[Fe/H\] of $\sim-2$. The same situation occurs when looking at the evolution of O and Na as a function of \[Mg/H\] (Figure \[fig\_O\_Na\_Mg\_no\]).
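The interpolation logic described above can be sketched as follows. This is a simplified assumption of the scheme, not the paper's code: the yield table is a toy scalar per metallicity, and the metallicity values are invented placeholders for the grid points.

```python
# Sketch of log-space metallicity interpolation of yields, with the special
# handling of Z = 0 described in the text: primordial gas uses the
# zero-metallicity table directly, and the lowest non-zero table is used
# until the gas metallicity reaches it. Toy numbers throughout.
import math

Z_TABLE = {0.0: 0.10, 1.4e-4: 0.20, 1.4e-3: 0.35, 1.4e-2: 0.50}  # Z -> toy yield

def yield_at(z_gas):
    zs = sorted(z for z in Z_TABLE if z > 0.0)
    if z_gas == 0.0:
        return Z_TABLE[0.0]       # primordial gas: zero-metallicity yields
    if z_gas <= zs[0]:
        return Z_TABLE[zs[0]]     # below lowest non-zero table: no interpolation
    if z_gas >= zs[-1]:
        return Z_TABLE[zs[-1]]
    # linear interpolation in log10(Z) between the bracketing tables
    for lo, hi in zip(zs, zs[1:]):
        if lo <= z_gas <= hi:
            t = (math.log10(z_gas) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))
            return (1 - t) * Z_TABLE[lo] + t * Z_TABLE[hi]
```

Because $\log_{10} 0$ is undefined, $Z=0$ can only enter as the explicit primordial branch, never as an interpolation endpoint, which is exactly the constraint discussed in the text.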
The lowest \[Fe/H\] and \[Mg/H\] values of our numerical predictions correspond to the first timestep that includes chemical enrichment, and depend on the amount of Fe and Mg ejected by the most massive stellar model sampled in the set of yields, at the lowest metallicity. When excluding the zero-metallicity yields (black lines), the relatively small variations at \[Fe/H\] $\lesssim-3$, especially for O, Mg, and Si, are due to a similarity in the ejecta composition of the 25 and 30M$_\odot$ models at $[Z]=-2$, which is not the case for the models at $Z=0$.

![image](fig_10.png){width="6.9in"}

![image](fig_11.png){width="6.9in"}

Number of Masses in NuGrid Yields
---------------------------------

NuGrid stellar yields currently include five metallicities ($Z=0.02$, 0.01, 0.006, 0.001, and 0.0001) and four models per metallicity for massive stars ($M=12$, 15, 20, and $25\,$M$_\odot$). The question that ignited the present study was whether NuGrid should add more masses to their set of yields. To answer that question, we assumed that the current state of NuGrid could be represented by the Mass Sampling A - 5 Z sample (see Table \[tab\_sample\]) with the no-cutoff remnant mass prescription, as NuGrid does not currently consider islands of non-explodability. Figure \[X\_Fe\_NuGrid\] illustrates what would happen if seven masses were used instead of four. For this comparison, we did not start our simulations with a primordial composition. As shown in the previous sections, numerical predictions are significantly affected by the choice of stellar yields associated with the first stellar ejecta. To eliminate this complication, and to focus only on the current metallicity range covered by NuGrid, we started our simulations with the gas composition calculated in [@wh13] for $Z=0.000153$. Given this configuration, we conclude from Figure \[X\_Fe\_NuGrid\] that adding more masses is not a major concern for NuGrid.
Still, the ideal case would be to provide a finer grid that does not produce variations when a few models are removed from the set of yields.

Yields with Islands of Non-Explodability {#sect_y_ine}
========================================

In this section we repeat the experiment made in the previous section, but using the set of yields generated with the remnant mass prescription of [@e15], which includes islands of non-explodability. The adopted input parameters and our predicted MDF, for this choice of yields, are presented in Table \[tab\_final\_prop\] and in the lower panel of Figure \[MDF\], respectively. The MDFs generated with our two remnant mass prescriptions are roughly similar. The minor differences are due to different Fe ejection rates, which are affected by the number of exploding models at a given time.

Mass and Metallicity Resolutions
--------------------------------

Figure \[X\_Fe\_ertl\_Z\_res\] shows the impact of the mass and metallicity resolutions using the remnant mass prescription of [@e15]. At \[Fe/H\] below $\sim-2.0$, the results with six and 14 metallicities are still indistinguishable. At \[Fe/H\] $\sim-1.5$, for Mass Sampling B, the metallicity resolution only generates variations of 0.1 dex at most. At \[Fe/H\] $\sim-0.5$, for Mass Sampling A, variations are generally between 0.05 dex and 0.15 dex, whereas almost no variation was seen with the no-cutoff prescription (see Figure \[X\_Fe\_no\_Z\_res\]). In the case of Si, Ca, and Ti, the impact of the mass resolution at \[Fe/H\] $\lesssim-3$ is less important than with the no-cutoff prescription, as opposed to O and Mg, which now show variations ranging from 0.4 dex to 0.7 dex. At \[Fe/H\] between $-2$ and $-1$, the impact of the mass sampling is increased by about 0.1 dex relative to Figure \[X\_Fe\_no\_Z\_res\].
These results suggest that the mass requirement in a set of stellar yields depends on the remnant mass prescription and reinforce our conclusion that the mass resolution is more important than the metallicity resolution. Mass Resolution and Metallicity Range ------------------------------------- The first feature to notice in Figure \[X\_Fe\_ertl\] is the importance of the mass sampling when only five masses and five metallicities between 0.000153 and 0.0193 are considered (black dashed and black dash-dotted lines). With the stellar yields used in this work, this can generate variations up to $0.3-0.4$ dex for O and Mg, and up to $0.5-0.7$ dex for Si and Ca across a larger \[Fe/H\] interval than in the case with the no-cutoff prescription. Ti and Mn are still insensitive to the mass sampling and resolution. Above \[Fe/H\] $\sim$ $-0.5$, all predictions are relatively insensitive to the mass resolution compared to the variations seen at low \[Fe/H\]. When plotted relative to Mg (see Figure \[fig\_O\_Na\_Mg\_ertl\]), O shows variations up to 0.6 dex at \[Mg/H\] below $\sim$ $-2.0$, whereas Na shows variations up to 0.4 dex at \[Mg/H\] above $\sim$ $-1.0$.\ These significant variations are explained by the upper panel of Figure \[remnant\_2Z\]. At $[Z]=-2$, the islands of non-explodability are regularly dispersed across the initial stellar mass. With the Mass Sampling A, the whole selection consists of non-exploding models, except for the $13\,$M$_\odot$ model. On the other hand, the Mass Sampling B selects all the explosive models and misses the islands of non-explodability located at $15\,$M$_\odot$, $20\,$M$_\odot$, and $25\,$M$_\odot$. Therefore, when the stellar models at $[Z]=-2$ are the first to enrich the primordial gas, the mass resolution is crucial. The situation is less extreme at $Z=0$ (see the lower panel of Figure \[remnant\_2Z\]). But still, in order to resolve the islands of non-explodability, the stellar models must be judiciously sampled. 
For example, one could have a representative selection without the $15\,$M$_\odot$ and $20\,$M$_\odot$ models, but not without the $25\,$M$_\odot$ model.

![Same as in Figure \[fig\_O\_Na\_Mg\_no\] but with the remnant mass prescription of [@e15].[]{data-label="fig_O_Na_Mg_ertl"}](fig_12.png){width="3.54in"}

![Remnant mass as a function of initial stellar mass with the prescription of [@e15] for two different metallicities. The different coloured lines represent the different mass samplings defined in Table \[tab\_sample\].[]{data-label="remnant_2Z"}](fig_13.png){width="3.54in"}

The different magnitudes of the impact of the mass sampling with and without the zero-metallicity yields, especially for Si and Ca, reinforce the idea that the lowest metallicity included in our set of stellar yields plays a major role in the numerical predictions at low \[Fe/H\]. In this case, even when all seven masses are considered (solid lines), modifying the lowest metallicity available produces different results that do not converge before reaching \[Fe/H\] $\sim$ $-3$ for O and Si, \[Fe/H\] $\sim-2.5$ for Mg, and \[Fe/H\] $\sim$ $-1.5$ for Ca. This is consistent with observations and simulations that show a rapid increase of the average metallicity at early times in the Milky Way (e.g., @kn11 [@bfo14]).

![image](fig_14.png){width="6.9in"}

Figure \[fig\_1st\_nonzero\_Z\] shows an analogue of Figure \[X\_Fe\_ertl\] where we replaced $[Z]=-2$, the lowest non-zero metallicity in our fiducial case (middle panels), by $[Z]=-3$ (lower panels) and $[Z]=-1.5$ (upper panels). The predictions generated using the zero-metallicity yields (red lines) are still similar from one case to another below \[Fe/H\] $\sim-2.5$, while variations between cases (upper, middle, and lower panels) can be seen between \[Fe/H\] $\sim-2$ and $-1$. When the zero-metallicity yields are not included (black lines), variations can be seen up to \[Fe/H\] $\sim-1$ between the different cases, especially for Ca.
The black dashed lines are always flat and similar from one case to another since the Mass Sampling A only selects one explosive model at $[Z]=-3$, $-2$, and $-1.5$ (see Figure \[fig\_2D\_remnant\]). Results shown in Figure \[fig\_1st\_nonzero\_Z\] indicate once more that, below \[Fe/H\] $\sim-2$, our numerical predictions are sensitive to the first stellar models that enrich the galactic gas. However, we recall that we use a one-zone model that is not necessarily suited to reproduce the Galactic halo, which is associated with the metallicity range where most of the variations occur.\ It is worth remembering that the two sets of yields used in the present work have been calculated in the same way. The only ingredient responsible for the differences between Figures \[X\_Fe\_no\] and \[X\_Fe\_ertl\] is the remnant mass prescription, which suggests again that the mass resolution required in stellar yields depends on the remnant mass prescription used for massive stars. The stellar yields used in our experiment are not general. The fact that the extreme case of $[Z]=-2$ is masked by the zero-metallicity yields (see the comparison between the red and black lines in Figure \[X\_Fe\_ertl\]) does not mean the solution is to always use zero-metallicity yields. Depending on the modeling assumptions, the stellar evolutionary code, and the physics included, other sets of yields could have different islands of explodability for their zero-metallicity models (e.g., @s15 [@e15]). In that case, the choice of the mass sampling could have different repercussions than the ones shown in this section for the cases including zero-metallicity yields.\ The variations seen in our figures illustrate the potential of our stellar yields to reproduce some of the observed scatter at low \[Fe/H\], especially for O, Na, Mg, Si, and Ca. 
In inhomogeneous-mixing chemical evolution models (e.g., @g03 [@arg04; @ces15; @w15]), where the stellar initial mass function is randomly sampled, individual stellar models can transfer their ejecta into new generations of stars without being covered up by all the other stellar models that would contribute if the initial mass function were fully sampled. The level of scatter should depend on the variety of abundance ratios seen in the ejecta of the adopted stellar models. We are currently working on a stochastic version of our chemical evolution codes to generate scatter in our predictions. This, however, is beyond the scope of the present paper.

Summary and Conclusion {#sect_s_c}
======================

We used a single-zone model to address the question of how many masses and metallicities are needed in a grid of stellar yields in order to generate relevant and reliable predictions with chemical evolution models. Using the set of stellar yields described in Section \[sect\_yields\], which has seven masses and 15 metallicities for massive stars, we performed experiments where we extracted a subset of models to evaluate the impact of the grid resolution on the chemical evolution of seven elements. As a visual reference, we compared our results with the stellar abundances observed in the Milky Way to better appreciate the variations between our results. Our work suggests that there is no general answer to how many masses and metallicities are needed for galactic chemical evolution applications.\
The mass resolution needed in stellar yields depends on the element considered and on the remnant mass prescription used for massive stars. We found that yields with a monotonic remnant mass distribution are generally more robust to modifications in the grid resolution. Yields that possess islands of non-explodability are more vulnerable, however, as explosive or non-explosive mass regimes can be missed if not enough models are sampled.
Our results suggest that the yields from the lowest metallicity included in the grid can dominate the chemical evolution up to \[Fe/H\] $\sim-2$. Depending on the remnant mass distribution applied for the lowest metallicities, a poor mass sampling in the presence of islands of non-explodability can cause variations that exceed 0.5 dex (see Figures \[X\_Fe\_no\] and \[X\_Fe\_ertl\]). The set of yields used in this work is not a general case, and islands of non-explodability could be found at different mass regimes in other yields. Under different stellar modeling assumptions, it is not excluded that extreme cases, such as the one at $[Z]=-2$ with the [@e15] prescription (see Figure \[remnant\_2Z\]), could be associated with other lower-metallicity or zero-metallicity yields. We also studied the impact of the metallicity resolution and found that a wide range is more important than the number of metallicities. As for the mass resolution, yields with monotonic remnant masses are less affected by the metallicity resolution, reinforcing our conclusion that the grid resolution required in stellar yields depends on the remnant mass prescription and on the presence or absence of islands of non-explodability. Similar results would likely be found for any major discontinuities in yields as a function of initial mass.

acknowledgments {#acknowledgments .unnumbered}
===============

We are thankful to Anna Frebel for relevant discussions on stellar abundance observations. This research is supported by the National Science Foundation (USA) under Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements), and by the FRQNT (Quebec, Canada) postdoctoral fellowship program. AH was supported by an ARC Future Fellowship (FT120100363). BWO was supported by the National Aeronautics and Space Administration (USA) through grant NNX12AC98G and Hubble Theory Grant HST-AR-13261.01-A.
He was also supported in part by the sabbatical visitor program at the Michigan Institute for Research in Astrophysics (MIRA) at the University of Michigan in Ann Arbor, and gratefully acknowledges their hospitality. FH acknowledges support through a NSERC Discovery Grant (Canada). SB acknowledges support by JINA (ND Fund \#202476). [99]{} Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481 Argast D., Samland M., Thielemann F.-K., Qian Y.-Z., A&A, 416, 997 Battistini C., Bensby T., 2015, A&A, 577, 9 Baugh C. M., 2006, RPPh, 69, 3101 Bensby T., Feltzing S., Oey M. S., 2014, A&A, 562, A71 Bovy J., Rix H. W., 2013, ApJ, 779, 115 Boylan-Kolchin, M., Springel, V., White, S. D. M., Jenkins, A., & Lemson, G. 2009, MNRAS, 398, 1150 Cayrel R., et al., 2004, A&A, 416, 1117 Cescutti G., Romano D., Matteucci F., Chiappini C., Hirschi R., 2015, A&A, 577, 139 Cohen J. G., Christlieb N., Thompson I., McWilliam A., Shectman S., Reimers D., Wisotzki L., Kirby E., 2013, ApJ, 778, 56 Cole S., Lacey C. G., Baugh C. M., Frenk C. S., 2000, MNRAS, 319, 168 Côté B., Martel H., Drissen L., 2013, ApJ, 777, 107 Côté B., O’Shea B. W., Ritter C., Herwig F., Venn K. A., 2016b, arXiv:1604.07824 Côté B., Ritter C., O’Shea B. W., Herwig F., Pignatari M., Jones S., Fryer, C. L., 2016a, ApJ, 824, 82 Chabrier G., 2003, PASP, 115, 763 Chiappini C., Matteucci F., Gratton R., 1997, ApJ, 477, 765 Chiappini C., Matteucci F., Romano D., 2001, ApJ, 554, 1044 Chieffi A., Limongi M., 2004, ApJ, 608, 405 Chieffi A., Limongi M., 2013, ApJ, 764, 21 De Donder E., Vanbeveren D., 2004, NewAR, 48, 861 de Mink S. E., Langer N., Izzard R. G., Sana H., de Koter A., 2013, ApJ, 764, 166 Dunkley J., et al., 2009, ApJS, 180, 306 Ekstr[ö]{}m S., et al. 2012, A&A, 537, 146 Ertl T., Janka H.-Th., Woosley S. E., Sukhbold T., Ugliano M., 2015, ApJ, accepted (arXiv:1503.07522) Fakhouri O., Ma C. P., Boylan-Kolchin M., 2010, MNRAS, 406, 2267 Fenner Y., Gibson B. 
K., Gallino R., Lugaro M., 2006, ApJ, 646, 184 Ferrini F., Matteucci F., Pardi C., Penco U., 1992, ApJ, 387, 151 Flynn C., Holmberg J., Portinari L., Fuchs B., Jahrei[ß]{} H., 2006, MNRAS, 372, 1149 García Pérez A. E., et al., 2016, AJ, 151, 144 Gibson B. K., 2007, IAUS, 241, 161 Gibson B. K., Fenner Y., Renda A., Kawata D., Lee H.-c., 2003, PASA, 20, 401 Hayden M. R., et al., 2015, ApJ, 808, 132 Heger A., Fryer C. L., Woosley S. E., Langer N., Hartmann D. H., 2003, ApJ, 591, 288 Heger A., Woosley S. E., 2010, ApJ, 724, 341 Hirschi R., 2007, A&A, 461, 571 Homma H., Murayama T., Kobayashi M. A. R., Taniguchi Y., 2015, ApJ, 799, 230 Ishigaki M. N., Aoki W., Chiba M., 2013, ApJ, 771, 67 Ishigaki M. N., Chiba M., Aoki W., 2012, ApJ, 753, 64 Jacobson H. R., et al., 2015, ApJ, 807, 171 Kauffmann G., Colberg J. M., Diaferio A., White S. D. M., 1999, MNRAS, 303, 188 Kennicutt R. C. Jr., 1998, ApJ, 498, 541 Kobayashi C., Nakasato N., 2011, ApJ, 729, 16 Kobayashi C., Umeda H., Nomoto K., Tominaga N., Ohkubo T., 2006, ApJ, 653, 1145 Kubryk, M., Prantzos, N., & Athanassoula, E. 2015, A&A, 580, 126 Licquia T. C., Newman J. A., 2015, ApJ, 806, 96 Limongi M., Chieffi A., 2006, ApJ, 647, 483 Maoz D., Mannucci F., Nelemans G., 2014, ARA&A, 52, 107 Matteucci F., Greggio L., 1986, A&A, 154, 279 McMillan P. J., 2011, MNRAS, 414, 2446 Micali A., Matteucci F., Romano D., 2013, MNRAS, 436, 1648 Minchev I., Chiappini C., Martig M., 2013, A&A, 558, 9 Moll[á]{} M., Cavichia O., Gavilán M., Gibson B. K., 2015, MNRAS, 451, 3693 Murray N., Quataert E., Thompson T. A., 2005, ApJ, 618, 569 Nieuwenhuijzen H., de Jager C., 1990, A&A, 231, 134 O’Connor E., Ott C. D., 2011, ApJ, 730, 70 Pagel B. E. J., 2009, Nucleosynthesis and Chemical Evolution of Galaxies (Cambridge University Press) Pardi M. C., Ferrini F., Matteucci F., 1995, ApJ, 444, 207 Pignatari M., et al., 2013, ApJS, submitted (arXiv:1307.6961) Pilkington K., 2013, PhDT, 321 Pilkington K., Gibson B. K., 2012, in Proc. XII Int. Symp. 
Nuclei in the Cosmos (NIC XII), 227 Portinari L., Chiosi C., Bressan A., 1998, A&A, 334, 505 Rauscher T., Heger A., Hoffman R. D., Woosley S. E., 2002, ApJ, 576, 323 Roederer I. U., Preston G. W., Thompson I. B., Shectman S. A., Sneden C., Burley G. S., Kelson D. D., 2014, AJ, 147, 136 Sana H., de Mink S. E., de Koter A., Langer N., Evans C. J., Gieles M., Gosset E., Izzard R. G., Le Bouquin J.-B., Schneider F. R. N., 2012, Science, 337, 444 Schmidt M., 1959, ApJ, 129, 243 Shen S., Cooke R. J., Ramirez-Ruiz E., Madau P., Mayer L., Guedes J., 2015, ApJ, 807, 115 Shetrone M., et al., 2015, ApJS, 221, 24 Somerville R. S., Davé R., 2015, ARA&A, 53, 51 Smartt S., 2009, ARA&A, 47, 63 Springel, V., White, S. D. M., Jenkins, A., et al. 2005, Nature, 435, 629 Springel V., White S. D. M., Tormen G., Kauffmann G., 2001, MNRAS, 328, 726 Sukhbold T., Woosley S. E., 2014, ApJ, 783, 10 Sukhbold T., Ertl T., Woosley S. E., Brown J. M., Janka H.-T., 2015, ApJ, submitted (arXiv:1510.04643) Thielemann F. K., Nomoto K., Yokoi K., 1986, A&A, 158, 17 Travaglio C., Hillebrandt W., Reinecke M., Thielemann F. K., 2004, A&A, 425, 1029 Ugliano M., Janka H.-T., Marek A., Arcones A., 2012, ApJ, 757, 69 van de Voort F., Quataert E., Hopkins P. F., Kere[š]{} D., Faucher-Giguère C.-A., 2015, MNRAS, 447 140 Weaver T. A., Zimmerman G. B., Woosley S. E., 1978, ApJ, 225, 1021 Wehmeyer B., Pignatari M., & Thielemann F. K., 2015, MNRAS, 452, 1970 West C., Heger A., 2013, ApJ, 774, 75 West C., Heger A., in prep. White S. D. M., Frenk C. S., 1991, ApJ, 379, 52 Williams B. F., Peterson S., Murphy J., Gilbert K., Dalcanton J. J., Dolphin A. E., Jennings Z. G., 2014, ApJ, 791, 105 Woosley S. E., Heger A., 2007, PhR, 442, 269 Woosley S. E., Heger A., Weaver T. A., 2002, RvMP, 74, 1015 Woosley S. E., Weaver T. A., 1995, ApJS, 101, 181 Zhang W., Woosley S. E., Heger A., 2008, ApJ, 679, 639 \[lastpage\] [^1]: <https://github.com/NuGrid/NUPYCEE> [^2]: <http://nugridstars.org/data-and-software/yields/set-1>
--- author: - 'C. Alves de Oliveira, E. Moraux, J. Bouvier, H. Bouy, C. Marmo, L. Albert' bibliography: - 'alvesdeoliveira.bib' date: 'Received December 18 2009; accepted February 27 2010' title: 'The low-mass population of the $\rho$ Ophiuchi molecular cloud [^1]' --- [Star formation theories are currently divergent regarding the fundamental physical processes that dominate the substellar regime. Observations of nearby young open clusters allow the brown dwarf (BD) population to be characterised down to the planetary mass regime, which ultimately must be accommodated by a successful theory.]{} [We aim to uncover the low-mass population of the $\rho$ Ophiuchi molecular cloud and investigate the properties of the newly found brown dwarfs.]{} [We use deep near-IR images (reaching completeness limits of approximately 20.5 mag in *J*, and 18.9 mag in *H* and *K$_{s}$*) taken with the Wide Field IR Camera (WIRCam) at the Canada France Hawaii Telescope (CFHT) to identify candidate members of $\rho$ Oph in the substellar regime. A spectroscopic follow-up of a small sample of the candidates allows us to assess their spectral type, and subsequently their temperature and membership.]{} [We select 110 candidate members of the $\rho$ Ophiuchi molecular cloud, of which 80 have not previously been associated with the cloud. We observed a small sample of these and spectroscopically confirm six new brown dwarfs with spectral types ranging from M6.5 to M8.25.]{} Introduction ============ The determination of the initial mass function (IMF) across the entire stellar and substellar mass spectrum is a fundamental constraint for star formation theories [see, for example, @Bonnell2007 and references therein].
Although there are generally accepted views on the way star formation occurs and young stellar objects (YSOs) evolve to the main sequence [@Shu1987; @Larson1973], the existing theories have not yet converged to an agreed paradigm that can explain the wide range of existing observational properties of YSOs. In particular, since their discovery, hundreds of brown dwarfs (BD) with masses down to the planetary regime have been uncovered in star-forming regions and the solar neighbourhood, with a ratio of the number of BDs to stars of approximately 1/5 [see, for example, @Luhman2007c and references therein], implying that a successful star and planet formation theory must account for them. Different theories for the formation of BDs are currently debated, according to which they could form either by gravitational fragmentation and collapse of molecular cores [@Padoan2007; @Hennebelle2008], by early ejection of stellar embryos [@Reipurth2001; @Whitworth2005], or by fragmentation of massive circumstellar discs [@Stamatellos2009]. The extension of the IMF to the brown dwarf and planetary mass regime and the search for the end of the mass function are therefore crucial to determine the dominant formation process of substellar objects and its relation with the surrounding environment [@Moraux2007; @Andersen2008; @Luhman2007]. Brown dwarfs are brighter when they are young [@Chabrier2000], and their detection down to a few Jupiter masses can be attained with current technology by studying them in young star-forming regions [@Lucas2000; @ZapateroOsorio2002; @Weights2009; @Burgess2009; @Marsh2009]. For that reason, one of the prime goals of modern observations is to achieve completeness at the lower mass end, i.e., the brown dwarf and planetary mass regime, for different environments across several young star-forming regions [@Bihain2009; @Bouy2009b; @Bouy2009a; @Lodieu2009; @Luhman2009; @Scholz2009 among many others].
The main motivation of our survey of the $\rho$ Ophiuchi molecular cloud is to uncover the low-mass population of the cluster down to the planetary regime. Despite being one of the youngest ($\sim$1 Myr) and closest star-forming regions [120 to 145 pc, @Lombardi2008; @Mamajek2008], the high visual extinction in the cloud’s core, with A$_V$ up to 50-100 mag [@Wilking1983], makes it one of the most challenging environments in which to study low-mass YSOs. The main studies previously conducted in $\rho$ Oph have been summarised in a recent review [@Wilking2008], which includes a census of the $\sim$300 stellar members that have been associated with the cloud up to now, of which only 15 are estimated to have masses in the substellar regime. @Marsh2009 reported the discovery of a young brown dwarf with an estimated mass of $\sim$2 $-$3 Jupiter masses in $\rho$ Oph, although we question its membership of the cloud here (see Sect.  \[comp:surveys\]). We conducted a deep near-IR (*J*, *H*, and *K$_{s}$*) photometric survey centred approximately on the cloud’s core and covering $\sim$1 deg$^{2}$, which we use to identify candidate members in the substellar mass regime. Near-IR surveys are particularly suitable for studying this star-forming region because most of its population is visibly obscured. Previous near-IR studies of this cluster have been done from the ground down to a sensitivity limit of *K* $<$ 13-14 mag for a larger area of the cloud [@Greene1992; @Strom1995; @Barsony1997], and of *K* $<$ 15.5 mag for a smaller region [200 arcmin$^2$, @Comeron1993]. Deeper observations were made from space with a small coverage of 72 arcmin$^2$ and a sensitivity of *H* $<$ 21.5 mag [@Allen2002].
The WIRCam near-IR survey presented here takes advantage of a new generation of wide-field imagers on 4 meter-class telescopes to reach completeness limits of approximately 20.5 in *J*, and 18.9 in *H* and *K$_{s}$* over the entire degree-size area of the sky occupied by the $\rho$ Ophiuchi central cloud. This work complements the previous surveys both in the area it covers and in sensitivity. Of comparable characteristics is the near-IR survey recently conducted by @Alvesdeoliveira2008, which uses a different technique, near-IR variability, to select candidate members. Our selection method allows BDs with masses down to a few Jupiter masses (according to evolutionary models) to be detected through $\sim$20 magnitudes of extinction. Extensive use of archive data at optical and IR wavelengths is made to further characterise the candidate members. In a pilot study, a spectroscopic follow-up of a subsample of these candidates has confirmed six new brown dwarfs. In Sects. \[data\] and \[archdata\], the observations and reductions for new and archive data are described. Section \[select:cmd\] explains the methods used to select candidate members of $\rho$ Oph and the results, and in Sect. \[discussion\_phot\] we discuss their properties. Section \[spec\] describes the numerical fitting procedure used to analyse the data from the spectroscopic follow-up and the spectral classification. These results are then discussed through Sect.  \[properties\]. Conclusions are given in Sect.  \[conclusion\]. Observations and data reduction {#data} ===============================

  Pointing        RA (J2000)   Dec. (J2000)   Date            Filters[^2]
  --------------- ------------ -------------- --------------- ---------------------
  CFHTWIR-Oph-A   16 27 10.0   $-24$ 26 00    19 April 2006   *J*, *H*, *K$_{s}$*
  CFHTWIR-Oph-B   16 25 50.0   $-24$ 26 00    20 April 2006   *J*, *H*, *K$_{s}$*
  CFHTWIR-Oph-C   16 27 10.0   $-24$ 44 00    09 May 2006     *J*, *H*, *K$_{s}$*
  CFHTWIR-Oph-D   16 28 29.0   $-24$ 44 00    11 May 2006     *J*, *H*, *K$_{s}$*
  CFHTWIR-Oph-E   16 25 50.0   $-24$ 08 00    12 May 2006     *J*, *H*, *K$_{s}$*
  CFHTWIR-Oph-F   16 27 10.0   $-24$ 08 00    17 May 2006     *J*, *H*, *K$_{s}$*
  CFHTWIR-Oph-G   16 28 29.0   $-24$ 26 00    11 July 2006    *J*, *H*, *K$_{s}$*
                                              17 July 2006    *J*

\[table:1\]

![Histogram of the number of objects detected per magnitude bin and the respective magnitude errors. The points where the histograms diverge from a linear fit to the logarithmic number of objects per magnitude bin give an approximation of the completeness limit of the survey for the different filters: 20.5 in *J*, and 18.9 in *H* and *K$_{s}$*, with errors below $\sim$0.1 magnitudes.[]{data-label="histmag"}](fig1.eps){width="\columnwidth"} We present the deep infrared photometric survey we conducted in the $\rho$ Ophiuchi cluster in the *J*, *H*, and *K$_{s}$* filters, and the infrared spectroscopic follow-up of a small sample of candidate members of this star-forming region. The WIRCam/CFHT near-IR survey {#data:wircam} ------------------------------ The WIRCam at the CFHT telescope is a wide-field imaging camera operating in the near-infrared, consisting of four HAWAII-2RG 2048 x 2048 array detectors with a pixel scale of 0.3$\arcsec$ [@Puget2004]. The four detectors are arranged in a 2 x 2 pattern, with a total field of view of 20$\arcmin$ x 20$\arcmin$. The data were obtained in queue-scheduled observing mode over several runs as part of a large CFHT key programme aimed at the characterisation of the low-mass population of several young star-forming regions (P.I. J. Bouvier).
Seven different WIRCam pointings were needed to cover the central part of the $\rho$ Ophiuchi cluster. These were taken at different epochs over a three-month period, but all observations were done under photometric conditions, with a seeing better than 0$\farcs$8 (measured in the images to be typically between 0$\farcs$4 and 0$\farcs$5), and an airmass less than 1.2. All individual tiles were observed in the *J*, *H*, and *K$_{s}$* filters, using a seven-point dithering pattern selected to fill the gaps between detectors and accurately subtract the sky background. Table \[table:1\] shows the central position (right ascension and declination) for each of the seven tiles and the dates of the observations. For each field, short and long exposures were obtained with the *J* filter (7x4x5 s and 7x8x27 s, respectively), and shorter individual exposures with the *H* and *K$_{s}$* filters (7x8x7 s). Individual images are first processed by the ’I’iwi reduction pipeline at the CFHT (Albert et al., *in prep.*), which includes detrending (e.g. bias subtraction, flat-fielding, non-linearity correction, cross-talk removal), sky subtraction, and astrometric calibration. Afterwards, the data are handled by Terapix [@Marmo2007], the data reduction centre at the *Institut d’Astrophysique de Paris* (France) responsible for carrying out the final quality assessment of the individual images, determining precise astrometric and photometric calibrations, and combining the dithered individual exposures into the final stacked images. All images are merged into a single tile of $\sim$1 deg$^{2}$ centred on the cloud’s core. The photometric calibration of the WIRCam data is done with 2MASS stars in the observed frames as part of the nominal pipeline reduction. Typical estimated errors in the WIRCam zero point determination are $\sim$0.05 mag.
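As a rough illustration of how such a zero-point calibration works, the following sketch computes a sigma-clipped median offset between instrumental and reference magnitudes of matched stars. The `zero_point` helper is hypothetical — it is not part of the ’I’iwi or Terapix pipelines — and the clipping threshold is an arbitrary choice.

```python
import numpy as np

def zero_point(instr_mags, ref_mags, clip=3.0):
    """Estimate a photometric zero point as the median offset between
    instrumental magnitudes and reference (e.g. 2MASS) magnitudes of
    matched stars, after one pass of sigma-clipping to reject outliers
    such as variable stars."""
    diff = np.asarray(ref_mags) - np.asarray(instr_mags)
    med, std = np.median(diff), np.std(diff)
    keep = np.abs(diff - med) < clip * std      # reject outliers
    zp = np.median(diff[keep])
    zp_err = np.std(diff[keep]) / np.sqrt(keep.sum())  # standard error
    return zp, zp_err
```

With a few hundred matched calibrators of $\sim$0.05 mag scatter per field, the statistical error on the zero point is only a few millimagnitudes, so a $\sim$0.05 mag zero-point uncertainty is dominated by systematics (extinction, variability, flat-fielding) rather than by the number of stars.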
Ultimately, the photometric accuracy for a given field also depends on the number of 2MASS stars available (which is probably reduced in regions of the sky with high extinction, as is the case for some regions of $\rho$ Oph) and on whether the stars used are themselves variable [@Alvesdeoliveira2008]; lastly, photometric offsets can also occur from small problems in flat-fielding and/or sky subtraction from night to night, given that the data were taken at different epochs. We extracted PSF photometry from the mosaicked images with PSFEx (PSF Extractor, Bertin et al., *in prep.*), a software tool that computes a PSF model from well-defined stellar profiles in the image, which is given as an input to the SExtractor programme [@Bertin1996] to compute the photometry for each detected object. During this first stage of the analysis, and to ensure the detection of all the faint sources present in the images, the extraction criteria used are not too stringent. An object is extracted if it complies with the required minimum of three contiguous pixels with fluxes 1.5 $\sigma$ above the estimated background. An inspection of the images and detections showed that the number of spurious detections is minimal, while all the objects seen *by eye* are detected. Catalogues of the short and long exposures for the *J* filter are merged into one single catalogue. The overlap in magnitudes between the short and long exposures allows the photometric accuracy to be checked. For objects that are common to the two catalogues, the two magnitude values for each object are compared, and the r.m.s. accuracy is measured to be below 0.05 magnitudes. The histogram of the magnitudes (not corrected for extinction) is shown in Fig. \[histmag\].
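The completeness estimate illustrated in Fig. \[histmag\] — a straight-line fit to the logarithmic source counts per magnitude bin, with the limit placed where the observed counts fall below the extrapolated fit — can be sketched as follows. The bin width, fit interval, and drop threshold are illustrative choices, not the exact values used for the survey.

```python
import numpy as np

def completeness_limit(mags, bin_width=0.5, fit_range=(14.0, 18.0), drop_frac=0.8):
    """Estimate the completeness limit of a magnitude catalogue.

    A straight line is fitted to log10(N) per magnitude bin over the
    interval where the photometry is reliable (`fit_range`); the limit
    is the centre of the first fainter bin whose counts fall below
    `drop_frac` times the extrapolated fit."""
    mags = np.asarray(mags)
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Fit log10(counts) over the reliable magnitude interval
    ok = (centres >= fit_range[0]) & (centres <= fit_range[1]) & (counts > 0)
    slope, intercept = np.polyfit(centres[ok], np.log10(counts[ok]), 1)
    predicted = 10 ** (slope * centres + intercept)
    # First faint bin that drops below the extrapolated counts
    below = (centres > fit_range[1]) & (counts < drop_frac * predicted)
    return centres[below][0] if below.any() else None
```

Applied to synthetic counts that rise exponentially and then lose a fixed fraction of sources past some magnitude, the function recovers the turnover magnitude to within a bin width.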
We derived approximate completeness limits of 20.5 in *J*, and 18.9 in *H* and *K$_{s}$*, with errors below $\sim$0.1 magnitudes, located at the points where the histograms diverge from the dotted lines [@Wainscoat1992; @Santiago1996], which represent a linear fit to the logarithmic number of objects per magnitude bin, calculated over the intervals of better photometric accuracy. The catalogues from all filters are combined into a single database by requiring a positional match better than 1$\arcsec$ and detections across the *J*, *H*, and *K$_{s}$* lists. The mean separation for the $\sim$27,000 detections common to the *J* band short and long catalogues is found to be $\sim$0$\farcs$05, and when combining all the bands is $\sim$0$\farcs$1. The final catalogue contains $\sim$57,000 objects. Approximately 1000 of the brightest stars have a counterpart in the 2MASS catalogues, and the mean magnitude differences between the two systems are found to be 0.05, 0.07, and 0.09 mag for *J*, *H*, and *K$_{s}$* respectively, which is of the order of the expected zero point uncertainties. The dispersion of the differences can, however, be as high as $\sim$0.1 mag, which reflects the possible sources of error mentioned above, and we therefore did not correct the photometry for these offsets. Furthermore, the WIRCam and 2MASS filter designs differ substantially, and these colour effects are not taken into account at any point. Throughout this work, all the WIRCam *J*, *H*, and *K$_{s}$* photometry is given in the CFHT Vega system. The SofI / NTT spectroscopic follow-up {#data:ntt} -------------------------------------- A spectroscopic follow-up was conducted for a subsample of 13 candidate members of the $\rho$ Ophiuchi cluster, with magnitudes ranging from 12.5 to 15 in *K$_{s}$*. The selection of candidate members is described in Sect. \[select:cmd\].
We also observed GY 201, which was previously associated with the cloud based on its mid-IR colours [@Wilking2008] but was not selected with our criteria. All observations were obtained on 3-6 May 2009, using SofI (Son of ISAAC), a near-IR low-resolution spectrograph mounted on the 3.6 m New Technology Telescope (NTT, La Silla, ESO). The majority of the targets were observed with the blue and red grisms, which operate from 0.95 to 1.64, and 1.53 to 2.52 $\mu$m, respectively. Some objects were too faint in the *J* band and could only be observed with the red grism. In addition, we observed nine sources that are not part of the candidate member list, but have magnitudes and colours close to those of the selection limits and can therefore serve as a test of our selection criteria (see also Sect. \[select:cmd\]). Field dwarf optical standards were also observed (LHS 234, vB 8, vB 10, LHS 2065, Kelu-1), though the final spectral classification method we adopted uses young optical standards instead (see Sect. \[spec\]). The observations were done in long-slit spectroscopy mode, with a slit width of 1$\arcsec$ or 2$\arcsec$ to better match the seeing conditions, resulting in a resolution of $\sim$500 and $\sim$300, respectively, across the spectral range. The individual exposure times were chosen according to the target’s brightness and night conditions, and repeated in an ABBA pattern for subsequent sky subtraction. Standard A0 stars were observed at regular intervals and chosen to have an airmass matching that of the target within 0.1. The slit position was aligned with the parallactic angle. The data reduction was done first with the SofI pipeline developed and maintained by the Pipeline Systems Department at the European Southern Observatory (ESO). The 2D spectra were flat-fielded, aligned, and co-added. The extraction of each spectrum was done with the *APALL* routine in IRAF. All spectra were wavelength calibrated with a neon lamp.
The telluric corrections were done by dividing each spectrum by that of a standard A0 star observed at similar airmass and interpolated at the target’s airmass. Relative fluxes were recovered by multiplying by a theoretical spectrum of an A0 star smoothed to the corresponding resolution. To remove the strong intrinsic hydrogen absorption lines from the spectra of the A0 standard stars, a linear interpolation was made across the lines that are most prominent at this resolution (the Paschen $\delta$ line at 1.00 $\mu$m, Paschen $\gamma$ at 1.09 $\mu$m, Paschen $\beta$ at 1.28 $\mu$m, and the Brackett series lines at 1.54, 1.56, 1.57, 1.59, 1.61, 1.64, 1.68, 1.74, 2.17 $\mu$m). The final spectra were not flux-calibrated, but simply normalised to their average flux in the 1.67-1.71 $\mu$m region. The excellent agreement in the overlapping region of the spectra taken with the blue and red grisms shows that no further calibrations are needed. Figure \[kelu\] shows the comparison of the SofI / NTT spectrum of Kelu-1 (an L dwarf optical standard) taken during this observing run with a spectrum of the same object taken with the SpeX spectrograph [@Rayner2003] mounted on the 3 m NASA Infrared Telescope Facility, provided in the SpeX Prism Spectral Libraries[^3] [@Burgasser2007]. There is a good agreement in the spectral features, and the two spectra exhibit similar relative fluxes between the three photometric bands. ![Spectra of Kelu-1 (an L dwarf optical standard) taken during the observing run with SofI / NTT (*black*) and with SpeX / IRTF (*grey*).[]{data-label="kelu"}](fig2.eps){width="\columnwidth"} The NICS / TNG spectroscopic follow-up {#data:tng} -------------------------------------- A shorter observing run was conducted at the Telescopio Nazionale Galileo (TNG, Roque de los Muchachos Observatory, La Palma) with the Near Infrared Camera and Spectrograph (NICS), a low-resolution spectrograph working in the near-IR regime.
The observations took place during two half nights on 17-18 May 2009, and three of our candidate members with magnitudes of *K$_{s}$*$\sim$13 were observed (in addition to the main observing programme, which did not directly concern this study), as well as the field dwarfs vB 10 and LHS 2924. We used the *JK’* grism, with a wavelength range from 1.15 to 2.23 $\mu$m, resulting in a resolving power of $\sim$350. The observations were done with the 1$\arcsec$ slit aligned at the parallactic angle, and A0 standard stars were observed for telluric corrections. Standard IRAF routines were employed to reduce the data, in an analogous way to that described in the previous section. The spectra were normalised to their average flux in the region between 1.67-1.71 $\mu$m. Archival data {#archdata} ============= In order to complement our selection criteria of young stellar objects in $\rho$ Oph and to better characterise the new candidate and confirmed members, we made extensive use of multi-wavelength data recovered from different archives. The datasets used and the criteria employed to extract reliable samples are briefly described in this section. ![image](fig3.eps){width="\linewidth"} Spitzer Space Telescope: C2D survey {#data:spitzer} ----------------------------------- To complement this study, *Spitzer* data from the C2D legacy project [@Evans2003 *From Cores to Disks*] were included. The $\rho$ Ophiuchi molecular cloud has been mapped with *Spitzer*’s Infrared Array Camera [@Fazio2004 IRAC] in the 3.6, 4.5, 5.8 and 8.0 $\mu$m bands over a region of 8.0 deg$^{2}$, and with the Multiband Imaging Photometer [@Rieke2004 MIPS] in the 24 and 70 $\mu$m bands over a total of 14.0 deg$^{2}$ [@Padgett2008], which encompass the WIRCam field in its totality. The data were retrieved from the C2D point-source catalogues of the final data delivery [@Evans2005], using the NASA/ IPAC Infrared Science Archive[^4].
All fluxes were converted to magnitudes using the following zero points: 280.9$\pm$4.1, 179.7$\pm$2.6, 115.0$\pm$1.7, 64.1$\pm$0.94 (Jy) for the 3.6, 4.5, 5.8 and 8.0 $\mu$m IRAC bands, respectively, and 7.17$\pm$0.11 (Jy) for the 24 $\mu$m MIPS band. Only sources with magnitude errors below 0.3 magnitudes and detections above 2 $\sigma$ were kept. The *Spitzer* catalogues were merged with the WIRCam detections catalogue, requiring the closest match to be within 1$\arcsec$. A counterpart was found for $\sim$15,000 objects that were detected in one or more mid-IR bands. Infrared excess around young stellar objects (YSOs) is direct evidence of discs, and its detection is commonly used as a youth indicator [@Haisch2001]. These data are relevant to assess the likelihood of membership of the candidate members as well as to characterise their morphological properties (Sect. \[comp:spitzer\]). Subaru Telescope: $i'$ and $z'$ band archival data {#data:subaru} -------------------------------------------------- We searched the *Subaru* Mitaka Okayama Kiso Archive system [SMOKA @Baba2002] for *Subaru* Prime Focus Camera [@Miyazaki2002 Suprime-Cam] optical images overlapping with the WIRCam/CFHT survey. We found one overlapping field observed in the Sloan $i'$-band on 20 June 2007, and two fields observed in the Sloan $z'$-band on 16-17 April 2004. Table \[table:2\] gives a summary of the observations. Each field was observed in dithering mode to effectively compute and remove the sky. Weather conditions on Mauna Kea on 20 June 2007 and 17 April 2004 were photometric, as reported by the CFHT Skyprobe atmospheric attenuation measurements [@Cuillandre2004]. No data are available for 16 April 2004, but the quality and depth of the images suggest that the weather was similarly good. The seeing measured in the images ranged from 0$\farcs$6 to 0$\farcs$8 in the $i'$- and $z'$-band images.
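The flux-to-magnitude conversion applied to the C2D fluxes above is the standard $m = -2.5\log_{10}(F/F_{0})$, with $F_0$ the band zero points quoted there; a minimal sketch (the band keys are illustrative names, not C2D column names):

```python
import numpy as np

# IRAC 3.6/4.5/5.8/8.0 um and MIPS 24 um zero points in Jy, as quoted
# in the text (uncertainties omitted here).
ZERO_POINTS_JY = {"irac1": 280.9, "irac2": 179.7, "irac3": 115.0,
                  "irac4": 64.1, "mips24": 7.17}

def jy_to_mag(flux_jy, band):
    """Convert a flux density in Jy to a magnitude: m = -2.5 log10(F / F0)."""
    return -2.5 * np.log10(np.asarray(flux_jy) / ZERO_POINTS_JY[band])
```

A source at exactly the zero-point flux has magnitude 0, and one at a tenth of it sits at magnitude 2.5; the $\sim$1-1.5% zero-point uncertainties translate into systematic magnitude errors of only $\sim$0.015 mag, well below the 0.3 mag error cut applied to the catalogues.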
The 10 individual CCDs of the Suprime-Cam mosaic were processed with the standard reduction procedure using the recommended SDFRED package [@Yagi2002; @Ouchi2004]. The SDFRED programme performs overscan and bias subtraction, flat-fielding, distortion correction, atmospheric dispersion correction, sky subtraction, masking of vignetted regions, and alignment and co-addition. A sixth-order astrometric solution was computed using 2MASS counterparts. The final accuracy is expected to be better than 0$\farcs$2. The SExtractor programme was used to identify all sources brighter than the 3-$\sigma$ local standard deviation over at least 3 pixels. The absolute zeropoint for each CCD was derived from the observation of a SDSS secondary standard field [SA 110, @Schmidt2002] observed the same night. The zeropoints are given in Table \[table:2\] and agree with the @Miyazaki2002 measurements within 0.12 mag and 0.02 mag in the $i'$ and $z'$-band, respectively. The chip-to-chip offsets were computed from the median flux of domeflat images.

  Pointing   RA (J2000)   Dec. (J2000)   Date            Filter       Exp. Time         Zeropoint \[ABmag\]
  ---------- ------------ -------------- --------------- ------------ ----------------- ---------------------
  1          16 27 00     -24 38 21      20 June 2007    Sloan $i'$   20$\times$80 s    28.04$\pm$0.03
  2          16 27 06     -24 13 30      17 April 2004   Sloan $z'$   16$\times$240 s   27.03$\pm$0.03
  3          16 25 06     -24 38 30      16 April 2004   Sloan $z'$   21$\times$200 s   27.04$\pm$0.04

\[table:2\]

Selection of substellar candidate members of $\rho$ Oph {#select:cmd} ======================================================= Colour-colour and colour-magnitude diagrams ------------------------------------------- The primary criterion used to select candidate members in the substellar regime is to compare the positions of all the WIRCam sources in various colour-colour and colour-magnitude diagrams with model predictions of YSO colours. In the first iteration, and in an attempt not to exclude any possibly interesting candidates, we performed no filtering of the initial catalogue for elongated objects (galaxies, nebulosities) or objects that could have their photometry affected by instrument artifacts. This step was done later with more stringent criteria to ensure the quality of the selected candidates. Evolutionary models are known to become increasingly uncertain at younger ages and lower masses, which is the case for our survey, and a more accurate way to select candidate members based on photometric diagrams is to use observed colours of known YSOs instead. @Luhman2010 empirically determined intrinsic colours for young stars and brown dwarfs, which the authors present in the 2MASS photometric system. Given the differences between the 2MASS and WIRCam filter systems, which cause large differences in colour, and because to date no colour transformation equations between the two systems have been derived, we cannot, however, use this approach.
We rely on evolutionary models for the selection of candidate substellar members, noting that when comparing the observed colours of YSOs from @Luhman2010 to the Dusty model (2MASS filters, 1 Myr), we find the differences to be of the order of our photometric colour errors. Furthermore, using the models we recover the previously known brown dwarfs and the majority of candidate members from the literature present in the WIRCam catalogues (see Sect. \[comp:surveys\] for a detailed discussion). In the first step, candidate members of $\rho$ Oph were selected if their colours fell redward of the model isochrones in the *J* vs. *J*-*H*, *J* vs. *J*-*K$_{s}$*, and *K$_{s}$* vs. *H*-*K$_{s}$* colour-magnitude diagrams. From these, only objects that had colours consistent with being young and substellar in the *J*-*H* vs. *H*-*K$_{s}$* colour-colour diagram (Fig. \[cmdwircam\]) were kept. The 1 Myr Dusty isochrone [@Chabrier2000] computed for the WIRCam/CFHT *J*, *H*, and *K$_{s}$* filters was used down to a temperature of $\sim$1700 K, shifted to a distance of 130 pc. The adopted distance to $\rho$ Oph is a median value of several distance estimates existing in the literature, which indicate that distances to different regions of the cloud can vary from 120 to 145 pc [@Lombardi2008; @Mamajek2008]. According to the models, the substellar limit is at *J*$\sim$11.8, *K*$\sim$11.1, and *J*-*H*$\sim$0.3. This selection yields 178 substellar candidates. From this list, we removed sources with a flux-radius value (flux-radius is a SExtractor parameter that measures the radius of the PSF profile at which the flux is 50% of its maximum) inconsistent with stellar profiles, i.e. close to zero or much larger than the average PSF FWHM measured on the images. This ensured that instrumental artifacts (bad pixels, cosmic rays) and elongated sources (galaxies, nebulosities), respectively, were correctly discarded.
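A flux-radius cut of the kind just described can be sketched as follows; the acceptance band around the expected stellar half-flux radius is an arbitrary illustrative choice, not the threshold actually applied to the WIRCam catalogue.

```python
import numpy as np

def point_source_mask(flux_radius, psf_fwhm_pix, min_frac=0.3, max_frac=2.0):
    """Flag sources whose half-flux radius is consistent with a stellar
    profile: not close to zero (bad pixels, cosmic rays) and not much
    larger than the PSF (galaxies, nebulosities)."""
    r = np.asarray(flux_radius)
    # For a roughly Gaussian PSF the half-flux radius is ~FWHM/2.
    r0 = 0.5 * psf_fwhm_pix
    return (r > min_frac * r0) & (r < max_frac * r0)
```

Sources failing the mask on either side are the two contaminant classes named in the text: near-zero radii flag instrumental artifacts, while radii well above the PSF flag extended objects.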
The remaining sources were visually inspected, and a further rejection criterion was implemented to exclude detections likely to have bad photometry, such as those at the edge of the field, in the overlapping regions between detectors (mostly at the detector edges), or in zones of bright reflection nebulae. We removed from the candidate list five young brown dwarfs (GY 11, 64, 141, 202, CRBR 31, see also Table \[bd\]), which have already been spectroscopically confirmed as members [@Luhman1997; @Wilking1999; @Cushing2000; @Natta2002]. There are several objects that have previously been associated with the cloud but lack a spectroscopic confirmation (see also Sect. \[comp:surveys\]), in particular from the previous IR surveys of @Greene1992, @Strom1995, and @Bontemps2001; members identified through X-ray emission [@Imanishi2001; @Gagne2004]; and candidate members proposed based on their *Spitzer* colours [@Padgett2008; @Wilking2008; @Gutermuth2009 see Sect. \[comp:spitzer\]]. These were kept in our catalogue and are referenced accordingly when mentioned. We also removed objects from the list of candidate members that turned out to be field star contaminants in our spectroscopic follow-up (see Sect. \[comspec\]; the coordinates and near-IR magnitudes for these sources are given in Table \[cont\]), as well as three sources observed by @Marsh2009 that were found not to be substellar (identifiers in that study are $\#$1307, $\#$2438, and $\#$2403). The final list contains 110 substellar candidates selected from near-IR photometry alone, which are listed in Table \[stars\].
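The two-stage photometric cut described above can be sketched for a single diagram as follows. The `iso_j`/`iso_jh` arrays stand in for the 1 Myr isochrone (already shifted to 130 pc), but their numerical values are invented for illustration — the real selection interpolates the Dusty model colours in all three colour-magnitude diagrams plus the colour-colour diagram.

```python
import numpy as np

# Hypothetical (J, J-H) tabulation of a 1 Myr isochrone at 130 pc;
# NOT the Dusty model values used in the paper.
iso_j = np.array([11.8, 13.0, 14.5, 16.0, 17.5, 19.0, 20.5])
iso_jh = np.array([0.30, 0.45, 0.55, 0.62, 0.68, 0.72, 0.75])

def redward_of_isochrone(j, h):
    """True where a source lies redward of the isochrone in J vs J-H."""
    j, h = np.asarray(j), np.asarray(h)
    iso_colour = np.interp(j, iso_j, iso_jh)  # isochrone colour at each J
    return (j - h) > iso_colour

def substellar_candidates(j, h):
    """Keep sources fainter than the substellar limit (J ~ 11.8 for
    1 Myr at 130 pc) that also lie redward of the isochrone."""
    j = np.asarray(j)
    return (j > 11.8) & redward_of_isochrone(j, h)
```

Sources brighter than the $J\sim11.8$ substellar limit are excluded regardless of colour, which is why extinction can redden a stellar member past the isochrone without it entering the candidate list.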
  RA            Dec           *J* (mag)        *H* (mag)        *$K_{s}$* (mag)
  ------------- ------------- ---------------- ---------------- -----------------
  16 25 15.92   -24 25 10.8   15.50$\pm$0.05   14.40$\pm$0.05   13.46$\pm$0.05
  16 25 49.93   -24 13 43.7   17.02$\pm$0.05   15.60$\pm$0.05   14.73$\pm$0.05
  16 26 22.47   -24 37 24.2   16.22$\pm$0.05   14.89$\pm$0.05   13.94$\pm$0.05
  16 26 46.12   -24 21 53.8   15.75$\pm$0.05   13.97$\pm$0.05   12.88$\pm$0.05

  : WIRCam data for field star contaminants.

\[cont\]

During the spectroscopic follow-up, we observed nine sources that were not part of the substellar candidate list. These sources were chosen to pass the selection criteria in the various CMDs, but to have positions in the colour-colour diagram close to the reddening line extended from the 75 M$_{\emph{Jup}}$ model colour, which we used for the selection of candidates. The sources have magnitudes brighter than $\sim$14 in the *K$_{s}$* band. All of these turned out to be field dwarfs, further supporting the limits used in our selection criteria, in particular in this magnitude range. These sources are also shown in Fig. \[cmdwircam\] (*crosses*). Comparison to previous surveys of the $\rho$ Ophiuchi molecular cloud {#comp:surveys} --------------------------------------------------------------------- To better evaluate the quality of our images, detection methods, and the consistency of our candidate selection criteria, we compared our results to those of previous surveys of $\rho$ Oph. In a recent compilation of previous studies in the literature, @Wilking2008 gathered a list of 316 confirmed or candidate members of the cluster, of which 295 have positions on the sky within the field of our survey. The majority of these are part of the initially extracted WIRCam catalogues, with only 13 objects from the literature missed by the detection algorithm.
From these, nine sources (ISO-29, 31, 125, 137, 144 [@Bontemps2001]; \[GY92\]-167, 168 [@Greene1992]; HD 147889; CRBR-36 [@Comeron1993]) are either extremely saturated in the WIRCam images or lie in the vicinity of these and other bright reflection nebulae, and are therefore impossible to detect within the saturated pixels. The remaining four (ISO-60, 85, 90, 99) were previously associated with the cloud either from their X-ray emission or mid-IR excess; however, no signal is detected in the *J*-band of the WIRCam images, hence they are not present in the combined *JHK$_{s}$* catalogue. Finally, sixty-five of the previously known or candidate YSOs are within the magnitude range of our near-IR candidate selection, and 35 of those fall in the substellar region (including the five brown dwarfs mentioned in the previous section), further supporting our selection criteria. Objects detected by our extraction algorithm but not within the magnitude range adopted for our candidate selection include seven sources that are too faint in the *J* band and have no reliable magnitude measurement, and 211 objects that are brighter than the survey saturation limit in one or more near-IR bands. When comparing the positions of the candidate members from the literature in the compiled list of @Wilking2008 in the near-IR diagrams, we found inconsistencies between the colours of fifteen sources and the expected colours of YSOs. In the colour-magnitude diagrams, these sources lie in the substellar regime region (we caution that this is only an approximation, given the known errors in the models). However, in the colour-colour diagram, they show colours consistent with those of stars. One of these sources is an edge-on disc [@Grosso2003 known as the *Flying Saucer*]. Given the extended profile of this source, it is likely that its PSF photometry measured on the WIRCam images has larger errors, because the extraction parameters were optimised for point-source photometry.
The remaining 14 sources include ROXC J162821.8-245535, which according to @Wilking2008 has been assigned membership based on X-ray emission (unpublished), and 13 sources classified as candidates from the analysis of IRAC data by @Wilking2008, where several diagnostics for the detection of mid-IR excess were used (the identification numbers used in that work for these sources are IRAC 20, 746, 763, 830, 831, 869, 901, 1016, 1086, 1212, 1343, 1350, 1401). Of these, only one source (GY 376, or IRAC 746) is present in the list of candidate members of $\rho$ Oph from @Gutermuth2009, who used the same *Spitzer* dataset, and none have otherwise been previously associated with the cloud. Since @Wilking2008 have not provided details of their reduction of the data, or mid-IR magnitudes for the new candidates, we cannot further comment on the validity of their selection. However, according to our near-IR dataset, these sources seem inconsistent with being members of $\rho$ Oph (see also Sect. \[comp:spitzer\]). Additionally, we included in our study recent results not found in the compilation of @Wilking2008 from the DROXO survey [@Sciortino2006 Deep Rho Ophiuchi XMM-Newton Observation], which consists of a very deep exposure (total exposure time of 515 ksec) taken with the European Photon Imaging Camera [@Struder2001; @Turner2001 EPIC] on board the XMM-*Newton* satellite [@Jansen2001], covering a region of $\sim$0.2 deg$^{2}$ of the $\rho$ Ophiuchi cluster. The data reduction and main results from this survey can be found in @Giardino2007, @Flaccomio2009, and Pillitteri et al. (2010, *in prep.*). A total of 111 X-ray emitting sources are reported in their studies. The sensitivity and area coverage of the DROXO survey are much lower than those of our survey. We found a positional match, within 2 $\sigma$ of the combined positional errors, for two of our candidate members.
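The 2 $\sigma$ positional matching described above can be sketched as follows. This is an illustrative Python reimplementation, not the pipeline actually used; the function names, the tuple-based data structures, and the quadrature combination of the positional errors are our own assumptions.

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two sky positions given in
    decimal degrees (haversine formula, stable at small separations)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

def match_candidates(candidates, xray_sources, n_sigma=2.0):
    """Pair up candidates (ra, dec, err_arcsec) with X-ray sources whose
    separation is within n_sigma of the quadrature-combined errors."""
    pairs = []
    for (ra, dec, err) in candidates:
        for (xra, xdec, xerr) in xray_sources:
            if angular_sep_arcsec(ra, dec, xra, xdec) <= n_sigma * math.hypot(err, xerr):
                pairs.append(((ra, dec), (xra, xdec)))
    return pairs
```

Combining the two catalogues' positional uncertainties in quadrature before applying the 2 $\sigma$ cut is one common convention; the text does not specify how the DROXO and WIRCam errors were combined.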
We also compared the WIRCam data with the *K$_{s}$* photometry presented by @Marsh2009 for the seven candidate YSOs in $\rho$ Oph observed spectroscopically in that study (Table \[marsh\]). @Marsh2009 derive near-IR photometry by stacking deep-integration *J*, *H*, and *K$_{s}$* images from the 2MASS calibration scans. We found good agreement between their 2MASS measurements and the WIRCam photometry for four sources ($\#$1449, $\#$1307, $\#$2438, $\#$2403), with differences between 0.02 and 0.23 mag. For the remaining three sources ($\#$2974, $\#$4450, $\#$3117), however, we found larger variations, with differences in *K$_{s}$* magnitude of 0.4, 1.42, and 1.57, respectively. Two of these sources ($\#$2974, $\#$3117) were also detected in the WFCAM/UKIRT images of @Alvesdeoliveira2008, and their magnitudes differ from the WIRCam values by only 0.14 and 0.17 mag, which is of the order of the photometric errors at this magnitude range and suggests that our measurements are correct. Of particular interest is the result for source $\#$4450, classified by @Marsh2009 as a young T2 dwarf, which may be the youngest and least massive T dwarf observed spectroscopically so far. From our images we derived a *K$_{s}$* magnitude of 19.14$\pm$0.20, whereas @Marsh2009 reported *K$_{s}$*=17.71. Assuming the parameters derived in the spectral analysis of @Marsh2009 are correct and repeating the same calculation as the authors to estimate the distance, we arrive at a $\pm$1$\sigma$ distance range of 137 to 217 pc using the WIRCam magnitude. This suggests that the object lies behind the $\rho$ Ophiuchi cloud and could instead be part of the Upper Sco association [@deGeus1989], located at $\sim$145 pc [@deBruijne1997] with an estimated age of 5 Myr [@Preibisch1999]. As noted by @Marsh2009, in that case the estimated mass of this brown dwarf would still be $\le$ 3 Jupiter masses, according to the models.
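The distance rescaling implied by the fainter WIRCam magnitude follows directly from the distance modulus, $m - M = 5\log_{10}(d/10\,\mathrm{pc})$: at fixed absolute magnitude and extinction, a source fainter by $\Delta m$ must be farther by a factor $10^{\Delta m/5}$. A minimal sketch (the function name is ours, and extinction corrections, which enter the authors' full calculation, are ignored here):

```python
def distance_scale_factor(m_new, m_old):
    """Factor by which a photometric distance estimate grows when the
    apparent magnitude changes from m_old to m_new, holding the absolute
    magnitude fixed (distance modulus: m - M = 5 log10(d / 10 pc))."""
    return 10 ** ((m_new - m_old) / 5.0)

# Ks = 19.14 (WIRCam) vs 17.71 (Marsh et al.): the source is 1.43 mag
# fainter, so any distance derived from the old magnitude nearly doubles.
factor = distance_scale_factor(19.14, 17.71)  # ~1.93
```

A distance factor of roughly 1.9 is consistent with the quoted $\pm$1$\sigma$ range of 137$-$217 pc placing the object behind the cloud at 130 pc.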
We could not derive a magnitude for the WIRCam *H* band because the object’s position coincides with an artifact caused by the guiding star. In the *J* band, we derived a magnitude of 21.32$\pm$0.35. Its *J*$-$*K$_{s}$* colour is consistent with a T dwarf reddened by the extinction measured spectroscopically by @Marsh2009.

----------------------- ------------- ------------- ---------------- ---------------- ----------------
  ID (as in @Marsh2009)   RA            Dec           *J*              *H*              *$K_{s}$*
                                                      (mag)            (mag)            (mag)
  2438                    16 27 09.37   -24 32 14.9   22.21$\pm$0.39   19.53$\pm$0.14   16.97$\pm$0.05
  2974                    16 27 16.74   -24 25 39.0                                     17.26$\pm$0.06
  3117                    16 27 17.68   -24 25 53.5                    18.64$\pm$0.07   17.27$\pm$0.05
  2403                    16 27 21.63   -24 32 19.2   20.86$\pm$0.10   18.17$\pm$0.06   16.50$\pm$0.05
  4450                    16 27 25.35   -24 25 37.5   21.32$\pm$0.35                    19.14$\pm$0.20
  1449                    16 27 30.36   -24 20 52.2   19.59$\pm$0.06   16.96$\pm$0.05   15.65$\pm$0.05
  1307                    16 27 32.89   -24 28 11.4   21.41$\pm$0.15   17.44$\pm$0.05   14.97$\pm$0.05
  \[marsh\]
----------------------- ------------- ------------- ---------------- ---------------- ----------------

Photometric properties of the candidate members {#discussion_phot}
===============================================

We used multiwavelength data to characterise the candidate members presented in Table \[stars\]. For each candidate, a flag was included to indicate additional information: previously suggested as a candidate member in the literature (Sect. \[comp:surveys\]), candidate selection confirmed from an optical counterpart (Sect. \[comp:optical\]), mid-IR excess as determined from the *Spitzer* diagrams (Sect. \[comp:spitzer\]), variability behaviour (Sect. \[comp:var\]), or membership confirmed spectroscopically in this study (Sect. \[spec\]).
Figure \[image\] shows the position on the sky of the candidate members and the known members of $\rho$ Oph, superposed on the density map of all WIRCam detections and the contours of the extinction map of the $\rho$ Ophiuchi cloud provided by the COMPLETE[^5] project [@Ridge2006; @Lombardi2008], computed with the NICER algorithm [@Lombardi2001] using 2MASS photometry.

![Spatial distribution of the substellar candidate members of $\rho$ Ophiuchi in this study and previously known candidate and confirmed members of the cloud as compiled by , superposed on the density map of all WIRCam detections with contours from the NICER extinction map. Symbols are the same as in Fig. \[cmdwircam\].[]{data-label="image"}](fig4.eps){width="\linewidth"}

![image](fig5.eps){width="\linewidth"}

Near-IR and optical CMDs {#comp:optical}
-------------------------

For the candidate members with a counterpart in the $i'$- and $z'$-bands from the *Subaru* telescope, we could further test our near-IR selection criteria. A visual inspection of the match between candidate members and the optical counterparts was performed to ensure the quality of the positional association. The candidate members without an optical counterpart were either too faint in the $i'$- and/or $z'$-bands or were not in the area covered by the optical images. Figure \[subaru\] shows the colour-magnitude diagram combining infrared and optical data together with the theoretical isochrone from the DUSTY models [@Chabrier2000] for an age of 1 Myr, shifted to 130 pc. All but five candidates with $i'$- and/or $z'$-band photometry show colours consistent with those predicted by evolutionary models within the photometric measurement errors and, more importantly, with those of the previously known brown dwarfs in $\rho$ Oph. Our selection criteria are therefore confirmed for the majority of the sources present in these diagrams.
Furthermore, GY 201 shows colours that are too blue when compared to the isochrones or to the other candidates and members.

![image](fig6.eps){width="\linewidth"}

Mid-IR excess from *Spitzer* data {#comp:spitzer}
---------------------------------

Young stellar objects can show infrared emission originating from dusty envelopes and circumstellar discs surrounding the central object. Mid-IR data from the IRAC and MIPS *Spitzer* cameras allow these objects to be studied at wavelengths where the excess contribution from discs and envelopes is predominant. With several colour-colour and colour-magnitude diagrams (Fig. \[spitzer\]) we could further characterise our list of candidate members. Previous work on *Spitzer* mid-IR observations of the $\rho$ Ophiuchi cluster has been published by @Padgett2008 and @Gutermuth2009, where hundreds of candidate members were uncovered over a much larger area than that of the WIRCam/CFHT survey. The IRAC colour-colour diagram (\[3.6\]$-$\[4.5\] vs. \[5.8\]$-$\[8.0\]) in panel (a) of Fig. \[spitzer\] can be used as a tool to separate young stars of different classes [@Allen2004; @Megeath2004] and to reject sources consistent with galaxies dominated by PAH emission and narrow-line AGN [@Gutermuth2009]. Centred on the origin are sources whose colours are consistent with stellar photospheres and which have no intrinsic IR excess. These can be foreground and background stars, but also Class III stars with no significant circumstellar dust. In this region of the colour-colour plane it is impossible to differentiate between young stars and contaminants. Another preferred region for objects in the diagram is located within the box defined by @Allen2004, which represents the colours expected from models of discs around young, low-mass stars. Finally, from models of infalling envelopes, @Allen2004 predict the colours of Class I sources to have (\[3.6\]$-$\[4.5\]) $>$ 0.8 and$/$or (\[5.8\]$-$\[8.0\]) $>$ 1.1.
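As a simple illustration (not the actual selection code used in this work), the Class I colour criterion just quoted can be written directly in Python; the photospheric tolerance in the second function is an illustrative choice of ours, not a value taken from the text:

```python
def is_class_i(c36_45, c58_80):
    """Class I colour criterion of Allen et al. (2004), as quoted in the
    text: [3.6]-[4.5] > 0.8 and/or [5.8]-[8.0] > 1.1."""
    return c36_45 > 0.8 or c58_80 > 1.1

def near_photosphere(c36_45, c58_80, tol=0.2):
    """Sources centred on the origin of the diagram (colours consistent
    with bare stellar photospheres); tol is a hypothetical tolerance."""
    return abs(c36_45) < tol and abs(c58_80) < tol
```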
Thirty-seven of our candidate members have good photometry in the four IRAC bands and are displayed in the diagram. The remaining candidates are either too faint in the IRAC images or have detections in one of the four bands that did not match the quality criteria we applied (see Sect. \[data:spitzer\]). Of these, 21 were previously associated with $\rho$ Oph. All the candidates in panel (a) have colours consistent with Class II or Class I young objects. Furthermore, none of the candidates falls in the region defined by the colours of possible extragalactic contaminants (see Sect. \[contamination\]), which is also confirmed by diagram (b). Panel (c) can be used to estimate contamination levels by broad-line AGN and is further discussed in Sect. \[contamination\]. Twenty-six of the candidate members have a detection at 24 $\mu$m and are displayed in panel (d). Following the @Greene1994 mid-IR classification scheme (based on the $\alpha$ index), YSOs lie in well-defined areas of the diagram, namely *K$_{s}$*$-$\[24\] $>$ 8.31 for Class I, 6.75 $<$ *K$_{s}$*$-$\[24\] $<$ 8.31 for flat-spectrum objects, 3.37 $<$ *K$_{s}$*$-$\[24\] $<$ 6.75 for Class II, and *K$_{s}$*$-$\[24\] $<$ 3.37 for photospheric colours. All our candidates with a detection at 24 $\mu$m are consistent with being young according to this classification scheme.

Near-IR variability of YSOs {#comp:var}
---------------------------

Variability is a characteristic of YSOs, and near-IR variability surveys in particular can probe stellar and circumstellar environments and provide information about the dynamics of the ongoing magnetic and accretion processes. With the Wide Field near-IR camera (WFCAM) at the UKIRT telescope, @Alvesdeoliveira2008 conducted a multi-epoch, very deep near-IR survey of $\rho$ Oph to study photometric variability.
They found 137 variable objects with variability timescales from days to years and amplitudes from a few tenths of a magnitude to $\sim$3 magnitudes. We found that 17 of our candidates show photometric variability (14 are included in the list of members compiled by @Wilking2008), which further supports their membership. From their list of candidate members found through photometric variability, 18 are outside our surveyed area, and 58 are saturated in the WIRCam images. Of the variables that have magnitudes and colours consistent with our selection criteria for substellar candidate members of $\rho$ Oph, we recovered all but one. That source shows large photometric variations in the near-IR (0.6 and 0.4 magnitudes in the *H* and [K]{} bands, respectively) and is detected in all the WIRCam images, but its position overlaps with an artefact in the *J*-band image, which may affect its photometry and explain its *J*$-$*H* colour, which is too blue in comparison to the theoretical models. From our list of candidates, 93 do not appear in the list of near-IR variables. These include 12 candidates that are outside the field surveyed by @Alvesdeoliveira2008 and 24 that are fainter than their completeness limits ($\sim$19 and 18 magnitudes in *H* and [K]{}, respectively). We therefore conclude that the remaining 57 candidate members (of which 13 have been previously associated with the cloud) did not show near-IR photometric variability on timescales from days to one year during the epochs surveyed in that study.

Contamination
-------------

Photometrically selected samples of candidate members of young star-forming regions are prone to contamination by other objects with colours similar to those of YSOs. Only a spectroscopic analysis can reveal the true membership of the candidates, but this requires large amounts of telescope time, which is not always achievable.
The possible sources of contamination are extragalactic objects (such as AGN or PAH galaxies), foreground and background field M dwarfs, and background red giants. We tried to estimate the level of contamination in our list of candidate members, noting, however, that the membership of an individual source ultimately depends on its position in the field, because over a large part of the surveyed area the cloud’s extinction substantially shields any background contamination. We removed from the discussion the 30 candidates that were previously associated with the cloud, because they gather an ensemble of properties from different surveys that further supports their membership (see Sect. \[comp:surveys\]), as well as the members spectroscopically confirmed in this study, leaving 70 candidates.

### Extragalactic contaminants

In an attempt to characterize the extragalactic contamination levels in YSO samples selected on the basis of IR-excess emission, in particular using *Spitzer* observations, @Gutermuth2009 explored the same data as @Stern2005 to select active galaxies and compare them to YSO selection methods. In particular, @Stern2005 found that PAH-emitting galaxies have colours confined to specific areas in most of the IRAC colour-colour diagrams. These regions were adopted by @Gutermuth2009 to filter out such contaminants, and they are depicted in panels (a) and (b) of Fig. \[spitzer\]. None of the candidate members present in these diagrams falls into either of these regions, indicating that this group of candidates is most likely not affected by contamination from star-forming galaxies. Another source of contamination are broad-line AGN, which have mid-IR colours similar to those of YSOs [@Stern2005]. With the IRAC diagram in panel (c) of Fig. \[spitzer\] we provide an estimate of the contamination level from AGN, following the methodology of @Guieu2009 for unreddened colours [based on @Gutermuth2009].
The region plotted in the diagram (*in grey*) shows the area in the \[4.5\] vs. \[4.5\]$-$\[8.0\] colour space consistent with AGN-like sources. @Gutermuth2009 found that while applying this cut significantly improved the extragalactic filtering of catalogues, some residual contamination is still expected. Three of the YSO candidates in this diagram fall in the contamination area and are singled out as possible contaminants in Table \[stars\]. However, only 40 of the candidates have detections at 4.5 and 8 $\mu$m. If we extrapolate this to the full candidate list, we estimate a contamination level of $\sim$5 extragalactic sources, a conservative upper limit because we are not taking into account the cloud’s extinction.

### Galactic contaminants

In the Galaxy, giants are an important source of contamination. However, taking into account the high galactic latitude of the $\rho$ Oph field (+16.7$^{\circ}$) and the fact that background contamination is reduced by the cloud extinction, fewer giants are expected: we are surveying a region of the sky above the plane and bulge, where the counts become dominated by the faint end of the dwarf luminosity function. We used the Besançon model of galactic population synthesis [@Robin2003] to estimate the level of contamination by foreground late-type objects and background red giants or extincted galactic sources. We retrieved a synthetic catalogue of sources within 0.78 deg$^{2}$ toward the direction of our survey for distances in the range 0–50 kpc. Objects further away than 100 pc (hereafter background objects) were placed randomly within the field of our survey, and the corresponding extinction as given in the COMPLETE map was applied. The luminosities of objects closer than 100 pc (hereafter foreground objects) were not changed. Our selection algorithm was then applied to the output synthetic catalogue.
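This Monte Carlo contamination estimate can be sketched as follows. The band-to-visual extinction ratios are the @Rieke1985 values ($A_J/A_V \approx 0.282$, $A_H/A_V \approx 0.175$, $A_{K_s}/A_V \approx 0.112$), and the 100 pc foreground/background split follows the description above; the data structures and function names are illustrative, not the code actually used.

```python
import random

# Rieke & Lebofsky (1985) extinction ratios relative to A_V (assumed here)
A_RATIO = {"J": 0.282, "H": 0.175, "Ks": 0.112}

def redden(mags, a_v):
    """Apply an extinction A_V (mag) to intrinsic JHKs magnitudes."""
    return {band: m + A_RATIO[band] * a_v for band, m in mags.items()}

def count_contaminants(synthetic, extinction_map, passes_selection, positions):
    """Drop each synthetic source at a random surveyed position; extinct
    background sources (d > 100 pc) by the map value there, leave
    foreground sources unchanged, then re-apply the photometric selection."""
    n = 0
    for src in synthetic:
        pos = random.choice(positions)
        mags = src["mags"]
        if src["d_pc"] > 100:
            mags = redden(mags, extinction_map[pos])
        if passes_selection(mags):
            n += 1
    return n
```

In this scheme `passes_selection` stands in for the full photometric selection algorithm applied to the real catalogue.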
According to these simulations, only one unrelated galactic source could have passed our selection criteria, indicating that the contamination by galactic sources must be low. The contaminant is an extincted (A$_{\emph{V}}$=7.5 mag) 1 Gyr old M8V dwarf located at 140 pc. The strong extinction in the region covered by our survey indeed most likely blocks the light of the majority of background sources down to the sensitivity limit of the survey.

Spectroscopic follow-up of candidate YSOs {#spec}
=========================================

![image](fig7.eps){width="\linewidth"}

We obtained near-IR spectra for 16 candidate members of $\rho$ Oph chosen from our WIRCam/CFHT survey, and for GY 201, a candidate member from the literature. The low resolution and modest signal-to-noise ratio of the spectra (S/N$\sim$15, and sometimes lower in the *J* band) restrict the use of narrow spectral features for classification. However, even at this resolution, there are significant differences between the spectra of a low-gravity young stellar object and those of a field dwarf that can be exploited. The triangular shape of the *H* band, caused by deep H$_{2}$O absorption on either side of the sharp peak located between 1.68 and 1.70 $\mu$m in young objects (as opposed to a plateau in the spectra of field dwarfs), has been used as a signature of youth and membership in several studies of young brown dwarfs [see, for example, @Allers2007; @Lucas2001]. There are also strong water absorption bands on both sides of the peak in the *K*-band spectra of young brown dwarfs. In the *J* band, H$_{2}$O absorption is also present at both extremes of the band for young brown dwarfs, but not for field dwarfs. Another good gravity indicator in the near-IR spectrum is the Na I absorption (at 1.14 and 2.2 $\mu$m), which is very deep in field dwarfs but shallow in young objects.
Our spectral classification method relies on the comparison of the candidate spectra with those of young, optically classified members of other star-forming regions of similar age, which are used as standards. A numerical spectral fitting procedure was developed, which makes the simultaneous determination of spectral type and reddening possible. We also took spectra of sources outside the substellar selection limit we imposed in the colour-colour diagram (see Sect. \[select:cmd\]) to ensure that our selection criteria are not too stringent, and indeed all of those objects turned out to be stars with no water absorption features.

------------- ------------------------- -------------------------- ----------------------
  CFHTWIR-Oph   Num. Fit.                 A$_{\emph{V}}$ (mag)       H$_{2}$O Index
  4             M6.50$^{+0.25}_{-0.25}$   6.41$\pm$1                 2.5$^{+0.1}_{-0.1}$
  34            M8.25$^{+0.5}_{-1.0}$     7.98$\pm$1                 9.70$^{+0.7}_{-0.4}$
  47            M7.50$^{+0.5}_{-0.5}$     7.09$\pm$1                 5.6$^{+0.3}_{-0.2}$
  57            M7.25$^{+0.75}_{-2.0}$    7.12$\pm$1                 6.10$^{+1.7}_{-0.6}$
  62            M5.50$^{+0.5}_{-1.25}$    4.01$\pm$1                 9.90$^{+1.8}_{-0.9}$
  96            M8.25$^{+0.25}_{-1.0}$    7.51$\pm$1                 1.10$^{+0.7}_{-0.1}$
  106           M6.50$^{+1.25}_{-1.0}$    6.42$\pm$1                 4.9$^{+0.6}_{-0.7}$
  \[table:4\]
------------- ------------------------- -------------------------- ----------------------

  : Spectral type and A$_{\emph{V}}$ determined through numerical spectral fitting.

Numerical spectral fitting {#specfit}
--------------------------

The procedure consists of comparing each candidate spectrum to a grid of near-IR, low-resolution template spectra of young stars and brown dwarfs with spectral types determined in the optical, reddened in even steps of A$_{\emph{V}}$. The comparison spectra are of members of the young ($\lesssim$2 Myr) star-forming regions IC 348 [@Luhman2003b] and Taurus [@Briceno2002; @Luhman2004], dereddened by their published extinction values (typically A$_{\emph{V}}$ $\lesssim$ 1), with spectral types ranging from M4 to M9.5.
By combining these spectra, templates for half and quarter sub-classes were constructed. The reddening law of @Fitzpatrick1999 was used to progressively redden the template spectra in steps of 0.1 A$_{\emph{V}}$. The fit was performed across the complete usable wavelength range of the spectrum (1$-$1.34, 1.48$-$1.8, and 2$-$2.45 $\mu$m), i.e., excluding only the regions dominated by telluric absorption and the extremes of the spectral range where the quality of the data is poorer. It was assumed that all template spectra have approximately the same error. For the candidate spectra, the r.m.s. of the difference between the original spectrum and a smoothed version of it (using a 15-pixel boxcar) was taken as an estimate of the errors. Figure \[chi2fit\] shows the contours of the variation of $\chi$$^{2}$ with A$_{\emph{V}}$ and spectral type for one of the candidates. The solid contour represents the 1 sigma confidence interval, and the dashed lines indicate the A$_{\emph{V}}$ and spectral type limiting values at the grid point closest to the 1 sigma contour, which are taken as the standard deviation of each parameter from the minimum. The dotted contours are the 1.6, 2, 2.6, and 3 sigma levels, successively outward from the minimum. The right-hand panel shows the resulting best fit: CFHTWIR-Oph 34 is best fitted by an M8.25 brown dwarf (an average spectrum between those of M8 and M9 young brown dwarfs) and an A$_{\emph{V}}$ of 9.7 magnitudes. This procedure was applied to all the candidate spectra.

![image](fig8.eps){width="\linewidth"}

Spectral classification: comments on individual sources {#comspec}
-------------------------------------------------------

### Contaminant field stars

Water vapour absorption could not be detected in the spectra of four candidates, and only a limit to the spectral type could be set, i.e., they have a spectral type earlier than $\sim$M-type.
Given their faint IR magnitudes, they are inconsistent with being young members of the cluster and are excluded as background contaminants (listed in Table \[cont\]).

### CFHTWIR-Oph 4, 34, 47, 57, 62, 96, 106

The sources CFHTWIR-Oph 4, 34, 47, 57, 62, 96, and 106 have spectral types and extinction values derived from the numerical procedure, with spectral types ranging from M5.5 to M8.25 and A$_{\emph{V}}$ from 1 to 10 magnitudes. The results are summarized in Table \[table:4\], and the dereddened spectra are displayed in Fig. \[allspec\].

### GY 201

The source GY 201 was observed with the blue and red grisms of SofI/NTT, and could be fitted neither with one of the templates in the grid nor with comparison spectra of field dwarfs when the full spectral range was considered. If the fit was performed on only the part of the spectrum acquired with the red grism (from 1.5 to 2.5 $\mu$m), the fitting procedure converged to a spectral type of M5 and an A$_{\emph{V}}$ of 1.9 magnitudes. However, when the full spectrum was taken into account, no physical solution was found, the resulting best fit indicating a negative value of A$_{\emph{V}}$. This disagreement could be explained if the object were an unresolved binary, in which one component is an earlier-type dwarf contributing to the blue part of the spectrum and the other a late M-type dwarf dominating the red part and showing water vapour absorption features. Another plausible, and more likely, explanation is that the red part of the spectrum is contaminated by the emission from a nearby ($\le$10$\arcsec$) Class I young member of $\rho$ Oph previously studied by many authors [ISO 103, see for example, @Gutermuth2009; @Padgett2008; @Imanishi2001; @Bontemps2001]. The source ISO 103 is very bright in the *K* band and is saturated in our WIRCam images.
The source GY 201 was also previously associated with the cloud in two near-IR studies [@Greene1992; @Allen2002], but has not been detected or mentioned in any other study of the cloud, even though its location has been covered by the vast majority of the surveys. Its position in the various optical and near-IR CMDs presented in this paper suggests that this object could be a field dwarf, because it always lies bluewards of the models and the other candidate members. It also lacks an IRAC or MIPS detection in the *Spitzer* archive catalogues, which could be explained by the difficulty of separating its PSF from that of the neighbouring Class I source, which is also very bright in the mid-IR; indeed, it was not detected by @Gutermuth2009 or @Padgett2008 in their two detailed *Spitzer* studies of $\rho$ Oph. The nature of this object, and in particular its association with the cloud, therefore remains uncertain. Furthermore, in the review by @Wilking2008, the authors claim this object to be a Class I source based on (unpublished) IRAC colours, a result which seems unlikely in view of the spectrum analysis in our work and its near-IR colours, and which could result from a mismatch between the IRAC detections and the existing literature catalogues, or from IRAC photometry confusion caused by the neighbouring star.

### CFHTWIR-Oph 2, 55, 94, 97, 105

The candidate members CFHTWIR-Oph 2 and 94 show very red spectra without the photospheric features required for a spectral classification, such as clear water vapour absorption bands. The nature of these sources is further discussed in Sect. \[properties\]. Figure \[allspec\] shows their original spectra, not dereddened, because they lack a classification. These objects were observed with a grism covering the entire near-IR spectrum (NICS/TNG), while CFHTWIR-Oph 55, 97, and 105 were observed with SofI/NTT using only the red grism, because they are too faint in the *J* band.
These spectra are also very red, though water vapour absorption can be seen in the spectra of CFHTWIR-Oph 97 and 105. These candidate members are classified as M5.5 and M6, respectively, though their classification is less reliable given that only a limited part of their spectra is available for the fit. The two other objects did not show clear absorption bands and have not been fitted. Their original, not dereddened, spectra are also displayed in Fig. \[allspec\], and their properties are discussed in detail in Sect. \[properties\].

The H$_{2}$O Spectral Index
---------------------------

For the seven candidate members with a spectral classification, we compared the results of the numerical spectral fitting with the H$_{2}$O spectral index defined by @Allers2007, which can be calculated as $\langle F_{\lambda=1.550-1.560}\rangle / \langle F_{\lambda=1.492-1.502}\rangle$. @Allers2007 derive a spectral type *vs.* index relationship, which is independent of gravity and valid for spectral types from M5 to L0, with an uncertainty of $\pm$1 subtype. The index was computed for all our spectra after dereddening them by the A$_{\emph{V}}$ values in Table \[table:4\]; the resulting spectral types, shown in the table for comparison, all agree with those determined by the fitting procedure within the uncertainties, further confirming the validity of our classification method.

Properties of the spectroscopic sample {#properties}
======================================

Membership
----------

We used the compiled information for each candidate member observed spectroscopically to confirm their pre-main-sequence nature. The seven sources with determined spectral types and extinction values (Table \[table:4\]) agree with the spectra of low-gravity young stellar objects, and we therefore identify in their spectra signatures of youth such as the triangular *H*-band shape.
Additionally, all the objects that have an optical counterpart (CFHTWIR-Oph 34, 47, 62, 96, and 106) show colours similar to those of the previously known brown dwarfs in $\rho$ Oph. In the mid-IR diagrams, CFHTWIR-Oph 2, 34, 55, 62, 94, 96, 97, and 105 show evidence of discs (Sect. \[comp:spitzer\]). Some of these objects have been previously associated with the cloud. The source CFHTWIR-Oph 62 was previously associated with the cloud from a comparison of near-IR photometric observations to models and from the detection of infrared excess [@Rieke1990; @Comeron1993; @Greene1992], and it was first observed spectroscopically by @Wilking1999, but with a signal-to-noise ratio too low to be studied. @Cushing2000 observed the same object in the near-IR (NIRC/Keck I) and claim its membership based on the detection of strong H$_{2}$ emission in the *K* band, though the authors mention that it is not clear whether the emission is associated with the object. H$_{2}$ emission is also detected in our SofI/NTT spectra, but the resolution of the spectrum is too low for an accurate velocity measurement (see also Sect. \[outflow\]). The spectral type derived by @Cushing2000 is M4$\pm$1.3, which agrees, within the errors, with the spectral type we found, M5.5$^{+0.5}_{-1.5}$. The sources CFHTWIR-Oph 34 and 96 have also been previously associated with the cloud based on the detection of near-IR excess [@Greene1992], but lacked a spectroscopic confirmation. These three sources have mid-IR colours consistent with those expected for YSOs [@Wilking2008; @Gutermuth2009]. Finally, CFHTWIR-Oph 34 and 62 have been classified as variable sources by @Alvesdeoliveira2008, which further supports their membership. Although the candidate members with very red spectra could not be classified, there is evidence that they are members of the cluster. The sources CFHTWIR-Oph 55, 94, 97, and 105 have been previously associated with the cloud from IR and/or X-ray surveys [@Wilking2008].
The source CFHTWIR-Oph 2 is a new candidate member and shows colours consistent with those of a Class II object (Fig. \[spitzer\]).

H$_{2}$ Outflow {#outflow}
---------------

The source CFHTWIR-Oph 94 (also known as, for example, GY 312 or ISO 165) is a known member of $\rho$ Oph and has been extensively studied: @Imanishi2001 detected both quiescent and flare X-ray emission, @Natta2006 found it to be an actively accreting YSO, and @Alvesdeoliveira2008 detected photometric variability consistent with changes in the surrounding disc or envelope. @Bontemps2001 classified it as a Class II object, i.e., with an IR excess and a spectral energy distribution (SED) which can be explained by models of YSOs surrounded by circumstellar discs. More recently, using *Spitzer* data, @Gutermuth2009 classified it as a Class I, given its strong IR excess. Our mid-IR colour-colour and colour-magnitude diagrams agree with the latter classification. We found further evidence of the protostellar nature, and therefore youth, of this object: we detected H$_{2}$ 1$-$0 S(1) emission (2.12 $\mu$m) in the spectrum of CFHTWIR-Oph 94 (Fig. \[allspec\]), a signature of a molecular outflow. Given the extremely red spectrum of this object, we cannot estimate its spectral type. Further observations are needed to determine the association of the outflow with the source [see, for example, @Bourke2005; @Fernandez2005], and also its mass.

Candidate edge-on disc {#candidate_disc}
----------------------

![SED of a candidate edge-on disc in $\rho$ Oph. The SED of CFHTWIR-Oph 62 is compared to that of a known Class III young stellar object in $\rho$ Oph with a similar spectral type, reddened by an extinction amount matching that of the candidate. Both sources are scaled to the *H*-band flux.
This object is underluminous at optical wavelengths, shows no excess emission from the near-IR up to 8 $\mu$m, but has a large excess at 24 $\mu$m.[]{data-label="transdisc"}](fig9.eps){width="\columnwidth"} We investigated the mid-IR colours of a candidate edge-on disc, CFHTWIR-Oph 62, which shows very red colours at 24 $\mu$m but is not present in the IRAC diagrams (the object is detected at 3.6, 4.5, and 5.8 $\mu$m, but has only an upper limit at 8 $\mu$m). We compared the colours of CFHTWIR-Oph 62 to those of WSB 50, a young stellar object member of $\rho$ Oph with a spectral type close to that of CFHTWIR-Oph 62, between M4.5 [@Wilking2005] and M4 [@Luhman1999], which is classified as a Class III source and should therefore show a *photospheric* SED. The magnitudes of WSB 50 (A$_{\emph{V}}$ = 1.9 magnitudes) are reddened to match those of CFHTWIR-Oph 62 (A$_{\emph{V}}$ of 8.) using the reddening laws from @Rieke1985 and @Flaherty2007. The near-IR magnitudes were taken from the 2MASS catalogue because WSB 50 is saturated in the CFHT/WIRCam images, and a common photometric system is preferable. Figure \[transdisc\] shows the two SEDs, highlighting the characteristic shape of the SED of CFHTWIR-Oph 62. @Cushing2000 found this object to be underluminous at optical wavelengths, which, combined with the lack of IR excess up to 8 $\mu$m, could be explained by the geometry of a nearly edge-on disc: at short wavelengths the disc is optically thick and acts as a natural coronagraph (explaining the underluminosity when compared to members of similar T$_{eff}$, and why @Cushing2000 concluded that this object is older), while at longer wavelengths the thermal emission of the disc dominates, causing the sharp rise in the SED [see, for example, @Sauter2009; @Duchene2009]. That the flux at 24 $\mu$m is at the level of the *photospheric* part of the SED further supports this scenario.
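As a rough numerical illustration of the reddening step described above, the sketch below adds an extra amount of visual extinction to a set of photospheric magnitudes. The $A_\lambda/A_V$ ratios and the template magnitudes are placeholder values (the ratios are approximate near-IR coefficients in the spirit of the @Rieke1985 law); the exact coefficients and photometry adopted in the text may differ.

```python
# Hedged sketch: redden a Class III template (e.g. WSB 50) to the extinction
# of a candidate (e.g. CFHTWIR-Oph 62). The ratios below are assumed values.
EXTINCTION_RATIO = {"J": 0.282, "H": 0.175, "Ks": 0.112}  # assumed A_lambda / A_V

def redden(mags, delta_av):
    """Add delta_av magnitudes of visual extinction to each band."""
    return {band: m + delta_av * EXTINCTION_RATIO[band] for band, m in mags.items()}

# Hypothetical template magnitudes; the real 2MASS values would be used instead.
template = {"J": 10.0, "H": 9.2, "Ks": 8.8}
reddened = redden(template, 8.0 - 1.9)  # bring A_V = 1.9 up to A_V = 8
```

Each band is simply shifted by $\Delta A_V \times (A_\lambda/A_V)$, so the *J* band, with the largest ratio, is reddened the most.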
Further observations and modelling are needed to understand and better characterise this complex young object. In particular, measurements at wavelengths longer than 24 $\mu$m can provide an important constraint on the nature of this source. Temperatures and luminosities ----------------------------- ![image](fig10.eps){width="\linewidth"} To place the new candidate members that were spectroscopically confirmed in the Hertzsprung-Russell (HR) diagram, which is commonly used to estimate ages and masses, we needed to derive their effective temperatures and bolometric luminosities. To convert spectral types to temperatures, the temperature scale from @Luhman2003 was adopted, which is derived for young members of the star-forming region IC 348 ($\sim$2 Myr), and has provided consistent results when applied to other young star-forming regions [for example, @Luhman2009]. The adopted errors for the temperature are the one sigma limits in spectral type from the numerical fitting procedure. For each candidate, the bolometric luminosity was calculated from the dereddened *J* magnitude (using the A$_{\emph{V}}$ derived from the numerical fit and the reddening law from @Rieke1985), applying the bolometric correction for the respective spectral type (from @Kenyon1995 for $<$M6 and @Dahn2002 for $\ge$M6), and using a distance to the cloud of 130 pc. The errors were propagated to include the photometric error in *J*, the one sigma errors in A$_{\emph{V}}$ from the spectral fitting, and an assumed error of $\pm$10 pc in the distance to the cloud. For comparison purposes, we compiled from the literature the previously known members of $\rho$ Oph with assigned spectral types later than $\sim$M3. Of the $\sim$300 objects associated with the cloud, we kept the members that have a counterpart and are not saturated in the WIRCam/CFHT *J*-band images. Some of these objects are not part of the final WIRCam catalogue (see Sect.
\[data\]), because they are saturated either in the *H* or the *K$_{s}$* bands. Only objects with spectroscopically determined spectral types and published extinction values are included. The final compilation contains 36 young low-mass stars and brown dwarfs [@Luhman1997; @Luhman1999; @Wilking1999; @Cushing2000; @Natta2002; @Wilking2005]. Bolometric luminosities and temperatures were derived in the same way as for the CFHTWIR-Oph candidates. The previously confirmed members and candidate members from this study were placed in the HR diagram (Fig. \[hr\]) and compared to theoretical evolutionary models [NextGen, because our candidates have T$_{\emph{eff}}$ $>$ 2500 *K*, @Baraffe1998]. The 1, 5, 10, and 30 Myr isochrones are shown, labelled with mass in units of M$_{\sun}$. We adopted the 0.08 M$_{\sun}$ mass track as the stellar/substellar boundary, which corresponds to spectral types $\sim$M6.25 to M6.5 for a young member of $\rho$ Oph with an age of 1 to 2 Myr. In our sample, we find that CFHTWIR-Oph 62 is a very low-mass star, and the other sources are six new brown dwarfs of $\rho$ Oph (CFHTWIR-Oph 4, 34, 47, 57, 96, 106). A large spread in ages, from $<$1 to $\sim$10 Myr, is seen in the HR diagram, with some objects lying closer to the 30 Myr isochrone. The estimated age for $\rho$ Oph is 0.3 Myr in the core [@Greene1995; @Luhman1999], and 1-5 Myr in the surrounding regions [@Bouvier1992; @Martin1998; @Wilking2005]. The star formation history of this cluster is thought to be connected to that of the Sco-Cen OB association, with two different episodes of star formation taking place: one caused by a supernova 1 to 1.5 Myr ago, and the other happening in parallel with the formation of Upper Scorpius $\sim$5 Myr ago, caused by an expanding shell from the Upper Centaurus-Lupus OB subgroup [see @Wilking2008 and references therein for a review].
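The bolometric luminosity calculation described above (deredden *J*, apply the distance modulus for 130 pc, add a bolometric correction, and compare to the Sun) can be sketched as follows. The extinction ratio, the bolometric correction value, and the solar bolometric magnitude used here are illustrative assumptions rather than the exact values adopted in this work.

```python
import math

A_J_PER_A_V = 0.282  # assumed near-IR extinction ratio for the J band
M_BOL_SUN = 4.74     # assumed solar bolometric magnitude

def log_lbol(j_mag, a_v, bc_j, distance_pc=130.0):
    """Return log10(L/Lsun) from an apparent J magnitude."""
    j_dereddened = j_mag - a_v * A_J_PER_A_V
    abs_j = j_dereddened - 5.0 * math.log10(distance_pc / 10.0)  # distance modulus
    m_bol = abs_j + bc_j  # bolometric correction for the spectral type
    return (M_BOL_SUN - m_bol) / 2.5

# Hypothetical example using the J and A_V of CFHTWIR-Oph 4 from Table [bd]
# with an assumed bolometric correction of 2.0 mag:
log_l = log_lbol(14.88, 2.5, 2.0)
```

Propagating the $\pm$10 pc distance error then amounts to evaluating `log_lbol` at 120 and 140 pc as well.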
These ideas are still debated, but they would mean that a range of ages is expected in the HR diagram. Taking into account the typical uncertainties involved in the temperature and A$_{\emph{V}}$ determination, this would explain the positions of the members of the cluster from above the 1 Myr isochrone up to the 10 Myr models. Most of the members in the HR diagram are consistent with this age estimate, which is also the case for our candidates CFHTWIR-Oph 34, 57, and 96. The other CFHTWIR-Oph candidates have luminosities that suggest an age older than 10 Myr, up to 30 Myr. We already mentioned, however, that CFHTWIR-Oph 62 is underluminous [@Cushing2000], most probably because it is observed through scattered light from a surrounding disc (Sect. \[candidate\_disc\]). The old ages implied in the diagram for CFHTWIR-Oph 4, 47, and 106 do not seem plausible if we assume them to be members of the cluster. The source CFHTWIR-Oph 106 shows mid-IR colours consistent with those of young objects surrounded by discs, and it is possible that it is observed through scattered light, which would explain its lower luminosity. Neither CFHTWIR-Oph 4 nor 47 shows a signature of IR excess, and they could be more evolved young objects. As mentioned in Sect. \[specfit\], their spectra are well fitted by those of young objects, showing distinctive features of youth, in particular the triangular shape of the *H*-band. Figure \[youngfield\] shows the dereddened spectrum of CFHTWIR-Oph 4 together with the best-fit spectrum from the young grid of templates, and a comparison spectrum of a field dwarf of the same spectral type, highlighting the pronounced and broad water absorption bands associated with low surface gravity objects [@Kirkpatrick2006]. The same check was done for all our classified spectra. Furthermore, three other brown dwarfs taken from the literature (CRBR 31, GY 11, GY 141) fall into the same part of the HR diagram.
We do not find any relation between the different positions of brown dwarfs in the HR diagram (younger or older than the 10 Myr isochrone) and, for example, their positions on the sky relative to the cluster’s core. Nor do we find a relation between their position in the HR diagram and the SED class assigned from their mid-IR colours. In particular, all but one of the previously known brown dwarfs in the cloud with an assigned SED class are Class II objects. ![Dereddened spectrum of CFHTWIR-Oph 4 together with the best fit obtained. The spectrum shows a very good match to an intermediate spectrum between the M6 and M7 young brown dwarf templates (solid line) and clear differences from the spectrum of the field dwarf 2MASS J13272391+0946446 of comparable spectral type (dotted line).[]{data-label="youngfield"}](fig11.eps){width="\columnwidth"}

  ------------------ ------------ ---------------- ----------------- ---------------------- -----------
  Name               *J*[^6]      A$_{\emph{V}}$   Sp. Type          T$_{\emph{eff}}$[^7]   Ref.[^8]
                     (mag)        (mag)                              (*K*)
  CFHTWIR-Oph 4      14.88        2.5              M6.50             2935$^{+55}_{-55}$     this work
  CFHTWIR-Oph 34     15.97        9.70             M8.25             2710$^{+550}_{-170}$   this work
  CFHTWIR-Oph 47     15.94        5.6              M7.50             2795$^{+85}_{-85}$     this work
  CFHTWIR-Oph 57     15.06        6.10             M7.25             2880$^{+170}_{-245}$   this work
  CFHTWIR-Oph 96     14.60        1.10             M8.25             2710$^{+310}_{-170}$   this work
  CFHTWIR-Oph 106    15.36        4.9              M6.50             2935$^{+225}_{-122}$   this work
  CRBR 14            14.79        10.0             M7.5(M5.5,M7)     2795                   2,3,5
  CRBR 31            16.28        8.6              M6.7              2935                   4
  GY 3               12.27        0.0              M8(M7.5)          2710                   6,5
  GY 5               12.44        2.8              M7(M6)            2880                   2,5
  GY 10              15.52        14.0             M8.5(M6.5)        2555                   2,3
  GY 11              16.17        4.8              M6.5(M8.5)        2935                   2,5
  GY 64              16.35        11.0             M8                2710                   2
  GY 141             15.05        0.7              M8.5(M8)          2555                   1,4
  GY 202             16.79        13.0             M7(M6.5)          2880                   2,3
  GY 204             12.47        0.5              M6                2990                   5
  GY 264             12.78        0.0              M8                2710                   6
  GY 310             13.23        5.7              M8.5(M7,M6)       2555                   2,3,5
  GY 350             13.74        7.0              M6                2990                   5
  oph-160            14.05        6.0              M6                2990                   5
  oph-193            13.61[^9]    7.5              M6                2990                   5
  ------------------ ------------ ---------------- ----------------- ---------------------- -----------

\[bd\]

The old ages implied by the isochrones could instead be related to the several sources of error associated with this diagram, such as the less reliable photometry for sources located in regions of high nebulosity (abundant in $\rho$ Oph), near-IR variability in YSOs (@Alvesdeoliveira2008 detected near-IR variations as large as 0.5 magnitudes for objects plotted in the HR diagram), unresolved binaries, the large differences in estimated distances to the cloud, or the possibility that some objects may be seen through scattered light. Although the contribution of each of these uncertainties can be quantified individually, it is not possible to have a clear picture of the net effect when one or more of them are involved. Furthermore, recent results in the modelling of young brown dwarfs [@Baraffe2009] suggest that episodic strong accretion might explain the observed spread in HR diagrams at ages of a few Myr, a scenario supported by recent observations of protostars, some of which were carried out in $\rho$ Oph [@Enoch2009]. According to these results, even after accretion has halted, young low-mass objects can retain a memory of these strong accretion events, altering the expected path of their contraction along the Hayashi track, and therefore their position in the HR diagram. Another possible explanation could be that some of these brown dwarfs lie behind $\rho$ Oph and are instead members of Upper Sco, which would mean their luminosity is underestimated in the HR diagram presented. If we compute the bolometric luminosities using a distance of 165 pc instead (the approximate boundary of Upper Sco), all sources fall on or above the 10 Myr isochrone in the HR diagram. Though it is unlikely that this is the case for all the sources, it is possible that some of the brown dwarfs associated with $\rho$ Oph are instead Upper Sco members.
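The effect of the adopted distance on the inferred luminosities follows directly from the inverse-square law ($L \propto d^{2}$ at fixed observed flux). A minimal sketch, assuming nothing beyond that scaling:

```python
import math

def delta_log_l(d_new_pc, d_old_pc=130.0):
    """Change in log10(L/Lsun) when the assumed distance moves from d_old to d_new."""
    return 2.0 * math.log10(d_new_pc / d_old_pc)

# Placing a source at the approximate Upper Sco boundary (165 pc) instead of
# the adopted rho Oph distance (130 pc) raises its inferred luminosity by
# roughly 0.2 dex (about 0.5 mag).
shift = delta_log_l(165.0)
```

A shift of this size moves a point upward in the HR diagram by about half a magnitude, consistent with the statement above that all sources then fall within or above the 10 Myr isochrone.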
Given these uncertainties, we have not assigned a definite age or mass to the newly confirmed brown dwarfs and the very low-mass star discovered in this work. We can claim, though, based on their position in the HR diagram relative to the other members of the cloud, that their ages and luminosities agree with those of the known substellar members. From their location in the HR diagram, these new candidates indeed appear to be amongst the lowest mass objects of the cluster. ### $\rho$ Ophiuchi: census update of the substellar population We compiled from previous studies a list of spectroscopically confirmed members of $\rho$ Oph with spectral types later than $\sim$M6, which are therefore likely to be brown dwarfs (according to the evolutionary models of @Baraffe1998), and present them in Table \[bd\] together with the six new brown dwarfs found in this work. All surveys were conducted in the main cloud, L1688. All but one of the brown dwarfs (which is not in the WIRCam/CFHT survey coverage) are plotted in the HR diagram in Fig. \[hr\]. We did not include in this list the brown dwarfs discovered by @Allers2007 in a region to the north west of the central cloud, because it has been suggested that two of the three are associated with an older population of Sco-Cen [@Luhman2007a], and therefore their membership in the $\rho$ Ophiuchi cloud complex has not been confirmed [see also @Close2007]. Nor did we include the T2 dwarf found by @Marsh2009, because the main argument used to claim membership in $\rho$ Oph relies on a distance determination based on the apparent *K*-band magnitude of the object, which we argue may be incorrect. This list provides an updated census of the substellar members of $\rho$ Oph known to date. Conclusion ========== We identify 110 substellar candidate members of $\rho$ Ophiuchi from a deep near-IR photometric survey, of which 80 were not previously associated with the cloud.
By extensive use of archival multi-wavelength data, we find evidence of mid-IR excess for 27% of the candidates and variability consistent with that of YSOs for 15%, further supporting the membership of these candidates. We started a spectroscopic follow-up of the substellar candidate members, and present the first results for 16 sources. We identify six new members of $\rho$ Ophiuchi with spectral types ranging from $\sim$M6.5 to $\sim$M8.25, and classify them as new confirmed brown dwarfs according to the evolutionary models of @Baraffe1998. We confirm the spectral type derived by @Cushing2000 for a previously known very low-mass star close to the substellar limit, and, based on the SED constructed from optical to mid-IR photometry, we report the discovery of a candidate edge-on disc around this star. We cannot derive accurate spectral types for five sources which have extremely red spectra. Two of these show water absorption features and are classified with spectral types M5 and M6; however, since they lack *J*-band spectra, and given the poor quality of the fits, they remain candidate members. The remaining three sources could be T Tauri star members of the cluster, because they show strong mid-IR excess and one of them is emitting in X-rays. We found signatures of outflow activity, through the detection of H$_{2}$ 1 - 0 S(0) emission (2.12 $\mu$m), in two of the sources studied spectroscopically. Four of the 16 sources were found to be contaminant field dwarfs. We thank Dr. Kevin Luhman for his useful comments as a referee, and for providing template spectra of young stars and brown dwarfs. We thank Dr. Ignazio Pillitteri, Dr. Ettore Flaccomio, and collaborators from the DROXO team for providing some of their results prior to publication. We thank the QSO team at CFHT for their efficient work at the telescope and the data pre-reduction, as well as the Terapix group at IAP for the image reduction.
This work is based in part on data products produced and image reduction processes conducted at TERAPIX. This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has also made use of the SIMBAD database, operated at CDS, Strasbourg, France. [^1]: Based on observations obtained with WIRCam, a joint project of CFHT, Taiwan, Korea, Canada, France, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institute National des Sciences de l’Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. Based on observations made at the ESO La Silla and Paranal Observatory under program 083.C-0092. Based in part on data collected at Subaru Telescope, and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan. Research supported by the Marie Curie Research Training Network CONSTELLATION under grant no. MRTN-CT- 2006-035890. [^2]: For the *J* filter, two sets of images were taken: short (7x4x5 s) and long (7x8x27 s) exposures. [^3]: Available at http://www.browndwarfs.org/spexprism/ [^4]: Available at http://irsa.ipac.caltech.edu/ [^5]: Available at http://www.cfa.harvard.edu/COMPLETE [^6]: PSF photometry magnitude from WIRCam/CFHT survey, unless noted otherwise. [^7]: From the SpT- T$_{\emph{eff}}$ relation of @Luhman2003. [^8]: Spectral types determined from the following studies: 1. @Luhman1997; 2. @Wilking1999; 3. @Luhman1999; 4. @Cushing2000; 5. @Natta2002; 6. @Wilking2005. [^9]: This object is not part of the WIRCam surveyed region. Photometry is from 2MASS public catalogue.
--- author: - Hoang Son Pham - Gwendal Virlet - Dominique Lavenier - Alexandre Termier title: Statistically Significant Discriminative Patterns Searching --- Discriminative patterns, Discriminative Measures, Statistical Significance, Anti-Monotonicity. Introduction {#sec:introduction} ============ Problem Definition {#sec:defs} ================== Enumeration Strategy {#sec:enum} ==================== SSDPS: Algorithm Design and Implementation {#sec:algo} ========================================== Experimental Results {#sec:expes} ==================== Conclusion and Perspectives {#sec:conc} =========================== Support document {#support-document .unnumbered} ================
--- author: - 'C. Tasse' bibliography: - 'references.bib' date: 'Received &lt;date&gt; / Accepted &lt;date&gt;' title: 'Non-linear Kalman filters for calibration in radio interferometry' --- I thank Ludwig Schwardt for helping me understand some important aspects of Kalman filters. Those open-ended discussions were very helpful in developing the framework presented in this paper. Thanks to Trienko Grobler and Oleg Smirnov for their useful comments on the paper draft.
--- author: - 'C. Ceruti' - 'S. Bassis' - 'A. Rozza' - 'G. Lombardi' - 'E. Casiraghi' - 'P. Campadelli' title: ': Dimensionality from Angle and Norm Concentration' ---
--- abstract: 'Dimer vacancy (DV) defect complexes in the Si(001)$2\times1$ surface were investigated using high-resolution scanning tunneling microscopy and first principles calculations. We find that under low bias filled-state tunneling conditions, isolated ‘split-off’ dimers in these defect complexes are imaged as pairs of protrusions, while the surrounding Si surface dimers appear as the usual “bean-shaped” protrusions. We attribute this to the formation of $\pi$-bonds between the two atoms of the split-off dimer and second layer atoms, and present charge density plots to support this assignment. We observe a local brightness enhancement due to strain for different DV complexes and provide the first experimental confirmation of an earlier prediction that the 1+2-DV induces less surface strain than other DV complexes. Finally, we present a previously unreported triangular-shaped split-off dimer defect complex that exists at S$\rm_B$-type step edges, and propose a structure for this defect involving a bound Si monomer.' author: - 'S. R. Schofield' - 'N. A. Marks' - 'N. J. Curson' - 'J. L. O’Brien' - 'G. W. Brown' - 'M. Y. Simmons' - 'R. G. Clark' - 'M. E. Hawley' - 'H. F. Wilson' date: 'May 5, 2003' title: 'Split-off dimer defects on the Si(001)2 $\times$ 1 surface' --- Introduction ============ There are currently several exciting proposals to use the (001) surface of silicon for the construction of atomic-scale electronic devices, including single electron transistors [@tu-ijcta-00-553], ultra-dense memories [@qu-n-01-265] and quantum computers [@ka-na-98-133; @ob-prb-01-161401]. However, since any random charge or spin defects in the vicinity of these devices could potentially destroy their operation, a thorough understanding of the nature of crystalline defects on this surface is essential. The Si(001) surface was first observed in real space at atomic resolution using scanning tunneling microscopy (STM) by Tromp *et al.*[@tr-prl-85-1303] in 1985.
In this study they observed that the surface consists of rows of “bean-shaped” protrusions, which were interpreted as tunneling from the $\pi$-bonds of surface Si dimers, thereby establishing the dimer model as the correct model for this surface. Since then, STM has been instrumental in further elucidating the characteristics of this surface, and in particular the atomic-scale defects present on it[@ha-jvsta-89-2854; @zh-srl-96-1449; @ha-ss-00-156]. The simplest defect of the Si(001) surface is the single dimer vacancy defect (1-DV), shown schematically in Figs. \[def1\](a) and \[def1\](b). This defect consists of the absence of a single dimer from the surface and can either expose four second-layer atoms (Fig. \[def1\](a)) or form a more stable structure where rebonding of the second-layer atoms occurs [@wa-prb-93-10497], as shown in Fig. \[def1\](b). While the rebonded 1-DV strains the bonds of its neighboring dimers, it also results in a lowering of the number of surface dangling bonds, and has been found to be more stable than the nonbonded structure. [@ow-ss-95-L1042; @wa-prb-93-10497] Single dimer vacancy defects can also cluster to form larger defects such as the double dimer vacancy defect (2-DV) and the triple dimer vacancy defect (3-DV). More complex clusters also form; the most commonly observed[@ko-prb-95-17269; @wa-prb-93-10497] example is the 1+2-DV, consisting of a 1-DV and a 2-DV separated by a single surface dimer, the so-called “split-off dimer”. The accepted structure of the 1+2-DV, as proposed by Wang *et al.* based on total energy calculations,[@wa-prb-93-10497] is shown in Fig. \[def1\](c) and consists of a rebonded 1-DV (left), a split-off dimer, and a 2-DV with a rebonding atom (right). Recently we have observed another DV complex that contains a split-off dimer, called the 1+1-DV, which consists of a rebonded 1-DV and a nonbonded 1-DV separated by a split-off dimer, as shown in Fig. \[def1\](d).
![Ball and stick models of dimer vacancy defects: (a) non-bonded 1-DV, (b) rebonded 1-DV, (c) 1+2-DV, and (d) 1+1-DV. Si atoms that have a dangling bond are shaded black. Height is indicated in the top views by the diameter of the balls, with the surface atoms having the largest diameter. The true minimum energy configurations for these structures involve buckling of the dimers in alternating directions along the dimer row. However, since the dimers switch between their two possible buckling orientations at room temperature, the atomic positions shown here represent the average positions of the atoms.[]{data-label="def1"}](srs03b_fig1.eps) Here we present a detailed investigation of DV defect complexes that contain split-off dimers. Using high-resolution, low-bias STM we observe that split-off dimers appear as well-resolved pairs of protrusions under imaging conditions where normal Si dimers appear as single “bean-shaped” protrusions. We show that this difference arises from an absence of the expected $\pi$-bonding between the two atoms of the split-off dimer but instead the formation of $\pi$-bonds between the split-off dimer atoms and second layer atoms. Electron charge density plots obtained using first principles calculations support this interpretation. We observe an intensity enhancement surrounding some split-off dimer defect complexes in our STM images and thereby discuss the local strain induced in the formation of these defects. Finally, we present a model for a previously unreported triangular-shaped split-off dimer defect complex that exists at S$\rm_B$-type step edges. High-resolution variable-bias STM imaging of defect complexes ============================================================= Experiments were performed in two separate but identical variable temperature STM systems (Omicron VT-STM). The base pressure of the ultra-high vacuum (UHV) chamber was $< 5\times10^{-11}$ mbar. 
Phosphorus doped $10^{15}$ and $10^{19}$ cm$^{-3}$ wafers, oriented towards the \[001\] direction, were used. These wafers were cleaved into $2\times10$ mm$^2$ sized samples, mounted in sample holders, and then transferred into the UHV chamber. Wafers and samples were handled using ceramic tweezers and mounted in tantalum/molybdenum/ceramic sample holders to avoid contamination from metals such as Ni and W. Sample preparation[@sw-jvsta-89-2901] was performed in vacuum without prior [*ex-situ*]{} treatment by outgassing overnight at 850 K using a resistive heater element, followed by flashing to 1400 K by passing a direct current through the sample. After flashing, the samples were cooled slowly ($\sim3$ K/s) from 1150 K to room temperature. Split-off dimers ---------------- The sample preparation procedure outlined above routinely produced samples with very low surface defect densities. However, the density of defects, including split-off dimer defects, was found to increase over time with repeated sample preparation and STM imaging, as reported previously.[@ha-jvsta-00-1933] It is known that split-off dimer defects are induced on the Si(001) surface by the presence of metal contamination such as Ni [@za-prl-95-3890] and W [@ma-jjap-00-4518]. The appearance of these defects in our samples therefore points to a build-up of metal contamination, either Ni from in-vacuum stainless steel parts or, more likely, W contamination from the STM tip. After using an old W STM tip to scratch a $\sim$ 1 mm line on a Si(001) sample in vacuum and then reflashing, the concentration of split-off dimer defects on the surface was found to have dramatically increased, confirming the STM tip as the source of the metal contamination. Figure \[SODs\] shows an STM image of a Si(001) surface containing a $\sim$ 10% coverage of split-off dimer defects. The majority of the defects in this image can be identified as 1+2-DVs; however, two 1+1-DVs are also present, as indicated.
The most striking feature of this image is the difference in appearance of the split-off dimers in contrast to the surrounding normal surface dimers. Each split-off dimer in this image appears as a double-lobed protrusion, while the surrounding normal Si dimers each appear as a single “bean-shaped” protrusion, as expected at this tunneling bias. [@ha-prb-99-8164] Line profiles taken across a 1+2-DV both parallel and perpendicular to the dimer row direction are shown in Fig. \[SODs\](b). The line profile parallel to the dimer row direction agrees with previously reported profiles over 1+2-DVs and fits well with the accepted structure, [@zh-srl-96-1449; @ow-ss-95-L1042] as shown by the overlayed ball and stick model. The line profile taken perpendicular to the dimer row direction, however, clearly shows that the split-off dimer of this defect is separated into two protrusions while the neighboring Si dimers are single protrusions. This is the first recognition and explanation of split-off dimers appearing as double-lobed protrusions. ![A low bias filled-state STM image of a Si(001)2$\times$1 surface with split-off dimer defects is shown in (a). Tunneling conditions for this image were $-1$ V sample bias and 0.8 nA tunnel current. Line profiles are taken across a single 1+2-DV both parallel, X – X$'$ (b), and perpendicular, Y – Y$'$ (c), to the dimer row direction, as indicated in (a). The schematic (d) is a top view ball and stick model of a 1+2-DV with the approximate positions of $\pi$-bonds indicated by shaded ellipses.[]{data-label="SODs"}](srs03b_fig2.eps) To understand why split-off dimers appear as double-lobed protrusions we must consider the structure of these defects shown in Figs. \[def1\](c) and \[def1\](d). Normally Si(001) surface dimers appear as “bean-shaped” protrusions in STM images because the dangling bonds of each Si dimer atom mix to form a $\pi$-bond between the two dimer atoms. 
However, if we examine the split-off dimer structure closely (Figs. \[def1\](c) and \[def1\](d)), we see that, unlike normal surface dimers, the split-off dimer has two nearest-neighbor second layer atoms that each have a dangling bond. The separation between the split-off dimer atoms and these second layer atoms is sufficiently small to allow the formation of $\pi$-bonds. The resulting four-atom structure can therefore be referred to as a *tetramer*. We propose that the four dangling bonds of the split-off dimer tetramer interact primarily along the backbonds between the split-off dimer atoms and the second layer atoms to form $\pi$-bonds down the backbonds, as drawn schematically in Fig. \[SODs\](d). These two spatially separated $\pi$-bonds therefore lead to the double-lobed appearance of the split-off dimers under low bias filled-state tunneling conditions, which we confirm in section \[theory1\] with charge density calculations. To fully characterize the appearance of these split-off dimers in STM images, we have performed a series of experiments observing them as the STM sample bias is varied. Figure \[SODv\] summarizes our results, showing images where a 1+2-DV and a 1+1-DV located next to each other are observed at four different sample biases – two filled-state images and two empty-state images. In the filled-state image of Fig. \[SODv\](a) we see that at $-0.8$ V the split-off dimers of both the 1+2-DV and the 1+1-DV appear as double-lobed protrusions similar to those in Fig. \[SODs\](a). However, when the filled-state bias is increased in magnitude to $-2$ V, Fig. \[SODv\](b), the split-off dimers become single protrusions and appear very similar to the surrounding normal Si surface dimers. 
This is because as the bias magnitude is increased towards $-2$ V, the dimer $\sigma$-bond and bulk states contribute increasingly to the tunneling current [@ha-prb-99-8164] and the image of the split-off dimer reverts to the bean-shaped protrusion in the same manner as normal surface Si dimers. In both of the empty-state images, Figs. \[SODv\](c) and \[SODv\](d), acquired at +0.8 V and +2 V, respectively, the appearance of the split-off dimers is very similar to that of the surrounding normal surface dimers. This is because under empty-state tunneling conditions electrons tunnel into the $\pi^*$-antibonding orbitals of the dimers, resulting in the normal Si dimers appearing as double-lobed protrusions. [@qi-prb-99-7293] It is therefore only under low bias magnitude filled-state tunneling conditions that split-off dimers appear significantly different to the surrounding normal Si surface dimers. ![Variable bias STM images of a 1+2-DV adjacent to a 1+1-DV. The split-off dimer of the 1+2-DV is indicated with a black arrow, while the split-off dimer of the 1+1-DV is indicated by a white arrow. All four images were acquired with 0.13 nA tunnel current and the sample bias for each image is (a) $-0.8$ V, (b) $-2$ V, (c) $+0.8$ V, (d) $+2$ V.[]{data-label="SODv"}](srs03b_fig3.eps) Experimental observation of surface strain in complex defects {#strainsection} ------------------------------------------------------------- Another noticeable feature of Figs. \[SODs\](a) and \[SODv\](a) is the enhanced brightness of the 1+1-DV compared to the 1+2-DV. This is a reproducible effect that we attribute to an increased amount of surface strain induced by the 1+1-DV. Figure \[strain\] shows a series of adjacent defects forming a short vacancy line channel in the surface. This channel is composed of individual 1-DV, 3-DV, 1+2-DV, and 1+1-DV defects (see figure caption). In the filled-state image, Fig. 
\[strain\](a), there is a clear brightening of the dimers on one end of the 1+1-DVs and the dimers on both ends of the 1-DV, which is not present for the 1+2-DVs. In the empty-state image of the line of defect complexes, Fig. \[strain\](b), we notice that there is a darkening of the same dimers that are enhanced in the filled-state image. ![Filled and empty-state STM images ($-1.2$ V, $+1.6$ V, 0.15 nA) of a short chain of DVs in a Si(001) surface. The individual defects are (from top left to bottom right): 1+1-DV, 1+1-DV, 1+2-DV, 3-DV, 1+2-DV, 1+2-DV, 1+1-DV, 1-DV, and 1+2-DV. Note the strain-induced brightening of the 1-DV and 1+1-DVs in the filled-state (a) and the corresponding darkening in the empty-state (b).[]{data-label="strain"}](srs03b_fig4.eps) Owen *et al.* [@ow-ss-95-L1042] have shown, using low-bias STM and first-principles calculations, that the dimers neighboring a rebonded 1-DV are enhanced in low bias filled-state STM images due to the strain induced by the defect shifting the surface states upwards in energy toward the Fermi energy. This effect can be seen for the 1-DV in Fig. \[strain\](a), where the neighboring dimers in the same row as the 1-DV are enhanced in intensity, with the magnitude of the enhancement decaying with distance from the 1-DV. A very similar enhancement can be seen around the 1+1-DV sites in this image, with the split-off dimer in particular appearing much brighter than the surrounding normal surface dimers. However, for the 1+1-DV only the dimers on one end of the defect are enhanced in intensity while the dimers on the other end of the defect are not. This observation can be readily explained since the 1+1-DV is composed of a rebonded 1-DV adjacent to a nonbonded 1-DV (Fig. \[def1\](d)) and Owen *et al.* [@ow-ss-95-L1042] have shown that while the rebonded 1-DV results in strain-induced image enhancement, the nonbonded 1-DV does not. The observation of an asymmetric strain-induced enhancement of the 1+1-DV in Fig. 
\[strain\](a) can therefore be taken as an experimental confirmation of the structure of this defect (Fig. \[def1\](d)) and the first application of the method of Owen *et al.* [@ow-ss-95-L1042] for identifying strain in more complex surface defect structures. The fact that the 1+2-DV causes no enhancement of its neighboring dimers over the surrounding normal surface dimers suggests that the 1+2-DV, unlike the 1-DV and 1+1-DVs, does not increase the strain of the surface. This at first seems strange, since the 1+2-DV involves a rebonded 1-DV similar to the 1+1-DV structure. However, Wang *et al.* [@wa-prb-93-10497] have shown, using total energy calculations, that the junction formed between the 1-DV and the 2-DV to create the 1+2-DV releases the surface strain that is present when these two defects exist separately from one another. The STM data that we have presented here are therefore the first experimental verification of this calculation. The fact that both the 1-DV and the 1+1-DV show local enhancement due to strain, while the 1+2-DV does not, indicates that the 1+2-DV structure induces less local strain than the 1-DV. In their paper, Owen *et al.* do not present empty-state STM images, nor do they consider empty states in their tight-binding calculations. In Fig. \[strain\](b), we show an empty-state image of the same line of defects shown in Fig. \[strain\](a). Interestingly, in this empty-state image the dimers that were enhanced in brightness surrounding the 1-DV and 1+1-DVs in the filled-state image are less bright than the surrounding Si dimers. This suggests that the strain associated with these defects causes the lowest unoccupied molecular orbital (LUMO) of the adjacent dimers to also shift higher in energy, away from the Fermi energy. 
Density functional characterization of defect complexes {#theory1} ======================================================= To confirm the interpretation of our STM images, we have performed first-principles electronic-structure calculations of both the 1+2-DV and 1+1-DV complexes using the Car-Parrinello Molecular Dynamics program. [@cpmd] Valence electrons were described using Goedecker pseudopotentials [@go-prb-96-1703] expanded in a basis set of plane waves with an energy cutoff of 18 Ry, and the exchange-correlation functional was of the BLYP form. [@be-pra-88-3098; @le-prb-98-785] Slab calculations contained between 124 and 128 Si atoms in a $31.070\times7.675\times19.253$ Å$^3$ supercell, corresponding to six layers of vacuum in the $z$-direction, and all calculations were performed with $\Gamma$-point sampling of the Brillouin zone only. A reference calculation was performed with no surface vacancies and assuming the $p(2\times2)$ structure, in which the dimers buckle alternately along the row. A single 256-atom calculation with a duplication along the $y$-axis confirmed that the effect of dispersion across the rows is minor, as has been noted elsewhere. [@po-jvstb-87-945] Both zero-temperature geometry optimization and high-temperature molecular dynamics calculations were used to explore a variety of surface and second-layer bonding configurations for the 1+2-DV and 1+1-DV. The results confirm that the configurations in Figs. \[def1\](c) and \[def1\](d) are the lowest-energy geometries of both defect complexes. The dimers are drawn symmetric in these schematics; however, the true minimum-energy structure at zero temperature involves charge-transfer buckling of the Si dimers. It is well known that at room temperature the barrier is sufficiently small for the dimers to flip-flop between the two equivalent configurations. 
[@ra-prb-95-14504; @wo-prl-92-2636] Our calculations show that the split-off dimer tetramer also has two symmetrically equivalent buckling configurations, with charge transfer between the atoms of the tetramer buckling adjacent atoms in alternate directions. By analogy with the normal dimers, we can expect room-temperature STM measurements of the tetramer to image the average of the two configurations. The chemical potential was determined from a 512-atom bulk calculation, which yielded a formation energy of 0.85 eV for the 1+2-DV, similar to the value of 0.65 eV computed by Wang *et al.* [@wa-prb-93-10497] The 1+1-DV formation energy has not been previously reported, and we found it to be 1.13 eV. This value is high, but it is consistent with the rarity of observation of the 1+1-DV in STM experiments. In Fig. \[1+2-dv\] we present a series of calculated electron density slices through various regions of the 1+2-DV marked by (a), (b), (c), and (d) in the ball and stick schematic. The charge density shown in the figure is the sum of the occupied Kohn-Sham orbitals within 0.25 eV of the highest occupied molecular orbital (HOMO). Taking into account the $\sim0.5$ eV surface band gap of Si(001) and the n-type doping of the experimental samples, these states correspond approximately to the accessible states for a $\sim0.75$ V sample bias and can therefore be directly compared to the experimental data in Fig. \[SODv\](a), which was acquired with a $-0.8$ V sample bias. ![Cross-section electron density plots for filled states within 0.25 eV of the highest occupied molecular orbital (HOMO) for several cuts through the 1+2-DV complex. The planes a,b,c and d through the top-view ball and stick model (e) indicate the direction and position of the cuts, and the shaded ellipses indicate the $\pi$-bonding as inferred from the electron density (see text). 
Each electron density plot is an average of both buckling configurations, and the atomic positions and bonds are shown as black balls and sticks. The slices are (a) rebonded 1-DV edge dimer, (b) split-off dimer, (c) split-off dimer backbonds, (d) 2-DV edge dimer.[]{data-label="1+2-dv"}](srs03b_fig5.eps) The four charge density slices in Fig. \[1+2-dv\] show: Fig. \[1+2-dv\](a) the 1-DV edge dimer, Fig. \[1+2-dv\](b) the split-off dimer, Fig. \[1+2-dv\](c) the backbond of the split-off dimer, and Fig. \[1+2-dv\](d) the 2-DV edge dimer, as indicated schematically in Fig. \[1+2-dv\](e). The charge densities of both buckling configurations of the dimers and backbond atoms are averaged, and the positions of the dimer and tetramer atoms are shown superimposed in both buckling configurations. In the case of the backbonds, the two configurations are not coincident, and so the atoms and bonds are shown in projection onto the plane in Fig. \[1+2-dv\](c). The 1-DV edge dimer in Fig. \[1+2-dv\](a) shows a clear three-lobed character with significant overlap between the up-atom charge density of the two buckling orientations, and a single lobe beneath the plane of the surface at the mid-point of the dimer. Density-functional calculations by Hata *et al.* [@ha-prb-99-8164] and tight-binding Green’s function calculations by Pollmann *et al.* [@po-jvstb-87-945] have separately identified this three-lobed feature as being characteristic of $\pi$-bonding in flip-flop dimers on the silicon surface, and we can therefore take this three-lobed feature as a signature of $\pi$-bonding in this work. The backbond of the split-off dimer in Fig. \[1+2-dv\](c) connects a first-layer atom to a second-layer atom and also shows a three-lobed structure. By analogy with the surface dimer in Fig. \[1+2-dv\](a) we characterize this bond as having $\pi$-character and have indicated this by the shaded ellipse (c) shown in Fig. \[1+2-dv\](e). The split-off dimer itself in Fig. 
\[1+2-dv\](b), however, does not exhibit three-lobed character. Instead, the split-off dimer has four lobes: two located above the up-atoms of the dimer in each buckling configuration, and a second pair of spatially separated lobes beneath the bond. The calculations thus show that $\pi$-bonding occurs down the backbonds of the split-off dimer, but not across the dimer itself. The absence of the $\pi$-bond across the split-off dimer correlates with the double protrusions observed in the STM images. Finally, we also consider the charge density of the 2-DV edge dimer, Fig. \[1+2-dv\](d), and note that it also exhibits three-lobed character, indicative of $\pi$-bonding. This gives the 2-DV edge dimer a bean-shaped appearance in the STM image, as for the 1-DV dimer in Fig. \[1+2-dv\](a). A similar situation exists for the 1+1-DV charge density slices shown in Fig. \[1+1-dv\]. The first three charge density slices, Figs. \[1+1-dv\](a)–\[1+1-dv\](c), are analogous to the slices for the 1+2-DV. As was the case for the 1+2-DV, the rebonded 1-DV edge dimer, Fig. \[1+1-dv\](a), and the split-off dimer backbonds, Fig. \[1+1-dv\](c), exhibit three-lobed $\pi$-like character, while the split-off dimer, Fig. \[1+1-dv\](b), exhibits four-lobed character, consistent with an end-on view of $\pi$-bonding down the backbonds. Finally, another slice is presented in Fig. \[1+1-dv\](d), which is through the nonbonded 1-DV edge dimer as indicated schematically in Fig. \[1+1-dv\](e). It can be seen that the nonbonded 1-DV edge dimer appears quite different to the charge density slices discussed so far. In particular, we notice that the nonbonded 1-DV edge dimer has a much reduced charge density compared to the other slices, Figs. \[1+1-dv\](a)–\[1+1-dv\](c). Examination of the structure identifies strain as the characteristic that differentiates the dimer in Fig. \[1+1-dv\](d) from the other dimers. Since the dimer in Fig. 
\[1+1-dv\](d) is part of a tetramer, one might expect its appearance to resemble the split-off dimer, which is also part of a tetramer, as shown in Figs. \[1+1-dv\](b) and \[1+1-dv\](c). However, a detailed examination of the simulated structure reveals that the nonbonded 1-DV tetramer is relaxed, since there is one adjacent dimer present, while the split-off tetramer is highly strained because of the rebonding in the second layer. Since the nonbonded 1-DV tetramer is much less strained, its occupied states lie further from the Fermi level, explaining the charge reduction observed in the calculations of Fig. \[1+1-dv\](d). As discussed in Ref. , the minimum-energy arrangement of the electrons in a tetramer is one where the $\pi$-states are delocalized across the four atoms, to form three bonding segments, as indicated by the ellipses in Fig. \[1+1-dv\](e). The charge density slice of Fig. \[1+1-dv\](d) is consistent with such an arrangement, where the charge density is shared between $\pi$-like bonds on both backbonds and across the dimer atoms. We conclude that this charge density arrangement forms for the nonbonded 1-DV tetramer because it is allowed to relax. In the case of the split-off dimer, the tetramer is constrained by the rebonding and instead forms a higher-energy configuration in which the $\pi$-bonds conjugate to form two $\pi$-bonds down its backbonds. ![Cross-section electron density plots for filled states within 0.25 eV of the HOMO for several cuts through the 1+1-DV complex. The planes a,b,c and d through the top-view ball and stick model (e) indicate the direction and position of the cuts, and the shaded ellipses indicate the $\pi$-bonding as inferred from the electron density (see text). Each electron density plot is an average of both buckling configurations, and the atomic positions and bonds are shown as black balls and sticks. 
The slices are (a) rebonded 1-DV edge dimer, (b) split-off dimer, (c) split-off dimer backbonds, (d) nonbonded 1-DV edge dimer.[]{data-label="1+1-dv"}](srs03b_fig6.eps) New step edge defect ==================== Having presented a detailed understanding of the electronic structure of previously observed split-off dimer defects in the Si(001) surface using both STM and first-principles calculations, we now turn our attention to elucidating the structure of a previously unreported split-off dimer defect. In Figs. \[triangular\](a) and \[triangular\](b) we show filled- and empty-state STM images of DV defects at a single-layer S$\rm_B$-type step edge. At the top of these images, white arrows indicate three defects known as S$\rm_B$-DVs: rebonded 1-DVs at the step edge that leave a single split-off dimer as the last dimer before the lower terrace begins. [@ko-prb-96-10308] As was the case for the 1+1-DV and 1+2-DV, the split-off dimers in S$\rm_B$-DVs appear as double-lobed protrusions under low-bias filled-state imaging conditions, Fig. \[triangular\](a). At the bottom of Fig. \[triangular\](a) two similar DV complexes can be observed, as indicated by black arrows; however, these defects have a third protrusion giving them a triangular appearance. In empty-state imaging, Fig. \[triangular\](b), however, the additional third feature is not present. These triangular-shaped defects have not been reported on the Si(001) surface before and most likely arise due to the presence of W contamination. ![(a), (b) Filled- and empty-state images ($\pm$1.2 V) of DV defects at an S$\rm_B$-type step edge. White arrows indicate S$\rm_B$-DVs, [@ko-prb-96-10308] while black arrows point to a previously unreported defect that exhibits a third protrusion in the filled-state giving it a triangular appearance. We propose the structure (c) as a model for this defect. 
Calculated charge density slices at a constant $z$-height for the dashed region of (c) are shown in (d) and (e) (for Kohn-Sham orbitals summed over 0.45 eV below the HOMO and 0.45 eV above the LUMO, respectively). These contour slices are in good agreement with the STM images in (a) and (b), in particular predicting the correct spacing of 6.4 Å between the split-off dimer and third protrusion and also the disappearance of the third protrusion in the empty-state. The horizontal tic-marks in (d) and (e) indicate the dimer positions on the defect-free surface.[]{data-label="triangular"}](srs03b_fig7.eps) Our proposed structural model of the triangular-shaped defects in Fig. \[triangular\](a) is shown in Fig. \[triangular\](c). This model consists of a nonbonded 1-DV defect at an S$\rm_B$-type step edge, followed by a rebonded split-off dimer and a bound Si monomer. Swartzentruber has previously observed Si monomers on the Si(001) surface using high-resolution STM after depositing a few percent of a monolayer of Si atoms onto the surface. [@sw-jcg-98-1] These monomers were bound at rebonded S$\rm_B$-type step edges, confirming the minimum-energy binding position predicted by first-principles calculations. The binding position of the monomer in our proposed structure, Fig. \[triangular\](c), is essentially the same position observed by Swartzentruber, with the difference being the presence of the DV defect adjacent to the step edge. Swartzentruber also observed that the Si monomers bound at S$\rm_B$-type step edges were visible in one bias polarity (empty-state) but invisible in the other (filled-state). Our images reveal a similar effect; however, the feature we observe appears in filled-state images while being invisible in empty-state images. We have performed first-principles calculations to produce charge density contours for our proposed structure. 
Figure \[triangular\](d) shows a constant $z$-height contour slice taken 1.2 Å above the monomer for occupied Kohn-Sham orbitals within 0.45 eV of the HOMO. We see in this charge density contour slice the two lobes expected for the split-off dimer as well as a third lobe due to the bound monomer. Moreover, the distance between the split-off dimer lobes and the monomer lobe is 6.4 Å, in agreement with the separation seen in the STM image. In Fig. \[triangular\](e) we show an empty-state slice taken at the same $z$-height and summed over Kohn-Sham orbitals up to 0.45 eV above the LUMO. In this contour the double lobe of the split-off dimer is still present but the monomer lobe is significantly reduced in intensity. The results of our first-principles calculations therefore show good agreement between our proposed structure and the observed defect. The presence of the split-off dimer must therefore be responsible for the reversal of the filled- and empty-state monomer characteristics when compared to those observed for monomers bound to rebonded S$\rm_B$-type step edges. Summary ======= We have investigated split-off dimers on the Si(001)2$\times$1 surface using high resolution STM and first principles calculations. We find that split-off dimers form $\pi$-bonds with second layer atoms, which gives them a double-lobed appearance in low bias filled-state STM images. We apply the method of Owen *et al.* [@ow-ss-95-L1042] for identifying local areas of increased surface strain to dimer vacancy defect complexes and thereby present the first experimental confirmation of the predicted strain relief offered by the 1+2-DV. Finally, we have presented a previously unreported triangular-shaped defect on the Si(001) surface and a proposed model for this structure involving a bound Si monomer. 
--- abstract: 'We consider rapid cooling processes in classical, 3-dimensional, purely repulsive binary mixtures in which an initial infinite-temperature configuration is instantly quenched to zero temperature. It is found that such systems display both kinds of possible continuous nonequilibrium transition, characterized by either a conserved or non-conserved order parameter. The type of transition that is observed can be controlled by tuning the interactions between unlike particles, with strong inter-species repulsion leading to chemical ordering in terms of an unmixing process, whereas weak repulsion gives rise to spontaneous crystallization, maintaining chemical homogeneity. In contrast to common first-order equilibrium freezing transitions, this nonequilibrium crystallization phenomenon is continuous in nature, being barrierless and producing grain-size distributions that display scale-invariant features. Furthermore, the results suggest that the dual-type transition behavior is universal for repulsive pair interaction potential-energy functions in general, with the propensity for the continuous freezing transition being related to their behavior in the neighborhood of zero separation.' author: - 'Pedro Antonio Santos-Flórez' - Maurice de Koning title: Nonequilibrium Phase Transitions in Repulsive Binary Mixtures --- Classical systems described by repulsive pair potentials have been the subject of intense investigation for over five decades [@Stillinger1964; @Helfand1968; @Baram1983; @Baram1990; @Baram1991; @Baram1992; @Baram1993; @Dijkstra1998; @Lang2000; @Louis2000; @Speedy2003; @Cinacchi2007; @Glaser2007; @Saija2009; @Berthier2010a; @Schmiedeberg2011; @Russo2012a; @Travesset2015; @Horst2016; @Stillinger1976; @Hansen2000; @Prestipino2005; @Prestipino2005a; @Malescio2003; @Mladek2006; @Likos2007; @Likos2008; @Overduin2009; @Archer2004; @Shin2009; @Shall2010]. 
Not least due to their role as effective descriptions for interactions in soft-condensed-matter systems [@Likos1998; @Watzlawek1999; @Ferber2000; @Likos2001; @Likos2002; @Likos2006; @Mayer2008; @Mladek2008; @Mayer2009; @Likos2011; @Nikoubashman2015], substantial effort has been directed towards elucidating the equilibrium phase behavior of such models, considering both single-component samples as well as multi-component mixtures [@Stillinger1976; @Hansen2000; @Prestipino2005; @Prestipino2005a; @Malescio2003; @Mladek2006; @Likos2007; @Likos2008; @Overduin2009; @Archer2004; @Shin2009; @Shall2010; @Russo2012a; @Travesset2015; @Horst2016]. Nonequilibrium phenomena, on the other hand, have received much less attention, despite their key role in self-organization phenomena in such systems [@Nicolis1977; @Witten1999; @Marson2014; @Kalsin2006; @Miller2009; @Ye2015; @Kumar2017; @Ye2011; @Macfarlane2011; @Knorowski2011; @Knorowski2011a]. Indeed, one of the challenges in soft-matter materials design concerns the ability to adjust the effective interaction parameters so as to control the self-organization process and achieve desired self-assembled structures [@Knorowski2011a]. In this context, processes that display spontaneous development of structure from an initially disordered, far-from-equilibrium state are of particular interest [@Knorowski2011; @Knorowski2011a; @Nicolis1977]. A typical example is provided by so-called ageing phenomena [@Henkel2011] in which a system initially at equilibrium in a high-temperature state is rapidly quenched to low temperature. Due to the rapid pace of cooling, the initial high-temperature phase becomes unstable and will spontaneously decay into some low-temperature state. 
When considering mixtures, this decay can occur by means of two different types of continuous nonequilibrium phase transition, depending on whether the involved order parameter is conserved or non-conserved, and which, on the continuum level, are described by the celebrated Cahn-Hilliard and Allen-Cahn equations, respectively [@Balluffi2005; @Henkel2011]. The prototypical example of the first kind is the unmixing transition, in which the order parameter is related to the fixed chemical composition and the final low-temperature state is characterized by chemical ordering through phase separation. In the second type, on the other hand, the final state is typified by the development of structural order, as described by a non-conserved order parameter related to, for instance, quantities such as crystal symmetry and/or orientation [@Balluffi2005; @Henkel2011]. However, while unmixing transitions are quite common for the class of repulsive pair potentials [@Archer2001; @Likos2011; @Kambayashi1992], the occurrence of the second type of transition is not. In fact, as far as model systems are concerned, to the best of our knowledge such structural ordering phenomena have so far only been observed for discrete spin systems such as the Ising model [@Henkel2011], while there have been no reports for systems characterized by continuous interactions. Above all, to date there are no known model systems that can display both types of transition as a function of boundary conditions and/or model parameters. \[ht!\] ![image](SantosFlorez_Fig1.pdf){width="17"} Here we show that 3-dimensional binary mixtures described by purely repulsive pairwise interactions display both kinds of nonequilibrium transition and that the observed type can be controlled by tuning the interactions between unlike particles. 
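For reference, the two continuum evolution laws invoked above take the following standard forms for an order parameter $\phi$ with free-energy functional $F[\phi]$ and kinetic coefficients $L$ and $M$ (notation ours; sign and coefficient conventions vary between references):

```latex
\begin{align}
\frac{\partial \phi}{\partial t} &= -L\,\frac{\delta F}{\delta \phi}
  && \text{(Allen--Cahn; non-conserved } \phi\text{)} \\
\frac{\partial \phi}{\partial t} &= \nabla \cdot \left( M\,\nabla \frac{\delta F}{\delta \phi} \right)
  && \text{(Cahn--Hilliard; conserved } \phi\text{)}
\end{align}
```

The divergence form of the second equation is what enforces conservation: the total amount $\int \phi\, dV$ is constant, as appropriate for a composition-like order parameter.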
While strong inter-species repulsion gives rise to chemical ordering through unmixing, weak values lead to a spontaneous development of structural order, forming a polycrystalline solid of uniform chemical composition. Unlike the common equilibrium first-order freezing transitions, however, this nonequilibrium crystallization process is continuous in nature in that it is barrierless and gives rise to grain-size distributions that display scale-invariant characteristics. Furthermore, the results suggest that the dual-type nonequilibrium transition behavior is universal for pairwise repulsive potential-energy functions in general and that the propensity of the continuous structural ordering transition is related to their behavior in the neighborhood of zero separation. Since the main focus is on the nature of the final state of the cooling processes, our results are based on simulations in which an initial infinite-temperature state is instantly quenched to zero temperature. Because the quench is infinitely rapid, the system has no time to explore the potential-energy landscape (PEL) and is instantaneously driven to the local minimum closest to the initial configuration, also known as its inherent structure [@Stillinger1995; @Stillinger2015; @Wales2003]. This quench process is implemented computationally in the following way. First, for a specified particle density, we construct a cubic, periodic simulation cell with a volume $V$ that corresponds to a given total particle number $N$. Subsequently, the system is initialized by randomly placing the $N$ particles in the cell, giving rise to a structureless, uniform position distribution that represents an infinite-temperature state. Then, to locate the corresponding inherent structure, a conjugate-gradient (CG) minimization is invoked. For each set of interaction properties and particle densities this procedure is repeated several times using different random initial conditions. 
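The quench protocol just described can be sketched in a few lines. The toy below is ours, not the production setup: a single-species system with a generic ultrasoft Gaussian pair repulsion standing in for the actual interaction model, periodic boundary conditions via the minimum-image convention, and SciPy's conjugate-gradient minimizer in place of a production CG implementation; system size and parameters are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

# Instantaneous quench: a random (infinite-temperature) configuration in a
# periodic cubic box is driven straight to the nearest potential-energy
# minimum -- its inherent structure -- by conjugate-gradient minimization.

N, L = 16, 2.5            # particle number and box edge (N sigma^3/V ~ 1)
EPS, SIG = 1.0, 1.0       # energy and length scales of the toy repulsion

def pair_energy(r2):
    """Soft Gaussian pair repulsion as a function of squared separation."""
    return EPS * np.exp(-r2 / SIG**2)

def energy(x):
    """Total potential energy of the periodic configuration x (flattened)."""
    r = x.reshape(N, 3)
    d = r[:, None, :] - r[None, :, :]
    d -= L * np.round(d / L)                 # minimum-image convention
    r2 = (d**2).sum(-1)[np.triu_indices(N, 1)]
    return pair_energy(r2).sum()

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, L, 3 * N)              # structureless initial state
# CG only ever moves downhill on the energy landscape: no barrier is crossed.
res = minimize(energy, x0, method="CG", options={"maxiter": 200})
```

Because the minimizer is strictly downhill, `res.fun <= energy(x0)` by construction, which is precisely the sense in which the transition to the inherent structure is barrierless.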
All the CG calculations have been performed using the Polak-Ribiere version of the CG algorithm as implemented in the `LAMMPS` package [@Plimpton1995], which is among the most efficient local minimization algorithms for functions of many variables [@Press2007]. As a first case we consider a binary mixture with inter-particle interactions described by the Uhlenbeck-Ford (UF) model [@deBoer1962; @PaulaLeite2016; @PaulaLeite2017; @PaulaLeite2019], which is characterized by a logarithmic divergence at zero separation and belongs to the class of so-called ultrasoft potentials [@Likos2002]. Specifically, the UF model is defined by the potential-energy function $u(r_{ij})=-\epsilon_{ij}\ln(1-e^{-r_{ij}^2/\sigma_{ij}^2})$, where $\epsilon_{ij}$ and $\sigma_{ij}$ are energy and length scales associated with the interactions between particles $i$ and $j$, and $r_{ij}$ is the distance between them. We fix the energy scales of the interactions between particles of the same species to be $\epsilon_{AA}=100 \, \epsilon$ and $\epsilon_{BB}=200 \,\epsilon$, respectively, whereas the energy scale $\epsilon_{AB}$ for interactions between $A$ and $B$ particles is variable. The length scale is chosen to be the same for all interaction types, i.e., $\sigma_{AA}=\sigma_{BB}=\sigma_{AB}=\sigma$, and the cut-off for the interaction calculation is set at $r_c=4\,\sigma$. Species $A$ and $B$ are present in equal proportions for all cases, including for the other interaction models discussed below. Fig. \[Fig1\] displays typical configurations obtained for the UF mixture containing $10^7$ particles at a reduced particle density $\rho^*\equiv N\sigma^3/V=1$. Fig. \[Fig1\]a) depicts a typical random initial condition that is disordered both chemically and structurally. Figs. \[Fig1\]b) and c) then show snapshots obtained from the subsequent CG minimizations for two different values of the inter-species interaction parameter, $\epsilon_{AB}$. Fig. 
\[Fig1\]b) portrays a case of strong inter-species repulsion at $\epsilon_{AB}=175 \epsilon$. Under these conditions the system is unstable with respect to composition fluctuations [@Santos-Florez_Suppl2019] and undergoes a chemical ordering transition by which the two species unmix. This transformation corresponds to the first type of nonequilibrium transition discussed above, involving a conserved order parameter. Indeed, the depicted structure strongly resembles the typical patterns of spinodal decomposition often seen for phase separation [@Balluffi2005]. Note, however, that the structure depicted in Fig. \[Fig1\]b) has not yet fully converged to the completely unmixed inherent structure. This is because the computational cost to reach a fully unmixed state is prohibitively large for the system size considered here, even for efficient minimizers such as CG. For smaller system sizes, however ($N\sim 10^5-10^6$), complete unmixing is attained within reasonable computational limits. For a weak inter-species interaction at $\epsilon_{AB}=20 \epsilon$, the instability is fundamentally different. In this case the CG minimization rapidly converges to the inherent structure displayed in Fig. \[Fig1\]c), which remains uniform with respect to chemical composition but has spontaneously developed structural order. In particular, it features a polycrystalline morphology composed of grains with the rock-salt (B1) structure, which consists of two interpenetrating fcc lattices, each occupied by either $A$ or $B$. Interestingly, the nature of this crystallization process is fundamentally different from the usual first-order character of equilibrium freezing phenomena. The nonequilibrium crystallization transition observed here is continuous in nature. First, there is no energy barrier between the structureless initial configuration and the final polycrystalline structure since they are connected by a CG sequence that always moves downhill on the PEL [@Press2007]. 
This is the main difference compared to first-order transitions, for which initial and final configurations are separated by an energy barrier that cannot be surmounted using local minimization techniques such as CG. Secondly, the grain-size distribution of the polycrystalline structure displays the same scale-invariant features characteristic of equilibrium continuous phase transitions such as, for instance, the cluster-size distribution of the percolation transition [@Newman2005]. To demonstrate the latter, we employ the recently developed grain-segmentation algorithm (GSA) in `Ovito` [@Stukowski2010a; @Larsen2016a] to identify individual grains and determine their sizes in terms of particle numbers for the final configuration from a CG minimization with $\epsilon_{AB}=20 \epsilon$. Fig. \[Fig2\] shows a log-log rank-size representation [@Newman2005; @Clauset2009] of the grain-size distribution in which the rank of each grain in terms of its size is plotted as a function of grain size, such that the largest and smallest grains are ranked first and last, respectively. For this particular purpose, to enhance the grain statistics, we have carried out a single quench simulation on a $10^8$ particle cell, with its inherent structure containing more than $4\times10^4$ crystallites with sizes ranging between 100 and $\sim 6\times10^4$ particles. The size-rank graph in Fig. \[Fig2\] clearly displays a linear regime for grain sizes $\gtrsim 10^4$ particles. This suggests that, asymptotically, the grain-size distribution follows a power law of the form $p(k)\sim k^{-\alpha}$ (with $\alpha=3.64 \pm 0.02$ in this case) and displays scale invariance. 
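A rank-size analysis of this kind can be sketched as follows. This is a synthetic illustration, not the paper's analysis pipeline: the sample size, lower cutoff `kmin` and 200-grain fitting window below are illustrative choices of ours, and the sizes are drawn from a known power law rather than taken from a segmented configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 3.64                 # exponent of the asymptotic power law p(k) ~ k^-alpha

# Synthetic grain sizes drawn from p(k) ~ k^-alpha for k >= kmin
# (inverse-CDF sampling of a continuous Pareto distribution)
kmin, n = 100.0, 40000
sizes = kmin * rng.uniform(size=n) ** (-1.0 / (alpha - 1.0))

# Rank-size representation: sort descending, the largest grain gets rank 1
sizes_sorted = np.sort(sizes)[::-1]
ranks = np.arange(1, n + 1)

# For p(k) ~ k^-alpha the complementary CDF, and hence the rank, scales as
# rank ~ k^-(alpha-1); a linear fit in log-log over the largest grains
# therefore estimates -(alpha - 1).
slope, _ = np.polyfit(np.log(sizes_sorted[:200]), np.log(ranks[:200]), 1)
alpha_hat = 1.0 - slope      # recovered exponent estimate
```

With real data, the grain sizes returned by the segmentation step would simply replace the synthetic sample.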
![(Color online) Log-log rank-size representation of the grain-size distribution for a cell containing $10^8$ particles, as obtained using the grain-segmentation tool of the Ovito package [@Stukowski2010a; @Larsen2016a]; the rank of each grain in terms of its size is plotted as a function of grain size, such that the largest and smallest grains are ranked first and last, respectively. Blue circles depict data points of individual grains. The red line is a guide to the eye, obtained by a linear fit to the data for the 200 largest grains, which amount to grain sizes greater than $\sim 1.5\times10^4$ particles.[]{data-label="Fig2"}](SantosFlorez_Fig2.pdf){width="8.5"} In all of the cases shown above, the results are independent of the random initial condition, displaying the same unmixing and crystallization transitions for different random-number seeds. Accordingly, for a given particle-number density, the type of transition that occurs is determined by the magnitude of the interspecies interaction strength $\epsilon_{AB}$ only. To further analyze its role we carry out a series of quench CG simulations for a set of $\epsilon_{AB}$-values between 0 and $200\,\epsilon$, employing cells containing of the order of $10^3-10^4$ particles. In addition, we also investigate the possible influence of the particle-number density by considering a range of $\rho^*$-values for each $\epsilon_{AB}$. To automate the detection of the phase transitions we monitor the displacements of the particles during each quench simulation, comparing their positions in the initially structureless state to those at the end of the CG minimization procedure. Fig. \[Fig3\]a) displays a density plot of the mean particle displacements (MPD) for the UF system as a function of $\epsilon_{AB}$ and $\rho^*$, expressed in units of the particle-density length scale $d\equiv \rho^{*^{-1/3}}$. 
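The displacement metric used for this automated classification can be sketched as below (a minimal version of ours; the function name is not from the paper). The only subtlety is the minimum-image wrapping, which prevents particles that merely cross the periodic boundary from registering spuriously large displacements.

```python
import numpy as np

def mean_particle_displacement(x_initial, x_final, box_length):
    """Mean per-particle displacement between the structureless initial state
    and the CG-quenched configuration, using the minimum-image convention
    for a periodic cubic cell of side box_length."""
    d = np.asarray(x_final) - np.asarray(x_initial)
    d -= box_length * np.round(d / box_length)   # wrap displacements into the cell
    return np.linalg.norm(d, axis=1).mean()

# Example: one particle moves 0.1, the other crosses the periodic boundary,
# so both displacements equal 0.1 after minimum-image wrapping
x0 = np.zeros((2, 3))
x1 = np.array([[0.1, 0.0, 0.0], [9.9, 0.0, 0.0]])
print(mean_particle_displacement(x0, x1, 10.0))
```

Dividing this quantity by $d\equiv \rho^{*^{-1/3}}$ gives the dimensionless value plotted in the density maps.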
It displays three well-defined regimes, characterized by distinct values of the mean particle displacement. The yellow band on the left corresponds to values of the order of $\sim 2d$ and signals the instability of the random initial configuration that leads to its decay into the self-similar rock-salt structure through the continuous ordering transition. The mostly blue band on the right corresponds to the instability that gives rise to the unmixing transition, in which particles move over significantly larger distances. Finally, in the orange-colored areas the displacements are less than the average particle separation, meaning that the initial configurations are metastable, i.e., they are “close” to their corresponding local minima, which retain their chemically uniform and structurally disordered character. A further notable characteristic is that the identification of these three regimes involves $\epsilon_{AB}$ only, being essentially independent of $\rho^*$, except for very low values for which the distances between the particles become large and the interactions between them weak. This implies that the inherent structures associated with high-temperature configurations are invariant with respect to uniform volume scaling [@Stillinger2015]. Interpreted from the perspective of the PEL formalism [@Stillinger2015], the above findings imply that, for the considered binary UF model, the topography of the inherent structures for uniformly sampled configurations undergoes abrupt transitions as a function of the interspecies interaction intensity. At $\epsilon_{AB} \simeq 5\,\epsilon$ and $50\,\epsilon$ there are abrupt transitions between chemically uniform, amorphous inherent structures and local minima that display polycrystalline structural order at a homogeneous composition. 
When reaching $\epsilon_{AB}\simeq 150\,\epsilon$, on the other hand, there is a second kind of transition, with the nature of the inherent structures changing from chemically uniform and structurally ordered to compositionally unmixed without long-range structural order. ![(Color online) Density plots of the mean particle displacement in units of the mean interparticle distance $d\equiv{\rho^*}^{-1/3}$ during the CG quench as a function of the interaction energy scale $\epsilon_{AB}$ and the reduced density $\rho^*$ for the UF model (a), the IPL4 (b), IPL6 (c), WCA (d) and GC (e) potentials. Inset in (d) shows a zoom into the region with $\epsilon_{AB}<10\epsilon$. Colors defined in the color bar distinguish between different displacement magnitudes.[]{data-label="Fig3"}](SantosFlorez_Fig3.pdf){width="9"} Another important finding is that the observed phenomena are not limited to the binary UF system but seem to be universal for repulsive interaction potential-energy functions in general. This is illustrated in Figs. \[Fig3\] b-e), which depict density plots of the mean particle displacement for the inverse fourth-power law (IPL4), the inverse sixth-power law (IPL6), the Weeks-Chandler-Andersen (WCA) and the Gaussian core (GC) models [@Santos-Florez_Suppl2019], respectively. For all these systems the same three regimes can be identified, observing unmixing for large values of $\epsilon_{AB}$, continuous structural ordering to chemically uniform, rock-salt-type polycrystals for weak interspecies interactions, and chemically/structurally amorphous configurations in between. A particularly interesting issue in this context concerns the relation between the continuous structural ordering regime and the functional form of the repulsive interaction. Specifically, the shape and the extent of the continuous ordering region in Fig. \[Fig3\] is seen to correlate with the rate at which the potential-energy function diverges at the origin. Along the sequence shown in Fig. 
\[Fig3\] a) to d), in which the divergence changes from slow (logarithmic) to fast ($r^{-12}$), the range of energy scales $\epsilon_{AB}$ for which continuous crystallization occurs reduces systematically. Indeed, the role of the behavior of the pair potential at the origin in the ordering transition becomes even more evident when considering the GC force field, which does not diverge at all, tending to a constant value and zero derivative at the origin. [@Santos-Florez_Suppl2019] As shown in Fig. \[Fig3\] e), the ordering transition to the rock-salt polycrystal structure in this case is restricted to a very narrow region in the $\epsilon_{AB}-\rho^*$ plane, disappearing altogether for densities above $\sim 0.8$. In conclusion, we have considered the nonequilibrium behavior of classical, 3-dimensional binary mixtures of particles interacting through purely repulsive forces during processes in which an infinite-temperature initial structure is rapidly quenched to zero temperature. We find that such systems display both possible types of second-order nonequilibrium phase transition, characterized by either a conserved or non-conserved order parameter. The observed type of transition can be controlled by tuning the interactions between unlike particles, with strong inter-species repulsion giving rise to unmixing, whereas weak interactions lead to a spontaneous development of structural order, forming a rock-salt-type polycrystalline solid of uniform composition. Unlike common first-order equilibrium freezing transitions, however, this crystallization process is continuous in nature, being barrierless and displaying scale-invariant features in the grain-size distributions. Furthermore, the findings suggest that the dual-type transition behavior is universal for repulsive pair interaction potential-energy functions in general, with the propensity for the continuous freezing transition being related to their behavior in the neighborhood of zero separation. 
We gratefully acknowledge support from the Brazilian agencies CNPq, Capes, Fapesp 2016/23891-6 and the Center for Computing in Engineering & Sciences - Fapesp/Cepid no. 2013/08293-7. Part of the calculations were performed at CCJDR-IFGW-UNICAMP. The authors acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the SDumont supercomputer, which have contributed to the research results reported in this paper. URL: http://sdumont.lncc.br. We thank Alexander Stukowski and Peter Larsen for their assistance with `Ovito`’s grain segmentation algorithm. 
--- abstract: 'It was argued by Mashhoon that a spin-rotation coupling term should add to the Hamiltonian operator in a rotating frame, as compared with the one in an inertial frame. For a Dirac particle, the Hamiltonian and energy operators H and E were recently proved to depend on the tetrad field. We argue that this non-uniqueness of H and E really is a physical problem. We compute the energy operator in the inertial and the rotating frame, using three tetrad fields: one for each of two frameworks proposed to select the tetrad field so as to solve this non-uniqueness problem, and one proposed by Ryder. We find that Mashhoon’s term is there if the tetrad rotates as does the reference frame — but then it is also there in the energy operator for the inertial frame. In fact, the Dirac Hamiltonian operators in two reference frames in relative rotation, but corresponding to the same tetrad field, differ only by the angular momentum term. If the Mashhoon effect is to exist for a Dirac particle, the tetrad field must be selected in a specific way for each reference frame.' author: - | Mayeul Arminjon\ Laboratory “Soils, Solids, Structures, Risks”, 3SR\ (CNRS and Universités de Grenoble: UJF, Grenoble-INP),\ BP 53, F-38041 Grenoble cedex 9, France. title: 'Should there be a spin-rotation coupling for a Dirac particle?' --- Introduction {#Intro} ============ In a reference frame that has a uniform rotation with respect to an inertial frame, the angular momentum ${\bf L}$ of a particle is coupled with the rotation of the frame, in the sense that the Hamiltonian function or operator of the particle differs from its expression in the inertial frame by the term $-{{{\boldsymbol{\omega.}}}} {\bf L}$. (Here, ${{{\boldsymbol{\omega}}}}$ is the constant rotation velocity vector.) 
In the non-relativistic framework (also in the presence of Newtonian gravitation), this is exact for the classical Hamiltonian function as well as for the quantum Hamiltonian operator — when, to define the latter, one considers a scalar particle without spin [@WernerStaudenmannColella1979; @A41]. (In a relativistic framework, for a particle without spin obeying the Klein-Gordon equation, the Hamiltonian operator in a rotating frame may have other terms involving ${\bf L}$, depending on the model metric which is considered [@Kuroiwa-et-al1993; @MorozovaAhmedov2009].) Therefore, if one considers that the spin of a quantum particle is expressing some kind of internal rotation, one may conjecture that also the spin might couple with the rotation of the reference frame. This could even be regarded [@Mashhoon1995] as a natural consequence of the fact that the total angular momentum operator is the sum of the orbital momentum and the spin. Thus, it was argued by Mashhoon [@Mashhoon1988] that a “spin-rotation coupling" term of the form \[Mashhoon term\] $$\delta \mathrm{H}=-\gamma_\mathrm{L}\,{{{\boldsymbol{\omega}}}}{\boldsymbol{.}}\,{\bf S}$$ should add, in a uniformly rotating frame, to a quantum Hamiltonian H of relativistic quantum mechanics. Here, $\gamma_\mathrm{L} $ is the Lorentz factor corresponding to the velocity, with respect to the inertial frame, of the local observer attached with the rotating frame, and ${\mathbf S} \equiv \frac{1}{2}\hbar{{{\boldsymbol{\sigma}}}}$, where ${{{\boldsymbol{\sigma}}}}$ denotes the space “vector" made with the Pauli matrices $\sigma ^j \ (j=1,2,3)$. The form of H was left free by Mashhoon, who obtained the additional term (\[Mashhoon term\]) from an assumption about the transformation of H and the wave function from the inertial frame to the rotating one. 
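In a heuristic sketch (our notation, building only on the remark above that the total angular momentum is the sum of the orbital and spin parts), Mashhoon's proposal amounts to coupling the frame rotation to $\mathbf{J}=\mathbf{L}+\mathbf{S}$ rather than to $\mathbf{L}$ alone; in the non-relativistic limit $\gamma_\mathrm{L}\to 1$:

```latex
% Heuristic decomposition (a sketch, not Mashhoon's actual derivation):
% replacing the classical coupling -omega.L by -omega.J with J = L + S
% yields the orbital term plus the spin-rotation term.
\mathrm{H}_{\mathrm{rotating}}
  = \mathrm{H}_{\mathrm{inertial}} - {\boldsymbol{\omega}}{\boldsymbol{.}}\,\mathbf{J}
  = \mathrm{H}_{\mathrm{inertial}} - {\boldsymbol{\omega}}{\boldsymbol{.}}\,\mathbf{L}
    - {\boldsymbol{\omega}}{\boldsymbol{.}}\,\mathbf{S}.
```

Mashhoon's own argument proceeds differently, via a transformation assumption on H and the wave function, but it reproduces this decomposition in the stated limit.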
Later on, a similar term: \[HehlNi term\] $$\delta \mathrm{H}'=-{{{\boldsymbol{\omega }}}}'{\boldsymbol{.}}\,{\bf S},$$ with ${{{\boldsymbol{\omega }}}}'$ the “proper rotation", was predicted by Hehl & Ni [@HehlNi1990] to occur in the Hamiltonian of a particle obeying specifically the standard form [@BrillWheeler1957+Corr; @ChapmanLeiter1976] of the (generally-)covariant Dirac equation (“Dirac-Fock-Weyl" equation or DFW for short). To write the latter explicitly, one needs to define a coordinate system and an (orthonormal) tetrad field. That prediction was obtained for a general situation in which an observer moves with a proper acceleration and a proper rotation, yet a particular tetrad field was chosen, “which behaves as a rotating Fermi-Walker-transported reference frame" [@HehlNi1990]. A similar prediction was also obtained, again from the DFW equation but in the case of uniform rotation, by Cai & Papini [@CaiPapini1991] who used another “rotating tetrad". In view of these results, and since the Dirac equation is the relevant one to describe spin half particles, the spin-rotation coupling is usually considered as a theoretically well established fact. It seems that it is too small to be experimentally tested yet [@A41], but it has been argued that it may have been indirectly detected [@Mashhoon1995], although the argument is not very straightforward.\ Until recently, the choice of the tetrad field has been assumed to be entirely neutral, because the Lagrangian of the standard covariant Dirac equation is invariant under a change of the tetrad field, hence the DFW equations obtained with any two different tetrad fields are equivalent [@BrillWheeler1957+Corr; @ChapmanLeiter1976]. (This is true in a topologically-simple spacetime [@Isham1978].) However, it has been observed by Ryder [@Ryder2008] that, in the archetypical case of uniform rotation in the Minkowski spacetime, the spin-rotation coupling term may be present or absent, depending on the choice of the tetrad field. 
Even more recently, it has been proved that, in a general reference frame in a general spacetime, the Hamiltonian operator H associated with the covariant Dirac equation is not unique [@A43]. (This is true for the DFW equation as well as for all alternative forms of the covariant Dirac equation considered in Refs. [@A43; @A45], for which the gauge freedom is even larger than for the DFW equation.) In loose terms, the reason for this non-uniqueness is as follows: H is obtained by rewriting the wave equation in a form adapted to a particular reference frame; now, for the covariant Dirac equation, the Dirac $\gamma ^\mu $ matrices and their admissible changes are allowed to depend on the spacetime position; it follows that rewriting the covariant Dirac equation in a form adapted to a particular reference frame does not generally commute with changing the $\gamma ^\mu $ matrices in that equation. It has also been proved [@A43] that the [*energy operator*]{} E is not unique, either. That operator E is equal to the Hermitian part of the Hamiltonian operator H for the relevant scalar product (hence it coincides with H when H is Hermitian) and has the other important property that its mean value is the field energy [@Leclerc2006; @A48]. Thus it is the energy operator E that is relevant to the Mashhoon effect. The spectrum of E, that is the Dirac energy spectrum, is not unique either. Instead, each of H, E, and the spectrum of E, depends on the choice of the tetrad field, or more generally of the field of Dirac matrices [@A43]. Thus, contrary to a widely spread belief, the gauge invariance of the Lagrangian of the DFW equation — i.e., its invariance under any smooth change of the tetrad field — does not guarantee that all physically-relevant objects are also gauge-invariant.\ Contrary to the criticism of Ref. [@GorbatenkoNeznamov2013], that non-uniqueness does not concern merely the form of H and E. 
Indeed, what has been proved [@A43] is the physical inequivalence of the Hamiltonians (and the energy operators) corresponding with different choices of the tetrad field. See Ref. [@A50] for another detailed proof of that point using precisely the concepts of a unitary transformation and of the mean value of an operator, invoked in Ref. [@GorbatenkoNeznamov2013]. Let us now emphasize that this physical non-uniqueness of H and E really is a problem.\ [**i**]{}) The work [@A48], App. A, provides a detailed justification for using first-quantized covariant Dirac theory, that is, [*quantum mechanics of the covariant Dirac equation*]{}, instead of quantum field theory (QFT), in the context of the existing experiments on quantum particles in the gravitational field. In short: curved-spacetime QFT applied to the Dirac field, in its current state, does not allow one to make unambiguous predictions about the COW effect, the Sagnac effect, or the quantization of the energy levels in the gravitational field; all three of these effects have been confirmed by experiments, and they are the only available experiments relative to the gravity-quantum coupling. Nor does the current state of QFT allow one to make predictions regarding the Mashhoon effect — which is foreseen to become measurable, has been widely discussed in the literature, and is the precise subject of this paper.\ [**ii**]{}) In the same work [@A48], the issue of the classical [*energy-momentum tensor*]{} is also discussed in relation with the non-uniqueness problem. It is shown that the [*canonical*]{} energy-momentum tensor $t^\mu _{\ \, \nu }$ is the one for which the “field energy", i.e. the space integral of $t^0 _{\ \, 0 }$, is equal to the mean value $\langle \mathrm{E} \rangle $ of the energy operator $\mathrm{E}$. 
Thus, the canonical tensor $t^\mu _{\ \, \nu }$ is the one that is relevant to quantum mechanics and to these experiments — but $t^\mu _{\ \, \nu }$ is not gauge-invariant: this can be checked directly on its expression, and it results also from the foregoing equality, since $\mathrm{E}$ and $\langle \mathrm{E} \rangle $ are not gauge-invariant. On the other hand, Hilbert’s energy-momentum tensor, say $T^\mu _{\ \, \nu }$, is gauge-invariant, but then the space integral of $T^0 _{\ \, 0 }$ is not equal to the mean value $\langle \mathrm{E} \rangle $ of the energy operator $\mathrm{E}$. Hence, $T^\mu _{\ \, \nu }$ is not relevant to quantum mechanics. Anyway, in the rather vast literature on quantum mechanics of the DFW equation (see e.g. Refs. [@HehlNi1990]–[@ChapmanLeiter1976], [@Leclerc2006]–[@GorbatenkoNeznamov2013], [@Parker1980]–[@HuangParker2009]), the energy-momentum tensor (be it $t^\mu _{\ \, \nu }$ or $T^\mu _{\ \, \nu }$) is rarely even mentioned, except for Refs. [@BrillWheeler1957+Corr; @Leclerc2006]. In any case, to our knowledge, that tensor has never been used in a calculation that has a definite relationship to the outcomes of the existing or foreseen experiments testing the effects of the gravity-quantum coupling, mentioned at point ([**i**]{}) above. [**iii**]{}) The non-uniqueness problem is there already in the case of an inertial frame in the Minkowski spacetime [@A47], and this is also true in the presence of an external electromagnetic field. [^1] The classical discussion of the hydrogen-type atoms, which is based on the quantum-mechanical Hamiltonian/energy operator and its spectrum, therefore cannot be done if one uses the DFW equation with its gauge freedom, instead of using Dirac’s original equation valid only in Cartesian coordinates [@A48]. I.e.: [*the current theory based on the DFW equation cannot determine the energy levels of the hydrogen atom.*]{} See Eq. (\[bar A-explicit\]) below. 
This illustrates in a dramatic way the physical relevance of the non-uniqueness problem.\ [**iv**]{}) Finally, the principle according to which “physical observables are gauge invariant" cannot discard the energy operator, because this is the most important quantum-mechanical observable — as is confirmed by point ([**iii**]{}) above. What this principle tells us in that instance is that [*we have to restrict the gauge freedom:*]{} here the freedom in the choice of the tetrad field.\ That non-uniqueness problem makes it plausible that a spin-rotation coupling term could be unambiguously defined only if the choice of the tetrad field were restricted in some consistent way. Note that the derivations which lead to the presence of a spin-rotation coupling term for a Dirac particle are based on choosing a tetrad that is itself rotating more or less like the reference frame [@HehlNi1990; @CaiPapini1991], as is also the case for Ryder’s first tetrad [@Ryder2008]. By contrast, Ryder’s second tetrad, which does not lead to the presence of this term, is indeed non-rotating in the sense of the Fermi-Walker transport [@Ryder2008].\ Two different frameworks have been proposed [@A48; @A47] to restrict the choice of the tetrad field in such a way that the non-uniqueness problem [@A43] is proved to be solved:\ [**I.**]{} With any orthonormal tetrad field $(u_\alpha )_{\alpha =0,...,3}$ that is “adapted" to a given reference frame (in a sense to be made precise in Sect. \[Prescriptions\]), one may associate a unique rotation rate field ${{{\boldsymbol{\Xi }}}}$, which is a spatial tensor field. A first way to solve the non-uniqueness problem is to fix that spatial tensor field ${{{\boldsymbol{\Xi }}}}$ [@A47]. 
Two natural choices for this fixing are: [*a*]{}) ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega }}}}$, where ${{{\boldsymbol{\Omega }}}}$ is the rotation-rate field of the reference frame itself [@Cattaneo1958; @Weyssenhoff1937; @A47]; and [*b*]{}) ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{0}}}}$. These two choices lead to non-equivalent Hamiltonians, thus represent two different solutions to the non-uniqueness problem.\ [**II.**]{} A third solution is available [@A48] when the spacetime metric ${{{\boldsymbol{g}}}}$ can be put in the following diagonal space-isotropic form: \[isotropic-diagonal\] $$(g_{\mu \nu })=\mathrm{diag}(f,-h,-h,-h), \qquad f>0, \quad h>0,$$ in a suitable coordinate system $(x^\mu )$. That other solution consists in choosing the “diagonal tetrad" in that coordinate system, i.e., \[diagonal-tetrad\] $$u_\alpha \equiv \delta ^\mu _{\ \,\alpha }\, \partial _\mu /\sqrt{|d_\mu |}, \qquad d_0\equiv f, \quad d_1=d_2=d_3\equiv -h.$$ So the two frameworks lead to three different prescriptions for uniqueness.\ The aim of this work was to compare these two frameworks for the cases of both an inertial frame and a uniformly rotating frame in the Minkowski spacetime, with special attention to the presence or absence of the Mashhoon term. Section \[Operators\] will recall the definition and the general form of the Dirac Hamiltonian operator $\mathrm{H}$ in a general spacetime, and Section \[Dependence\] will distinguish between the dependences of $\mathrm{H}$ on the reference frame and on the tetrad field. Section \[Prescriptions\] will give some additional details about the three different prescriptions outlined above. Section \[inertial/rotating\], which contains the main new results of this paper, will apply the foregoing to the target situation. In the Minkowski spacetime, the second framework leads one to select the “Cartesian tetrad" and is very easy to put into practice. As this paper shall confirm, the first framework is much less easy to implement. 
So, instead of calculating exactly the predictions of each among the two variants of the first framework, we shall determine a tetrad field which closely approaches Variant [*a*]{}). We shall also test a rotating tetrad field proposed by Ryder [@Ryder2008]. In each case, i.e., in the two frames and for these three tetrad fields, we shall give the explicit expression of the energy operator. We shall finally find the general expression for the difference between the Hamiltonians corresponding to a given tetrad field, in two frames having a relative rotation. Dirac Hamiltonian operator in a general spacetime {#Operators} ================================================= The standard form of the covariant Dirac equation (the DFW equation) is written in a given coordinate system $(x^\mu )$ defined on the spacetime V (or on an open domain U therein): \[Dirac-normal\] $$\gamma ^\mu D_\mu \Psi =-iM\Psi \qquad (M\equiv mc/\hbar ).$$ In this equation, $\gamma ^\mu $ is the field of the Dirac matrices; $\Psi $ is the column matrix made with the components $\Psi ^a \ (a=0,...,3)$ of the wave function $\psi $; and $D_\mu=\partial _\mu +\Gamma _\mu $ is the covariant derivative, where $\Gamma _\mu \ (\mu =0,...,3)$ are the connection matrices, which are $4\times 4$ complex matrices, just like the Dirac matrices $\gamma ^\mu $. On the other hand, $m$ is the rest-mass of the Dirac particles considered. The Dirac Hamiltonian operator is obtained by rewriting (\[Dirac-normal\]) in the form of the Schrödinger equation and is explicitly [@A42]: \[Hamilton-Dirac-normal\] $$\mathrm{H}= mc^2\alpha ^0 -i\hbar c\,\alpha ^j D_j -i\hbar c\,\Gamma _0,$$ where \[alpha\] $$\alpha ^0 \equiv \gamma ^0/g^{00},\qquad \alpha ^j \equiv \gamma ^0\gamma ^j/g^{00} \quad (j=1,2,3).$$ In contrast with the wave equation (\[Dirac-normal\]), the Hamiltonian operator (\[Hamilton-Dirac-normal\]) changes in a non-covariant way on a general change of the coordinate system. [^2] This is true for any wave equation. 
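As a brief check (a sketch in our notation), the Hamiltonian (\[Hamilton-Dirac-normal\]) follows from the wave equation (\[Dirac-normal\]) by multiplying with $\gamma^0/g^{00}$, using $(\gamma^0)^2=g^{00}\,\mathbf{1}$ from the anticommutation relation, together with $D_0=\partial_0+\Gamma_0$ and $x^0=ct$:

```latex
% Sketch of the rewriting into Schroedinger form:
\gamma^0 D_0 \Psi + \gamma^j D_j \Psi = -iM\Psi
\quad\Longrightarrow\quad
D_0 \Psi = -iM\,\alpha^0 \Psi - \alpha^j D_j \Psi ,
\\[4pt]
i\hbar\,\partial_t \Psi = i\hbar c\,(D_0 - \Gamma_0)\Psi
  = mc^2\,\alpha^0 \Psi - i\hbar c\,\alpha^j D_j \Psi - i\hbar c\,\Gamma_0 \Psi
  \equiv \mathrm{H}\,\Psi ,
% using  hbar c M = m c^2.
```

with the $\alpha$ matrices as defined in (\[alpha\]).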
However, the Hamiltonian operator transforms covariantly on a purely spatial change of the coordinates: \[purely-spatial-change\] $$x'^0=x^0,\qquad x'^j=f^j((x^k)) \quad (j,k=1,2,3).$$ In particular, for the DFW equation, $\Psi$ behaves as a scalar on any coordinate change (Note \[Covariance Psi\]). It follows easily [@A42] that the Dirac Hamiltonian is [*invariant*]{} under a change (\[purely-spatial-change\]).\ In the covariant Dirac equation (\[Dirac-normal\]), as well as in the Hamiltonian operator (\[Hamilton-Dirac-normal\]), the $\gamma ^\mu $ field is determined from the data of an orthonormal tetrad field $(u_\alpha )$. Decomposing the vectors $u_\alpha $ in the natural basis $(\partial _\mu )$: $u_\alpha =a^\mu_{\ \,\alpha}\,\partial_\mu$, one defines \[flat-deformed\] $$\gamma ^\mu = a^\mu_{\ \,\alpha }\ \gamma ^{\sharp \alpha },$$ where $(\gamma ^{ \sharp \alpha})$ is any constant “flat” set of Dirac matrices, i.e., one that is valid for the Minkowski spacetime in Cartesian coordinates [@BrillWheeler1957+Corr; @ChapmanLeiter1976]. The definition (\[flat-deformed\]) implies that the $\gamma ^\mu $ field transforms as a vector on a coordinate change (alone): \[gamma\^mu vector\] $$\gamma '^\mu =L^\mu _{\ \,\nu }\,\gamma ^\nu ,\qquad L^\mu _{\ \,\nu }\equiv \frac{\partial x'^\mu }{\partial x^\nu },$$ which is well known. [^3] On the other hand, the connection matrices $\Gamma _\mu $ transform as a covector when one changes (only) the coordinate system: \[Gamma\_mu covector\] $$\Gamma '_\mu = M^\nu _{\ \,\mu }\ \Gamma _\nu ,\qquad M^\nu _{\ \,\mu }\equiv \frac{\partial x^\nu }{\partial x'^\mu }.$$ Using (\[gamma\^mu vector\]) and (\[Gamma\_mu covector\]), the invariance of the Hamiltonian operator H under a purely spatial change (\[purely-spatial-change\]) is also easy to check directly on the explicit form (\[Hamilton-Dirac-normal\]). We note that Eq. (\[Gamma\_mu covector\])$_1$ relates the matrices $\Gamma _\nu $ and $\Gamma '_\mu$ of any connection (on some vector bundle ${\sf E}$ with base V) when two different frame fields are chosen for the tangent bundle TV, say $(u_\nu )$ and $(u'_\mu)$ with $u'_\mu =M^\nu _{\ \,\mu }u_\nu $, even if these are not coordinate bases.
I.e., (\[Gamma\_mu covector\])$_1$ is true also if “non-holonomic” frame fields are chosen for TV. [^4] This will be useful to us because, for the DFW equation, the expression of the connection matrices (of the spin connection defined on the spinor bundle) is simple if, as the frame field on TV, one chooses precisely the tetrad field $(u_\alpha )$ used in the definition (\[flat-deformed\]). These connection matrices are then [@HehlNi1990; @Ryder2008]: \[Spin connection with tetrad field\] $$\Gamma ^\sharp _\epsilon =\frac{1}{4}\sum _{\alpha <\beta }\gamma _{\alpha \beta \epsilon }\ s^{\alpha \beta }, \qquad s^{\alpha \beta }\equiv [\gamma ^{\sharp \alpha },\gamma ^{\sharp \beta }].$$ Here $\gamma _{\alpha \beta \epsilon }\equiv \eta _{\alpha \zeta}\gamma^\zeta_{\ \beta \epsilon } $, where $\eta _{\alpha \zeta}\equiv \mathrm{diag}(1,-1,-1,-1)$ is the Minkowski “metric” (in Cartesian coordinates) and the $\gamma^\zeta_{\ \beta \epsilon }$ ’s are the coefficients of the Levi-Civita connection on TV. With an orthonormal tetrad field like $(u_\alpha )$, the $\gamma _{\alpha \beta \epsilon }$ ’s can be calculated as [@HehlNi1990; @Ryder2008]: \[gamma\_alpha beta epsilon\] $$\gamma _{\alpha \beta \epsilon }=-\frac{1}{2}\left(C_{\alpha \beta \epsilon }+C_{\beta \epsilon \alpha }-C_{\epsilon \alpha \beta } \right)=-\gamma _{\beta \alpha \epsilon },$$ where $C_{\alpha \beta \epsilon }\equiv \eta _{\alpha \zeta}C^\zeta_{\ \beta \epsilon } =-C_{\alpha \epsilon \beta }$, the $C^\zeta_{\ \beta \epsilon } $ ’s being the coefficients of the decomposition, in the tetrad basis, of the commutators of the same tetrad: \[structure constants\] $$[u_\beta ,u_\epsilon ]=C^\alpha _{\ \,\beta \epsilon }\ u_\alpha .$$ Dependences of the Hamiltonian on the reference frame and on the tetrad field {#Dependence} ============================================================================= #### The relation (\[purely-spatial-change\]) {#ReferenceFrame} between two charts (coordinate systems) is an equivalence relation for charts which are all defined on a given (open) domain U of the spacetime. We call [*reference frame*]{} an equivalence class for this relation.
Thus, if $\chi: X \mapsto (x^\mu )$ is some chart, defined on some domain U, one defines a reference frame by considering the class of $\chi $, that is, the set F of all charts which are defined on U and which exchange with $\chi $ by a purely spatial change (\[purely-spatial-change\]). A physically admissible reference frame is one for which we have $g_{00}>0$ everywhere in U, a condition which is invariant under a change (\[purely-spatial-change\]). The data of a physically admissible reference frame F determines [@A47; @Cattaneo1958] a unique four-velocity vector $v=v_\mathrm{F}$: in any chart belonging to F, its components are given by \[v\_F\^mu\] $$(v_\mathrm{F} )^0\equiv \frac{1}{\sqrt{g_{00}}}, \qquad (v_\mathrm{F} )^j=0.$$ Note that the vector $v_\mathrm{F}$ is indeed invariant under a change (\[purely-spatial-change\]). Equation (\[v\_F\^mu\]) may be rewritten as \[v\_F\] $$v_\mathrm{F}=\partial _0/\sqrt{g_{00}},$$ with $(\partial _\mu )$ the natural basis of any coordinate system belonging to the frame F. This definition of a reference frame formalizes Cattaneo’s idea of a reference fluid as a three-dimensional congruence of time-like world lines. The world lines of the congruence have constant space coordinates, in any chart $\chi $ of F. The vector field $v_\mathrm{F}$ is the normed tangent vector field to these world lines. However, in addition, this definition fixes the time coordinate. This is necessary, because the Hamiltonian operator H does depend on the choice of the time coordinate. See Ref. [@A47] and references therein for more details.\ Thus, the invariance of $\mathrm{H}$ under the changes (\[purely-spatial-change\]) means that H depends on the coordinate system only through the reference frame. [*The dependence of $\mathrm{H}$ on the reference frame is natural*]{} [@A48; @A47; @A42] and does not imply that the choice of the reference frame should be restricted in any way beyond the necessity of considering a physically admissible reference frame, i.e., one such that the world lines of the congruence are time-like.
However, one needs a prescription for choosing the [*tetrad field,*]{} which by Eq. (\[flat-deformed\]) determines the coefficient field $\gamma ^\mu $ in the Dirac equation (\[Dirac-normal\]). To see this, first note that the data of a tetrad field $(u_\alpha )$ is [*more*]{} than the data of a reference frame, because already the time-like vector $u_0$ of the tetrad determines a congruence of world lines — namely, the integral lines of $u_0$ [@A47]. Until recently, the choice of the tetrad field was assumed to be entirely neutral, so it was assumed that one can [*independently*]{} fix the reference frame and choose the tetrad field. Thus, the tetrad field need not be “adapted” in the sense of Eq. (\[u\_0=v\_F\]) below to the (arbitrary) chosen reference frame. However, it was proved [@A43] that, if the choice of the tetrad field $(u_\alpha )$ is left free, the energy spectrum [*in a given reference frame*]{} — or even in a given coordinate system — is not unambiguously defined. This applies already to an inertial frame in a Minkowski spacetime [@A47], and it remains true in the presence of an electromagnetic field, so that even the energy levels of the hydrogen atom would not be defined [@A48]. Concrete examples of the dependence of the operator H (and E, see below) on the tetrad field, for a given reference frame, will be given in this paper. [*That*]{} dependence is [*not*]{} natural. Note for example that, in contrast with the DFW Hamiltonian, the Hamiltonian associated with Dirac’s original equation valid only in Cartesian coordinates in a Minkowski space [*is*]{} fixed once one has chosen an inertial reference frame [@A40].\ The relevant Hilbert-space scalar product was derived uniquely from rather compelling conditions [@A42]. This scalar product involves the hermitizing matrix $A$ [@Pauli1936], which for a general $(\gamma ^\mu )$ field is also a field, $A=A(X)$ [@A42].
However, usually the $(\gamma ^\mu )$ field is deduced from a tetrad field and from a constant set of “flat” Dirac matrices $(\gamma ^{\sharp \alpha })$ as in Eq. (\[flat-deformed\]). Then any hermitizing matrix for the set $(\gamma ^{\sharp \alpha })$ is also a (constant) hermitizing matrix $A$ for the $(\gamma ^\mu )$ field [@A42]. This is the relevant case for the present work. Moreover, in the literature, the set $(\gamma ^{\sharp \alpha })$ is usually chosen such that the hermitizing matrix is simply $A=\gamma ^{\sharp 0}$. In that particular case, the scalar product has the form proposed by Parker [@Parker1980] and by Leclerc [@Leclerc2006]. When the operator H is not Hermitian for the scalar product (which is the general case with a non-stationary metric [@Leclerc2006; @A42; @Parker1980]), one should replace H by its Hermitian part or “energy operator” E. The latter has the physically important property that the “field energy” $E$ associated with the Dirac field obeying Eq. (\[Dirac-normal\]) is equal to the mean value of the energy operator E [@A43; @Leclerc2006; @A48]. However, in the present paper, only time-independent metrics and Dirac matrices $\gamma ^0 $ will occur. Therefore, the hermiticity condition proved in Ref. [@A42]: [^5] \[hermiticity-condition\] $$\partial _0 \left(\sqrt{-g}\ A \gamma ^0 \right) =0,\qquad g\equiv \det (g_{\mu \nu }),$$ is verified, so that [*in the present paper the energy operator coincides with*]{} H. Different prescriptions for uniqueness {#Prescriptions} ====================================== We will now give further details about the two different frameworks [@A47; @A48] which we proposed in order to [*restrict the choice of the tetrad field*]{} consistently and sufficiently, and which were outlined in Section \[Intro\]. The first framework involves consideration of “spatial tensors” (e.g. “spatial vectors”), which can be defined rigorously as tensor fields on the “space manifold” M associated with a given F [@A47]. Here we will use plain language.
As we recalled, the Hamiltonian, as well as the energy operator, depend naturally on the reference frame. This means that, to get a unique Hamiltonian operator, we need first to fix a reference frame. This can be done by considering a given physically admissible coordinate system $(x^\mu )$ defined on some domain U of the spacetime. However, we may replace the coordinate system by another one, provided this is related to the starting one by a change (\[purely-spatial-change\]). Let us summarize successively the two different frameworks. #### Framework I. {#Framework I} The data of a reference frame F fixes its four-velocity field $v_\mathrm{F}$, Eq. (\[v\_F\^mu\]). Now the vector field $u_0$ of an orthonormal tetrad field $(u_\alpha )$ is time-like and normed, hence it is also a four-velocity. To attach the tetrad to the reference frame, one should thus impose the condition [@A47; @MashhoonMuench2002; @MalufFariaUlhoa2007] \[u\_0=v\_F\] $$u_0 = v_\mathrm{F}.$$ Let us call a tetrad field satisfying this condition “adapted” to the considered reference frame F. There are many different tetrad fields which are adapted to a given arbitrary reference frame, since no condition is imposed on the vectors $u_p\ (p=1,2,3)$ beyond the orthonormality of the whole tetrad $(u_\alpha )$. However, the latter condition implies [@A47] that the following tensor is antisymmetric: \[Phi ST\] $$\Phi _{\alpha \beta }\equiv {{{\boldsymbol{g}}}}\left(u_\alpha ,\left(\frac{Du_\beta }{d\xi }\right)_C \right) =- \Phi _{\beta \alpha },$$ where $\left(\frac{Du}{d\xi }\right)_C$ designates the absolute derivative, with respect to the arbitrary parameter $\xi $ along some curve $C$ in the spacetime, of a vector $u=u(\xi )$. Here specifically, for any point $X$ in the domain U, we take $C$ to be that unique world line $x(X)$ of the congruence attached to the reference frame F which passes through $X$: in any chart of F, the spatial coordinates $x^j$ are fixed along $x(X)$ and only the coordinate time $t\equiv x^0/c$ varies; $C$ is parameterized by $t$. We define thus $\Phi _{\alpha \beta }(X)$, for any point $X \in \mathrm{U}$.
One shows [@A47] that \[Phi explicit\] $$\Phi _{\alpha \beta }= c\ \frac{dt}{d\tau }\ \gamma _{\alpha \beta 0},$$ where $\tau $ is the proper time along the world line $x(X)$ and the coefficients $\gamma _{\alpha \beta \epsilon }$ are given by Eq. (\[gamma\_alpha beta epsilon\]). Moreover, one shows that the spatial components $\Phi _{p q }\ (p,q=1,2,3)$ make a spatial tensor ${{{\boldsymbol{\Phi }}}}$ in a precise geometrical sense. This tensor is indeed the opposite of the [*rotation rate of the spatial triad*]{} $({\bf u}_p)$. \[To any four-vector $u$ — here $u_p\ (p=1,2,3)$ — we associate the spatial vector ${\bf u}$ — here ${\bf u}_p$ — whose components are the spatial components $u^j$ of $u$ in a chart belonging to the F considered. This spatial vector is independent of the chart $\chi \in \mathrm{F}$ since, on changing the chart $\chi \in \mathrm{F}$ by (\[purely-spatial-change\]), the components $u^j$ transform correctly.\] The rotation rate of the spatial triad is also a spatial tensor ${{{\boldsymbol{\Xi }}}}$, whose components in the triad basis $({\bf u}_p)$ are thus: \[Xi=-Phi\] $$\Xi _{p q }=-\Phi _{p q }=-c\ \frac{dt}{d\tau }\ \gamma _{p q 0}.$$ It has also been proved that, if two tetrad fields are adapted to the same reference frame F and if the associated spatial triads have the same rotation rate ${{{\boldsymbol{\Xi }}}}$, then the two tetrad fields give rise, in that reference frame $\mathrm{F}$, to physically equivalent Dirac Hamiltonian operators, as well as to physically equivalent Dirac energy operators. Thus the first framework for uniqueness consists, in a given reference frame, in choosing an [*adapted*]{} tetrad field such that, in addition, its rotation rate tensor field ${{{\boldsymbol{\Xi }}}}$ is a predefined field.
Two natural choices are: [*a*]{}) ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega }}}}$, where ${{{\boldsymbol{\Omega }}}}$ is the rotation-rate field of the reference frame F itself [@A47; @Cattaneo1958; @Weyssenhoff1937], whose components in a coordinate system of F are [^6] \[Weyssenhoff modified\] $$\Omega _{jk}\equiv \frac{c\sqrt{g_{00}}}{2}\left(\partial _j g_k-\partial _k g_j-g_j\,\partial _0 g_k+g_k\,\partial _0 g_j\right),\qquad g_j\equiv \frac{g_{0j}}{g_{00}};$$ [*b*]{}) ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{0 }}}}$. As we announced in the Introduction, the two choices [*a*]{}) and [*b*]{}) lead to non-equivalent Hamiltonians, and thus represent two different solutions of the non-uniqueness problem [@A47].\ That first framework is difficult to implement, especially its variant [*a*]{}), which requires calculating the field ${{{\boldsymbol{\Omega }}}}$ and finding a tetrad field such that ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega}}}}$: in practice, this could be done only approximately, by numerical integration of ordinary differential equations of the form $\delta {\bf u}_q /dt = \Omega^p _{\ \,q }\, {\bf u}_p$. {Here $\delta {\bf u}_q /dt $ is the Fermi-Walker derivative of ${\bf u}_q$ [@A47].} Moreover, by imposing the condition (\[u\_0=v\_F\]), we limit from the outset the validity of this kind of solution of the non-uniqueness problem to a given reference frame. #### Framework II. {#Framework II} That framework requires that there be some special coordinate system $(x^\mu )$ in which the metric has the special form (\[isotropic-diagonal\]) [@A48]. As discussed there, this form is general enough for the prospective purpose of testing the generally-covariant Dirac equations in a realistic spacetime metric. [^7] Then one chooses the “diagonal tetrad” in that coordinate system, Eq. (\[diagonal-tetrad\]). This defines the Dirac matrices $\gamma ^\mu $ in that coordinate system, Eq. (\[flat-deformed\]). Then, in any possible coordinate system, say $(x'^\mu )$, the Dirac matrices $\gamma '^\mu $ are obtained by the transformation (\[gamma\^mu vector\]).
As it has been proved in Ref. [@A48]: if one considers another coordinate system in which the metric also has the [*form*]{} (\[isotropic-diagonal\]) (a priori not with the same coefficients), then one passes from the first to the second one by a constant rotation, combined with a constant homothecy. It follows [@A48], first, that the corresponding “diagonal tetrads” (\[diagonal-tetrad\]) exchange by a [*constant*]{} Lorentz transformation, and then, that in any given reference frame, the Hamiltonian operators obtained from these two choices of tetrad fields are equivalent, as well as the energy operators. Thus the non-uniqueness problem is solved simultaneously in any possible reference frame, and in a simple tractable way. Dirac energy operator in an inertial or a rotating frame {#inertial/rotating} ======================================================== Starting with a global inertial reference frame F$'$ in a Minkowski spacetime, defined from a Cartesian system of coordinates $(x'^\mu )=(ct',x',y',z')$, we define the uniformly rotating reference frame F from the rotating coordinates $(x^\mu)=(ct,x,y,z)$ given by \[rotating Cartesian\] $$t=t',\qquad x=x'\cos \omega t + y' \sin \omega t,\qquad y=-x'\sin \omega t + y' \cos \omega t,\qquad z=z',$$ where $\omega $ is a real constant. In these coordinates, the Minkowski metric remains stationary: it becomes \[Minkowski in rotating Cartesian\] $$ds^2=\left(1-\frac{\omega ^2 (x^2+y^2)}{c^2}\right)(dx^0)^2 + \frac{2\omega }{c}\, (y\,dx-x\, dy)\, dx^0-(dx^2+dy^2 +dz^2).$$ The validity of these new coordinates is restricted by the admissibility condition $g_{00}>0$ to the domain $\mathrm{U}$ made of those points in the spacetime for which we have $\ V\equiv \omega \rho <c$, where $\rho \equiv (x^2+y^2)^{1/2}$. Thus, in contrast with the inertial frame F$'$, the rotating frame F is a local reference frame, so that going from F$'$ to F represents some “loss of information”. Indeed the Hamiltonian and energy operators in the frame F act on wave functions $\Psi $ defined on the spatial manifold M associated with F [@A43].
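As a sanity check (ours, not part of the paper), one can pull the Minkowski line element back through the rotating-coordinate transformation (\[rotating Cartesian\]) with sympy and recover the stationary rotating-frame metric; all symbol names below are our own.

```python
import sympy as sp

c, w = sp.symbols('c omega', positive=True)
t, x, y, z = sp.symbols('t x y z', real=True)

# Inertial Cartesian coordinates expressed through the rotating ones
# (the inverse of t = t', x = x' cos(wt) + y' sin(wt), y = -x' sin(wt) + y' cos(wt)):
xp = x*sp.cos(w*t) - y*sp.sin(w*t)
yp = x*sp.sin(w*t) + y*sp.cos(w*t)

coords = (t, x, y, z)
dt, dx, dy, dz = sp.symbols('dt dx dy dz')
diffs = (dt, dx, dy, dz)

def d(f):
    """Formal differential of f in the rotating coordinates."""
    return sum(sp.diff(f, q)*dq for q, dq in zip(coords, diffs))

# Pull back ds^2 = c^2 dt'^2 - dx'^2 - dy'^2 - dz'^2  (t' = t, z' = z):
ds2 = sp.expand(c**2*d(t)**2 - d(xp)**2 - d(yp)**2 - d(z)**2)

# Stationary rotating-frame form of the Minkowski metric:
expected = sp.expand((c**2 - w**2*(x**2 + y**2))*dt**2
                     + 2*w*(y*dx - x*dy)*dt - (dx**2 + dy**2 + dz**2))
print(sp.simplify(ds2 - expected))
```

The printed difference vanishes identically, confirming that the time-dependence of the transformation drops out of the line element.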
The extent of that manifold depends on the domain $\mathrm{U}$ of the coordinates considered [@A44]. That is, the operators H and E act on wave functions $\Psi $ depending on the spatial coordinates $x,y,z$, whose domain of definition is only a subset U of the whole spacetime — specifically, here U is defined by the condition $\omega \rho <c$. However, if the rotating frame follows the rotation of a real astronomical object, this limitation does not have any practical consequence. Even for the extreme case of a neutron star with the highest observed angular velocity $\omega \simeq 10^3/\mathrm{s}$, the limitation only imposes $\rho < 3\times 10^5\,\mathrm{m}$, which is still 30 times the typical radius of a neutron star, $R\simeq 10\, \mathrm{km}$. Clearly, at such distances the wave function of, say, a neutron, can safely be equated to zero. For the Earth, with $\omega \simeq 7\times 10^{-5}/\mathrm{s}$, the limitation is $\rho <5\times 10^{12}\,\mathrm{m}\simeq 30\,\mathrm{au}$.\ Energy operators in the two frames with the Cartesian tetrad ------------------------------------------------------------ In the global Cartesian coordinates $(x'^\mu )$ on the Minkowski spacetime, the metric has of course the space-isotropic diagonal form (\[isotropic-diagonal\]), hence we can apply Framework II. The corresponding diagonal tetrad (\[diagonal-tetrad\]) is just $u'_\alpha \equiv \delta ^\mu _\alpha \partial '_\mu $, that is, the natural basis of the Cartesian coordinate system $(x'^\mu )$, or “Cartesian tetrad”. Clearly, the coefficients $\gamma^\zeta_{\ \beta \epsilon }$ of the Levi-Civita connection are zero with this tetrad field $(u'_\alpha )$, [^8] so the connection matrices (\[Spin connection with tetrad field\]) are $\Gamma ^\sharp _\epsilon =0$ and, by (\[Gamma\_mu covector\]), they remain zero in any coordinates.
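For concreteness, here is the small piece of arithmetic behind these estimates (our rounding; the $\omega $ values are those quoted above):

```python
# Admissibility condition V = omega*rho < c gives rho < c/omega.
c = 2.998e8    # speed of light, m/s
AU = 1.496e11  # astronomical unit, m

rho_ns = c / 1e3       # neutron star, omega ~ 1e3 /s
rho_earth = c / 7e-5   # Earth, omega ~ 7e-5 /s

print(f"neutron star: rho < {rho_ns:.1e} m")
print(f"Earth: rho < {rho_earth:.1e} m = {rho_earth/AU:.0f} au")
```

This reproduces the quoted orders of magnitude: about $3\times 10^5$ m for the neutron star and roughly 30 au for the Earth.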
Also, using the tetrad $(\partial '_\mu )$, the Dirac matrices (\[flat-deformed\]) in the coordinates $(x'^\mu )$ are simply the “flat” matrices, $\gamma '^\mu =\gamma ^{\sharp \mu }$. Hence, when it is used in the inertial frame F$'$ itself, the Cartesian tetrad yields by (\[Hamilton-Dirac-normal\]) just the special-relativistic Hamiltonian, which is Hermitian: \[Hamilton-Dirac-SR\] $$\mathrm{H}'_1 = \mathrm{E}'_1 = mc^2\gamma ^{\sharp 0} -i\hbar c\,\alpha ^{\sharp j}\, \partial '_j,$$ where $\alpha ^{\sharp j} \equiv \gamma ^{\sharp 0}\gamma ^{\sharp j}$. This result is the physically correct one: in an inertial frame, the Hamiltonian operator should indeed be the one predicted by Dirac’s original theory.\ The Hamiltonian H$_1$ in the rotating frame F and corresponding with the Cartesian tetrad involves the Dirac matrices transformed to the rotating coordinates (\[rotating Cartesian\]) by Eq. (\[gamma\^mu vector\]): \[gamma rotating\] $$\gamma ^0 = \gamma ^{\sharp 0},\qquad \gamma ^1=\gamma ^{\sharp 1}\cos \omega t + \gamma ^{\sharp 2}\sin \omega t+\frac{\omega y}{c}\,\gamma ^{\sharp 0},\qquad \gamma ^2 = -\gamma ^{\sharp 1}\sin \omega t +\gamma ^{\sharp 2}\cos \omega t - \frac{\omega x}{c}\,\gamma ^{\sharp 0},\qquad \gamma ^3=\gamma ^{\sharp 3}.$$ From (\[hermiticity-condition\]), (\[Minkowski in rotating Cartesian\]) and (\[gamma rotating\])$_1$, it follows that H$_1$ is Hermitian. Noting that $g^{00}=1$ after the coordinate change (\[rotating Cartesian\]), we then get the $\alpha $ matrices of Eq. (\[alpha\]): \[alpha-Minkowski-tetrad-1\] $$\alpha ^0 = \gamma ^{\sharp 0},\qquad \alpha ^1=\alpha ^{\sharp 1}\cos \omega t + \alpha ^{\sharp 2}\sin \omega t+\frac{\omega y}{c}\,{\bf 1}_4,$$ \[alpha-Minkowski-tetrad-2\] $$\alpha ^2 = -\alpha ^{\sharp 1}\sin \omega t +\alpha ^{\sharp 2}\cos \omega t -\frac{\omega x}{c}\,{\bf 1}_4,\qquad \alpha ^3=\alpha ^{\sharp 3}.$$ We have moreover from (\[rotating Cartesian\]): \[d’\_j fn d\_k\] $$\cos \omega t\ \partial _x-\sin \omega t\ \partial _y=\partial _{x'},\qquad \sin \omega t\ \partial _x + \cos \omega t\ \partial _y=\partial _{y'},\qquad \partial _z=\partial _{z'}.$$ Therefore, the energy operator $\mathrm{E}_1 = \mathrm{H}_1$, Eq. (\[Hamilton-Dirac-normal\]), is explicitly: $$\begin{aligned} \mathrm{E}_1 & = & mc^2\alpha ^0 -i\hbar c\ \alpha ^{ j}\, \partial _j\\ & = & mc^2\gamma ^{\sharp 0} -i\hbar c\,\alpha ^{\sharp j}\,\partial '_j -i\hbar c\ \frac{\omega }{c}\,(y\partial _x-x\partial _y)\,{\bf 1}_4\\ & = & \mathrm{H}'_1 - i\hbar \omega \,(y\partial _x-x\partial _y),\end{aligned}$$ thus \[Hamilton-restricted-gauge\] $$\mathrm{E}_1 = \mathrm{H}'_1 -{{{\boldsymbol{\omega }}}}\cdot{\bf L}. $$ Here, ${\bf L}\equiv {\bf r}\wedge (-i\hbar \nabla )$ is the angular momentum operator.
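A quick numerical check (ours, not from the paper; units with $c=1$) that the transformed matrices (\[gamma rotating\]) do satisfy the Clifford relation $\{\gamma ^\mu ,\gamma ^\nu \}=2g^{\mu \nu }\,{\bf 1}_4$ with the inverse of the rotating-frame metric; the sample point is arbitrary.

```python
import numpy as np

# Standard (Dirac representation) "flat" matrices
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]
gflat = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in sig]

w, x, y, t = 0.3, 0.7, -0.4, 1.1          # arbitrary sample values, c = 1
cwt, swt = np.cos(w*t), np.sin(w*t)

# Transformed Dirac matrices in the rotating coordinates:
gam = [gflat[0],
       cwt*gflat[1] + swt*gflat[2] + w*y*gflat[0],
       -swt*gflat[1] + cwt*gflat[2] - w*x*gflat[0],
       gflat[3]]

# Inverse of the rotating-frame metric: g^00 = 1, g^0j = b_j,
# g^jk = -delta_jk + b_j b_k, with b = (w*y, -w*x, 0).
b = np.array([w*y, -w*x, 0.0])
ginv = np.zeros((4, 4))
ginv[0, 0] = 1.0
ginv[0, 1:] = ginv[1:, 0] = b
ginv[1:, 1:] = -np.eye(3) + np.outer(b, b)

ok = all(np.allclose(gam[m] @ gam[n] + gam[n] @ gam[m],
                     2*ginv[m, n]*np.eye(4))
         for m in range(4) for n in range(4))
print(ok)
```

All sixteen anticommutators match, which is exactly what the vector transformation law guarantees.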
Thus, in the case of a uniformly rotating frame in a Minkowski spacetime \[and arguably in general, see Eq. (\[delta H-relative rotation\]) below\], Framework II does not predict any spin-rotation coupling. Constructing a tetrad adapted to the rotating frame --------------------------------------------------- Let us now try to use Framework I. One rotating orthonormal tetrad that appears naturally in the metric (\[Minkowski in rotating Cartesian\]) is Ryder’s [@Ryder2008] first tetrad: \[Ryder1\] $$u_0=\partial _0+\frac{\omega }{c}\left(y\,\partial _x-x\,\partial _y\right), \qquad u_1=\partial _x, \qquad u_2=\partial _y, \qquad u_3=\partial _z.$$ However, as noted in Ref. [@A47], it results from (\[rotating Cartesian\]) and (\[Ryder1\]) that \[Ryder 1 = Cartesian tetrad-0\] $$u_0 = \partial '_0,$$ \[Ryder 1 = Cartesian tetrad-123\] $$u_1 = \cos \omega t\ \partial '_1+\sin \omega t\ \partial '_2,\qquad u_2=-\sin \omega t\ \partial '_1+\cos \omega t\ \partial '_2, \qquad u_3=\partial '_3,$$ where $(\partial '_\mu )$ is the Cartesian tetrad. Since $v_\mathrm{F}=\partial _0/\sqrt{g_{00}}$, Eq. (\[v\_F\]), Equation (\[Ryder 1 = Cartesian tetrad-0\]) means that Ryder’s tetrad $(u_\alpha )$ is “adapted” in the sense of Eq. (\[u\_0=v\_F\]) to the inertial frame F$'$, not to the rotating frame F. On the other hand, since ${{{\boldsymbol{g}}}}(\partial _\mu ,\partial _\nu )=g_{\mu \nu }$, we see from (\[Minkowski in rotating Cartesian\]) that the natural basis $(\partial _\mu )$ of the rotating coordinates $x^\mu $ given by (\[rotating Cartesian\]) is not orthogonal. But consider, at each $X\in \mathrm{U}$, the hyperplane $\mathrm{H}_X$ in the local tangent space $\mathrm{TV}_X$ to the spacetime V, made of the vectors which are orthogonal to $v_\mathrm{F}(X)$. Define the orthogonal projection $\Pi_X$ onto $\mathrm{H}_X$ [@A47; @JantzenCariniBini1992]. (Note that this operator depends on the reference frame F which is considered, as do $v_\mathrm{F}$ and $\mathrm{H}_X$.) Obviously, if at each $X\in \mathrm{U}$ we thus project the vectors $\partial _j(X)\ (j=1,2,3)$ onto $\mathrm{H}_X$, we get vector fields $\Pi \partial _j$ such that $(\Pi \partial _j)(X)$ is orthogonal to $v_\mathrm{F}(X)$ at any $X\in \mathrm{U}$.
From the definition, one finds the components of $\Pi _X a$ for a vector $a \in \mathrm{TV}_X$, in a coordinate system belonging to F [@A47]. Thus we get \[Pi\_d\_j\] $$(\Pi \partial _j)^0=-g_{0k}\,(\partial _j)^k/g_{00}=-g_{0j}/g_{00},\qquad (\Pi \partial _j)^k=(\partial _j)^k = \delta ^k_j,$$ from which it follows that \[g(Pi d\_j, Pi d\_k)\] $${{{\boldsymbol{g}}}}(\Pi \partial _j,\Pi \partial _k)=g_{jk}- \frac{g_{0j}\,g_{0k}}{g_{00}}\equiv -h_{jk}.$$ Here ${{{\boldsymbol{h}}}}$ is the spatial metric of the reference frame F, such that for any two vectors $a,b$ at $X$ [@A47; @JantzenCariniBini1992]: \[Def h\] $${{{\boldsymbol{h}}}}_X(a,b) \equiv -{{{\boldsymbol{g}}}}_X(\Pi _X a,\Pi _X b) .$$ Note that the definition of $\mathrm{H}_X$ and $\Pi_X$, as well as Eqs. (\[Pi\_d\_j\]) to (\[Def h\]), are valid for a general reference frame in a general spacetime. Coming back to the uniformly rotating frame in a Minkowski spacetime, from (\[Minkowski in rotating Cartesian\]) and (\[g(Pi d\_j, Pi d\_k)\]) we get that ${{{\boldsymbol{g}}}}(\Pi \partial _1,\Pi \partial _3)={{{\boldsymbol{g}}}}(\Pi \partial _2,\Pi \partial _3)=0$ but ${{{\boldsymbol{g}}}}(\Pi \partial _1,\Pi \partial _2) \ne 0$, $(\partial _\mu )$ being again specifically the natural basis associated with the coordinates (\[rotating Cartesian\]). Hence, one may define an orthonormal tetrad adapted to the rotating frame F by taking: $v_\mathrm{F}$, $\Pi \partial _2/\parallel \Pi \partial _2\parallel $, $\Pi \partial _3/\parallel \Pi \partial _3\parallel $, and the vector product of (the spatial vectors associated with) the two last vectors. However, a simpler orthonormal tetrad adapted to F is obtained by considering the natural basis $(\partial ^\circ_\mu )$ of the “rotating cylindrical coordinates” $(x^{\circ \mu })=(ct, \rho , \varphi , z)$, related to the coordinates (\[rotating Cartesian\]) by \[rotating cylindrical\] $$x=\rho \cos \varphi ,\qquad y=\rho \sin \varphi .$$ (It follows from this that $\partial ^\circ_0=\partial _0 $ and $\partial ^\circ_3=\partial ^\circ_z=\partial _3=\partial _z $.)
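The orthogonality statements for the projected vectors $\Pi \partial _j$ can be verified directly. The sympy sketch below (ours) builds the rotating-frame metric in the Cartesian-type coordinates and evaluates the spatial metric from the projection formula; the mixed $\rho $–$\varphi $ directions show up as a nonzero $h_{12}$.

```python
import sympy as sp

c, w, x, y = sp.symbols('c omega x y', positive=True)

# Rotating-frame form of the Minkowski metric in the coordinates (x^0 = ct, x, y, z):
g = sp.Matrix([
    [1 - w**2*(x**2 + y**2)/c**2,  w*y/c, -w*x/c, 0],
    [ w*y/c, -1,  0,  0],
    [-w*x/c,  0, -1,  0],
    [ 0,      0,  0, -1]])

# Spatial metric h_jk = g_0j g_0k / g_00 - g_jk, from the projection of d_j
# onto the hyperplane orthogonal to the frame's four-velocity:
h = sp.Matrix(3, 3, lambda j, k:
              sp.simplify(g[0, j+1]*g[0, k+1]/g[0, 0] - g[j+1, k+1]))

print(sp.simplify(h[0, 1]))   # nonzero: Pi d_1, Pi d_2 not orthogonal
print(h[0, 2], h[1, 2])       # zero: Pi d_3 orthogonal to Pi d_1, Pi d_2
```

The nonzero off-diagonal entry is $-\omega ^2 xy/(c^2 g_{00})$, which is why one passes to cylindrical coordinates to obtain an orthogonal spatial basis.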
In the coordinates $(x^{\circ \mu })$, the Minkowski metric (\[Minkowski in rotating Cartesian\]) rewrites immediately as \[L&L(89,2)\] $$ds^2=g^\circ _{\mu \nu }\,dx^{\circ \mu }dx^{\circ \nu }=\left(c^2-\omega ^2\rho ^2\right) dt^2 - 2\omega \rho ^2\, d\varphi \,dt-(d\rho ^2+\rho ^2 d\varphi ^2+dz^2),$$ from which we find that the spatial metric defined by Eq. (\[g(Pi d\_j, Pi d\_k)\])$_2$ has components \[in the coordinates $(x^{\circ j})=(\rho ,\varphi ,z)$\]: \[h-cylindrical\] $$h_{jk}=\delta _{jk}\quad \mathrm{except\ for}\quad h_{22}=\frac{\rho ^2}{1-\omega ^2\rho ^2/c^2}.$$ Hence, owing to Eq. (\[g(Pi d\_j, Pi d\_k)\])$_1$, we define an orthonormal tetrad adapted to the rotating frame F by taking $v_\mathrm{F}=\partial ^\circ_0/\sqrt{g^\circ_{00}}$ and by norming the $\Pi\partial ^\circ_j $ vectors, which results simply in setting \[u circ-01\] $$u^\circ _0=\gamma _c\,\partial ^\circ _0,\qquad u^\circ _1 = \partial ^\circ _1 =\partial ^\circ _\rho ,$$ \[u circ-23\] $$u^\circ _2 = \frac{\omega \rho \,\gamma _c}{c}\,\partial ^\circ _0+\frac{1}{\rho \gamma _c}\,\partial ^\circ _\varphi ,\qquad u^\circ _3 =\partial ^\circ _3 = \partial ^\circ _z =\partial _z,$$ where $\gamma _c \equiv \left(1-\omega ^2\rho ^2/c^2\right)^{-1/2}=\left(g^\circ _{00}\right)^{-1/2}$. We note that the matrix $a\equiv (a^\mu_{\ \,\alpha})$, such that $u^\circ_\alpha =a^\mu_{\ \,\alpha}\,\partial^\circ_\mu$, is independent of the time coordinate $t$. Hence, so are also the Dirac matrices (\[flat-deformed\]). Thus, from (\[hermiticity-condition\]), the Hamiltonian operator in the rotating frame with the adapted rotating tetrad is Hermitian.\ Let us calculate the rotation rate tensor field ${{{\boldsymbol{\Xi }}}}$ of the tetrad $(u^\circ_\alpha)$, Eq. (\[Xi=-Phi\]). The coefficients of the decomposition (\[structure constants\]) of the commutators of the tetrad (\[u circ-01\])-(\[u circ-23\]) are easily computed to be: $C^\zeta _{\ \,\beta \epsilon}=0$, except for: \[C\^zeta \_beta epsilon-1\] $$C^0 _{\ 0 1}=-C^0 _{\ 1 0}=-\frac{\omega ^2\rho \,\gamma _c^2}{c^2},\qquad C^0 _{\ 1 2}=-C^0 _{\ 2 1}=\frac{2\omega \,\gamma _c^2}{c},$$ \[C\^zeta \_beta epsilon-2\] $$C^2 _{\ 1 2}=-C^2 _{\ 2 1}=-\frac{\gamma _c^2}{\rho }.$$ From this, we deduce immediately the coefficients $C_{\alpha \beta \epsilon }= \eta _{\alpha \zeta}C^\zeta_{\ \beta \epsilon } $, then we get the coefficients $\gamma _{\alpha \beta \epsilon }=-\gamma _{ \beta \alpha \epsilon }$ \[Eq. (\[gamma\_alpha beta epsilon\])\].
They are zero, except for (when $\alpha < \beta$): \[gamma\_alpha beta epsilon-1\] $$\gamma _{0 1 0}=-\frac{\omega ^2\rho \,\gamma _c^2}{c^2},\qquad \gamma _{1 2 2}=\frac{\gamma _c^2}{\rho },$$ \[gamma\_alpha beta epsilon-2\] $$\gamma _{1 2 0}=-\gamma _{0 1 2}=\gamma _{0 2 1}=\frac{\omega \,\gamma _c^2}{c}.$$ Therefore, Eqs. (\[Xi=-Phi\]) and (\[L&L(89,2)\]) give us: $\Xi _{p q}=0$, except for \[Xi rotating frame\] $$\Xi _{21}=-\Xi _{12}=\omega \,\gamma _c^3, \qquad \gamma _c \equiv \left(1-\omega ^2\rho ^2/c^2\right)^{-1/2}.$$ We may compare this with the rotation rate tensor ${{{\boldsymbol{\Omega }}}}$ of the reference frame, defined in general by Eq. (\[Weyssenhoff modified\]). For the rotating frame F, the components $\Omega _{jk}$ of ${{{\boldsymbol{\Omega }}}}$ are easily computed [@A47]: \[Omega uniformly rotating frame\] $$\Omega _{32}=0,\qquad \Omega _{13}=0,\qquad \Omega _{21} = \omega \,\gamma _c^3.$$ These are in fact the components of the spatial tensor ${{{\boldsymbol{\Omega }}}}$ in the natural basis $({{{\boldsymbol{\partial }}}}_j)$ associated with the spatial coordinates $(x^j )$ \[the spatial part of the coordinates (\[rotating Cartesian\])\]. The components $\Omega^\circ_{p q}$ of ${{{\boldsymbol{\Omega }}}}$ in the spatial triad basis $({\bf u}^\circ _p )$ associated with the tetrad basis $(u^\circ _\alpha )$ are obtained from (\[Omega uniformly rotating frame\]) and from the relation between the triad bases $({{{\boldsymbol{\partial }}}}_j)$ and $({\bf u}^\circ _p )$. This relation follows from (\[rotating cylindrical\]) and (\[u circ-01\])–(\[u circ-23\]) and is: $${\bf u}^\circ _1= \cos \varphi \ {{{\boldsymbol{\partial }}}}_1+ \sin \varphi \ {{{\boldsymbol{\partial }}}}_2, \qquad {\bf u}^\circ _2= \left(-\sin \varphi \ {{{\boldsymbol{\partial }}}}_1+ \cos \varphi \ {{{\boldsymbol{\partial }}}}_2 \right)/\gamma _c,\qquad {\bf u}^\circ _3= {{{\boldsymbol{\partial }}}}_3.$$ By standard tensor transformation, we find from this and from (\[Omega uniformly rotating frame\]): \[Omega uniformly rotating frame-2\] $$\Omega ^\circ _{32}=0,\qquad \Omega ^\circ _{13}=0,\qquad \Omega ^\circ _{21} = \Omega _{21}/\gamma _c =\omega \,\gamma _c^2.$$ These differ from (\[Xi rotating frame\]) only by $O(V^2/c^2)$ terms (for $V\equiv \omega \rho \ll c$). Up to this negligible difference, we may thus consider that the adapted rotating tetrad $(u^\circ _\alpha )$ verifies ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega }}}}$, as required by the variant [*a*]{}) of Framework I.
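The commutator coefficients entering this calculation can be cross-checked mechanically. The sympy sketch below (ours, not part of the paper) builds the components $a^\mu_{\ \,\alpha }$ of the adapted tetrad in the rotating cylindrical coordinates, computes the commutators $[u^\circ _\alpha ,u^\circ _\beta ]$, and decomposes them on the tetrad; only the index triples corresponding to $C^0_{\ 01}$, $C^0_{\ 12}$, $C^2_{\ 12}$ (and their antisymmetric partners) survive.

```python
import sympy as sp

c, w, rho = sp.symbols('c omega rho', positive=True)
x0, phi, z = sp.symbols('x0 varphi z', real=True)
coords = (x0, rho, phi, z)
f = sp.sqrt(1 - w**2*rho**2/c**2)   # sqrt(g_00); gamma_c = 1/f

# u_alpha = a[mu, alpha] * d/dx^mu  (columns: u_0, u_1, u_2, u_3)
a = sp.Matrix([
    [1/f, 0, w*rho/(c*f), 0],
    [0,   1, 0,           0],
    [0,   0, f/rho,       0],
    [0,   0, 0,           1]])
ainv = a.inv()

def C(ze, al, be):
    """Coefficient C^ze_{al be} of [u_al, u_be] on the tetrad basis."""
    com = sp.Matrix([sum(a[m, al]*sp.diff(a[n, be], coords[m])
                         - a[m, be]*sp.diff(a[n, al], coords[m])
                         for m in range(4)) for n in range(4)])
    return sp.simplify((ainv.row(ze)*com)[0])

nonzero = {(ze, al, be)
           for ze in range(4) for al in range(4) for be in range(4)
           if C(ze, al, be) != 0}
print(sorted(nonzero))
```

In particular, $C^0_{\ 12}$ evaluates to $2\omega /(c\,(1-\omega ^2\rho ^2/c^2))$, i.e., $2\omega \gamma _c^2/c$.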
Energy operator with the adapted rotating tetrad ------------------------------------------------ Let us thus calculate the Hamiltonian (\[Hamilton-Dirac-normal\]) in the rotating frame F, when choosing the tetrad $(u^\circ_\alpha )$, Eqs. (\[u circ-01\])-(\[u circ-23\]). We begin with the spin connection matrices (\[Spin connection with tetrad field\]) with the tetrad field $(u^\circ_\alpha )$. From Eqs. (\[gamma\_alpha beta epsilon-1\])–(\[gamma\_alpha beta epsilon-2\]), these are: \[Gamma dièse-1\] $$4\Gamma ^\sharp _0=\gamma _{0 1 0}\,s^{01}+\gamma _{1 2 0}\,s^{1 2},\qquad 4\Gamma ^\sharp _1=\gamma _{0 2 1}\,s^{0 2},$$ \[Gamma dièse-2\] $$4\Gamma ^\sharp _2=\gamma _{0 1 2}\,s^{01}+\gamma _{1 2 2}\,s^{1 2},\qquad \Gamma ^\sharp _3=0.$$ To compute the connection matrices $\Gamma _\mu $ when the coordinate basis $(\partial^\circ_\mu)$ is chosen, we use the fact that they transform as a covector \[see Eq. (\[Gamma\_mu covector\]) and thereafter\]. Thus we have $\Gamma _\mu =b^\alpha _{\ \,\mu }\Gamma ^\sharp _\alpha $, where the matrix $b\equiv (b^\alpha _{\ \,\mu })$, such that $\partial^\circ_\mu=b^\alpha _{\ \,\mu }\,u^\circ_\alpha$, is easily obtained from Eqs. (\[u circ-01\])-(\[u circ-23\]): $$b= \begin{pmatrix} \gamma _c^{-1} & 0 & -\omega \rho ^2\gamma _c/c & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & \rho \gamma _c & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}. $$ We thus get from (\[Gamma dièse-1\])–(\[Gamma dièse-2\]), using the standard set of Dirac matrices: [^9] \[Gamma-rotating-tetrad-1\] $$\Gamma _0= -\frac{\omega \,\gamma _c}{2c}\left( i\,\Sigma ^3+\frac{\omega \rho }{c}\,\alpha '^1 \right), \qquad \Gamma _1= \frac{\omega \,\gamma _c^2}{2c}\,\alpha '^2,$$ \[Gamma-rotating-tetrad-2\] $$\Gamma _2= -\frac{\gamma _c}{2}\left(\frac{\omega \rho }{c}\,\alpha '^1 + i\,\Sigma ^3 \right), \qquad \Gamma _3=0,$$ where $$\Sigma ^j \equiv \begin{pmatrix} \sigma ^j & 0\\ 0 & \sigma ^j \end{pmatrix},\qquad \alpha '^j\equiv \begin{pmatrix} 0 & \sigma ^j\\ \sigma ^j & 0 \end{pmatrix}. $$ On the other hand, the $\gamma ^\mu $ matrices are defined by (\[flat-deformed\]). In view of Eqs. (\[u circ-01\])-(\[u circ-23\]), we have: $$\gamma ^0=\gamma _c\left(\gamma ^{\sharp 0}+ \frac{\omega \rho }{c}\,\gamma ^{\sharp 2}\right),\qquad \gamma ^1=\gamma ^{\sharp 1}, \qquad \gamma ^2=\frac{1}{\rho \gamma _c}\,\gamma ^{\sharp 2},\qquad \gamma ^3=\gamma ^{\sharp 3},$$ from which we get the matrices $\alpha ^\mu $ of Eq. (\[alpha\]) \[note that $g^{00}=1$\]: \[alpha-rotating-tetrad-1\] $$\alpha ^0 =\gamma ^0,\qquad \alpha ^2 = \frac{1}{\rho }\left(\alpha ^{\sharp 2}- \frac{\omega \rho }{c}\,{\bf 1}_4\right)=\frac{1}{\rho }\left(\alpha '^{ 2}- \frac{\omega \rho }{c}\,{\bf 1}_4\right),$$ \[alpha-rotating-tetrad-2\] $$\alpha ^j= \gamma _c\left(\alpha ^{\sharp j}+ \frac{\omega \rho }{2c}\, s^{2j}\right)=\gamma _c\left(\alpha '^{ j}- i\,\frac{\omega \rho }{c}\, \epsilon _{2jk}\,\Sigma ^{k}\right) \quad (j=1,3).$$
The energy operator with the adapted rotating tetrad is thus \[Eq. (\[Hamilton-Dirac-normal\])\]: \[Hamilton-Dirac-normal-2\] $$\mathrm{E}_2 =\mathrm{H}_2 = mc^2\alpha ^0 -i\hbar c\,\alpha ^j (\partial ^\circ _j+\Gamma _j) -i\hbar c\,\Gamma _0,$$ where the matrices $\Gamma _\mu $ and $\alpha ^\mu $ are given by Eqs. (\[Gamma-rotating-tetrad-1\])-(\[Gamma-rotating-tetrad-2\]) and (\[alpha-rotating-tetrad-1\])-(\[alpha-rotating-tetrad-2\]). In particular, for $V\equiv \rho \omega \ll c$, we have from (\[Gamma-rotating-tetrad-1\]): \[Spin-rotation-2\] $$-i\hbar c\,\Gamma _0\simeq -\frac{\hbar \omega }{2}\,\Sigma ^3 =-{{{\boldsymbol{\omega }}}}.{\bf S},$$ which is the usual [*“spin-rotation coupling” term*]{} [@Mashhoon1988; @HehlNi1990; @CaiPapini1991; @Ryder2008]. Energy operator in the two frames with Ryder’s rotating tetrad -------------------------------------------------------------- Since Ryder’s [@Ryder2008] first tetrad $(u_\alpha )$, Eq. (\[Ryder1\]) above, is “adapted” in the sense of Eq. (\[u\_0=v\_F\]) to the inertial frame F$'$, it is interesting to compute the energy operator associated in the inertial frame F$'$ with this tetrad. We checked that, as was found by Ryder, the spin connection matrices (\[Spin connection with tetrad field\]) for this tetrad field $(u_\alpha )$ are \[Spin connec - Ryder1\] $$\Gamma ^\sharp _0= -\frac{i\omega }{2c}\,\Sigma ^3, \qquad \Gamma ^\sharp _j=0.$$ The tetrad $(u_\alpha )$ is related to the natural basis $(\partial '_\mu )$ by Eqs. (\[Ryder 1 = Cartesian tetrad-0\])–(\[Ryder 1 = Cartesian tetrad-123\]). We thus transform immediately the connection matrices to the natural basis, getting the same: \[Spin connec - Ryder1 - Cartesian\] $$\Gamma _0= -\frac{i\omega }{2c}\,\Sigma ^3, \qquad \Gamma _j=0.$$ We get also from (\[flat-deformed\]) and (\[Ryder 1 = Cartesian tetrad-0\])–(\[Ryder 1 = Cartesian tetrad-123\]), using then (\[alpha\]): \[alpha - Ryder1 - Cartesian\] $$\alpha ^0=\gamma ^{\sharp 0}, \qquad \alpha ^1=\cos \omega t\ \alpha ^{\sharp 1} -\sin \omega t\ \alpha ^{\sharp 2}, \qquad \alpha ^2=\sin \omega t\ \alpha ^{\sharp 1} + \cos \omega t\ \alpha ^{\sharp 2}, \qquad \alpha ^3=\alpha ^{\sharp 3}.$$ We note that here again $\gamma ^0=\alpha ^0=\gamma ^{\sharp 0}$ is constant, so the Hamiltonian is Hermitian, Eq. (\[hermiticity-condition\]).
From (\[d’\_j fn d\_k\]), (\[Spin connec - Ryder1 - Cartesian\]), and (\[alpha - Ryder1 - Cartesian\]), we find the explicit expression of the energy operator $\mathrm{E}'_3 = \mathrm{H}'_3$, Eq. (\[Hamilton-Dirac-normal\]): ’\_3 & = & mc\^2\^0 -ic \^[ j]{} ’\_j -ic \_0\ & = & mc\^2\^[0]{} -ic\ & & -ic \_0, thus ’\_3 = mc\^2\^[0]{} -ic \^[j]{} \_j -\^3 . \[Hamilton-Ryder1-inertial\] Thus, with Ryder’s tetrad, we find that the DFW energy operator in the [*inertial*]{} frame F$'$ does contain the spin-rotation coupling term $-\frac{\hbar \omega }{2}\Sigma ^3=-{{{\boldsymbol{\omega }}}}.{\bf S }$. This is certainly unexpected physically. Also, by comparing H$'_1$ with H$'_3$ \[Eqs. (\[Hamilton-Dirac-SR\]) and (\[Hamilton-Ryder1-inertial\])\], we have a clear confirmation of the non-uniqueness of the DFW Hamiltonian and energy operator. The energy operators H$'_1$ and H$'_3$, which are related to each other by a simple local similarity transformation $S$, were known in advance to be physically inequivalent [@A47]. They are in fact [*grossly*]{} inequivalent: e.g., the difference in their mean values for corresponding states $\Psi$ and $\widetilde{\Psi}\equiv S^{-1}\Psi$ depends on the state $\Psi$ and contains the [*arbitrary*]{} factor $\omega $. That is, for any state $\Psi=(\Psi ^\alpha )_{\alpha =0,...,3}\,$, for which the energy mean value with H$'_1$ is $\langle \mathrm{H}'_1 \rangle\equiv (\Psi \mid \mathrm{H}'_1\Psi )$, the corresponding energy mean value $\langle \mathrm{H}'_3 \rangle=(\widetilde{\Psi}\,\widetilde{\mid }\, \mathrm{H}'_3 \widetilde{\Psi})$, obtained by using H$'_3$, may differ arbitrarily from $\langle \mathrm{H}'_1 \rangle$ — depending on the arbitrary rotation rate $\omega $ of Ryder’s tetrad: $$\label{bar A-explicit} A\;\equiv\;\langle \mathrm{H}'_3 \rangle - \langle \mathrm{H}'_1 \rangle\;=\;-\frac{\hbar \omega }{2}\int \left(\,\lvert\Psi ^0\rvert^2 +\lvert\Psi ^2\rvert^2-\lvert\Psi ^1\rvert^2-\lvert\Psi ^3\rvert^2\right)\mathrm{d}^3{\bf x}$$ {Eq. (29) in Ref.
[@A50].} Recall that H$'_1$ is the standard Dirac Hamiltonian of special relativity, which, once augmented with the “electromagnetic term" to become H$'_{1\ \mathrm{em}}$, leads to the correct energy levels for the electron in the hydrogen atom. Thus, suppose that $\Psi $ is an eigenstate, with energy $E$, of the special-relativistic Dirac Hamiltonian H$'_{1\ \mathrm{em}}$ for the electron in the hydrogen atom. Let $E'$ be the corresponding energy mean value obtained by using the DFW Hamiltonian H$'_{3\ \mathrm{em}}$ — which is valid in the same inertial frame (the mass center frame) as is H$'_{1\ \mathrm{em}}$, but which uses Ryder’s tetrad instead of the Cartesian tetrad. As shown by Eq. (\[Etilde-Ebreve-em\]), we have $E'-E=A $, where $A$ is given by Eq. (\[bar A-explicit\]): $A$ depends on the eigenstate $\Psi $ and can be arbitrarily large.\ Although Ryder’s tetrad is not “adapted" to the rotating frame F in the sense of Eq. (\[u\_0=v\_F\]), it will turn out to be interesting to have the precise expression of the Hamiltonian and energy operator in that frame F \[in the coordinates $(x^\mu )$, Eq. (\[rotating Cartesian\])\] with this tetrad. That precise expression was not given by Ryder [@Ryder2008], who wrote: “The Dirac equation (4) then, on rearrangement, is found to have a ${{{\boldsymbol{\sigma }}}}.{{{\boldsymbol{\omega }}}}$ ($=\omega \sigma ^3$ here) contribution to the Hamiltonian — a spin-rotation coupling term exactly as predicted by Mashhoon." From the expression (\[Ryder1\]) of that tetrad as a function of the natural basis of the coordinates $(x^\mu )$, and from (\[Spin connec - Ryder1\]), we get once again for the connection matrices \[this time in the coordinates $(x^\mu )$\]: \[Spin connec - Ryder1 - rotating Cartesian\] \_0= - \^3, \_j=0, and we get the $\gamma ^\mu $ matrices (\[flat-deformed\]), \[gamma - Ryder1 - rotating Cartesian\] \^0=\^[0]{},\^1=\^[0]{}+\^[1]{},\^2=-\^[0]{}+\^[2]{},\^3=\^[3]{} , whence for the $\alpha ^\mu $ matrices in Eq.
(\[alpha\]): \[alpha - Ryder1 - rotating Cartesian\] \^0=\^[0]{},\^1= \_4+\^[1]{}, \^2= -\_4+\^[2]{}, \^3=\^[3]{}. Therefore, the energy operator $\mathrm{E}_3 = \mathrm{H}_3$, Eq. (\[Hamilton-Dirac-normal\]), is now: \_3 & = & mc\^2\^0 -ic \^[ j]{} \_j -ic \_0\ & = & mc\^2\^[0]{} -ic ,\ \_3 & = & ’\_3 -[**L**]{}. \[Hamilton-Ryder1-rotating\] Remembering Eq. (\[Spin connec - Ryder1 - rotating Cartesian\]), we see that with Ryder’s (first) tetrad, the energy operator in the rotating frame F has indeed the spin-rotation coupling term $-\frac{\hbar \omega }{2}\Sigma ^3=-{{{\boldsymbol{\omega }}}}.{\bf S }$ — as has the energy operator with this tetrad but in the inertial frame F$'$, Eq. (\[Hamilton-Ryder1-inertial\]). Also, these two energy operators differ from one another only by the [*angular momentum*]{} term — just as we found also with the Cartesian tetrad, Eqs. (\[Hamilton-Dirac-SR\]) and (\[Hamilton-restricted-gauge\]). The general relation between the Hamiltonians in two frames in relative rotation -------------------------------------------------------------------------------- It turns out to be a general fact that the Dirac Hamiltonian operators in two reference frames in relative rotation differ only by the angular momentum term, if they correspond to the same tetrad field. In a general Lorentzian spacetime $(\mathrm{V},{{{\boldsymbol{g}}}})$, consider a general reference frame R$'$, defined by a chart $\chi' : X\mapsto (x'^\mu )=(ct',x',y',z')$, and define another reference frame R by a chart $\chi $ deduced from $\chi '$ by a transformation generalizing (\[rotating Cartesian\]): \[rotating chart\] t=t’, x=x’(t) + y’ (t), y=-x’ (t) + y’ (t), z=z’. So the spatial coordinate vector ${\bf r}\equiv (x,y,z)$, at least, is undergoing a rotation, at a variable rate $\omega \equiv \dot{\phi }\equiv d\phi /dt$, with respect to the space browsed by the coordinates $(x'^j)$. 
This corresponds to a rotation in physical space if $\mathrm{V}$ is endowed with the Minkowski metric ${{{\boldsymbol{\gamma}}}}$ (with possibly ${{{\boldsymbol{\gamma}}}}={{{\boldsymbol{g}}}}$ as a particular case) and the chart $\chi' $ is Cartesian for ${{{\boldsymbol{\gamma}}}}$. The Dirac Hamiltonian (\[Hamilton-Dirac-normal\]) can be rewritten immediately, in the most general case, as \[Hamilton-DFW-space-covariant\] = i + \^0 -i ($\hbar =1=c$). On a general coordinate change, $\gamma ^\mu D_\mu$ is invariant (for a given tetrad field, of course), due to the transformation behaviours of $\gamma ^\mu $, $\Gamma _\mu $, and $\partial _\mu $. On the coordinate change (\[rotating chart\]), $\gamma ^0$ and $g^{00}$ are invariant. Therefore, we have $$\label{Hamilton-change-rotation} \mathrm{H}'-\mathrm{H}\;=\;i\left(\partial _{t'}-\partial _t\right)\;=\;i\,\dot{\phi }\left(y\,\partial _x-x\,\partial _y\right)\;=\;i\,\omega \left(y\,\partial _x-x\,\partial _y\right),$$ that is, $$\label{delta H-relative rotation} \mathrm{H}-\mathrm{H}'\;=\;-{{{\boldsymbol{\omega }}}}.{\bf L}\;,\qquad {\bf L}\equiv {\bf x}\times (-i{{{\boldsymbol{\nabla }}}})\;,$$ where ${{{\boldsymbol{\omega }}}}$ is the angular velocity vector of the rotation (here directed along the $z$ axis, with magnitude $\omega $), as announced. [^10] Conclusion ========== Predictions of the spin-rotation coupling term for a particle obeying the covariant Dirac equation (\[Dirac-normal\]) have been based on a tetrad field which is undergoing more or less the same rotation as the rotating reference frame itself [@HehlNi1990; @CaiPapini1991; @Ryder2008]. As suggested by Hehl & Ni [@HehlNi1990] and by Ryder [@Ryder2008], to make this precise one should use the notion of Fermi-Walker transport, or of the Fermi-Walker derivative. By using the Fermi-Walker derivative one may indeed rigorously define the rotation rate of the spatial triad associated with an orthonormal tetrad, for a general reference frame in a general spacetime [@A47]. This rotation rate is the spatial tensor field ${{{\boldsymbol{\Xi }}}}$ in Eq. (\[Xi=-Phi\]). That definition requires that one consider a tetrad “adapted" to the reference frame in question, i.e., one such that the time-like vector of the tetrad is the four-velocity of the reference frame, Eq. (\[u\_0=v\_F\]).
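For the reader's convenience, the elementary chain-rule computation behind the relation between the two Hamiltonians can be written out explicitly. We assume here the standard finite-rotation form of the transformation (\[rotating chart\]), $x=x'\cos\phi(t)+y'\sin\phi(t)$, $y=-x'\sin\phi(t)+y'\cos\phi(t)$, and use the fact that $\Psi $ behaves as a scalar under the coordinate change (footnote \[Covariance Psi\]):

```latex
\begin{align*}
\frac{\partial x}{\partial t'}\Big|_{x',y'} &= \dot{\phi}\,\bigl(-x'\sin\phi + y'\cos\phi\bigr) = \omega\, y\,,
\qquad
\frac{\partial y}{\partial t'}\Big|_{x',y'} = \dot{\phi}\,\bigl(-x'\cos\phi - y'\sin\phi\bigr) = -\,\omega\, x\,,\\
\partial_{t'} &= \partial_t + \omega\,\bigl(y\,\partial_x - x\,\partial_y\bigr)\,,\\
\mathrm{H}'-\mathrm{H} &= i\,\bigl(\partial_{t'}-\partial_t\bigr)
 = i\,\omega\,\bigl(y\,\partial_x - x\,\partial_y\bigr) = \omega\, L_z\,,
\qquad L_z \equiv -\,i\,\bigl(x\,\partial_y - y\,\partial_x\bigr)\,.
\end{align*}
```

Since the rotation axis is along $z$, this is precisely $\mathrm{H}-\mathrm{H}'=-{{{\boldsymbol{\omega }}}}.{\bf L}$.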
The rotation rate of the reference frame itself should also be precisely defined as a spatial tensor field ${{{\boldsymbol{\Omega }}}}$, and it indeed can be, Eq. (\[Weyssenhoff modified\]).\ For a uniformly rotating frame in the Minkowski spacetime, we succeeded in defining an [*adapted*]{} tetrad field which verifies ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega }}}}$ almost exactly. With this tetrad field, the energy operator in the rotating frame does have the spin-rotation coupling term, Eq. (\[Spin-rotation-2\]). We also wrote explicitly the energy operator with Ryder’s rotating tetrad field [@Ryder2008], which involves this term, too — although Ryder’s tetrad is adapted to the inertial frame, not to the rotating frame.\ However, the three tetrad fields investigated in the present work provide three different Hamiltonians in the inertial frame, as well as three different Hamiltonians in the rotating frame. (In each case, the Hamiltonian coincides with the energy operator.) We emphasized the grave physical inequivalence of the energy operators in the inertial frame corresponding to either the Cartesian tetrad or Ryder’s rotating tetrad. Moreover, those tetrads that provide the spin-rotation coupling term in the energy operator of the rotating frame also provide it in the energy operator of the [*inertial frame*]{}. In fact, we find quite generally that the Hamiltonian operators in two reference frames in relative rotation, but corresponding to the same tetrad field, differ [*only*]{} by the [*angular momentum*]{} term, Eq. (\[delta H-relative rotation\]). Thus, if the Hamiltonian involves spin-rotation coupling in the rotating frame, and if one keeps the same tetrad, then the corresponding Hamiltonian in the inertial frame [*must*]{} also involve spin-rotation coupling, which is certainly unexpected physically.
Therefore, if the spin-rotation coupling is to exist for a Dirac particle, it means that two different tetrad fields must be chosen for two different reference frames. Thus, for each given reference frame, a tetrad field adapted to that reference frame should be chosen. Then, to get the relevant rotation rate in the spin-rotation coupling term, one has to impose that the rotation rate of the triad is indeed that of the reference frame: ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega }}}}$. That is, if the spin-rotation coupling is to exist for a Dirac particle, Variant [*a*]{}) of is the correct scheme to select the tetrad field. As we saw, this is difficult to implement already for the simple case of a uniform rotation in a Minkowski spacetime — not to speak of a general situation.\ One may consider that the choice of a tetrad field should be valid for any reference frame instead. is the only currently available one that ensures this while providing unambiguous Dirac Hamiltonian and energy operators. It assumes that the metric can be put in the form (\[isotropic-diagonal\]) in some chart: preferably a global one of course, in which case, by setting $\gamma _{\mu \nu }\equiv \eta _{\mu \nu }$ in that chart, one endows the spacetime with the Minkowski metric ${{{\boldsymbol{\gamma }}}} $, related simply to the physical metric ${{{\boldsymbol{g}}}}$. In the case of a Minkowski spacetime (${{{\boldsymbol{g}}}}={{{\boldsymbol{\gamma }}}} $), this framework leads to selecting any “Cartesian tetrad". It predicts no spin-rotation coupling. Thus, experiments should decide. [9]{} S. A. Werner, J. L. Staudenmann, and R. Colella, “Effect of Earth’s rotation on the quantum mechanical phase of the neutron," [*Phys. Rev. Lett.*]{} [**42**]{}, 1103–1106 (1979). M. Arminjon, “Main effects of the Earth’s rotation on the stationary states of ultra-cold neutrons," [*Phys. Lett. A*]{} [**372**]{}, 2196–2200 (2008). 
\[See also [arXiv:0708.3204v2 (quant-ph)](http://arxiv.org/abs/0708.3204v2)\] J. Kuroiwa, M. Kasai, and T. Futamase, “A treatment of general relativistic effects in quantum interference," [*Phys. Lett. A*]{} [**182**]{}, 330–334 (1993). V. S. Morozova and B. J. Ahmedov, “Quantum interference effects in slowly rotating NUT space-time," [*Int. J. Mod. Phys. D*]{} [**18**]{}, 107–118 (2009). [\[arXiv:0804.2786v2 (gr-qc)\]](http://arxiv.org/abs/0804.2786v2) B. Mashhoon, “On the coupling of intrinsic spin with the rotation of the earth," [*Phys. Lett. A*]{} [**198**]{}, 9–13 (1995). B. Mashhoon, “Neutron interferometry in a rotating frame of reference," [*Phys. Rev. Lett.*]{} [**61**]{}, 2639–2642 (1988). F. W. Hehl and W. T. Ni, “Inertial effects of a Dirac particle," [*Phys. Rev.  D*]{} [**42**]{}, 2045–2048 (1990). Y. Q. Cai and G. Papini, “Neutrino helicity flip from gravity-spin coupling," [*Phys. Rev. Lett.*]{} [**66**]{}, 1259–1262 (1991). D. R. Brill and J. A. Wheeler, “Interaction of neutrinos and gravitational fields," [*Rev. Modern Phys.*]{} [**29**]{}, 465–479 (1957). Erratum: [*Rev. Modern Phys.*]{} [**33**]{}, 623–624 (1961). T. C. Chapman and D. J. Leiter, “On the generally covariant Dirac equation," [*Am. J. Phys.*]{} [**44**]{}, No. 9, 858–862 (1976). C. J. Isham, “Spinor fields in four dimensional space-time," [*Proc. Roy. Soc. London A*]{} [**364**]{}, 591–599 (1978). L. Ryder, “Spin-rotation coupling and Fermi-Walker transport," [*Gen. Relativ. Gravit.*]{} [**40**]{}, 1111–1115 (2008). M. Arminjon and F. Reifler, “A non-uniqueness problem of the Dirac theory in a curved spacetime," [*Ann. Phys. (Berlin)*]{} [**523**]{}, 531–551 (2011). [\[arXiv:0905.3686 (gr-qc)\]](http://arxiv.org/abs/0905.3686) M. Arminjon and F. Reifler, “Four-vector vs. four-scalar representation of the Dirac wave function," [*Int. J. Geom. Meth. Mod. Phys.*]{} [**9**]{}, No. 4, 1250026 (2012). [\[arXiv:1012.2327v2 (gr-qc)\]](http://arxiv.org/abs/1012.2327v2) M. 
Leclerc, “Hermitian Dirac Hamiltonian in the time-dependent gravitational field," [*Class. Quant. Grav.*]{} [**23**]{}, 4013–4020 (2006). [\[arXiv:gr-qc/0511060v3\]](http://arxiv.org/abs/gr-qc/0511060v3) M. Arminjon, “A simpler solution of the non-uniqueness problem of the Dirac theory," [*Int. J. Geom. Meth. Mod. Phys.*]{} [**10**]{}, No. 7, 1350027 (2013) \[24 pages\]. [\[arXiv:1205.3386v4 (math-ph)\]](http://arxiv.org/abs/1205.3386v4) M. V. Gorbatenko and V. P. Neznamov, “Absence of the non-uniqueness problem of the Dirac theory in a curved spacetime. Spin-rotation coupling is not physically relevant," [arXiv:1301.7599v2 (gr-qc)](http://arxiv.org/abs/1301.7599v2). M. Arminjon, “On the non-uniqueness problem of the covariant Dirac theory and the spin-rotation coupling," [*Int. J. Theor. Phys.*]{} [**52**]{} (2013), DOI 10.1007/s10773-013-1717-x. \[[arXiv:1302.5584v2 (gr-qc)](http://arxiv.org/abs/1302.5584v2)\] L. Parker, “One-electron atom as a probe of spacetime curvature," [*Phys. Rev. D*]{} [**22**]{}, 1922–1934 (1980). X. Huang and L. Parker, “Hermiticity of the Dirac Hamiltonian in curved spacetime," [*Phys. Rev. D*]{} [**79**]{}, 024020 (2009). [\[arXiv:0811.2296 (gr-qc)\]](http://arxiv.org/abs/0811.2296) M. Arminjon, “A solution of the non-uniqueness problem of the Dirac Hamiltonian and energy operators," [*Ann. Phys. (Berlin)*]{} [**523**]{}, 1008–1028 (2011). \[Pre-peer-review version: [arXiv:1107.4556v2 (gr-qc)](http://arxiv.org/abs/1107.4556v2)\]. M. Arminjon and F. Reifler, “Dirac equation: Representation independence and tensor transformation," [*Braz. J. Phys.*]{} [**38**]{}, 248–258 (2008). [\[arXiv:0707.1829 (quant-ph)\]](http://arxiv.org/abs/0707.1829) C. Cattaneo, “General relativity: relative standard mass, momentum, energy and gravitational field in a general system of reference," [*il Nuovo Cimento*]{} [**10**]{}, 318–337 (1958). J. von Weyssenhof, “Metrisches Feld und Gravitationsfeld," [*Bull. Acad. Polon. Sci., Sect. 
A*]{} [**252**]{} (1937). (Quoted by Cattaneo [@Cattaneo1958].) M. Arminjon and F. Reifler, “Basic quantum mechanics for three Dirac equations in a curved spacetime," [*Braz. J. Phys.*]{} [**40**]{}, 242–255 (2010). [\[arXiv:0807.0570 (gr-qc)\]](http://arxiv.org/abs/0807.0570). S. S. Chern, W. H. Chen, and K. S. Lam, [*Lectures on Differential Geometry*]{} (Singapore: World Scientific 1999), pp. 113–121. W. Pauli, “Contributions mathématiques à la théorie des matrices de Dirac," [*Ann. Inst. Henri Poincaré*]{} [**6**]{}, 109–136 (1936). B. Mashhoon and U. Muench, “Length measurement in accelerated systems," [*Ann. Phys. (Berlin)*]{} [**11**]{}, 532–547 (2002). [\[arXiv:gr-qc/0206082v1\]](http://arxiv.org/abs/gr-qc/0206082v1) J. W. Maluf, F. F. Faria, and S. C. Ulhoa, “On reference frames in spacetime and gravitational energy in freely falling frames," [*Class. Quant. Grav.*]{} [**24**]{}, 2743–2754 (2007). [\[arXiv:0704.0986v1 (gr-qc)\]](http://arxiv.org/abs/0704.0986v1) M. Arminjon, “Space isotropy and weak equivalence principle in a scalar theory of gravity," [*Braz. J. Phys.*]{} [**36**]{}, 177–189 (2006). [\[arXiv:gr-qc/0412085\]](http://arxiv.org/abs/gr-qc/0412085) M. Arminjon and F. Reifler, “General reference frames and their associated space manifolds," [*Int. J. Geom. Methods Mod. Phys.*]{} [**8**]{}, No. 1, 155–165 (2011). \[[arXiv:1003.3521v2 (gr-qc)](http://arxiv.org/abs/1003.3521)\] R. T. Jantzen, P. Carini, and D. Bini, “The many faces of gravitoelectromagnetism," [*Ann. Phys. (New York)*]{} [**215**]{}, 1–50 (1992). [\[arXiv:gr-qc/0106043\]](http://arxiv.org/abs/gr-qc/0106043) [^1]: \[em-case\]In that case, the r.h.s. of the “free" Dirac equation (\[Dirac-normal\]) is augmented with the term $-iq\gamma ^\mu V_\mu \Psi $, with $q$ the electric charge and $V_\mu $ the four-potential. 
Thus the “free" Hamiltonian H is replaced by $\mathrm{H}_{\mathrm{em}}=\mathrm{H}+q(V_0\,{\bf 1}_4+V_j\,\alpha ^j)$, where $\alpha ^j\equiv \gamma ^0\gamma ^j/g^{00}$, as is the case [@A40] for Dirac’s original equation. It follows that, after a local similarity transformation $S$, after which H becomes $\widetilde{\mathrm{H}}$, the complete Hamiltonian H$_{\mathrm{em}}$ becomes $\widetilde{\mathrm{H}_{\mathrm{em}}}$, with $\widetilde{\mathrm{H}_{\mathrm{em}}}-S^{-1}\mathrm{H}_{\mathrm{em}}S=\widetilde{\mathrm{H}}-S^{-1}\mathrm{H}S$. We get similarly for the energy operator: $\widetilde{\mathrm{E}_{\mathrm{em}}}-S^{-1}\mathrm{E}_{\mathrm{em}}S=\widetilde{\mathrm{E}}-S^{-1}\mathrm{E}S$, whence for any state $\Psi $ and the corresponding state after application of $S$, $\widetilde{\Psi }\equiv S^{-1}\Psi $ \[noting $(\,\mid \,)$ and $(\,\,\widetilde{\mid} \, \,)$ the scalar products before and after application of $S$\]: \[Etilde-Ebreve-em\] ( )-(\_)=( )-(),  -\_ = - . {We use the fact that $(\widetilde{\Psi }\,\widetilde{\mid} \, S^{-1}\mathrm{E}S \widetilde{\Psi })=(\Psi \mid \mathrm{E}\Psi )$ [@A50].} Hence, the non-uniqueness of the operators H$_{\mathrm{em}}$ and E$_{\mathrm{em}}$ and that of the spectrum of E$_{\mathrm{em}}$ appear in strictly the same way as in the case of the “free" Dirac equation, whether the spacetime is curved or not. [^2]: \[Covariance Psi\]Nevertheless, the covariant Dirac equation being in particular covariant on a coordinate change, the evolutions of $\Psi $ calculated from $i\,\partial _t \Psi =\mathrm{H}\Psi $ in one coordinate system or in another one are equivalent. Specifically, for the DFW equation, $\Psi $ behaves as a scalar on any coordinate change [@BrillWheeler1957+Corr; @ChapmanLeiter1976; @A45], thus we have simply $\Psi'((x'^\nu ))=\Psi((x^\mu ))$ — with the restriction mentioned after Eq. (\[Minkowski in rotating Cartesian\]) below. 
[^3]: There are alternative versions of the covariant Dirac equation in which the wave function is a complex vector field, for which case one may optionally decompose the wave function on the coordinate basis (the natural basis of the coordinate system) [@A45]. Taking this option means that the frame field on the spinor bundle coincides with the coordinate basis. Then $\Psi $ transforms as a vector and $(\gamma ^\mu) $ as a $(2\ 1)$ tensor [@A45]. [^4]: \[Connection matrices\]Let $D$ be a connection on some vector bundle ${\sf E}$ with base V, let $(u_\alpha ) $ be a frame field on TV, and let $(e_a)$ be a frame field on ${\sf E}$. The connection matrices $\Gamma _\alpha $ of $D$ in the frame fields $(u_\alpha) $ and $(e_a)$ are defined by their scalar components $(\Gamma _\alpha)^b_{\ \,a} $, such that \[De\_a\] De\_a(u\_) =(\_)\^b\_[ a]{} e\_b. This leads immediately to (\[Gamma\_mu covector\])$_1$. If the frame field on TV is a local coordinate basis: $u_\alpha =\delta ^\mu _\alpha \partial _\mu $, one may then compute the covariant derivatives $D_\mu \Psi ^b$ of any section of ${\sf E}$, $\psi =\Psi ^b e_b$, in a matrix form: $ D_\mu \Psi =\partial _\mu \Psi + \Gamma _\mu \Psi $. Thus, this notion of a connection matrix [@A45] extends conveniently the usual notion of the matrices of the “spin connection" entering the covariant Dirac equation, to any connection on a general vector bundle. It has a simple relation to the definition of a connection “matrix" as a matrix of one-forms [@ChernChenLam1999], $\omega =(\omega ^b_{\ \,a})$: if $(\theta ^\beta )$ is the dual frame of a frame field $(u_\alpha ) $ on TV, one has $\omega ^b_{\ \,a}= (\Gamma _\alpha)^b_{\ \,a}\, \theta ^\alpha $. The covector transformation of the matrices $\Gamma _\alpha $ on changing $(u_\alpha )$ applies for a given frame field on ${\sf E}$ in (\[De\_a\]), thus it does not apply if ${\sf E}=\mathrm{TV}$ and $e_a=\delta ^\alpha _a u_\alpha $. 
[^5]:   When $A$ is the constant $A=\gamma ^{\sharp 0}$, the hermiticity condition has been derived in the form $(\forall \Psi ,\Phi )\ \int \Psi ^\dagger \gamma ^{\sharp 0} \partial _0 \left(\sqrt{-g}\, \gamma^0 \right) \Phi \, \dd ^3{\bf x}=0$ by Parker [@Parker1980] and by Huang & Parker [@HuangParker2009]. A particular case of the latter integral condition has been derived by Leclerc [@Leclerc2006]. [^6]: \[Omega vs t\]The spatial tensor ${{{\boldsymbol{\Omega }}}}$ depends on the choice of the time coordinate $t$ in a complex manner, whereas, on changing from $t$ to $t'$, ${{{\boldsymbol{\Xi }}}}$ gets simply multiplied by $dt/dt'$. Hence, the equality ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega }}}}$ is not covariant under a change of the time coordinate, so that the prescriptions ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega }}}}$ corresponding to reference frames differing merely in the choice of the time coordinate are not physically equivalent. And indeed, there is a rewriting of the geodesic equation of motion in the form of Newton’s second law, in which the tensor ${{{\boldsymbol{\Omega }}}}$ plays exactly the role played by the angular velocity tensor of a rotating frame in Newtonian theory [@Cattaneo1958] — but in this rewriting ${{{\boldsymbol{\Omega }}}}$ has to be calculated with a time coordinate $\hat{x}^0$ such that, along a world line of the congruence, we have $d\hat{x}^0=c\,d\tau $, where $d\tau $ is the proper time increment. Thus, if one applies the prescription ${{{\boldsymbol{\Xi }}}}={{{\boldsymbol{\Omega}}}}$, one should impose that the time coordinate be such a coordinate $\hat{x}^0$, with $d\hat{x}^0=c\,d\tau $ [@A47]. For the uniformly rotating frame, the tensors ${{{\boldsymbol{\Omega }}}}$ calculated with either $t$ or $\tau $ differ only by $O(V^2/c^2)$ [@A47] ($V$ is defined in Sect. \[inertial/rotating\]), and the same is easy to check for ${{{\boldsymbol{\Xi }}}}$.
[^7]: Moreover, this form is generic for an alternative theory of gravitation [@A35], in the preferred reference frame assumed by that theory. That theory is based only on a scalar field which determines, among other things, the physical metric ${{{\boldsymbol{g}}}}$, from an a priori assumed flat metric, say ${{{\boldsymbol{\gamma }}}}$. Although it thus has two metrics, this is not a metric theory in the standard sense. [^8]: Hence, by (\[Xi=-Phi\]): for that tetrad, in the inertial frame F$'$ to which it is adapted, we have ${{{\boldsymbol{\Xi}}}}={{{\boldsymbol{0}}}}$: [*the Cartesian tetrad solves Variant [*b*]{}) of for the inertial frame.*]{} [^9]: The choice of the set $(\gamma ^{\sharp \alpha })$ does not matter, because corresponding $(\gamma ^\mu )$ fields exchange by constant similarity transformations, hence give rise to equivalent energy operators. With the standard set (Dirac’s), we have $s^{jk}=-2i\epsilon _{jkl}\Sigma ^l$ and $s^{0j}=2\Sigma'^j$. [^10]: In particular, the Hamiltonian in the inertial frame F$'$ and with the adapted tetrad (\[u circ-01\])-(\[u circ-23\]) is: $\mathrm{H}'_2=\mathrm{H}_2+{{{\boldsymbol{\omega .}}}}{\bf L}$, with $\mathrm{H}_2$ given by Eq. (\[Hamilton-Dirac-normal-2\]). This is also the energy operator.
--- abstract: 'Two recent papers proved that complex index pairings can be calculated as the half-signature of a finite dimensional matrix, called the spectral localizer. This paper contains a new proof of this connection for even index pairings based on a spectral flow argument. It also provides a numerical study of the spectral gap and the half-signature of the spectral localizer for a typical two-dimensional disordered topological insulator in the regime of a mobility gap at the Fermi energy. This regime is not covered by the above mathematical results (which suppose a bulk gap), but nevertheless the half-signature of the spectral localizer is a clear indicator of a topological phase.' author: - | Edgar Lozano Viesca$^1$, Jonas Schober$^2$, Hermann Schulz-Baldes$^2$\ \ [$^1$ Instituto de Matemáticas, UNAM, Unidad Cuernavaca, Mexico]{}\ [$^2$ Department Mathematik, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany]{} date: title: 'Chern numbers as half-signature of the spectral localizer' --- Introduction {#sec-Intro} ============ In non-commutative geometry, an index pairing results from pairing a $K$-theory class with a Fredholm module [@Con; @GVF]. In complex $K$-theory and $K$-homology, such pairings can be odd or even. Here the focus is on those pairings which lead to Fredholm operators in the classical sense and hence an integer-valued index. In two recent papers [@LS; @LS2], Loring and one of the authors proved that both odd and even index pairings can be calculated as the half-signature of a certain matrix, called the spectral localizer. In the even case, the normality of the Dirac operators is needed as a supplementary condition. While the initial proofs in [@LS; @LS2] use several $K$-theoretic tools and are quite involved, the paper [@LS3] gave a relatively elementary proof for the case of odd pairings. It is merely based on basic properties of the spectral flow. In the present paper, this is also achieved for the even index pairings. 
As will be discussed below, this provides new insights on the nature of the spectral localizer itself. We furthermore expect these proofs via spectral flow also to transpose to semifinite index pairings which could then be calculated in terms of a Breuer-Fredholm signature. While the spectral localizer is certainly of interest from a purely index-theoretic point of view, its main use may be to make index pairings accessible to numerical computations in situations where other standard tools do not work as well. This was demonstrated already in an early study by Loring [@Lor] and quite impressively in a recent preprint addressing quasicrystals [@Lor2]. In this paper the more standard situation of a two-dimensional disordered integer quantum Hall state is analyzed numerically. Particular focus is on the mobility gap regime in which the Hamiltonian has Anderson localized states at the Fermi level, but the Fermi projection nevertheless has non-trivial topology. The prototypical example is the quantum Hall regime between two Landau bands where the density of states is strictly positive and the Chern number (that is, the Hall conductivity) is non-vanishing. This Chern number is equal to an index pairing by a well-known index theorem which extends into the mobility gap regime, namely both the Chern numbers and the integer-valued index remain well-defined and are equal [@BES; @PS]. At least in the regime of a gapped Hamiltonian, the Chern number can hence be calculated as the half-signature of the spectral localizer due to the result of [@LS2]. It is, of course, interesting to analyze the spectral localizer in the Anderson localized mobility gap regime. To investigate this point numerically, we picked the $p+ip$ wave dirty superconductor in the tight-binding approximation as a toy model. 
Even though the numerical methods are not nearly as sophisticated as in [@Lor2] and the computational power used was quite limited, the first numerical results presented here are encouraging. Indeed, they show that there is a large range of parameters (of the disorder strength) for which the Hamiltonian has (presumably Anderson localized) spectrum at the Fermi level while the spectral localizer nevertheless remains gapped and has a non-vanishing signature. Hence the conclusion from this numerical study is that the spectral localizer continues to detect the non-trivial topology in the mobility gap regime, even though the mathematical theorems of [@LS; @LS2] no longer apply. Let us note that ${{\mathbb Z}}_2$-valued (strong) index pairings also make sense in the mobility gap regime whenever a Real symmetry is present [@Sch; @GS]. It was argued in [@LS] that in these cases one can also calculate the ${{\mathbb Z}}_2$-invariant as the sign of the Pfaffian or determinant of the spectral localizer (depending on the symmetry considered). Such Real symmetries are not studied in the present work. Finally, let us indicate a further potential application of the spectral localizer: it can be used as a local topological marker, allowing one to distinguish spatial regions with different topological invariants in an inhomogeneous material or model. This paper is organized as follows. After the definition of index pairings and the spectral localizer in Sections \[sec-FredMod\] and \[sec-SpecLoc\] respectively, the main known facts about the connection between index pairings and the spectral localizer are presented in Section \[sec-Gap\]. In Section \[sec-Chern\] it is discussed how to use the spectral localizer for the calculation of Chern numbers. The numerical results on the mobility gap regime are then presented and discussed in Section \[sec-p+ip\].
An outline of the new spectral flow proof of the main result on even index pairings is given in Section \[sec-SFargument\], and the details of the proof follow in Section \[sec-Proof\]. Fredholm modules and index pairings {#sec-FredMod} =================================== Let us begin by describing the notations and notions of complex index pairings ([*i.e.*]{} without Real symmetries), by extracting the essentials from the general framework of non-commutative index theory [@Con; @GVF]. Let $A$ be an invertible operator on a separable Hilbert space ${{\cal H}}$. Its phase $U=A|A|^{-1}$ is a unitary operator. Even though this is not of importance in the following, one may consider it as specifying a $K_1$-class (of the $C^*$-algebra generated by $U$ or some larger $C^*$-algebra). An unbounded odd Fredholm module for $A$ is a selfadjoint invertible (Dirac) operator $D$ with compact resolvent such that the commutator $[A,D]$ extends to a bounded operator (more traditionally, one requires commutators for a dense subset in a C$^*$-algebra containing $A$ to have this property). Associated to $D$ is the so-called Hardy projection $\Pi=\chi(D>0)$ where $\chi$ denotes the indicator function. Then it is well-known, [*e.g.*]{} [@Con] or p. 462 in [@GVF], that the commutator $[\Pi,A]$ is compact and the Toeplitz operator $$\label{eq-OddPair} T^{{\mbox{\rm\tiny od}}}\;=\;\Pi \,A\,\Pi+({{\bf 1}}-\Pi) \;,$$ is a bounded Fredholm operator on ${{\cal H}}$. To any Fredholm operator $T$ is associated its index $${{\rm Ind}}(T) \;=\; \dim({{\rm Ker}}(T))\,-\,\dim({{\rm Ker}}(T^*)) \;.$$ (It should be called a Noether index rather than a Fredholm index as Fritz Noether was the first to exhibit a Fredholm operator with non-vanishing index, and Fredholm erroneously believed that all Fredholm operators have vanishing index.) The operator $T^{{\mbox{\rm\tiny od}}}$ and its index are called the odd index pairing of (the $K_1$-class of) $A$ with (the odd Fredholm module specified by) $D$. 
Next let us describe even index pairings. Let $H=H^*$ be an invertible selfadjoint operator on a Hilbert space ${{\cal H}}$. By spectral calculus it has a negative spectral projection $P=\chi(H<0)$ (the so-called Fermi projection of $H$) which may be thought of as fixing a $K_0$-class (of a suitable $C^*$-algebra, but again this is not relevant for the following). An even Fredholm module for $H$ is an invertible, selfadjoint (Dirac) operator $D$ on ${{\cal H}}\oplus{{\cal H}}$ with compact resolvent such that $[D,H \oplus H]$ can be extended to a bounded operator, together with a selfadjoint unitary $\Gamma$ having two infinite-dimensional eigenspaces and satisfying $\Gamma D\Gamma=-D$. In the following, we will always go into the spectral representation of $\Gamma$ so that $$\Gamma \;=\; \begin{pmatrix} {{\bf 1}}& 0 \\ 0 & -{{\bf 1}}\end{pmatrix} \;, \qquad D\;=\;\begin{pmatrix} 0 & D_0^* \\ D_0 & 0\end{pmatrix} \;,$$ where $D_0$ is an invertible, unbounded operator on ${{\cal H}}$. Furthermore, it will always be assumed below that $D_0$ is normal. The operator $F=D_0 |D_0|^{-1}$ is unitary and is called the corresponding Dirac phase. Again it is well-known [@Con; @GVF] that $$\label{eq-EvenPair} T^{{\mbox{\rm\tiny ev}}}\;=\; P F P \,+\, ( {{\bf 1}}-P )$$ is a bounded Fredholm operator which together with its index is called the even index pairing of $H$ with $D$ (or, more conventionally, of the $K_0$-class of $P$ with the even Fredholm module specified by $D$). Spectral localizer {#sec-SpecLoc} ================== In this section, the spectral localizer is introduced, separately for the odd and even cases. It is a selfadjoint operator on ${{\cal H}}\oplus{{\cal H}}$ depending on a tuning parameter $\kappa>0$.
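Before writing out the localizer, let us record an elementary verification in the spectral representation of $\Gamma$ introduced in the previous section, which also shows the role of the normality of $D_0$ (it will be used for the finite-volume restriction):

```latex
\Gamma D \Gamma
= \begin{pmatrix} {\bf 1} & 0 \\ 0 & -{\bf 1} \end{pmatrix}
  \begin{pmatrix} 0 & D_0^* \\ D_0 & 0 \end{pmatrix}
  \begin{pmatrix} {\bf 1} & 0 \\ 0 & -{\bf 1} \end{pmatrix}
= \begin{pmatrix} 0 & -D_0^* \\ -D_0 & 0 \end{pmatrix}
= -D\,,
\qquad
D^2 = \begin{pmatrix} D_0^*\,D_0 & 0 \\ 0 & D_0\,D_0^* \end{pmatrix}.
```

In particular, if $D_0$ is normal, then $D_0^*D_0=D_0D_0^*$, so the two diagonal blocks of $D^2$ coincide and $\chi(|D_0|\leq\rho)=\chi(|D_0^*|\leq\rho)$.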
For an odd pairing, it is given by $$L^{{\mbox{\rm\tiny od}}}_\kappa \;=\; \begin{pmatrix} \kappa\,D & A \\ A^* & -\kappa\,D\end{pmatrix} \;=\; \kappa\,D\otimes\Gamma\,+\, \begin{pmatrix} 0 & A \\ A^* & 0\end{pmatrix} \;,$$ while for an even pairing $$L^{{\mbox{\rm\tiny ev}}}_\kappa \;=\; \begin{pmatrix} -H & \kappa \,D_0^* \\ \kappa\, D_0 & H\end{pmatrix} \;=\; \kappa \,D\,-\, H \otimes \Gamma \;.$$ The tuning parameter can be thought of as the resolution of space, as it allows one to alter the distances between eigenvalues of $D$, which are interpreted as spatial distances. Note that with this interpretation in mind, the fact that the commutators of $A$ and $H$ with $D$ are bounded reflects that $A$ and $H$ are local operators w.r.t. the spatial structure of $D$, that is, their matrix elements decay off the diagonal in the eigenbasis of $D$. In both the odd and even case, the spectral localizer is next restricted to finite volume, again with a notion of space connected to the Dirac operator. Hence, as finite volume one uses the range of the spectral projection of $D$ onto all eigenvalues of modulus at most $\rho>0$. For odd pairings, let $\pi_\rho$ be the surjective partial isometry onto ${{\cal H}}_\rho={{\rm Ran}}(\chi(|D|\leq \rho))$ which by the compactness assumption on the resolvent of $D$ is a finite-dimensional subspace. Then set $({{\cal H}}\oplus{{\cal H}})_\rho={{\cal H}}_\rho\oplus{{\cal H}}_\rho$ and let us identify $\pi_\rho\oplus\pi_\rho$ with $\pi_\rho$ for the sake of notational simplicity. Then for any operator $B$ on ${{\cal H}}$ or ${{\cal H}}\oplus{{\cal H}}$ let $B_\rho=\pi_\rho B\pi_\rho^*$ be the restriction of $B$ to ${{\cal H}}_\rho$ or $({{\cal H}}\oplus{{\cal H}})_\rho$, respectively. In particular, ${{\bf 1}}_\rho=\pi_\rho \pi_\rho^*$ is the identity on ${{\cal H}}_\rho$ or $({{\cal H}}\oplus{{\cal H}})_\rho$.
The finite volume spectral localizer is then the finite-dimensional selfadjoint matrix $$L^{{\mbox{\rm\tiny od}}}_{\kappa,\rho}\;=\;(L^{{\mbox{\rm\tiny od}}}_\kappa)_\rho \;=\; \begin{pmatrix} \kappa\,D_\rho & A_\rho \\ A^*_\rho & -\kappa\,D_\rho\end{pmatrix} \;.$$ In the case of even index pairings, one proceeds in a similar manner, but now ${{\rm Ran}}(\chi(|D|\leq \rho))$ is a subspace $({{\cal H}}\oplus{{\cal H}})_\rho$ of ${{\cal H}}\oplus{{\cal H}}$. As $D^2 = {{\rm diag}}(D_0^*D_0, D_0 D_0^*)$ one has $(\mathcal H \oplus \mathcal H )_\rho = \mathcal H_{\rho,+} \oplus \mathcal H_{\rho,-}$ with $\mathcal H_{\rho,+} = {{\rm Ran}}( \chi(|D_0| \leq \rho))$ and $\mathcal H_{\rho,-} = {{\rm Ran}}( \chi(|D_0^*| \leq \rho))$. As already stressed above, it will be assumed throughout that $D_0$ is normal. Then ${{\cal H}}_{\rho,+}={{\cal H}}_{\rho,-}$, which will again simply be denoted by ${{\cal H}}_\rho$. Then the set-up is exactly as in the case of odd pairings and the spectral localizer at finite volume is given by the same formula as above: $$\label{eq-SLEvDef} L^{{\mbox{\rm\tiny ev}}}_{\kappa, \rho} \;=\; \left( \kappa\, D \,-\, H \otimes \Gamma \right)_\rho \;=\; \begin{pmatrix} -H_\rho & \kappa \,D_{0,\rho}^* \\ \kappa\, D_{0,\rho} & H_\rho \end{pmatrix} \;.$$ Whenever a statement below holds for both $L^{{\mbox{\rm\tiny od}}}_{\kappa, \rho}$ and $ L^{{\mbox{\rm\tiny ev}}}_{\kappa, \rho}$, the upper index is dropped. Before going on with the presentation of results, let us put forward some intuition on the spectral localizer. If $A$ and $H$ vanish (which is, strictly speaking, not allowed), then the spectrum of $L_{\kappa}$ is symmetric around $0$ by construction. Note that the distance of the spectrum to $0$ is of the order of $\kappa$, which can thus be made small by increasing the spatial resolution. Now $A$ and $H$ act like a mass term and open a larger spectral gap of the spectral localizer, by moving low-lying eigenvalues of $L_{\kappa}$ away from $0$.
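To make the finite volume construction concrete, here is a minimal numerical sketch (not taken from the paper) of the odd spectral localizer for the right shift $A$ on $\ell^2({{\mathbb Z}})$, whose winding number is $1$; the position operator is shifted to half-integers so that $D$ is invertible, and the parameter values are one choice satisfying the hypotheses of Theorem \[theo-Gap\] below (here $g=\|A\|=\|[D,A]\|=1$):

```python
import numpy as np

# Finite volume odd spectral localizer L = [[kappa*D, A], [A^*, -kappa*D]]
# for the truncated right shift A (winding number 1) and the position
# operator D, shifted to half-integers so that D is invertible.
n = np.arange(-30, 30)
D = np.diag(n + 0.5)        # spec(D) in [-29.5, 29.5], bounded away from 0
N = len(n)
A = np.eye(N, k=-1)         # truncated right shift: |n> -> |n+1>
kappa = 1 / 12              # with g = 1 this kappa and rho = 29.5 satisfy
                            # the bounds of Theorem [theo-Gap]
L = np.block([[kappa * D, A], [A.conj().T, -kappa * D]])

ev = np.linalg.eigvalsh(L)
half_sig = (np.count_nonzero(ev > 0) - np.count_nonzero(ev < 0)) // 2
# |half_sig| = 1, reflecting the non-trivial index pairing of the shift
```

Replacing $A$ by the identity (winding number $0$) makes the half-signature vanish, since then conjugation by the second Pauli matrix maps the localizer to its negative.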
It is, however, a fact proved later on that this gap opening by adding $A$ or $H$ happens in a non-trivial manner: more eigenvalues may move to the right of $0$ than to the left, or vice versa. This can hence create a spectral asymmetry which turns out to be dictated by the topology captured by the index pairing. Summing up, $A$ and $H$ are like mass terms, albeit topologically non-trivial ones. Now as $A$ and $H$ are local, this spectral asymmetry of $L_\kappa$ should already be captured by its low-lying spectrum, namely it can be read off from the signature of the finite volume restrictions $L_{\kappa,\rho}$. To prove the validity of these heuristics is the object of [@LS; @LS2; @LS3], and also the present paper. Spectral gap and half-signature of spectral localizer {#sec-Gap} ===================================================== \[theo-Gap\] Let $g$ be the invertibility gap of $A$ or $H$, namely $g=\|A^{-1}\|^{-1}$ or $g=\|H^{-1}\|^{-1}$ respectively. Suppose that the tuning parameter satisfies in the respective cases $$\label{eq:kappa} \kappa \;\leq \; \frac{g^3}{12 \left\|A\right\| \left\|\left[D,A\right]\right\|} \;, \qquad \kappa \;\leq \; \frac{g^3}{12 \left\|H\right\| \left\|\left[D,H \oplus H\right]\right\|} \;,$$ and that the radius $\rho$ satisfies $$\label{eq:rho} \rho\;>\;\frac{2g}{\kappa} \;.$$ Then $$\label{eq-LBound} (L_{\kappa, \rho})^2 \;\geq \; \frac{g^2}{4} \,{{\bf 1}}_\rho \;.$$ In particular, and imply that $L_{\kappa, \rho}$ is invertible. The proof of this statement is given in full detail in [@LS2; @LS3] and will not be reproduced here. Let us merely sketch the main idea, focussing on the case of an odd pairing.
One starts out from $$\label{eq-LowBound} (L^{{\mbox{\rm\tiny od}}}_{{\kappa},{\rho}})^2 \;=\; \begin{pmatrix} A_{\rho}A_{\rho}^* & 0 \\ 0 & A_{\rho}^* A_{\rho}\end{pmatrix} \,+\, {\kappa}^2\begin{pmatrix} D^2_{\rho}& 0 \\ 0 & D^2_{\rho}\end{pmatrix} \,+\, {\kappa}\begin{pmatrix} 0 & [D,A]_{\rho}\\ [D,A]^*_{\rho}& 0 \end{pmatrix} \;.$$ The first two summands are positive. The second one is large on large eigenvalues of $D$, but relatively small on the low-lying spectrum of $D$ due to the (small) factor $\kappa^2$. For the latter spatial region, the positivity comes from the positivity of the first summand. Now $(A^*A)_\rho$ is bounded below by $g^2{{\bf 1}}_\rho$, but $A_{\rho}^* A_{\rho}\not= (A^*A)_\rho$. On the other hand, one can use an operator $f_\rho=f_\rho(D)$ constructed from a tapering (smooth) function $f_\rho:[-\rho,\rho]\to[0,1]$ of $D$ which vanishes at $\pm\rho$ and is equal to $1$ on $[-\frac{\rho}{2},\frac{\rho}{2}]$. Then $$A_\rho^*A_\rho \;\geq\; \pi_\rho A^*f_\rho^2A\pi_\rho^* \;=\; f_\rho A^*A f_\rho \,+\, \pi_\rho \big([f_\rho,A]^*f_\rho A+f_\rho A^*[f_\rho ,A]\big)\pi_\rho^* \;.$$ Now the first term on the r.h.s. combined with a similar one for $A_\rho A_\rho^*$ and the second term of leads to a uniform lower bound by $g^2$ on ${{\cal H}}_\rho\oplus{{\cal H}}_\rho$. The commutators are bounded by $\|[f_\rho ,A]\|\leq \|\widehat{f'_\rho}\|_{L^1}\|[D,A]\|$ (see [@BR]) and are dealt with as perturbations, just as the last summand in . On a technical level, these perturbations are then controlled by and it is a matter of patience to combine all of these estimates to obtain , see [@LS2; @LS3]. As long as $L_{\kappa,\rho}$ is invertible, its signature is well-defined. The signature is the finite-dimensional analogue of the $\eta$-invariant. The following result states its stability properties. \[theo-SigConst\] As long as and hold, ${{\rm Sig}}(L_{\kappa,\rho})$ is independent of $\kappa$ and $\rho$.
The proof of Theorem \[theo-SigConst\] is again given in [@LS2; @LS3], but it merely interpolates between different values of $\kappa$ and $\rho$ for which the gap is known to be open by Theorem \[theo-Gap\]. As the gap of $L_{\kappa,\rho}$ remains open during these deformations, the signature clearly cannot change. Now the main result on the spectral localizer can be stated. \[theo-SigInd\] Suppose that and hold. For an even Fredholm module, also suppose that $D_0$ is normal. Then for the index pairings $T$ given by and , one has $${{\rm Ind}}(T) \;=\; \frac{1}{2} \;{{\rm Sig}}\left( L_{\kappa, \rho} \right) \;.$$ As already stated above, a proof of Theorem \[theo-SigInd\] is given in [@LS2; @LS3]; a new proof in the case of even index pairings is outlined in Section \[sec-SFargument\], with the details carried out in Section \[sec-Proof\]. Chern numbers and half-signatures {#sec-Chern} ================================= This section shows how to apply the spectral localizer in a concrete situation which appears in the analysis of solid state systems. While the presentation of the mathematical framework is essentially self-contained, it is kept very brief because the reader can consult the monograph [@PS] for details and further background information. Let us start out with a Hamiltonian $H=H^*$ on a Hilbert space ${{\cal H}}=\ell^2({{\mathbb Z}}^2)\otimes{{\mathbb C}}^L$ over a two-dimensional lattice which is supposed to have a spectral gap at $0$ and to be of short range, namely if $|n\rangle=|n_1,n_2\rangle$ denotes the (${{\mathbb C}}^L$-vector-valued) state localized at $n=(n_1,n_2)\in{{\mathbb Z}}^2$, then $\langle n|H|m\rangle=0$ if $|n-m|>R$ for some $R$ (called the range). Then the so-called Fermi projection $P=\chi(H<0)$ has an index pairing w.r.t.
the even Fredholm module specified by $D_0=X_1+\imath X_2$ where $X_1$ and $X_2$ denote the unbounded, self-adjoint position operators on ${{\cal H}}$ given by $X_j|n_1,n_2\rangle=n_j\,|n_1,n_2\rangle$. At the origin one can modify $D_0$ in order to make it invertible. The boundedness of $[H,D_0]$ is supposed to hold. The index pairing is defined as in with $$F\;=\;\frac{X_1+\imath X_2}{|X_1+\imath X_2|} \;,$$ together with the choice $F|0\rangle=|0\rangle$ at the origin. By a well-known index theorem, this index is connected to a Chern number whenever the latter is defined. For this, it is necessary that $H$ is either periodic or at least given by a covariant family $(H_\omega)_{\omega\in\Omega}$ of short-ranged, gapped Hamiltonians indexed by a parameter taken from a compact probability space $\Omega$ equipped with a ${{\mathbb Z}}^2$-action $\tau$ and an invariant and ergodic probability measure ${{\mathbb P}}$. Covariance means that $U(a)H_\omega U(a)^*=H_{\tau_a\omega}$ for $a\in{{\mathbb Z}}^2$ and $U(a)$ the magnetic translations (see [@PS] for details). Then also the Fermi projections $P_\omega=\chi(H_\omega<0)$ form a covariant family. Each projection $P_\omega$ leads to a Fredholm operator $T_\omega$ by and thus an index ${{\rm Ind}}(T_\omega)$, but it is known that these indices are ${{\mathbb P}}$-almost surely constant. On the other hand, the Chern number is defined as $${{\rm Ch}}(P) \;=\;-\,2\pi\imath\; \int {{\mathbb P}}(d\omega)\;{\mbox{\rm Tr}}\,\langle 0|P_\omega[[X_1,P_\omega],[X_2,P_\omega]]|0\rangle \;,$$ whenever $$\label{eq-LocCond} \sum_{j=1,2}\int {{\mathbb P}}(d\omega)\;{\mbox{\rm Tr}}\,\langle 0|\,|[X_j,P_\omega]|^2\,|0\rangle \;<\;\infty \;.$$ When this condition holds, it is well-known that the Chern number is essentially equal to the Hall conductance. An index theorem (Corollary 6.3.2 in [@PS] or [@BES]) shows that the almost sure index ${{\rm Ind}}(T_\omega)$ is equal to ${{\rm Ch}}(P)$. 
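As a side remark on implementation (an illustrative sketch, not from the paper): since $X_1$ and $X_2$ are simultaneously diagonal in the position basis, both $D_0=X_1+\imath X_2$ and the Dirac phase $F$ are diagonal on any finite lattice patch, and the modification at the origin amounts to overwriting a single diagonal entry. A single orbital per site ($L=1$) is assumed here for simplicity:

```python
import numpy as np

# Dirac phase F = (X1 + i X2)/|X1 + i X2| on a (2*rho+1) x (2*rho+1) lattice
# patch, with one orbital per site for simplicity.
rho = 10
xs = np.arange(-rho, rho + 1)
X1, X2 = np.meshgrid(xs, xs, indexing='ij')
d0 = (X1 + 1j * X2).ravel()   # diagonal of D_0 = X1 + i*X2 in the position basis
d0[d0 == 0] = 1.0             # modification at the origin: F|0> = |0>
F = d0 / np.abs(d0)           # diagonal of the unitary Dirac phase
```

The array `F` holds the diagonal of the unitary, so no matrix inversion or square root is ever needed for this part of the construction.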
The condition is called the dynamical localization condition and is considered here as the mathematical definition of the mobility gap regime [@BES]. If $0$ lies in a spectral gap, it definitely holds. Combined with Theorem \[theo-SigInd\] one therefore obtains \[coro-Chern\] Let $H=(H_\omega)_{\omega\in\Omega}$ be a covariant family of short range Hamiltonians on the Hilbert space ${{\cal H}}=\ell^2({{\mathbb Z}}^2)\otimes{{\mathbb C}}^L$ for which $0$ does not lie in the spectrum. Let the spectral localizer be defined by with $D_0=X_1+\imath X_2$. Then the Chern number of the Fermi projection $P=\chi(H<0)$ is given by $${{\rm Ch}}(P)\;=\; \frac{1}{2} \;{{\rm Sig}}\left( L_{\kappa, \rho} \right) \;,$$ provided that $\kappa$ and $\rho$ are chosen such that and hold. Let us note a few remarkable facts about this result with a particular focus on numerical implementation. First of all, the construction of the matrix $L_{\kappa, \rho}$ does not involve any spectral calculus of $H$ (in contrast, calculating the Fermi projection requires a diagonalization of $H$). One merely needs the Hamiltonian in the natural basis of ${{\cal H}}=\ell^2({{\mathbb Z}}^2)\otimes{{\mathbb C}}^L$. Moreover, the finite volume restriction can then be done either on a discrete square box $[-\rho,\rho]^2\cap{{\mathbb Z}}^2$ or a sphere $\{(n_1,n_2)\in{{\mathbb Z}}^2\,:\,n_1^2+n_2^2\leq \rho^2\}$. The latter appears in Theorem \[theo-SigInd\] and Corollary \[coro-Chern\], but it is straightforward to check that the different geometry does not alter the result [@LS2]. A second important point is that it is not necessary to carry out a full spectral analysis of the spectral localizer either. Merely the signature is needed, which can be calculated very efficiently by a block Cholesky decomposition. Finally, let us note that in typical situations the value of $\rho$ does not have to be very large, so that only relatively small matrices have to be dealt with.
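The remark on the block Cholesky decomposition can be made concrete as follows (a sketch using SciPy; the computations reported below were done with the authors' own Octave code). By Sylvester's law of inertia, the signature of a selfadjoint matrix can be read off an $LDL^T$ (Bunch-Kaufman) factorization: the factor $D$ consists of $1\times 1$ and $2\times 2$ pivot blocks, and each $2\times 2$ pivot of the Bunch-Kaufman pivoting is indefinite, contributing one positive and one negative eigenvalue.

```python
import numpy as np
from scipy.linalg import ldl

def half_signature(L, tol=1e-12):
    """Half-signature of a selfadjoint matrix via LDL^T, without diagonalization.

    By Sylvester's law of inertia, Sig(L) equals the signature of the
    block-diagonal factor d; each 2x2 Bunch-Kaufman pivot block is indefinite,
    so it contributes one positive and one negative eigenvalue.
    """
    _, d, _ = ldl(L)
    n, pos, neg, i = d.shape[0], 0, 0, 0
    while i < n:
        if i + 1 < n and abs(d[i + 1, i]) > tol:   # 2x2 pivot block
            pos, neg, i = pos + 1, neg + 1, i + 2
        else:                                      # 1x1 pivot block
            if d[i, i] > 0:
                pos += 1
            elif d[i, i] < 0:
                neg += 1
            i += 1
    return (pos - neg) // 2
```

For an invertible input, a factorization costs a fraction of a full diagonalization, which is what makes large values of $\rho$ tractable.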
All of this is illustrated in Section \[sec-p+ip\] below, where the spectral localizer is also analyzed numerically in the mobility gap regime. Theorem \[theo-SigInd\] can readily be applied to other strong invariants appearing in the analysis of topological insulators, so that Corollary \[coro-Chern\] should be considered as the two-dimensional case. In dimension one and three (and more generally any odd dimension), one considers chiral Hamiltonians, which then have a Fermi unitary with integer-valued (higher) winding numbers. These can again be computed by the spectral localizer, see the discussion in [@LS]. Higher Chern numbers are also of relevance. For example, a periodically time-driven three-dimensional system can have a non-vanishing second Chern number which is then the non-linear response coefficient for the magneto-electric effect [@PS]. Again this integer can be calculated as the half-signature of a spectral localizer. ![Half-signature as well as average and minimum gap sizes of $L_{\kappa,\rho}$ and $H(\lambda)$ (over $100$ samples) as function of $\lambda$. The system size is $\rho=30$ and other parameters as stated.[]{data-label="fig-HSlambda"}](Mixta30.pdf){width="15cm"} Numerical results for a dirty $p+ i p$ superconductor {#sec-p+ip} ===================================================== ![Spectrum of $H(\lambda)$ with periodic boundary condition for one realization of the disorder and various values of $\lambda$.[]{data-label="fig-SpecH"}](EspectroRho30Hp.pdf){width="15cm"} A mean field description of a superconductor leads to a Bogoliubov-de Gennes (BdG) Hamiltonian on a particle-hole Hilbert space. For the study of the low-energy behavior it is sufficient to study a tight-binding BdG Hamiltonian. A well-known topological model of this type is obtained by the $p+ i p$ wave interaction (see [@DDS] for references to the physics literature on this model).
A periodic (clean) system of this type is described by the BdG Hamiltonian on $\ell^2({{\mathbb Z}}^2,{{\mathbb C}}^2)$ of the form $$H(0) \;=\; \begin{pmatrix} S_1+S_1^*+S_2+S_2^* -\mu & \delta\big(S_1-S_1^*+\imath(S_2-S_2^*)\big) \\ \delta\big(S_1-S_1^*+\imath(S_2-S_2^*)\big)^* & -(S_1+S_1^*+S_2+S_2^* - \mu) \end{pmatrix} \;.$$ Here $S_1$ and $S_2$ are the shifts on the lattice given by $$S_1|n_1,n_2\rangle\;=\;|n_1+1,n_2\rangle \;, \qquad S_2|n_1,n_2\rangle\;=\;|n_1,n_2+1\rangle \;.$$ The parameter $\mu\in{{\mathbb R}}$ is the chemical potential and $\delta\in{{\mathbb R}}$ is the strength of the $p+ip$ pairing potential. The system becomes a “dirty” superconductor by adding a random potential of the type $$V_\omega \;=\; \sum_{n\in{{\mathbb Z}}^2}v_n\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \;|n\rangle\langle n| \;.$$ Here each realization $\omega=(v_n)_{n\in{{\mathbb Z}}^2}$ is a point in the compact Tychonov space $\Omega=[-\frac{1}{2},\frac{1}{2}]^{{{\mathbb Z}}^2}$. Each $v_n$ is drawn independently and identically with a uniform distribution from the interval $[-\frac{1}{2},\frac{1}{2}]$. The product measure ${{\mathbb P}}$ on $\Omega$ is then invariant and ergodic w.r.t. the natural shift action of ${{\mathbb Z}}^2$ on $\Omega$. The random BdG Hamiltonian with coupling constant $\lambda\geq 0$ is now: $$\label{eq-HamPIP} H_\omega(\lambda) \;\;=\;\; H(0)\,+\,\lambda\,V_\omega \;.$$ Note that it still has the particle-hole symmetry $\sigma_1 \overline{H(\lambda)}\sigma_1=-H(\lambda)$ w.r.t. the first Pauli matrix $\sigma_1=\binom{0\;1}{1\;0}$, but this is not crucial for the following. However, it does lead to a symmetry in the spectrum of $H(\lambda)$ that can also be observed in Figure \[fig-SpecH\] (the symmetry is broken for the spectral localizer). This completes the description of the model. It fits into the set-up of Section \[sec-Chern\].
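To illustrate the model, a minimal numpy sketch (not the authors' Octave code) assembles $H_\omega(\lambda)$ from periodic shift matrices on an $L\times L$ torus; the function names and the small lattice size are choices made here for illustration, and the block ordering puts all particle components first.

```python
import numpy as np

def cyclic_shift(L):
    """Periodic shift S|n> = |n+1 mod L> as an L x L matrix."""
    S = np.zeros((L, L))
    S[np.arange(L), (np.arange(L) - 1) % L] = 1.0
    return S

def bdg_hamiltonian(L, mu, delta, lam=0.0, rng=None):
    """Clean p+ip BdG Hamiltonian H(0) on an L x L torus, plus disorder lam*V."""
    S1 = np.kron(cyclic_shift(L), np.eye(L))
    S2 = np.kron(np.eye(L), cyclic_shift(L))
    h = S1 + S1.T + S2 + S2.T - mu * np.eye(L * L)
    d = delta * (S1 - S1.T + 1j * (S2 - S2.T))
    H = np.block([[h, d], [d.conj().T, -h]])
    if lam > 0.0:
        v = (rng or np.random.default_rng()).uniform(-0.5, 0.5, L * L)
        Z = np.zeros((L * L, L * L))
        H = H + lam * np.block([[np.diag(v), Z], [Z, -np.diag(v)]])
    return H

# clean system at the parameter point used below: symmetric spectrum, open gap
ev = np.linalg.eigvalsh(bdg_hamiltonian(10, mu=0.25, delta=-0.35))
```

The particle-hole symmetry forces the eigenvalues to come in pairs $\pm E$, and at this parameter point the central gap of the clean Hamiltonian is open, as used in the discussion that follows.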
In particular, the Chern number ${{\rm Ch}}(P)$ is a well-defined integer under the mobility gap assumption (which is rigorously known to hold only at the band edges at weak disorder [@DDS]). Whenever the central gap is open, Corollary \[coro-Chern\] allows one to compute ${{\rm Ch}}(P)$ as the half-signature of the localizer. Before describing the numerical results, let us briefly comment on the methods. The random Hamiltonian and the associated spectral localizer were generated by a standard random number generator. Of importance is that one uses Dirichlet boundary conditions for the spectral localizer (and the Hamiltonian therein), but the spectra of the finite volume approximations of the Hamiltonian are calculated with periodic boundary conditions. If one uses Dirichlet boundary conditions for the latter, this produces edge states which always close the (bulk) gap and are not the focus of the present study. We simply used Octave code to diagonalize the Hamiltonian and the spectral localizer. The code was run on a Supermicro with two Xeon E5-2630 V2 processors at 2.60 GHz and with 128 GB of RAM. Figure \[fig-HSlambda\] needed a few days of CPU time, the others less than an hour. ![Example of the spectrum of the spectral localizer for one realization.[]{data-label="fig-SLspec"}](EspectroRho30SL.pdf){width="15cm"} Let us start by describing the clean Hamiltonian at $\lambda=0$. Its spectrum and Chern numbers can be calculated analytically [@DDS]. The Hamiltonian $H(0)$ has a central gap around $0$ except for $(\mu,\delta)$ lying on the coordinate axes. In the four quadrants, the Chern numbers are $1$ (for $\mu>0$) and $-1$ (for $\mu<0$). We choose a point $(\delta,\mu)=(-0.35,0.25)$ which is well inside the topologically non-trivial phase, but for which the gap of the Hamiltonian $H(0)$ is not too large, namely roughly equal to $0.27$. With these parameters fixed in the following, let us now add the random potential by varying $\lambda$.
The results are shown in Figure \[fig-HSlambda\]. First of all, one notes that the half-signature is constant for values $\lambda\leq 2.75$. However, one is in the regime of Corollary \[coro-Chern\] with an open bulk gap only for $\lambda<2.0$, because Figure \[fig-HSlambda\] shows that for $\lambda\geq 2.0$ there are already realizations with a closed gap of $H(\lambda)$. See also Figure \[fig-SpecH\] for an illustration of this fact, where at least at $\lambda=2.75$ there are eigenvalues very close to $0$. The most remarkable regime in Figure \[fig-HSlambda\] is for values of $\lambda\in[2.5,2.75]$. Here the Hamiltonian is not gapped, but expected to be in the mobility gap regime. The half-signature is nevertheless deterministically equal to the non-trivial value $1$. Figure \[fig-SLspec\] shows the spectrum of the spectral localizer for one particular realization for various values of $\lambda$ (the same realization as in Figure \[fig-SpecH\]). One can clearly see that the central gap of the spectral localizer is open for $\lambda\leq 3$. As $\lambda$ increases further, the averaged half-signature decreases to the topologically trivial value $0$, and this also corresponds to a closing gap for the particular realization in Figure \[fig-SLspec\]. In conclusion, we believe that these numerical results strongly support the use of the spectral localizer in the mobility gap regime. Clearly further numerical and analytical analysis is needed to gain a better understanding of the spectral localizer in this physically interesting regime. Spectral flow argument for even index pairings {#sec-SFargument} ============================================== As already advertised above, the main mathematical novelty of this paper is a new proof of Theorem \[theo-SigInd\] for the case of even pairings. In this section, we will therefore outline the strategy of the argument, deferring detailed proofs of the technical lemmata to Section \[sec-Proof\].
Of course, some familiarity with the spectral flow is necessary and the main facts needed here are collected in Appendix \[app-SFReview\] for the convenience of the reader. Crucial for the understanding of the following is that ${{\rm Sf}}(T_0,T_1)$ is the spectral flow along the straight-line path $T_t=(1-t)T_0+tT_1$ connecting two selfadjoint Fredholm operators within the set of Fredholm operators. Further properties of the spectral flow used in the following are the homotopy invariance under homotopies keeping the end points fixed, the invariance under unitary transformations, as well as the concatenation and additivity properties. Finally, the spectral flow is connected to the index of an index pairing $T=PFP+({{\bf 1}}-P)$ by a theorem of Phillips [@Phi1]: $${{\rm Ind}}(T)\;=\;{{\rm Sf}}\left(F({{\bf 1}}-2P )F^*,{{\bf 1}}- 2P \right) \;.$$ As in [@LS3], this is the starting point of the argument. Due to the gap of $H$, one can next deform ${{\bf 1}}-2P$ into $H$: $${{\rm Ind}}(T)\;=\; {{\rm Sf}}( FHF^*,H ) \;.$$ Furthermore, one can use the additivity of the spectral flow as well as the definition of $\Gamma={{\rm diag}}({{\bf 1}},-{{\bf 1}})$ to deduce $$\begin{aligned} {{\rm Ind}}(T) &\;=\; {{\rm Sf}}\left( \begin{pmatrix} {{\bf 1}}& 0 \\ 0 & F\end{pmatrix}\begin{pmatrix} -H & 0 \\ 0 & H\end{pmatrix}\begin{pmatrix} {{\bf 1}}& 0 \\ 0 & F^*\end{pmatrix},\begin{pmatrix} -H & 0 \\ 0 & H\end{pmatrix} \right) \\ &\;=\;{{\rm Sf}}\left(\begin{pmatrix} {{\bf 1}}& 0 \\ 0 & F\end{pmatrix} \left(-H \otimes \Gamma\right) \begin{pmatrix} {{\bf 1}}& 0 \\ 0 & F^*\end{pmatrix}, -H \otimes \Gamma\right) \;.\end{aligned}$$ Now the following lemma will allow one to replace the second argument $-H\otimes \Gamma$ by $L_\kappa$. \[lem-SFSLH\] For $\kappa$ sufficiently small, $${{\rm Sf}}(-H\otimes \Gamma,L_\kappa)\;=\;0 \;.$$ [**Proof.**]{} As already stated, most proofs are deferred to Section \[sec-Proof\], but this one is so short and essential that we give it right away.
Indeed, one merely checks that the gap does not close along the straight-line path $T_t=-H\otimes\Gamma +t \kappa D$ connecting $-H\otimes \Gamma$ and $L_\kappa$. This follows from $$(T_t)^2 \;=\; \begin{pmatrix} H^2+(t \kappa)^2 |D_0|^2 & t \kappa [H,D_0]^* \\ t \kappa [H,D_0] & H^2+(t \kappa)^2 |D_0|^2\end{pmatrix} \;\geq \; \left(g^2 - \kappa \Vert[D,H \otimes {{\bf 1}}_2]\Vert\right) {{\bf 1}}\;,$$ for $\kappa$ sufficiently small. $\Box$ Now one can apply the concatenation property to deduce $$\label{eq-SFLoc} {{\rm Ind}}(T) \;=\; {{\rm Sf}}\left( \begin{pmatrix} {{\bf 1}}& 0 \\ 0 & F\end{pmatrix} \left(-H \otimes \Gamma\right) \begin{pmatrix} {{\bf 1}}& 0 \\ 0 & F^*\end{pmatrix},L_{\kappa}\right) \;.$$ Some care is needed at this point because one has to verify that the path given by the concatenation of two straight-line paths can be deformed (within the self-adjoint Fredholm operators) into a straight-line path, but the conditions stated after are satisfied because the endpoints of the first straight-line path differ merely by a compact operator and the second path even stays within the invertibles. A second point is that Lemma \[lem-SFSLH\] and thus only holds for small $\kappa$. This is, however, enough because by Theorem \[theo-SigConst\] it suffices to prove the claim of Theorem \[theo-SigInd\] for $\kappa$ as small as desired and $\rho$ correspondingly large (or larger) so that holds. The next point is that one can decouple both $L_\kappa$ and $H\otimes\Gamma$ into their finite volume restrictions and their restrictions to the orthogonal complement $({{\cal H}}_\rho\oplus{{\cal H}}_\rho)^\perp={{\rm Ran}}( \chi(|D| > \rho))$. To state this fact, let us denote the surjective partial isometry onto the latter space by $\pi_{\rho^c}$ and set $B_{\rho^c}=\pi_{\rho^c} B \pi_{\rho^c}^*$ for any operator $B$.
\[lem-SLDecop\] For $\kappa$ sufficiently small and $\rho$ sufficiently large, $${{\rm Sf}}(L_\kappa,L_{\kappa ,\rho} \oplus L_{\kappa ,\rho^c })\;=\;0 \;.$$ \[lem-HDecop\] Let $\kappa$ be sufficiently small and $\rho$ sufficiently large. Supposing that $H_{\rho}$ and $H_{\rho^c }$ are invertible, $${{\rm Sf}}(H\otimes \Gamma,(H_{\rho} \oplus H_{\rho^c })\otimes\Gamma)\;=\;0 \;.$$ Both technical proofs are given in Section \[sec-Proof\], but let us give an intuitive argument why these facts are true. First of all, the matrix elements of $L_\kappa$ coupling $L_{\kappa ,\rho}$ to $L_{\kappa ,\rho^c}$ stem from the operator $H$ and are thus uniformly bounded and local in the sense that they fall off from the boundary. For $\rho$ large, such local and bounded terms are dominated by the operator $D$ which is of the order $\rho$ in this region. Therefore homotopically sending this coupling to $0$ does not modify the low lying spectrum and thus does not lead to a spectral flow, as claimed in Lemma \[lem-SLDecop\]. The reason why Lemma \[lem-HDecop\] holds is much simpler: the presence of $\Gamma$ assures that tuning down the coupling elements of $H$ connecting ${{\cal H}}_\rho$ and ${{\cal H}}_{\rho^c}$ leads to as much spectral flow upwards as downwards. The invertibility condition is merely imposed to avoid ambiguities in the definition of the spectral flow. It will also be shown in Section \[sec-Proof\] that this can readily be achieved by an arbitrarily small and compact perturbation of $H$. Hence one can assume this to hold in the following. 
Applying these two lemmas and using again the unitary invariance and additivity of the spectral flow now implies (note that all straight-line paths involved only consist of adding compact operators so that the conditions for are indeed satisfied): $$\begin{aligned} {{\rm Ind}}(T) \;=\; & \;{{\rm Sf}}\left( \begin{pmatrix} {{\bf 1}}_{\rho} & 0 \\ 0 & F_{\rho}\end{pmatrix} \left(-H_{\rho}\otimes \Gamma\right) \begin{pmatrix} {{\bf 1}}_{\rho} & 0 \\ 0 & F_{\rho}^*\end{pmatrix},L_{\kappa ,\rho }\right) \nonumber\\ & \;\,+\; {{\rm Sf}}\left(\begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}\end{pmatrix} \left(-H_{\rho^c} \otimes \Gamma\right) \begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}^*\end{pmatrix}, L_{\kappa ,\rho^c }\right) \;. \label{eq-IndSFLast}\end{aligned}$$ \[lem-OuterSF\] The spectral flow of the second summand in vanishes. Hence ${{\rm Ind}}(T)$ is merely given by the first summand in . This is the spectral flow between two finite dimensional selfadjoint matrices, and as such is given by half the difference of the signatures of these matrices, notably $$\begin{aligned} {{\rm Ind}}(T) &\;=\;\frac{1}{2} \left({{\rm Sig}}\left(L_{\kappa ,\rho }\right)\;-\; {{\rm Sig}}\left(\begin{pmatrix} {{\bf 1}}_{\rho} & 0 \\ 0 & F_{\rho}\end{pmatrix} \left( -H_{\rho}\otimes \Gamma\right) \begin{pmatrix} {{\bf 1}}_{\rho} & 0 \\ 0 & F_{\rho}^*\end{pmatrix}\right) \right) \\ & \;=\; \frac{1}{2} \big({{\rm Sig}}\left(L_{\kappa ,\rho }\right) \,+\,{{\rm Sig}}\left( H_{\rho}\otimes \Gamma\right) \big) \\ & \;=\; \frac{1}{2} \;{{\rm Sig}}\left(L_{\kappa ,\rho }\right) \;.\end{aligned}$$ Here the second equality uses the unitary invariance of the signature together with ${{\rm Sig}}(-B)=-{{\rm Sig}}(B)$, and the last one follows because $\Gamma={{\rm diag}}({{\bf 1}},-{{\bf 1}})$ implies ${{\rm Sig}}(H_{\rho}\otimes \Gamma)={{\rm Sig}}(H_{\rho})-{{\rm Sig}}(H_{\rho})=0$. This concludes the proof of Theorem \[theo-SigInd\] for the case of even pairings. Details of the spectral flow proof {#sec-Proof} ================================== This section contains the proofs of the lemmata of Section \[sec-SFargument\] and further facts needed there. [**Proof**]{} of Lemma \[lem-SLDecop\].
Let us set $$L_\kappa(t) \;=\; L_{\kappa ,\rho} \oplus L_{\kappa ,\rho^c } \;-\; t \begin{pmatrix} 0 & \pi_\rho \left(H \otimes \Gamma\right) \pi_{\rho^c}^* \\ \pi_{\rho^c} \left(H \otimes \Gamma\right) \pi_\rho^* & 0\end{pmatrix} \;.$$ Then $L_\kappa(1)=L_\kappa$ and $L_\kappa(0)=L_{\kappa ,\rho} \oplus L_{\kappa ,\rho^c }$. Because $\pi_\rho$ is of finite rank, the second summand on the r.h.s. is compact. It will be shown that this path lies in the invertible (Fredholm) operators, which then implies the claim. First let us note that the operator $L_{\kappa ,\rho^c}$ is invertible for $\rho$ sufficiently large because, using , one has $$\begin{aligned} (L_{\kappa ,\rho^c })^2 & \;=\; \begin{pmatrix} H_{\rho^c}^2+\kappa^2 \left|D_{0,\rho^c}\right|^2 & \kappa \left[D_{0,\rho^c},H_{\rho^c}\right]^* \\ \kappa \left[D_{0,\rho^c},H_{\rho^c}\right] & H_{\rho^c}^2+\kappa^2 \left|D_{0,\rho^c}\right|^2\end{pmatrix} \\& \;\geq \;\left(\kappa^2 \rho^2 - \kappa \left\Vert\left[D,H \otimes {{\bf 1}}\right]\right\Vert\right) {{\bf 1}}_{\rho^c} \\&\;\geq \;\left(\kappa^2 \rho^2 - \frac{g^3}{12 \left\Vert H \right\Vert}\right) {{\bf 1}}_{\rho^c} \\&\;\geq\; \left(\kappa^2 \rho^2 - \frac{g^2}{12}\right) {{\bf 1}}_{\rho^c} \\&\;\geq\; \left(\kappa^2 \rho^2 - \frac{\kappa^2 \rho^2}{48}\right) {{\bf 1}}_{\rho^c} \\&\;\geq\; \frac{1}{2}\, \kappa^2 \rho^2 \,{{\bf 1}}_{\rho^c} \;.\end{aligned}$$ Introducing the invertible operator $\widetilde L=|L_{\kappa ,\rho } \oplus L_{\kappa ,\rho^c }|^\frac{1}{2}$, one now has $$L_\kappa(t) \;=\; \widetilde L \left( S - t \begin{pmatrix} 0 & \left|L_{\kappa ,\rho }\right|^{-\frac{1}{2}}\pi_\rho \left(H \otimes \Gamma\right) \pi_{\rho^c}^*\left|L_{\kappa ,\rho^c }\right|^{-\frac{1}{2}} \\ \left|L_{\kappa ,\rho^c }\right|^{-\frac{1}{2}} \pi_{\rho^c} \left(H \otimes \Gamma\right) \pi_\rho^* \left|L_{\kappa ,\rho }\right|^{-\frac{1}{2}} & 0\end{pmatrix} \right) \widetilde L$$ where the matrix is w.r.t.
the decomposition $\mathcal H \oplus \mathcal{H} = (\mathcal H \oplus \mathcal{H})_\rho \oplus (\mathcal H \oplus \mathcal{H})_{\rho^c}$ and $S$ is a selfadjoint unitary which is also diagonal in this grading. As now $$\Big\| |L_{\kappa ,\rho }|^{-\frac{1}{2}}\pi_\rho \left(H \otimes \Gamma\right) \pi_{\rho^c}^*|L_{\kappa ,\rho^c }|^{-\frac{1}{2}}\Big\| \;\leq\; \sqrt {\frac{2}{g}} \;\| H \|\; \sqrt {\frac{\sqrt 2}{\kappa \rho}} \;=\; \frac{C}{\sqrt {\kappa \rho}} \;,$$ the invertibility of $L_\kappa(t)$ follows for sufficiently large $\rho$. $\Box$ Concerning the proof of Lemma \[lem-HDecop\], namely that ${{\rm Sf}}(H\otimes \Gamma,(H_{\rho} \oplus H_{\rho^c })\otimes\Gamma)=0$, it was already stated above that this results from the spectral doubling $\sigma(H_t\otimes \Gamma)=\sigma(H_t)\cup(-\sigma(H_t))$ along the path associated to $H_t=(1-t) H+t\,H_{\rho} \oplus H_{\rho^c }$. To avoid ambiguities, it is best to ensure that the end point $H_1=H_{\rho} \oplus H_{\rho^c }$ is also invertible (see the hypothesis in Lemma \[lem-HDecop\]). This can be achieved by the following lemma. \[lem-HDecop2\] For any $H$ and $\rho$, there exists a selfadjoint $\widetilde{H}$ with [(i)]{} $H-\widetilde{H}$ is of finite rank, [(ii)]{} $H-\widetilde{H}$ is of arbitrarily small norm, [(iii)]{} $\widetilde{H}_{\rho}$ and $\widetilde{H}_{\rho^c }$ are invertible. [**Proof.**]{} Recall that any neighborhood of a self-adjoint Fredholm operator contains an invertible self-adjoint operator (and, more generally, any neighborhood of a Fredholm operator with vanishing index contains an invertible operator); indeed, it suffices to add a small finite rank perturbation on the finite dimensional kernel. This can be applied to both summands of $H_\rho \oplus H_{\rho^c}$ separately, and then combined to show the claim. $\Box$ Lemma \[lem-HDecop2\] can be used to replace $H$ by $\widetilde{H}$ at any stage of the argument described in Section \[sec-SFargument\] because the (arbitrarily) small paths from $H$ to $\widetilde{H}$ never lead to spectral flow.
In particular, this lifting of the kernels of ${H}_{\rho}$ and ${H}_{\rho^c }$ can be done at the very beginning. We will suppress the distinction of $H$ from $\widetilde{H}$ and can thus tacitly assume that ${H}_{\rho}$ and ${H}_{\rho^c }$ are both invertible from now on. Essentially, only the [**Proof**]{} of Lemma \[lem-OuterSF\] now remains. Let us use the abbreviation $${{\rm Sf}}_c \;=\; {{\rm Sf}}\left( L_{\kappa ,\rho^c },\begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}\end{pmatrix} \left(-H_{\rho^c} \otimes \Gamma\right) \begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}^*\end{pmatrix}\right) \;.$$ Hence the aim is to show ${{\rm Sf}}_c=0$. The first step is to show that $$\label{eq-Intermed} {{\rm Sf}}_c \;=\; {{\rm Sf}}\left( L_{\kappa ,\rho^c },\begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}\end{pmatrix} \begin{pmatrix} -H_{\rho^c} & \kappa \rho \\ \kappa \rho & H_{\rho^c}\end{pmatrix} \begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}^*\end{pmatrix}\right) \;.$$ This seems to follow immediately from the standard bound $$\begin{aligned} \label{eq-DiracGap} \begin{pmatrix} -H_{\rho^c} & \lambda\kappa\rho\,{{\bf 1}}_{\rho^c} \\ \lambda\kappa\rho\,{{\bf 1}}_{\rho^c} & H_{\rho^c}\end{pmatrix}^2 & \;=\;\begin{pmatrix} H_{\rho^c}^2+\lambda^2\kappa^2\rho^2\,{{\bf 1}}_{\rho^c} & 0 \\ 0 & H_{\rho^c}^2+\lambda^2\kappa^2\rho^2\,{{\bf 1}}_{\rho^c} \end{pmatrix} \nonumber \\ & \;\geq \;\big(\| H_{\rho^c}^{-1} \|^{-2} + \lambda^2\kappa^2\rho^2\big){{\bf 1}}_{\rho^c} \end{aligned}$$ for $\lambda\in[0,1]$ and the fact that $H_{\rho^c}$ is invertible, because then there is no extra spectral flow on the line segment connecting $-H_{\rho^c} \otimes \Gamma$ to the middle matrix in the second argument of (\[eq-Intermed\]), but there is a caveat here because one still needs to verify that the two-parameter family $$A(t,\lambda) \;=\; t\, L_{\kappa ,\rho^c } \;+\; (1-t) \begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}\end{pmatrix} \begin{pmatrix} -H_{\rho^c} & \lambda \kappa
\rho \\ \lambda \kappa \rho & H_{\rho^c}\end{pmatrix} \begin{pmatrix} {{\bf 1}}_{\rho^c} & 0 \\ 0 & F_{\rho^c}^*\end{pmatrix}$$ lies in the selfadjoint Fredholms so that the additivity rule applies. This Fredholm property is not altered if one adds finite rank operators such as the complementing piece of ${{\cal H}}_\rho\oplus{{\cal H}}_\rho$, and even compact operators such as $\pi_\rho H\pi_{\rho^c}^*$ and $[H,F]$. Hence the Fredholm property of $A(t,\lambda)$ is equivalent to the Fredholm property of $$\begin{aligned} B\left(t,\lambda\right) &\;=\; t\, L_{\kappa} \;+\; \left(1-t\right) \begin{pmatrix} -H & \lambda \kappa \rho F^* \\ \lambda \kappa \rho F & H\end{pmatrix} \\ & \;=\; \begin{pmatrix} -H & t \kappa D_0^* + \left(1-t\right) \lambda \kappa \rho F^* \\ t \kappa D_0 + \left(1-t\right) \lambda \kappa \rho F & H\end{pmatrix} \;.\end{aligned}$$ Now the off-diagonal entries combined are a function of $D$. This function involves an indicator function, but it can be replaced by a smooth function, up to a compact perturbation. This shall be explored next. Let $s_1: \mathbb{R} \to [-1,1]$ be a smooth function with $s_1|_{\left(-\infty,-1\right]} = -1$, $s_1|_{\left[1,\infty\right)} = 1$ and $\|\widehat {s'_1}\|_{L^1\left(\mathbb{R}\right)} =2$. Such functions are explicitly constructed in Lemma 4 of [@LS] where it is also shown that $s_\rho\left(x\right)=s_1(\frac{x}{\rho})$ then satisfies $$\left\|\left[s_\rho\left(D\right),H \otimes {{\bf 1}}\right]\right\| \;\leq \; 2\, \rho^{-1} \,\left\|\left[D,H \otimes {{\bf 1}}\right]\right\| \;.$$ Now introduce the smoothening $\widetilde{F}$ of $F$ by $$s_\rho\left(D\right)\;=\;\begin{pmatrix} 0 & \widetilde F^* \\ \widetilde F & 0\end{pmatrix} \;.$$ Then $\|[\widetilde F,H]\| \leq 2 \rho^{-1} \|[D,H \otimes {{\bf 1}}]\|$ and still ${\widetilde F=\widetilde F_\rho \oplus \widetilde F_{\rho^c}}$. Furthermore, $\widetilde F_{\rho^c} = F_{\rho^c}$ so that $\widetilde F - F$ is compact. 
Hence $B\left(t,\lambda\right) $ is Fredholm if and only if $$C\left(t,\lambda\right)\;=\; \begin{pmatrix} -H & t \kappa D_0^* + \left(1-t\right) \lambda \kappa \rho \widetilde F^* \\ t \kappa D_0 + \left(1-t\right) \lambda \kappa \rho \widetilde F & H\end{pmatrix}$$ is Fredholm. Now $$\begin{aligned} C\left(t,\lambda\right)^2 &\;=\; \begin{pmatrix} H^2+|t \kappa D_0 + (1-t) \lambda \kappa \rho \widetilde F|^2 & [H,t \kappa D_0 + (1-t) \lambda \kappa \rho \widetilde F]^* \\ [H,t \kappa D_0 + (1-t) \lambda \kappa \rho \widetilde F] & H^2+|t \kappa D_0 + (1-t) \lambda \kappa \rho \widetilde F|^2\end{pmatrix} \\ & \;\geq\; g^2 \,-\,t \kappa \|[D,H \otimes {{\bf 1}}]\| \,-\, (1-t) \lambda \kappa \rho\, \|[\widetilde F,H]\| \\ & \;\geq \; g^2 \,-\, \kappa \|[D,H \otimes {{\bf 1}}]\| \,-\, \kappa \rho 2 \rho^{-1} \|[D,H \otimes {{\bf 1}}]\| \\ & \;=\; g^2 \,-\, 3 \kappa \|[D,H \otimes {{\bf 1}}]\| \;.\end{aligned}$$ This last expression is strictly positive for $\kappa$ sufficiently small. Hence $C(t,\lambda)$ is then invertible and thus a Fredholm operator. By now, a formal proof of (\[eq-Intermed\]) is completed. Let us next multiply out: $${{\rm Sf}}_c \;=\; {{\rm Sf}}\left( L_{\kappa ,\rho^c },\begin{pmatrix} -H_{\rho^c} & \kappa \rho F_{\rho^c}^* \\ \kappa \rho F_{\rho^c} & F_{\rho^c} H_{\rho^c} F_{\rho^c}^*\end{pmatrix}\right) \;.$$ Next $F_{\rho^c} H_{\rho^c} F_{\rho^c}^*- H_{\rho^c}$ is compact with operator norm bounded by $2\|H\|$. Thus using (\[eq-DiracGap\]) for $\lambda=1$, $$\left\| \begin{pmatrix} -H_{\rho^c} & \kappa \rho F_{\rho^c}^* \\ \kappa \rho F_{\rho^c} & H_{\rho^c}\end{pmatrix}^{-1} \right\|^{-1} \;\geq\; \kappa \rho \,-\, 2\, \| H \| \;,$$ which is thus positive for $\rho$ sufficiently large.
As the path connecting $F_{\rho^c} H_{\rho^c} F_{\rho^c}^*$ to $ H_{\rho^c}$ is in the compacts, $${{\rm Sf}}_c \;=\; {{\rm Sf}}\left( L_{\kappa ,\rho^c },\begin{pmatrix} -H_{\rho^c} & \kappa \rho F_{\rho^c}^* \\ \kappa \rho F_{\rho^c} & H_{\rho^c}\end{pmatrix}\right) \;.$$ It only remains to show that this last expression vanishes. For this, it is shown that the straight-line path $$D(\lambda) \;=\; \begin{pmatrix} -H_{\rho^c} & \kappa \left(\left(1-\lambda\right) \rho F_{\rho^c}^* + \lambda {D_0^*}_{\rho^c}\right) \\ \kappa \left(\left(1-\lambda\right) \rho F_{\rho^c} + \lambda D_{0,\rho^c}\right) & H_{\rho^c}\end{pmatrix}$$ connecting the two arguments lies in the invertible operators. Indeed, using $$\left| (1-\lambda)\rho F_{\rho^c}\,+\,\lambda\,D_{0,\rho^c} \right| \;\geq\; \rho\,{{\bf 1}}_{\rho^c} \;,$$ one finds $$\begin{aligned} D(\lambda)^2 & \;=\; \begin{pmatrix} (H_{\rho^c})^2+\kappa^2 \left|\left(1-\lambda\right) \rho F_{\rho^c} + \lambda D_{0,\rho^c}\right|^2 & \kappa \left[H_{\rho^c},\left(1-\lambda\right) \rho F_{\rho^c} + \lambda D_{0,\rho^c}\right]^* \\ \kappa \left[H_{\rho^c},\left(1-\lambda\right) \rho F_{\rho^c} + \lambda D_{0,\rho^c}\right] & (H_{\rho^c})^2+\kappa^2 \left|\left(1-\lambda\right) \rho F^*_{\rho^c} + \lambda {D_0}^*_{\rho^c}\right|^2\end{pmatrix} \\ & \;\geq \; \big(\left(\kappa \rho\right)^2-\kappa \rho \left\Vert\left[F,H\right]\right\Vert - \kappa \left\Vert\left[D,H \otimes {{\bf 1}}\right]\right\Vert\big) {{\bf 1}}_{\rho^c} \\ &\;=\; \kappa \big(\rho \left(\kappa \rho - \left\Vert[F,H]\right\Vert\right) - \left\Vert\left[D,H \otimes {{\bf 1}}\right]\right\Vert \big){{\bf 1}}_{\rho^c} \;,\end{aligned}$$ which is strictly positive for $\rho$ sufficiently large. $\Box$ [**Acknowledgements:**]{} This work was partially supported by the DFG. Properties of the spectral flow {#app-SFReview} =============================== This appendix collects the relevant information from [@Phi1; @Phi2; @DS2] on the spectral flow.
Let $t\in[0,1]\mapsto T_t$ be a continuous path of bounded selfadjoint Fredholm operators. Then there is an $\epsilon>0$ such that uniformly in $t\in[0,1]$ the operator $T_t$ has only discrete spectrum (isolated eigenvalues of finite multiplicity) in $(-\epsilon,\epsilon)$. Intuitively, the spectral flow is then the number of eigenvalues moving past $0$ in the positive direction minus the number of those eigenvalues moving past $0$ in the negative direction. In [@Phi1], Phillips gives a careful definition of the spectral flow that is not spelled out here. However, let us spell out the main properties of the spectral flow: - (Homotopy invariance) Let $s\in[0,1]\mapsto T_t(s)$ be a homotopy of paths with fixed end points $T_0(s)$ and $T_1(s)$. Then $${{\rm Sf}}(t\in[0,1]\mapsto T_t(0)) \;=\; {{\rm Sf}}(t\in[0,1]\mapsto T_t(1)) \;.$$ - (Concatenation) For paths $t\in[0,1]\mapsto T_t$ and $t\in[1,2]\mapsto T_t$, $${{\rm Sf}}(t\in[0,1]\mapsto T_t) \;+\; {{\rm Sf}}(t\in[1,2]\mapsto T_t) \;=\; {{\rm Sf}}(t\in[0,2]\mapsto T_t) \;.$$ - (Unitary invariance) For any path $t\in [0,1]\mapsto U_t$ of unitaries, $${{\rm Sf}}(t\in[0,1]\mapsto U_t^*T_tU_t) \;=\; {{\rm Sf}}(t\in[0,1]\mapsto T_t) \;.$$ - (Additivity) For paths $t\in[0,1]\mapsto T_t$ and $t\in[0,1]\mapsto T'_t$, $${{\rm Sf}}(t\in[0,1]\mapsto T_t\oplus T'_t) \;=\; {{\rm Sf}}(t\in[0,1]\mapsto T_t) \;+\; {{\rm Sf}}(t\in[0,1]\mapsto T'_t) \;.$$ - For a path $t\in[0,1]\mapsto T_t$ with $0$ not in the spectrum $\sigma(T_t)$ of $T_t$ for all $t\in[0,1]$, $${{\rm Sf}}(t\in[0,1]\mapsto T_t) \;=\; 0 \;.$$ For the case that the path is given by the straight line between its endpoints, we will also use the shorter notation $${{\rm Sf}}(T_0,T_1) \;=\; {{\rm Sf}}\big( t\in [0,1] \mapsto T_t\;=\;(1-t) T_0 + t T_1 \big) \;.$$ Given ${{\rm Sf}}(T_0,T_1)$ and ${{\rm Sf}}(T_1,T_2)$, one has due to (i) and (iii) that $$\label{eq-SFrule} {{\rm Sf}}(T_0,T_1) \;+\; {{\rm Sf}}(T_1,T_2) \;=\; {{\rm Sf}}(T_0,T_2)\;,$$ provided that all 
operators of the form $(1-t) T_0 + t (T_1 + \lambda (T_2-T_1))$ are Fredholm. As $(1-t) T_0 + t T_1$ and $(1-\lambda) T_1 + \lambda T_2$ are Fredholm, this is, in particular, the case when either $T_1-T_0$ or $T_2-T_1$ is compact, because $$\begin{aligned} (1-t) T_0 \,+\, t \big(T_1\, +\, \lambda (T_2-T_1)\big) & \;=\; \big((1-t) T_0 \,+\, t T_1\big) \,+ \,t \lambda (T_2-T_1) \\ &\;=\;(1-t) (T_0-T_1) \,+\, \big((1-t \lambda)T_1 \,+\, t \lambda T_2\big) \;.\end{aligned}$$ The starting point of the spectral flow argument is the following fundamental relation between index pairings and spectral flow. It goes back to Phillips [@Phi1]. A proof based on homotopy invariance is given in [@DS2]. Let $P$ be a projection and $F$ a unitary operator such that $[F,P]$ is compact. Then $PFP+{{\bf 1}}-P$ is a Fredholm operator and its index satisfies $$\label{eq-SFInd} {{\rm Ind}}(PFP+{{\bf 1}}-P) \;=\; {{\rm Sf}}(F({{\bf 1}}-2P)F^*,{{\bf 1}}-2P) \;.$$ Finally let us add a comment on the spectral flow of paths $t\in[0,1]\mapsto D_t$ of unbounded selfadjoint Fredholm operators. One then obtains a path $t\in[0,1]\mapsto T_t=\tanh(D_t)$ of bounded selfadjoint Fredholm operators and can use its spectral flow to define the spectral flow of $t\in[0,1]\mapsto D_t$. Instead of $\tanh$ any increasing smooth function $f$ with $f(0)=0$ and $f'(0)>0$ can be used. All of the above properties naturally transpose to the unbounded case. [99]{} J. Bellissard, A. van Elst, H. Schulz-Baldes, [*The Non-Commutative Geometry of the Quantum Hall Effect*]{}, J. Math. Phys. [**35**]{}, 5373-5451 (1994). O. Bratteli, D. W. Robinson, [*Operator Algebras and Quantum Statistical Mechanics 1*]{}, (Springer, Berlin, 1979). A. Connes, [*Noncommutative Geometry*]{}, (Academic Press, San Diego, 1994). G. De Nittis, H. Schulz-Baldes [*Spectral flows of dilations of Fredholm operators*]{}, Canad. Math. Bulletin [**58**]{}, 51-68 (2015). G. De Nittis, M. Drabkin, H. 
Schulz-Baldes, [*Localization and Chern numbers for weakly disordered BdG operators*]{}, Markov Processes Relat. Fields [**21**]{}, 463-482 (2015). J. M. Gracia-Bondía, J. C. Várilly, H. Figueroa, [*Elements of noncommutative geometry*]{}, (Springer Science & Business Media, 2013). J. Grossmann, H. Schulz-Baldes, [*Index pairings in presence of symmetries with applications to topological insulators*]{}, Commun. Math. Phys. [**343**]{}, 477-513 (2016). T. A. Loring, [*K-theory and pseudospectra for topological insulators*]{}, Annals of Physics [**356**]{}, 383-416 (2015). T. Loring, [*Bulk Spectrum and $K$-theory for Infinite-Area Topological Quasicrystal*]{}, [arXiv:1811.07494]{}. T. Loring, H. Schulz-Baldes, [*Finite volume calculations of $K$-theory invariants*]{}, New York J. Math. [**22**]{}, 1111-1140 (2017). T. Loring, H. Schulz-Baldes, [*The spectral localizer for even index pairings*]{}, to appear in J. Non-Commutative Geometry, [arXiv:1802.04517]{}. T. Loring, H. Schulz-Baldes, [*Spectral flow argument localizing an odd index pairing*]{}, Canad. Math. Bull. [**62**]{}, 373-381 (2019). J. Phillips, [*Self-adjoint Fredholm operators and spectral flow*]{}, Canad. Math. Bull. [**39**]{}, 460-467 (1996). J. Phillips, [*Spectral Flow in Type I and Type II factors - a New Approach*]{}, Fields Institute Communications [**17**]{}, 137-153 (1997). E. Prodan, H. Schulz-Baldes, [*Bulk and boundary invariants for complex topological insulators: From $K$-theory to physics*]{}, (Springer Int. Pub., Cham, Switzerland, 2016). H. Schulz-Baldes, [*$\mathbb{Z}_2$-indices and factorization properties of odd symmetric Fredholm operators*]{}, Documenta Mathematica [**20**]{}, 1481-1500 (2015).
--- abstract: | We study in detail the interesting dynamical symmetry and its applications in general many-level and many-ensemble atomic systems with electromagnetically induced transparency (EIT). By discovering the symmetrical Lie group of various atomic systems, the novel applications to quantum memory and quantum entanglement between photons or atomic ensembles are investigated.\ PACS numbers: 03.67.-a, 03.65.Fd, 03.67.Mn, 42.50.Gy --- **Dynamical Symmetry and Quantum Information Processing with Electromagnetically Induced Transparency** Xiong-Jun Liu$^{a,b}$[^1], Hui Jing$^c$, Xin Liu$^{a,b}$ and Mo-Lin Ge$^{a,b}$ a\. Theoretical Physics Division, Nankai Institute of Mathematics,Nankai University, Tianjin 300071, P.R.China\ b. Liuhui Center for Applied Mathematics, Nankai University and Tianjin University, Tianjin 300071, P.R.China\ c. State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics,\ Wuhan Institute of Physics and Mathematics, CAS, Wuhan 430071, P. R. China Introduction ============ During the last decade or so, rapid advances have been witnessed in both experimental and theoretical aspects towards probing the novel mechanism of Electromagnetically Induced Transparency (EIT) [@2] and its many potential applications [@3; @4; @5]. In particular, based on the elegant “dark-state polaritons” (DSPs) theory proposed by Fleischhauer and Lukin [@6], the quantum memory techniques are now actively explored by exchanging the quantum state information between the quantized light field and the metastable collective atomic field [@7]. DSP is a new quantum field which is the superposition of the light field amplitude and the atom coherence between two lower levels of the $\Lambda$ type $three$-level atoms, and it describes the total system of the optical and collective atomic fields. 
In linear theory where the two-photon detuning of the light pulses is zero, the dynamical evolution of DSPs can lead to a perfect state mapping from the photonic branch into the atomic excitation one and vice versa by adiabatically adjusting the coupling laser [@6; @7]. The dynamical symmetry of multi-level atomic systems interacting with light fields was studied by D. A. Lidar et al. [@Lidar]. On the other hand, a semidirect product group in the $three$-level atomic system with EIT, in the limit of large atom number and low collective excitation [@6], was discovered by Sun et al. [@8], and the validity of the adiabatic passage condition for the dark states was also investigated there. After that, a series of studies of the hidden symmetry as well as its application to quantum information with $four$-level atomic systems and many atomic ensembles was carried out recently [@liu; @atom]. All these works indicate many interesting hidden symmetrical properties in various atomic systems with EIT. In this paper, by discovering the symmetrical Lie group, we examine in detail the general definition of dark-state polariton (DSP) operators, and then the dark states in different atomic systems. Also, it is interesting to find that the symmetrical properties of the multi-level system and the multi-atomic-ensemble system depend on characteristic parameters such as the coupling constants $g_i$ and Rabi frequencies $\Omega_i$. Furthermore, a controllable scheme to generate quantum entanglement between atoms or lights via the quantized DSPs theory is discussed, which might be experimentally implemented in the near future. The development herein is outlined as follows.
In Section II, we discuss the dynamical symmetry by discovering the Lie algebra structure of various atomic systems, including the multi-level and multi-atomic-ensemble cases; in Section III, we examine the general definition of DSP operators and then quantum memory for photons via the DSP theory of these systems; the generation of different forms of entanglement between atoms or lights via the quantized DSPs theory is discussed in Section IV; in the last section, we conclude and further discuss the dynamical symmetry and the applications in these EIT systems. Hidden symmetrical group in electromagnetically induced transparency ==================================================================== Complex $m$-level ($m>3$, multi-level) atomic system ---------------------------------------------------- The system we consider is shown in Fig. 1 (a): a collection of $N$ double $\Lambda$ type $m$-level ($m\geq3$, multi-level) atoms interacts with $m-2$ single-mode quantized fields, which couple the transitions from the ground state $|b\rangle$ to the excited states $|e_{\sigma}\rangle$ $(1\leq \sigma\leq m-2)$ with coupling constants $g_{\sigma}$, and with $m-2$ classical control fields, which couple the transitions from the metastable state $|c\rangle$ to the excited states $|e_{\sigma}\rangle$ with time-dependent Rabi-frequencies $\Omega_{\sigma}(t)$. Generalization to the multi-mode probe pulse case is straightforward.
Considering all transitions at resonance, the interaction Hamiltonian of the total system can be written as: $$\begin{aligned} \label{eqn:1} \hat H=\sum_{\sigma=1}^{m-2}g_{\sigma}\sqrt{N}\hat a_{\sigma}\hat E_{\sigma}^{\dag}+\sum_{\sigma=1}^{m-2}\Omega_{\sigma}\hat T_{e_{\sigma}c}+h.c.,\end{aligned}$$ where the subscript $\sigma$ denotes the corresponding excited state and the collective atomic excitation operators: $$\label{eqn:2} \hat E_{\sigma}=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\hat\sigma_{be_{\sigma}}^{j}, \ \hat C=\frac{1}{\sqrt{N}}\sum_{j=1}^N\hat\sigma_{bc}^{j},$$ with $\hat\sigma^j_{\mu\nu}=|\mu\rangle_{jj}\langle\nu| (\mu,\nu=b,c,e_1,e_2,...,e_{m-2})$ being the flip operators of the $j$-th atom between states $|\mu\rangle$ and $|\nu\rangle$, and $$\label{eqn:3} \hat T^{-}_{\mu\nu}=\hat T_{\mu\nu}=\sum_{j=1}^{N}\hat\sigma _{\mu\nu}^{j}, \ \ \hat T_{\mu\nu}^{+}=(\hat T^{-}_{\mu\nu})^{\dagger },$$ where $\mu\neq\nu=c,e_1,e_2,...,e_{m-2}$. Denoting by [@dick] $|b\rangle=|b_1,b_2,...,b_N\rangle$ the collective ground state with all $N$ atoms staying in the same single particle ground state $|b\rangle$, we can easily give other quasi-spin wave states by the operators defined in formula (\[eqn:2\]): $|e_{\sigma}^n\rangle=[n!]^{-1/2}(\hat E_{\sigma}^{\dag})^n|b\rangle$ and $|c^n\rangle=[n!]^{-1/2}(\hat C^{\dag})^n|b\rangle$. For the EIT case, we consider two approximation conditions [@6; @7]: i) the system includes a very large number of atoms, i.e. $N\gg1$; ii) the low atomic excitation condition, i.e. the control fields are much stronger than the quantized probe fields and only a few atoms occupy the metastable state $|c\rangle$ and excited states $|e_j\rangle$. It then follows that $[\hat E_i,\hat E_j^{\dag}]=\delta_{ij}$ and $[\hat C,\hat C^{\dag}]=1$ and all the other commutators are zero, which shows the mutual independence between these bosonic operators $\hat E_{\sigma}$ and $\hat C$.
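These approximate bosonic commutation relations can be checked numerically for a small atom number. The sketch below (an illustration with a hypothetical value $N=6$, not from the paper) builds $\hat E=N^{-1/2}\sum_j\hat\sigma^j_{be}$ for $N$ three-level atoms and verifies that $[\hat E,\hat E^{\dag}]$ acts exactly as the identity on the collective ground state, with $O(1/N)$ corrections on the one-excitation state:

```python
import numpy as np
from functools import reduce

# Check the bosonization [E, E†] ≈ 1 for the collective operator
# E = N^{-1/2} Σ_j |b><e|_j  on N three-level atoms (levels b, e, c).
N = 6
d = 3
sigma_be = np.zeros((d, d)); sigma_be[0, 1] = 1.0   # |b><e| for one atom
I = np.eye(d)

def embed(op, j):
    """Operator acting as `op` on atom j and trivially elsewhere."""
    return reduce(np.kron, [op if k == j else I for k in range(N)])

E = sum(embed(sigma_be, j) for j in range(N)) / np.sqrt(N)
comm = E @ E.conj().T - E.conj().T @ E              # equals (1/N) Σ_j (σ_bb - σ_ee)

ground = np.zeros(d**N); ground[0] = 1.0            # all atoms in |b>
one_exc = E.conj().T @ ground                        # one collective excitation

# Exactly bosonic on the ground state ...
assert np.allclose(comm @ ground, ground)
# ... and bosonic up to O(1/N) corrections on low-excitation states:
assert np.allclose(comm @ one_exc, (1 - 2 / N) * one_exc)
```

The $1-2/N$ factor makes the "large $N$, low excitation" approximation quantitative: the deviation from a true bosonic mode is of order $n_{\rm exc}/N$.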
On the other hand, the $m^2-3m+2$ collective operators $\hat T_{\mu\nu}$ satisfy the $u(m-1)$ commutation relation: $ [\hat T_{\alpha\beta},\hat T_{\mu\nu}]=\delta^{\beta\mu}\hat T_{\alpha\nu}-\delta^{\alpha\nu}\hat T_{\mu\beta} $. Thus the operators ($\hat T^{\pm}_{\mu\nu}, \hat T_{\mu\nu}^z$)($\mu,\nu=c,e_1,e_2,...,e_{m-2}$) compose the $m^2-2m$ generators of the algebra $su(m-1)$, where $$\label{eqn:4} \hat T_{\mu\nu}^{z}=\sum_{j=1}^{N}(\hat\sigma _{\mu\mu}^{j}-\hat\sigma _{\nu\nu}^{j})/2, \ (\mu\neq\nu=c,e_1,e_2,...,e_{m-2}),$$ with the relation $\hat T_{\mu\nu}^{z}=\hat T_{\mu\rho}^{z}+\hat T_{\rho\nu}^{z}$. Considering $[\hat T_{ce_{\sigma}}^{+},\hat E_{\sigma}]=-\hat C$, $[\hat T_{ce_{\sigma}}^{-},\hat C]=-\hat E_{\sigma}$, $[\hat T_{e_ie_j}^{+},\hat E_k]=\delta_{jk}\hat E_i-\delta_{ik}\hat E_j$, $[\hat T_{e_ie_j}^{-},\hat E_k]=\delta_{ik}\hat E_j-\delta_{jk}\hat E_i$ and denoting by $h_{m-1}$ the algebra generated by $(\hat E_{\sigma},\hat E^{\dag}_{\sigma},\hat C,\hat C^{\dag})$, we then obtain $[su(m-1),h_{m-1}]\subset h_{m-1}$, which means that the dynamical symmetry of the $m$-level atomic system is governed by a semidirect product Lie group [@group] $SU(m-1)\overline{\otimes}H_{m-1}$ in the large $N$ limit and low excitation condition. In general, the dynamical symmetry of an $m$-level atomic system is governed by $SU(m)$ [@Lidar], e.g. the Gell-Mann dynamical symmetry $SU(3)$ for a three-level quantum system [@gell]. However, here in the large atom number limit and low excitation condition, the dynamical symmetry of the multi-level EIT system is governed by a semidirect product Lie group. Particularly, when $m=3$, i.e., for the usual $three$-level system, the dynamical symmetry is governed by the simplest $SU(2)\overline{\otimes}H_2$ group [@8], while the $four$-level double $\Lambda$ ($m=4$) system [@four1; @four2] is governed by $SU(3)\overline{\otimes}H_{3}$ [@liu] (Fig. 1(b)) and the $five$-level $W$-type system is governed by $SU(4)\overline{\otimes}H_4$ (Fig.
1(c)), etc. Multi-atomic-ensemble system of $three$-level atoms --------------------------------------------------- In this subsection we consider a cloud of identical atoms with the $three$-level $\Lambda$ type structure which is shown in Fig. 2. Atoms of the $l$-th $(l=1,2,...,k)$ atomic ensemble interact with the input single-mode quantized field with coupling constants $g_l$, and with one classical control field with time-dependent Rabi-frequency $\Omega_l(t)$. Considering all transitions at resonance, the interaction Hamiltonian of the total system can be written as: $$\begin{aligned} \label{eqn:6} \hat H=\sum_{\sigma=1}^kg_{\sigma}\sqrt{N_{\sigma}}\hat a\hat A_{\sigma}^{\dag}+\sum_{\sigma=1}^k\Omega_{\sigma}(t)\hat T^+_{\sigma}+h.c.,\end{aligned}$$ where the subscript $\sigma$ denotes the corresponding atomic ensemble and the collective atomic excitation operators: $$\label{eqn:7} \hat A_{\sigma}=\frac{1}{\sqrt{N_{\sigma}}}\sum_{j=1}^{N_{\sigma}}e^{-i\bf k_{ba}\cdot\bf r^{(\sigma)}_j}\hat\sigma_{ba}^{j(\sigma)}, \ \ \hat C_{\sigma}=\frac{1}{\sqrt{N_{\sigma}}}\sum_{j=1}^{N_{\sigma}}e^{-i\bf k_{bc}\cdot\bf r^{(\sigma)}_j}\hat\sigma_{bc}^{j(\sigma)}, \ \ \sigma=1,2,...,k$$ with $\hat\sigma^i_{\mu\nu}=|\mu\rangle_{ii}\langle\nu| (\mu,\nu=a,b,c)$ being the flip operators of the $i$-th atom between states $|\mu\rangle$ and $|\nu\rangle$, $\bf k_{ba}$ and $\bf k_{ca}$ are, respectively, the wave vectors of the quantum and classical light fields, $\bf k_{bc}=\bf k_{ba}-\bf k_{ca}$ and $$\label{eqn:8} \hat T^{-}_{\sigma}=(\hat T_{\sigma}^{+})^{\dagger}=\sum_{j=1}^{N_{\sigma}}e^{-i\bf k_{ca}\cdot\bf r^{(\sigma)}_j}\hat\sigma _{ca}^{j(\sigma)}.$$ Denoting by $|b^{({\sigma})}\rangle=|b^{({\sigma})}_1,b^{({\sigma})}_2,...,b^{({\sigma})}_{N_{\sigma}}\rangle ({\sigma}=1,2,...,k)$ the collective ground state of the ${\sigma}$-th atomic ensemble with all atoms staying in the same single particle ground state $|b\rangle$, we can easily give other quasi-spin wave states by the
operators defined in formula (\[eqn:7\]): $|a^n_{(\sigma)}\rangle=[n!]^{-1/2}(\hat A_{\sigma}^{\dag})^n|b^{(\sigma)}\rangle$ and $|c^n_{(\sigma)}\rangle=[n!]^{-1/2}(\hat C_{\sigma}^{\dag})^n|b^{(\sigma)}\rangle$. Similarly, in the large $N_{\sigma}$ limit and low excitation condition, it follows that $[\hat A_{(i)},\hat A^{\dag}_{(j)}]=\delta_{ij}, [\hat C_{(i)},\hat C^{\dag}_{(j)}]=\delta_{ij}$ and all the other commutators are zero, which shows the mutual independence between these bosonic operators $\hat A_{i}$ and $\hat C_{i}$. On the other hand, one can easily find the commutation relations: $[\hat T^+_{i},\hat T^-_{j}]=\delta_{ij}\hat T^z_{j}$ and $[\hat T^z_{i},\hat T^{\pm}_{j}]=\pm\delta_{ij}\hat T^{\pm}_{j}$, where $$\label{eqn:9} \hat T_{\sigma}^{z}=\sum_{j=1}^{N_{\sigma}}(e^{-i\bf k_{aa}\cdot\bf r^{(\sigma)}_j}\hat\sigma _{aa}^{j(\sigma)}-e^{-i\bf k_{cc}\cdot\bf r^{(\sigma)}_j}\hat\sigma _{cc}^{j(\sigma)})/2 \ (\sigma=1,2,...,k)$$ are traceless operators. Thus the operators $(\hat T^{\pm}_{\sigma}, \hat T^z_{\sigma})$ generate the $\oplus_{\sigma}su(2)$ algebra. Considering $[\hat T_{i}^{+},\hat A_{j}]=-\delta_{ij}\hat C_{j}$, $[\hat T_{i}^{-},\hat C_{j}]=-\delta_{ij}\hat A_{j}$ and denoting by $h_{2k}$ the Heisenberg algebra generated by $(\hat A_i,\hat A_i^{\dag},\hat C_i,\hat C_i^{\dag}; i=1,2,...,k)$, we then obtain $[\oplus_{\sigma}su(2),h_{2k}]\subset h_{2k}$, which means that the dynamical symmetry of the $k$-atomic-ensemble system is governed by a semidirect product Lie group [@group] $(\otimes_{\sigma}SU(2))\overline{\otimes}H_{2k}$ in the large $N_{\sigma}$ limit and low excitation condition. In particular, for $k=2$, the symmetrical group reads $SO(4)\overline{\otimes}H_{4}$ [@atom].
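The commutators $[\hat T_{i}^{+},\hat A_{j}]=-\delta_{ij}\hat C_{j}$ and $[\hat T_{i}^{-},\hat C_{j}]=-\delta_{ij}\hat A_{j}$ hold atom by atom, so they can be verified at the single-atom level. A minimal sketch (illustrative only; the phase factors $e^{i\bf k\cdot\bf r}$ are set to one for simplicity, which does not affect the algebra):

```python
import numpy as np

# Single-atom check of the commutators behind [T^+, A] = -C and
# [T^-, C] = -A: with levels ordered (a, b, c), sigma_{mu,nu} = |mu><nu|.
a, b, c = 0, 1, 2
def sigma(mu, nu):
    m = np.zeros((3, 3)); m[mu, nu] = 1.0
    return m

T_plus = sigma(a, c)        # raising part of the control-field coupling
A = sigma(b, a)             # optical spin-wave flip  |b><a|
C = sigma(b, c)             # metastable coherence    |b><c|

# [sigma_ac, sigma_ba] = -sigma_bc  and  [sigma_ca, sigma_bc] = -sigma_ba,
# which lift to the collective relations after summing over atoms:
assert np.allclose(T_plus @ A - A @ T_plus, -C)
assert np.allclose(T_plus.T @ C - C @ T_plus.T, -A)
```

Summing these single-atom relations over the $\sigma$-th ensemble (with matching phase factors) reproduces the semidirect action of $\oplus_{\sigma}su(2)$ on the Heisenberg algebra $h_{2k}$.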
Quantum memory process in multi-level and multi-ensemble atomic system ====================================================================== The discovery of the dynamical symmetry in the above section leads us, by the spectrum generating algebra method [@group], to find $H-$invariant subspaces, in which one can diagonalize the Hamiltonian easily. As is known, the zero-eigenvalue subspace composed of dark states is the key object in quantum memory with the EIT technique [@6; @7; @Lidar; @8]. During the quantum memory process, when the quantum states are adiabatically transferred from the lights to the collective atom coherence, the total system of the atoms and quantized probe light should be restricted to the dark-state subspace [@6; @7; @Lidar; @8]; therefore the key point of studying this process is to obtain the dark states of the total system. On the other hand, the dark states can be generated by the dark-state polariton (DSP) operator, which commutes with the Hamiltonian operator [@6; @7; @8], so we first study the general definition of the DSP operator in the general multi-level atomic and multi-ensemble atomic systems, and can then easily study the quantum memory process by generating the dark states of the system. The DSP operator can be constructed based on two properties: 1) it commutes with the Hamiltonian and satisfies the bosonic commutation relation; 2) it is the superposition of the collective atomic excitation operator $\hat C$ and the annihilation operators of the quantized probe lights. Consequently, the dark-state subspace is the collection of zero-eigenstates containing no excited state $|e_j\rangle$ (in the multi-level system) or $|a\rangle$ (in the multi-atomic-ensemble system). For the derivation of the DSP operator, we first obtain its form for the three-level $\Lambda$ system [@8], the four-level double $\Lambda$ system [@liu], and also the five-level case.
Then, by induction we can obtain the general definition of the DSP operator in the $m$-level case, which is similar to that in the multi-atomic-ensemble system. Quantum memory with an $m$-level atomic system --------------------------------------------- Firstly, we study the general definition of DSPs of the single-atomic-ensemble system with many $m$-level atoms. Based on the above analysis of the properties of the DSP operator, the new type of dark-state-polariton operator of the $m$-level system can be defined as $$\label{eqn:d1} \hat d=\cos\theta\prod_{j=1}^{m-3}\cos\phi_j\hat a_1+\cos\theta\sum_{l=2}^{m-2}\sin\phi_{l-1}\prod_{j=l}^{m-3}\cos\phi_j\hat a_l-\sin\theta\hat C,$$ where the mixing angles $\theta$ and $\phi_j$ are defined through $$\label{eqn:10} \tan\theta=\frac{g_1g_2...g_{m-2}\sqrt{N}}{\bigr[\sum_{j=1}^{m-2}\bigr(\Omega_j^2\prod_{l=1,l\neq j}^{m-2}g_l^2\bigr)\bigr]^{1/2}}$$ and $$\label{eqn:11} \tan\phi_j=\frac{\prod_{l=1}^{j}g_l\Omega_{j+1}}{\bigr[\sum_{l=1}^{j}\bigr(\Omega^2_l\prod_{s=1,s\neq l}^{j+1}g^2_s\bigr)\bigr]^{1/2}}.$$ Eq. (\[eqn:11\]) yields $\tan\phi_1=g_1\Omega_2/g_2\Omega_1, \tan\phi_2=g_1g_2\Omega_3/\sqrt{\Omega_1^2g_2^2g_3^2+\Omega_2^2g_1^2g_3^2} ...$, etc. By a straightforward calculation one can verify that $$\label{eqn:com1} [\hat d,\hat d^{\dag}]=1, \ \ \ \ [\hat H,\hat d \ ]=0,$$ hence the general atomic dark states of the $m$-level system can be obtained through $|D_n\rangle=[n!]^{-1/2}(\hat d^{\dag})^n|0\rangle$, where $$\label{eqn:20} |0\rangle=\underbrace{|0, 0,..., 0}_{m-2}\rangle_{photon}\otimes|b\rangle_{atom}$$ is the collective ground state [@ground] with $|0,0,...,0\rangle_{photon}$ denoting the electromagnetic vacuum of the $m-2$ quantized probe fields. Based on the above result we here discuss a novel phenomenon.
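Before turning to that phenomenon, the relations (\[eqn:com1\]) can be checked numerically in the simplest case $m=3$. The sketch below (illustrative only, with hypothetical parameter values) works in the bosonic approximation, where the Hamiltonian reduces to $\hat H=g\sqrt{N}(\hat a\hat E^{\dag}+\hat a^{\dag}\hat E)+\Omega(\hat E^{\dag}\hat C+\hat C^{\dag}\hat E)$ and $\hat d=\cos\theta\,\hat a-\sin\theta\,\hat C$ with $\tan\theta=g\sqrt{N}/\Omega$; it verifies on truncated Fock spaces that $\hat H$ annihilates the one-polariton dark state $\hat d^{\dag}|0\rangle$:

```python
import numpy as np

# Numerical sketch (m = 3, bosonic approximation): the dark-state polariton
# d = cosθ a - sinθ C with tanθ = g√N/Ω creates a zero-energy eigenstate of
# H = g√N (a E† + a† E) + Ω (E C† + E† C) on truncated Fock spaces.
n_cut = 4                                          # Fock-space cutoff per mode
lower = np.diag(np.sqrt(np.arange(1, n_cut)), 1)   # truncated annihilation op
I = np.eye(n_cut)

def mode(op, which):          # order the three bosonic modes as (a, E, C)
    ops = [I, I, I]; ops[which] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

a, E, C = mode(lower, 0), mode(lower, 1), mode(lower, 2)

g, sqrtN, Omega = 0.7, 2.0, 1.3                    # hypothetical values
H = g * sqrtN * (a @ E.T + a.T @ E) + Omega * (E @ C.T + E.T @ C)

theta = np.arctan2(g * sqrtN, Omega)
d = np.cos(theta) * a - np.sin(theta) * C

vac = np.zeros(n_cut**3); vac[0] = 1.0
dark1 = d.T @ vac                                  # one-polariton dark state

assert np.isclose(np.linalg.norm(dark1), 1.0)      # [d, d†] = 1 on the vacuum
assert np.linalg.norm(H @ dark1) < 1e-12           # H |D_1> = 0
```

The same construction confirms $\hat H|D_n\rangle=0$ for higher $n$ as long as $n$ stays safely below the truncation cutoff.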
Initially, only one weak probe light (described by the coherent state $|\alpha_1\rangle$ with $\alpha_1=\alpha_0$) is injected into the atomic ensemble to couple the transition from $|b\rangle$ to $|e_1\rangle$, one strong control field is used to couple the transition from $|c\rangle$ to $|e_1\rangle$, and all other light fields ($m-3$ probe fields and $m-3$ control fields) are off. Then the mixing angles are $\theta=0$, $\phi_j=0$, and the initial total state of the quantized field and atomic ensemble reads $ |\Psi_0\rangle=\sum_{n}P_n(\alpha_0)|n,\underbrace{0,0,...,0}_{m-3} \rangle_{photon}\otimes|b\rangle_{atom} $, where $P_n(\alpha_0)=\frac{\alpha_0^n}{\sqrt{n!}}e^{-|\alpha_0|^2/2}$ is the probability amplitude of the coherent state. Subsequently, the mixing angle $\theta$ is adiabatically rotated to $\pi/2$ by turning the control field off, and the quantum state of the probe light $|\alpha_1\rangle$ is fully mapped into the collective atomic excitations, i.e. $|\Psi_t\rangle=\sum_{n}P_n(\alpha_0)|\underbrace{0,0,...,0}_{m-2} \rangle_{photon}\otimes|c^n\rangle_{atom}$. Finally, when all $m-2$ control fields are turned back on and the mixing angle $\theta$ is rotated back to $\theta=0$ again, with $\phi_j$ rotated to some values $\phi_{ej}$ which are determined only by the Rabi-frequencies of the re-applied control fields, we obtain $$\begin{aligned} \label{eqn:split1} |\Psi_e\rangle&=&\sum_{n}P_n(\alpha_0)|D_n(\theta=0)\rangle\nonumber\\ &=&\sum_{j}\sum_{l}...\sum_{f}P_j(\alpha_{e1})P_l(\alpha_{e2})...P_f(\alpha_{e(m-2)}) |b\rangle\otimes|j,l,...,f\rangle\nonumber\\ &=&|b\rangle_{atom}\otimes|\alpha_{e1},\alpha_{e2},...,\alpha_{e(m-2)}\rangle_{photon},\end{aligned}$$ where $\alpha_{e1}=\alpha_0\prod_{j=1}^{m-3}\cos\phi_{ej}$ and $\alpha_{el}=\alpha_0\sin\phi_{e(l-1)}\prod_{j=l}^{m-3}\cos\phi_{ej}, (l=2,3,...,m-2)$ are the parameters of the released coherent lights.
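A quick consistency check on the amplitudes in (\[eqn:split1\]): they conserve the mean photon number, $\sum_{l=1}^{m-2}|\alpha_{el}|^2=|\alpha_0|^2$, for arbitrary mixing angles, since the sum of squares telescopes via $\sin^2\phi+\cos^2\phi=1$. A short numerical sketch (with arbitrary sample values for $m$, $\alpha_0$ and the angles):

```python
import numpy as np

# The splitting amplitudes α_{e1} = α0 Π_j cosφ_j and
# α_{el} = α0 sinφ_{l-1} Π_{j≥l} cosφ_j conserve the mean photon number:
# Σ_l |α_{el}|² = |α0|² for any mixing angles φ_j (here random ones).
rng = np.random.default_rng(1)
m = 6                                    # m-level atom → m-2 output modes
phi = rng.uniform(0, np.pi / 2, m - 3)   # mixing angles φ_1 … φ_{m-3}
alpha0 = 0.8

alphas = [alpha0 * np.prod(np.cos(phi))]
for l in range(2, m - 1):                # l = 2, …, m-2
    alphas.append(alpha0 * np.sin(phi[l - 2]) * np.prod(np.cos(phi[l - 1:])))

assert len(alphas) == m - 2
assert np.isclose(sum(x**2 for x in alphas), alpha0**2)
```

In particular, for equal re-applied control fields all $\phi_{ej}$ take the symmetric values and each output reduces to $\alpha_0/\sqrt{m-2}$, as stated below.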
The above expression clearly shows that the injected quantized field can convert into $m-2$ different coherent pulses $|\alpha_{ej}\rangle (j=1,2,...,m-2)$ after a proper evolution manipulated by the control fields. Particularly, if the strengths of all re-applied control fields equal each other, the output probe lights read $\alpha_{e1}=\alpha_{e2}=...=\alpha_{e(m-2)}=\alpha_0/\sqrt{m-2}$. Obviously, this novel mechanism can be extended to other cases of the injected field, say, in the presence of a non-classical or squeezed light beam. Quantum memory with k-atomic-ensemble system -------------------------------------------- Now, to give a clear description of the interesting quantum memory process in this $k$-atomic-ensemble system composed of $\Lambda$ type $three$-level atoms, we define the new type of dark-state-polariton operator as $$\label{eqn:d2} \hat d=\cos\theta\hat a-\sin\theta\prod_{j=1}^{k-1}\cos\phi_j\hat C_1-\sin\theta\sum_{l=2}^k\sin\phi_{l-1}\prod_{j=l}^{k-1}\cos\phi_j\hat C_l,$$ where the mixing angles $\theta$ and $\phi_j$ are defined through $$\label{eqn:12} \tan\theta=\frac{\bigr[\sum_{j=1}^{k}\bigr(g_j^2N_j\prod_{l=1,l\neq j}^{k}\Omega^2_l\bigr)\bigr]^{1/2}}{\Omega_1\Omega_2...\Omega_k}$$ and $$\label{eqn:13} \tan\phi_j=\frac{g_{j+1}\sqrt{N_{j+1}}\prod_{l=1}^{j}\Omega_l}{\bigr[\sum_{l=1}^{j}\bigr(g^2_lN_l\prod_{s=1,s\neq l}^{j+1}\Omega^2_s\bigr)\bigr]^{1/2}},$$ where one finds $\tan\phi_1=g_2\sqrt{N_2}\Omega_1/g_1\sqrt{N_1}\Omega_2, \tan\phi_2=g_3\sqrt{N_3}\Omega_1\Omega_2/\sqrt{g_1^2N_1\Omega_2^2\Omega_3^2+g_2^2N_2\Omega_1^2\Omega_3^2}$, etc. Also, by a straightforward calculation one can verify that $[\hat d,\hat d^{\dag}]=1$ and $ [\hat H,\hat d \ ]=0 $, hence the general atomic dark states can be obtained through $|D_n\rangle=[n!]^{-1/2}(\hat d^{\dag})^n|0\rangle$, where $|0\rangle=|b^{(1)}, b^{(2)},..., b^{(k)}\rangle_{atom}\otimes|0\rangle_{photon}$ and $|0\rangle_{photon}$ denotes the electromagnetic vacuum of the quantized probe field.
Similar to the discussion in the above subsection, we can investigate the quantum memory process in the multi-ensemble atomic system. Initially the total state reads (with $\theta=0$, i.e. the external control fields very strong): $|\Psi_0\rangle=\sum_{n}P_n(\alpha_0)|b^{(1)}, b^{(2)},...,b^{(k)}\rangle_{atom}\otimes|n\rangle_{photon}$. Then the mixing angle $\theta$ is adiabatically rotated from $0$ to $\pi/2$ by switching the Rabi frequencies $\Omega_1$, $\Omega_2$, ..., $\Omega_{k}$ off adiabatically while keeping the ratio between any two of them fixed (i.e. keeping the mixing angles $\phi_j$ constant), and we finally obtain the state from the dark states of the present system: $$\begin{aligned} \label{eqn:atom-spliter1} |\Psi(t)\rangle&=&\sum_{n}P_n(\alpha_0)|D_n(\theta=\frac{\pi}{2})\rangle\nonumber\\ &=&\sum_{j}\sum_{l}...\sum_{f}P_j(\alpha_{1})P_l(\alpha_{2})...P_f(\alpha_{k}) |c^{(1)}_j,c^{(2)}_l,...,c^{(k)}_f\rangle\otimes|0\rangle\nonumber\\ &=&|\alpha_1, \alpha_2,...,\alpha_k\rangle_{coherence}\otimes|0\rangle_{photon},\end{aligned}$$ where $\alpha_1=\alpha_0\prod_{j=1}^{k-1}\cos\phi_j, \alpha_l=\alpha_0\sin\phi_{l-1}\prod_{j=l}^{k-1}\cos\phi_j, (l=2,3,...,k)$. The above expression clearly shows that the injected quantized field can be stored in the $k$ atomic ensembles. In particular, if the strengths of all control fields remain equal to one another while they are turned off, the final atomic coherence reads $\alpha_{1}=\alpha_{2}=...=\alpha_{k}=\alpha_0/\sqrt{k}$, which means the quantum information of the initial probe light is stored homogeneously in the $k$ atomic ensembles.

Generation of quantum entanglement
==================================

In the above section we discussed the interesting phenomenon that one input coherent probe light can be converted into many different output coherent probe lights via the dark-state evolution process.
In this section we shall discuss another novel application of the present DSP theory in multi-level and multi-ensemble atomic systems: the generation of entangled states of light fields or atomic ensembles. For this we should use a non-classical input probe light, for example a superposition of coherent states [@schrodinger], a single-photon state, etc.

Two-photon entanglement
-----------------------

Entangled coherent states can be obtained from the quantized DSP theory of the $four$-level system when the injected quantized field is in a Schrödinger cat state [@schrodinger]. For the initial total state $|\Psi_0\rangle^{\pm}=\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}|0\rangle\otimes(|\alpha_0\rangle\pm|-\alpha_0\rangle) \otimes|b\rangle$, where the normalization factor is ${\cal N}_{\pm}(\alpha_0)=2\pm2e^{-2|\alpha_0|^2}$, the same process discussed in section III.A (see eq. (\[eqn:split1\]), with $m=4$) shows that the injected quantized pulse can evolve into a very interesting entangled coherent state (ECS) of two output fields ($|\Psi_0\rangle^{\pm}\rightarrow|\Psi_e\rangle^{\pm}$) $$\begin{aligned} \label{eqn:form} &\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}|0\rangle\otimes\bigr(|\alpha_0\rangle\pm|-\alpha_0\rangle\bigr) \otimes|b\rangle=\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(|0\rangle\otimes|\alpha_0\rangle\pm|0\rangle\otimes|-\alpha_0\rangle\bigr) \otimes|b\rangle\longrightarrow\nonumber\\ &\longrightarrow\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(\sum_{j}\sum_{k}P_j(\alpha_{e1})P_k(\alpha_{e2}) |b,j,k\rangle\pm\sum_{j}\sum_{k}P_j(-\alpha_{e1})P_k(-\alpha_{e2}) |b,j,k\rangle\bigr).\end{aligned}$$ The final state in the above formula can be rewritten as: $$\begin{aligned} \label{eqn:entangled1} |\Psi_e\rangle^{\pm}=\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(|\alpha_{e1},\alpha_{e2} \rangle\pm|-\alpha_{e1},-\alpha_{e2}\rangle\bigr)_{photon} \otimes|b\rangle.\end{aligned}$$ If $\phi_e=0$, then $\alpha_{e1}=\alpha_0$ and $\alpha_{e2}=0$, and the
evolution of the quantized fields proceeds as $|0\rangle\otimes(|\alpha_0\rangle\pm|-\alpha_0\rangle)/\sqrt{{\cal N}_{\pm}(\alpha_0)}\rightarrow(|\alpha_0\rangle\pm|-\alpha_0\rangle)\otimes|0\rangle/\sqrt{{\cal N}_{\pm}(\alpha_0)}$, which means the input Schrödinger cat state is fully converted into another one in a different field mode. On the other hand, for the general case of non-zero coherent parameters $\alpha_{e1}$ and $\alpha_{e2}$, the states of the output quantized fields are entangled coherent states. Since the parameters $\alpha_{ei} (i=1,2)$ are controllable, the entanglement of the output states [@entanglement] $E^{\pm}(\alpha_{e1}, \alpha_{e2})=-$ tr$(\rho^{\pm}_{\alpha_{e1}}\ln\rho^{\pm}_{\alpha_{e1}})$ with the reduced density matrix $\rho^{\pm}_{\alpha_{e1}}=$ tr$^{(\alpha_{e2}, atom)}(|\Psi_e\rangle\langle\Psi_e|)^{\pm}$ can also easily be controlled by the re-applied control fields. In particular, for the initial state $|\Psi_0\rangle^{-}$, if $\phi_e=\pi/4$, we have $\alpha_{e1}=\alpha_{e2}=\alpha_0/\sqrt{2}$ and then obtain the maximally entangled state (MES): $|0\rangle\otimes\bigr(|\alpha_0\rangle-|-\alpha_0\rangle\bigr)/\sqrt{{\cal N}_-(\alpha_0)}\rightarrow\bigr(|\frac{\alpha_0}{\sqrt{2}},\frac{\alpha_0} {\sqrt{2}}\rangle-|-\frac{\alpha_0}{\sqrt{2}},-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)/\sqrt{{\cal N}_-(\alpha_0)}$, which is most useful for quantum information processing. With the definitions $|+\rangle=\bigr(|\frac{\alpha_0}{\sqrt{2}}\rangle+|-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)/\sqrt{{\cal N}_+(\alpha_0/\sqrt{2})}$ and $|-\rangle=\bigr(|\frac{\alpha_0}{\sqrt{2}}\rangle-|-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)/\sqrt{{\cal N}_-(\alpha_0/\sqrt{2})}$, the output state can be rewritten as $(|+\rangle|-\rangle+|-\rangle|+\rangle)/\sqrt{2}$, which is a maximally entangled state of the output light pulses. Generalization of these results to multi-mode probe pulses is straightforward.
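This maximal entanglement can be checked directly. Below is a minimal numerical sketch (truncated Fock space with NumPy; the helper `coherent` is ours) showing that the reduced von Neumann entropy of $|\Psi_e\rangle^{-}$ with $\alpha_{e1}=\alpha_{e2}=\alpha_0/\sqrt{2}$ equals $\ln 2$, i.e. exactly one ebit, independent of the value of $\alpha_0$:

```python
import numpy as np

def coherent(alpha, dim):
    """Fock-basis amplitudes of |alpha>, truncated at dim levels."""
    c = np.zeros(dim)
    c[0] = np.exp(-abs(alpha) ** 2 / 2)
    for n in range(1, dim):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

alpha0, dim = 1.3, 40
gamma = alpha0 / np.sqrt(2)                  # alpha_e1 = alpha_e2
psi = np.kron(coherent(gamma, dim), coherent(gamma, dim)) \
    - np.kron(coherent(-gamma, dim), coherent(-gamma, dim))
psi /= np.linalg.norm(psi)                   # norm^2 = N_-(alpha0) before this line

M = psi.reshape(dim, dim)                    # amplitudes <i,j|Psi>
evals = np.linalg.eigvalsh(M @ M.T)          # reduced density matrix of mode 1
S = -sum(p * np.log(p) for p in evals if p > 1e-12)
assert abs(S - np.log(2)) < 1e-8             # maximally entangled: one ebit
```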
Since our scheme for generating entangled coherent states via the quantized DSP theory is linear and controllable, and only requires a macroscopic quantum superposition as the initial state, it deserves experimental study, an area in which much progress has been made in recent years [@ent]. Remarkably, the latest works have reported the experimental realization of EIT quantum memory in the three-level system [@ent2]. For our scheme, the key experimental step is to store the quantum state of one non-classical probe light in a multi-level system, e.g. a four-level system, and then use two control fields to convert the quantum state of the initial probe light into an entangled state of two output pulses. Since quantum memory for a few probe photons has been experimentally realized in the three-level system, our scheme for generating entanglement of photons via a multi-level system may be realized in the near future. Also, our scheme is different from the schemes generating entangled coherent states via the Kerr effect [@entangled] and via entanglement swapping using Bell-state measurement [@swap], which are very important and have been widely studied. Consider now a different type of input quantum state, a single-photon state, i.e. the initial total state $$\begin{aligned} \label{eqn:initial2} |\Psi_0\rangle=(|0\rangle\otimes|1\rangle)_{photon}\otimes|b\rangle.\end{aligned}$$ Similarly, after the light-state storage and release process discussed above, one easily obtains the final entangled state of the two probe photons: $$\begin{aligned} \label{eqn:photonentangled3} \Phi_{photon}=\frac{1}{\sqrt{2}}\bigr(|1\rangle|0\rangle+|0\rangle|1\rangle\bigr)_{photon}.\end{aligned}$$ Also, if the input corresponds to a multi-photon state, many other entangled forms of the two output probe lights can be obtained.
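In the bosonized picture the release step acts on the two output modes like a 50/50 beam splitter. Here is a small NumPy sketch reproducing eq. (\[eqn:photonentangled3\]) for a single input photon; the mode-mixing generator below is our modelling assumption, not taken from the text:

```python
import numpy as np

dim = 4
a = np.diag(np.sqrt(np.arange(1, dim)), 1)       # single-mode annihilation operator
I = np.eye(dim)
a1, a2 = np.kron(a, I), np.kron(I, a)

K = a1 @ a2.T - a1.T @ a2                        # mode-mixing generator (anti-Hermitian)
theta = np.pi / 4                                # 50/50 splitting

ket1, ket0 = np.zeros(dim), np.zeros(dim)
ket1[1], ket0[0] = 1.0, 1.0
psi0 = np.kron(ket1, ket0)                       # |1, 0>

# psi = exp(theta * K) |1,0> via its rapidly converging Taylor series
psi, term = psi0.copy(), psi0.copy()
for n in range(1, 60):
    term = (theta / n) * (K @ term)
    psi += term

amp10, amp01 = psi[1 * dim + 0], psi[0 * dim + 1]
# (|1,0> + |0,1>) / sqrt(2): a maximally entangled single-photon state
assert abs(amp10 - 1 / np.sqrt(2)) < 1e-10
assert abs(amp01 - 1 / np.sqrt(2)) < 1e-10
```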
Three-photon entanglement via $five$-level EIT
----------------------------------------------

Here we consider the similar case in which the injected quantized field is in a Schrödinger cat state. From eq. (\[eqn:split1\]) (with $m=5$), the initial total state reads $|\Psi_0\rangle^{\pm}=\frac{1}{\sqrt{{\cal N}_{0\pm}}}|0,0\rangle\otimes(|\alpha_0\rangle\pm|\beta_0\rangle) \otimes|b\rangle$, where the normalization factor is ${\cal N}_{0\pm}=2\pm2e^{-|\alpha_0-\beta_0|^2/2}$ (for $\beta_0=-\alpha_0$ this reduces to $2\pm2e^{-2|\alpha_0|^2}$). With the process used for two-photon entanglement generation we find the injected quantized pulse can evolve into the very interesting entangled coherent states (ECS) of three output fields ($|\Psi_0\rangle^{\pm}\rightarrow|\Psi_e\rangle^{\pm}$) $$\begin{aligned} \label{eqn:entangled2} &\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}|0,0\rangle\otimes\bigr(|\alpha_0\rangle\pm|\beta_0\rangle\bigr) \otimes|b\rangle\rightarrow\nonumber\\ &\longrightarrow\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(|\alpha_{e1},\alpha_{e2},\alpha_{e3} \rangle\pm|\beta_{e1},\beta_{e2},\beta_{e3}\rangle\bigr)_{photon} \otimes|b\rangle,\end{aligned}$$ where $\alpha_{e1}=\cos\phi\cos\varphi\alpha_0, \alpha_{e2}=\sin\phi\cos\varphi\alpha_0, \alpha_{e3}=\sin\varphi\alpha_0$ and $\beta_{e1}=\cos\phi\cos\varphi\beta_0, \beta_{e2}=\sin\phi\cos\varphi\beta_0$ and $\beta_{e3}=\sin\varphi\beta_0$. If $\phi=\pi/4$ and $\varphi=\tan^{-1}\frac{\sqrt{2}}{2}$, we get $\alpha_{ej}=\alpha=\alpha_0/\sqrt{3}, \beta_{ej}=\beta=\beta_0/\sqrt{3} (j=1,2,3)$, and the final state of the output light fields: $(|\alpha,\alpha,\alpha \rangle\pm|\beta,\beta,\beta\rangle)_{photon}/\sqrt{{\cal N}_{0\pm}}$.
With $\beta_0=-\alpha_0$ and the definitions $|+\rangle=\bigr(|\frac{\alpha_0}{\sqrt{3}}\rangle+|-\frac{\alpha_0}{\sqrt{3}}\rangle\bigr)/\sqrt{{\cal N}_+(\alpha_0/\sqrt{3})}$ and $|-\rangle=\bigr(|\frac{\alpha_0}{\sqrt{3}}\rangle-|-\frac{\alpha_0}{\sqrt{3}}\rangle\bigr)/\sqrt{{\cal N}_-(\alpha_0/\sqrt{3})}$, the output state can be rewritten as $$\begin{aligned} \label{eqn:threephoton1} &\Phi_{photon}(+)=\frac{1}{\sqrt{{\cal N}_{0+}}}\bigr(|\frac{\alpha_0}{\sqrt{3}},\frac{\alpha_0}{\sqrt{3}},\frac{\alpha_0}{\sqrt{3}} \rangle+|\frac{\beta_0}{\sqrt{3}},\frac{\beta_0}{\sqrt{3}},\frac{\beta_0}{\sqrt{3}}\rangle\bigr)_{photon} =h_1|+\rangle|+\rangle|+\rangle+h_2|W_+\rangle\end{aligned}$$ and $$\begin{aligned} \label{eqn:threephoton2} &\Phi_{photon}(-)=\frac{1}{\sqrt{{\cal N}_{0-}}}\bigr(|\frac{\alpha_0}{\sqrt{3}},\frac{\alpha_0}{\sqrt{3}},\frac{\alpha_0}{\sqrt{3}} \rangle-|\frac{\beta_0}{\sqrt{3}},\frac{\beta_0}{\sqrt{3}},\frac{\beta_0}{\sqrt{3}}\rangle\bigr)_{photon}= h'_1|-\rangle|-\rangle|-\rangle+h'_2|W_-\rangle,\end{aligned}$$ where $|W_+\rangle=|+\rangle|-\rangle|-\rangle+|-\rangle|+\rangle|-\rangle+|-\rangle|-\rangle|+\rangle$ and $|W_-\rangle=|-\rangle|+\rangle|+\rangle+|+\rangle|-\rangle|+\rangle+|+\rangle|+\rangle|-\rangle$ are $W$ states [@w-state] of three light fields, and the coefficients read $h_1=\sqrt{N^3_+/16N_{0+}}$, $h_2=\sqrt{N_+N^2_-/16N_{0+}}$, $h'_1=\sqrt{N^3_-/16N_{0-}}$ and $h'_2=\sqrt{N^2_+N_-/16N_{0-}}$, with $N_{\pm}\equiv{\cal N}_{\pm}(\alpha_0/\sqrt{3})$ and $N_{0\pm}\equiv{\cal N}_{0\pm}$. Eqs. (\[eqn:threephoton1\]) and (\[eqn:threephoton2\]) indicate a fascinating phenomenon: the $two$-light state is $still$ entangled after tracing out the third one. Similar to the result of eqs. (\[eqn:initial2\]) and (\[eqn:photonentangled3\]), when the input probe light is in a single-photon state, one can obtain maximally entangled states of three-mode photons.
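The decomposition into $|+\rangle|+\rangle|+\rangle$ and $|W_+\rangle$ can be verified numerically by projecting onto the orthonormal basis $\{|+\rangle,|-\rangle\}$; the sketch below (our own helpers, for the case $\beta_0=-\alpha_0$) reads the coefficients off as overlaps and checks that, since $|W_+\rangle$ as written is unnormalized, normalization requires $h_1^2+3h_2^2=1$:

```python
import numpy as np

def coherent(alpha, dim):
    c = np.zeros(dim)
    c[0] = np.exp(-abs(alpha) ** 2 / 2)
    for n in range(1, dim):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

alpha0, dim = 1.5, 40
g = alpha0 / np.sqrt(3)                      # three-way even split
cp, cm = coherent(g, dim), coherent(-g, dim)
p = np.exp(-2 * g ** 2)                      # overlap <g|-g>
plus  = (cp + cm) / np.sqrt(2 + 2 * p)       # |+>
minus = (cp - cm) / np.sqrt(2 - 2 * p)       # |->

def triple(v):
    return np.kron(np.kron(v, v), v)

N0p = 2 + 2 * np.exp(-2 * alpha0 ** 2)
phi_plus = (triple(cp) + triple(cm)) / np.sqrt(N0p)    # Phi_photon(+)

h1 = triple(plus) @ phi_plus                           # <+ + +|Phi(+)>
h2 = np.kron(np.kron(plus, minus), minus) @ phi_plus   # <+ - -|Phi(+)>
W = (np.kron(np.kron(plus, minus), minus)
     + np.kron(np.kron(minus, plus), minus)
     + np.kron(np.kron(minus, minus), plus))

# Phi(+) is exactly h1 |+++> + h2 |W+>, and h1^2 + 3 h2^2 = 1
assert np.linalg.norm(h1 * triple(plus) + h2 * W - phi_plus) < 1e-10
assert abs(h1 ** 2 + 3 * h2 ** 2 - 1) < 1e-10
```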
It is noteworthy that five-qubit-code entanglement can be obtained via a $seven$-level system interacting with five probe and five control fields; the five-qubit code is the shortest code that can serve as an error-correcting code (ECC) [@correct]. Furthermore, in theory the entanglement of $m$ light fields can be obtained using the quantized DSP theory in a multi-level atomic system.

Entanglement between two and three atomic ensembles
---------------------------------------------------

Generation of entanglement between atomic ensembles has attracted much attention in recent years [@ensemble]. Here we can also generate entanglement between atomic ensembles by using the multi-atomic-ensemble EIT technique, in a way similar to the generation of entanglement between coherent light fields. First, suppose the injected quantized field is in a Schrödinger cat state [@schrodinger], e.g. the initial total state reads [@atom] $|\Psi_0\rangle^{\pm}=\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(|\alpha_0\rangle\pm|-\alpha_0\rangle\bigr)_{photon} \otimes|b^{(1)},b^{(2)}\rangle_{atom}$ where the normalization factor ${\cal N}_{\pm}(\alpha_0)=2\pm2e^{-2|\alpha_0|^2}$. With the scheme discussed above (see eq.
(\[eqn:atom-spliter1\]), with $k=2$) we can finally obtain a very interesting entangled atomic coherence of two atomic ensembles ($|\Psi_0\rangle^{\pm}\rightarrow|\Psi_e\rangle^{\pm}$) $$\begin{aligned} \label{eqn:entangled3} &\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}\bigr(|\alpha_0\rangle\pm|-\alpha_0\rangle\bigr)_{photon} \otimes|b^{(1)},b^{(2)}\rangle_{atom}\rightarrow\nonumber\\ &\longrightarrow\frac{1}{\sqrt{{\cal N}_{\pm}(\alpha_0)}}|0\rangle_{photon}\otimes\bigr(|\alpha_{1},\alpha_{2} \rangle\pm|-\alpha_{1},-\alpha_{2}\rangle\bigr)_{coherence}.\end{aligned}$$ In particular, for the initial state $|\Psi_0\rangle^{-}$, if $\phi=\pi/4$, we have $\alpha_{1}=\alpha_{2}=\alpha_0/\sqrt{2}$ and then obtain the maximally entangled state (MES): $(|+\rangle|-\rangle+|-\rangle|+\rangle)_{coherence}/\sqrt{2}$, where $|+\rangle=\bigr(|\frac{\alpha_0}{\sqrt{2}}\rangle+|-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)_{coherence}/\sqrt{{\cal N}_+(\alpha_0/\sqrt{2})}$ and $|-\rangle=\bigr(|\frac{\alpha_0}{\sqrt{2}}\rangle-|-\frac{\alpha_0}{\sqrt{2}}\rangle\bigr)_{coherence}/\sqrt{{\cal N}_-(\alpha_0/\sqrt{2})}$ form an orthogonal basis. Second, $three$-atomic-ensemble entanglement is easily obtained for the case $k=3$.
Considering a Schrödinger cat state of the injected probe field, for example $|\Psi_0\rangle^{\pm}=\frac{1}{\sqrt{{\cal N}_{0\pm}}}\bigr(|\alpha_0\rangle\pm|\beta_0\rangle\bigr)_{photon} \otimes|b^{(1)},b^{(2)},b^{(3)}\rangle_{atom}$ with the normalization factor ${\cal N}_{0\pm}=2\pm2e^{-|\alpha_0-\beta_0|^2/2}$, the entangled quasi spin-waves among the $3$ atomic ensembles can be obtained by properly steering the external control fields $$\begin{aligned} \label{eqn:threeatom1} &\frac{1}{\sqrt{{\cal N}_{0\pm}}}\bigr(|\alpha_0\rangle\pm|\beta_0\rangle\bigr)_{photon} \otimes|b^{(1)},b^{(2)},b^{(3)}\rangle_{atom}\rightarrow\nonumber\\ &\longrightarrow\frac{1}{\sqrt{{\cal N}_{0\pm}}}|0\rangle_{photon}\otimes\bigr(|\alpha_{1},\alpha_{2},\alpha_{3} \rangle\pm|\beta_{1},\beta_{2},\beta_{3}\rangle\bigr)_{coherence},\end{aligned}$$ where $\alpha_1=\cos\phi\cos\varphi\alpha_0, \alpha_2=\sin\phi\cos\varphi\alpha_0, \alpha_3=\sin\varphi\alpha_0$ and $\beta_1=\cos\phi\cos\varphi\beta_0, \beta_2=\sin\phi\cos\varphi\beta_0$ and $\beta_3=\sin\varphi\beta_0$. Similar to eqs. (\[eqn:threephoton1\]) and (\[eqn:threephoton2\]), the final entangled states can then be rewritten as $$\begin{aligned} \label{eqn:threeatom2} \Phi_{123}(+)=\frac{1}{\sqrt{{\cal N}_{0+}}}(|\alpha,\alpha,\alpha \rangle+|\beta,\beta,\beta\rangle)_{coherence}= h_1|+\rangle|+\rangle|+\rangle+h_2|W_+\rangle\end{aligned}$$ and $$\begin{aligned} \label{eqn:threeatom3} \Phi_{123}(-)=\frac{1}{\sqrt{{\cal N}_{0-}}}(|\alpha,\alpha,\alpha \rangle-|\beta,\beta,\beta\rangle)_{coherence}= h'_1|-\rangle|-\rangle|-\rangle+h'_2|W_-\rangle,\end{aligned}$$ where the coefficients $h_{1,2}$ and $h'_{1,2}$ have the same form as those in eqs. (\[eqn:threephoton1\]) and (\[eqn:threephoton2\]), and $|W_{\pm}\rangle$ are the corresponding $W$ states. Furthermore, in theory one can generate entangled atomic states between multiple atomic ensembles by extending the present results to the $k$-atomic-ensemble system.
The above results show many similar features between the multi-level systems and the multi-ensemble systems. In fact, in the present case of large atom number and weak excitation, the collective atomic operators satisfy the same commutation relations as the photonic boson operators ($\hat a_j, \hat a_j^{\dag}$). Therefore, we arrive at a general understanding of these processes: a quantized probe field can be transferred into many probe fields in the multi-level system, or into the coherence of many atomic ensembles in the multi-ensemble system; in either case, a bosonic field is transferred into many different bosonic modes via the EIT quantum memory technique. This is why we can use the multi-level system to generate multi-photon entanglement and the many-ensemble system to generate multi-atomic-ensemble entanglement. Before concluding, we should emphasize again the adiabatic condition in the EIT quantum memory process with multi-level and multi-ensemble atomic systems. As is well known, the condition of adiabatic evolution is most important for the quantum memory technique based on the quantized DSP theory, because the total system should be confined in the dark-state subspace during the process of quantum memory. It is interesting that the symmetry properties of the multi-level system and the multi-atomic-ensemble system depend on parameters such as the coupling constants $g_i$ and Rabi frequencies $\Omega_i$. For the multi-level system, the largest zero-eigenvalue degeneracy class besides the dark-state subspace exists for the case $g_1=g_2=...=g_{m-2}$ [@liu], while for the multi-ensemble atomic system it does so when $\Omega_1=\Omega_2=...=\Omega_k$ [@atom]. For example, we can give a brief discussion of the $five$-level system, which has the largest degeneracy class when the parameters satisfy $g_1=g_2=g_3=g$.
For this we define $$\begin{aligned} \label{eqn:operator3} \hat u&=&\cos\phi\hat E_1+\sin\phi\hat E_2 , \ \ \ \hat v=-\sin\phi\hat E_1+\cos\phi\hat E_2 ; \nonumber\\ \hat s&=&\cos\varphi\hat u+\sin\varphi\hat E_3 , \ \ \ \hat f=-\sin\varphi\hat u+\cos\varphi\hat E_3; \nonumber\\ \hat a_{12+}&=&\cos\phi\hat a_1+\sin\phi\hat a_2, \ \ \hat a_{12-}=-\sin\phi\hat a_1+\cos\phi\hat a_2; \nonumber\\ \hat a_{123+}&=&\cos\varphi\hat a_{12+}+\sin\varphi\hat a_3, \ \ \hat a_{123-}=-\sin\varphi\hat a_{12+}+\cos\varphi\hat a_3\nonumber\end{aligned}$$ and the BSPs operator $\hat b=\sin\theta\hat a_{123+}+\cos\theta\hat C$. Using these definitions one can find the shift operators as follows $$\begin{aligned} \label{eqn:operator4} \hat Q_{\pm}^{\dag}=\cos\phi\hat s^{\dag}\pm\sin\phi \ \hat b^{\dag}, \ \ \hat P_{\pm}^{\dag}=\hat v^{\dag}\pm\hat a_{12-}^{\dag}, \ \ \hat O_{\pm}^{\dag}=\hat f^{\dag}\pm\hat a_{123-}^{\dag},\end{aligned}$$ which satisfy the commutation relations $[\hat H, \hat Q_{\pm}^{\dag}]=\pm\epsilon_1\hat Q_{\pm}^{\dag}, \ \ [\hat H, \hat P_{\pm}^{\dag}]=\pm\epsilon_2\hat P_{\pm}^{\dag}, \ \ [\hat H, \hat O_{\pm}^{\dag}]=\pm\epsilon_3\hat O_{\pm}^{\dag}$, where $\epsilon_1=\sqrt{g^2N+\Omega_1^2+\Omega_2^2+\Omega_3^2}$ and $\epsilon_2=\epsilon_3=g\sqrt{N}$. From these results we finally obtain the largest degeneracy class of the present system: $$\begin{aligned} \label{eqn:degeneracy3} |r(i,j;k,l;f,g;n)\rangle=\frac{1}{\sqrt{i!j!k!l!f!g!}}(\hat Q_+^{\dag})^i(\hat Q_-^{\dag})^j(\hat P_+^{\dag})^k(\hat P_-^{\dag})^l(\hat O_+^{\dag})^f(\hat O_-^{\dag})^g|D_n\rangle\end{aligned}$$ with eigenvalue $E(i,j;k,l;f,g)=(i-j)\epsilon_1+[(k+f)-(l+g)]\epsilon_2$. We notice that for each given pair of indices $(i,j)$ and $(k+f,l+g)$, $\{|r(i,j;k,l;f,g;n)\rangle \ |n=0,1,2,\cdots \}$ defines a degenerate set of eigenstates.
When $i=j$ and $k+f=l+g=m$, $E(i,i;k,l;m-k,m-l)=0$, and a larger zero-eigenvalue degeneracy class is given by $\{|r(i,i;k,l;m-k,m-l;n)\rangle=|d(i,k,l,m;n)\rangle \ |\ m-k\geq0,m-l\geq0;i,k,l,m,n=0,1,2,\cdots\}$, i.e. $$\begin{aligned} \label{eqn:degeneracy4} |d(i,k,l,m;n)\rangle=\frac{1}{i!\sqrt{k!l!(m-k)!(m-l)!}}(\hat Q_+^{\dag}\hat Q_-^{\dag})^i(\hat P_+^{\dag})^k(\hat P_-^{\dag})^l(\hat O_+^{\dag})^{m-k}(\hat O_-^{\dag})^{m-l}|D_n\rangle \ \ (i,k,n=0,1,2,\cdots),\end{aligned}$$ which is constructed by acting with ($\hat Q_+^{\dag}\hat Q_-^{\dag}$) $i$ times, $\hat P_+^{\dag}$ $k$ times, $\hat P_-^{\dag}$ $l$ times, $\hat O_+^{\dag}$ $m-k$ times and $\hat O_-^{\dag}$ $m-l$ times on $|D_n\rangle$. Only when $i=k=l=m=0$ does the larger degeneracy class reduce to the special dark-state subset $\{|D_n\rangle \ |$$ n=0,1,2,\cdots \}$ of the present $five$-level atomic system. However, following the method developed in Refs. [@8; @liu; @atom] it is straightforward to confirm that any transition from the dark states to the other zero-eigenvalue subspaces is also forbidden, and therefore the robustness of the present general EIT quantum memory technique remains perfect even in the large zero-degeneracy case.

Conclusions and further discussions
===================================

To sum up, the single ensemble composed of multi-level atoms and the multi-ensemble system composed of $three$-level atoms with EIT have been studied in detail in this paper, with a focus on the interesting dynamical symmetry and its applications to quantum information processing. The general definition of dark-state polaritons (DSPs), and then the dark states of these different systems, are obtained by discovering the symmetry Lie groups of the various atomic systems, such as the single atomic ensemble composed of complex $m$-level ($m>3$, multi-level) atoms, and the multi-atomic-ensemble system composed of $three$-level atoms.
It is interesting that the symmetry properties of the multi-level system and the multi-atomic-ensemble system depend on some characteristic parameters of the EIT system. Furthermore, a controllable scheme to generate quantum entanglement between light fields or different atomic ensembles via the quantized DSP theory is discussed, which might be experimentally implemented in the near future. It is noteworthy that there are many counterparts between the multi-level (single-ensemble) and multi-ensemble atomic systems. For example, the entanglement between two light fields (or among three light fields) can be generated using a $four$- (or $five$-) level system, and the entanglement between two (or among three) ensembles of atoms can be generated via a $two$- (or $three$-) atomic-ensemble system; the dynamical symmetry of the $four$-level system is governed by the Lie group $SU(3)\overline{\otimes}H_{3}$, while that of the $two$-atomic-ensemble system is partly governed by $SO(4)\overline{\otimes}H_{4}$; the symmetry properties of the multi-level system depend on the coupling constants $g_i$ of the probe fields while those of the multi-atomic-ensemble system depend on the Rabi frequencies of the control fields, and the larger degeneracy class of the multi-level system is similar to that of the corresponding multi-atomic-ensemble system, etc. All these interesting aspects may deserve further study in future work.

We thank professors Yong-Shi Wu and J. L. Birman for valuable discussions. This work is supported by the NSF of China under grants No. 10275036 and No. 10304020, and by the Wuhan open fund of the state key laboratory of magnetic resonance and atomic and molecular physics, No. T152505.

[99]{} S. E. Harris, J. E. Field and A. Kasapi, Phys. Rev. A 46, R29 (1992); M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge 1999). L. V. Hau et al., Nature (London) 397, 594 (1999); M. M. Kash et al., Phys. Rev. Lett. 82, 529 (1999); C. Liu, Z. Dutton, C. H.
Behroozi and L. V. Hau, Nature (London) 409, 490 (2001); D. F. Phillips, A. Fleischhauer, A. Mair, R. L. Walsworth and M. D. Lukin, Phys. Rev. Lett. 86, 783 (2001); M. D. Lukin and A. Imamoǧlu, Phys. Rev. Lett. 84, 1419 (2000); M. D. Lukin, S. F. Yelin and M. Fleischhauer, Phys. Rev. Lett. 84, 4232 (2000); M. Fleischhauer and S. Q. Gong, Phys. Rev. Lett. 88, 070404 (2002); C. Mewes and M. Fleischhauer, Phys. Rev. A 66, 033820 (2002). Y. Wu, J. Saldana and Y. Zhu, Phys. Rev. A 67, 013811 (2003); Y. Li, P. Zhang, P. Zanardi and C. P. Sun, quant-ph/0402177 (2004); G. Juzeliūnas and P. Öhberg, Phys. Rev. Lett. 93, 033602 (2004); L. M. Kuang and L. Zhou, Phys. Rev. A 68, 043606 (2003); X. J. Liu, H. Jing and M. L. Ge, Phys. Rev. A 70, 055802 (2004). M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. 84, 5094 (2000); M. Fleischhauer and M. D. Lukin, Phys. Rev. A 65, 022314 (2002). M. D. Lukin, Rev. Mod. Phys. 75, 457 (2003). Irreversible Quantum Dynamics, edited by F. Benatti and R. Floreanini, “Decoherence-Free Subspaces and Subsystems” by D. A. Lidar and B. Whaley, pp. 83-120, Springer Lecture Notes in Physics vol. 622, Berlin (2003); e-print: quant-ph/0301032 (2003). C. P. Sun, Y. Li and X. F. Liu, Phys. Rev. Lett. 91, 147903 (2003). X. J. Liu, H. Jing, X. T. Zhou and M. L. Ge, Phys. Rev. A 70, 015603 (2004); X. J. Liu, H. Jing and M. L. Ge, quant-ph/0403171. H. Jing, X. J. Liu, M. L. Ge and M. S. Zhan, Phys. Rev. A 71, 062336 (2005). R. H. Dicke, Phys. Rev. 93, 99 (1954). B. G. Wybourne, [*Classical Groups for Physicists*]{} (John Wiley, NY, 1974); M. A. Shifman, [*Particle Physics and Field Theory*]{}, p. 775 (World Scientific, Singapore, 1999). F. T. Hioe, Phys. Rev. A 32, 2824 (1985); Phys. Rev. A 28, 879 (1983). A. B. Matsko, et al., At. Mol. Opt. Phys. 46, 191 (2001); A. S. Zibrov et al., Phys. Rev. Lett. 88, 103601 (2002). A. André and M. D. Lukin, Phys. Rev. Lett. 89, 143602 (2002); A. Raczyński and J. Zaremba, Opt. Commun. 209, 149 (2002); quant-ph/0307223 (2003). M. D.
Lukin, S. F. Yelin and M. Fleischhauer, Phys. Rev. Lett. 84, 4232 (2000); E. Arimondo, Progr. in Optics 35, 259 (1996). J. J. Slosser and G. J. Milburn, Phys. Rev. Lett. 75, 418 (1995). O. Hirota, quant-ph/0101096 (2001). D. N. Matsukevich and A. Kuzmich, Science 306, 663 (2004); C. H. van der Wal et al., Science 301, 196 (2003); O. Mandel et al., Nature 425, 937 (2003); K. Hammerer et al., arXiv: quant-ph/0312156 (2003). T. Chanelière et al., Nature 438, 833 (2005); M. D. Eisaman et al., Nature 438, 837 (2005). M. Paternostro, M. S. Kim, and B. S. Ham, Phys. Rev. A 67, 023811 (2003); Xiaoguang Wang and Barry C. Sanders, Phys. Rev. A 65, 012303 (2003); F. L. Kien et al., Phys. Rev. A 68, 063803 (2003); N. A. Ansari et al., Phys. Rev. A 50, 1492 (1994). V. Coffman, J. Kundu and W. K. Wootters, Phys. Rev. A 61, 052306 (2000); M. A. Nielsen and I. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000). C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A 54, 3824 (1996); R. Laflamme, C. Miquel, J.-P. Paz, and W. H. Zurek, Phys. Rev. Lett. 77, 198 (1996). A. Dantan, A. Bramati and M. Pinard, Europhys. Lett. 67, 881 (2004); V. Josse, A. Dantan, A. Bramati, M. Pinard and E. Giacobino, Phys. Rev. Lett. 92, 123601 (2004). [^1]: Electronic address: xiongjunliu@yahoo.com.cn
--- abstract: 'We present initial results of our analysis of line emission produced in gas disks found at the centers of a sample of nearby radio galaxies with radio jets. We obtained data using STIS (the Space Telescope Imaging Spectrograph) at three parallel slit positions on the nucleus of each galaxy. This allows us to map the H$\alpha$ + \[NII\] flux, the gas radial velocity and the velocity dispersion. We find evidence of rotating disks in 11 of the sample galaxies and we cannot currently rule out a rotating disk model for the remaining eight. For rotating systems, we find that the minimum central enclosed mass is greater than or similar to the predicted black hole mass based on ground-based stellar velocity dispersions. By modeling the gas dynamics we will go on to constrain the masses of the black holes. We will also investigate the properties of the gas disks themselves, giving us an insight into fueling, ionization mechanisms and the structure of the central regions.' author: - 'J. Noel-Storr, C. M. Carollo' - 'S. A. Baum, R. P. van der Marel, C. P. O’Dea' - 'G. A. Verdoes Kleijn, P. T. de Zeeuw' title: 'STIS spectroscopy of gas disks in the nuclei of nearby, radio-loud, early-type galaxies' --- Introduction ============ In seeking to understand the nature and causes of activity in galaxy nuclei, we are conducting a multi-wavelength study of a well-defined sample of 21 radio-loud, early-type galaxies in the local universe. The sample contains all nearby ($v_{\rm r} < 7000\ \rm{km s^{-1}}$), elliptical or S0 galaxies in the UGC catalog (Nilson 1973; magnitude limit $m_B < 14\fm 6$, declination range $-5\deg < \delta < 85\deg$ and angular size $\theta_p > 1\farcm 0$) that are extended radio-loud sources (larger than 10 on VLA A-Array maps and brighter than 150 mJy from single dish flux measurements at 1400 MHz). All of these galaxies fall into Fanaroff & Riley’s (1974) Type-I (FR-I) radio classification (see Xu et al.
2000, for a description of the radio properties of our sample). Though the black hole paradigm has become widely accepted as an essential ingredient in radio galaxies, the mechanics and time-scales of fueling and jet production are poorly understood. In unified schemes (see Urry & Padovani 1995 for a review), which suggest the appearance of AGN depends strongly on orientation, FR-I galaxies are thought to be the unbeamed population of BL-Lac objects. Understanding the central regions of such objects on scales of tens and hundreds of parsecs will allow us to better understand and characterize these connections. We have observed 19 of our sample galaxies with STIS (the Space Telescope Imaging Spectrograph; see Kimble et al. 1998), the sample members M84 and M87 having previously been observed by others. By placing three parallel slits adjacent to each other on the galaxy nuclei (Figure 1) along the stellar major axis we have obtained sets of spectra which allow us to map, for example, the kinematics and H$\alpha$ + \[NII\] flux for the very central regions of each galaxy. Kinematic classifications ========================= By inspecting the velocity field of each galaxy it has been possible to classify them into three broad groups (see also Baum, Heckman, & van Breugel 1992): [*Rotators*]{}; which show a clear, systematic, rotation pattern in their velocity field (i.e. we observe a systematic gradient in velocity across the nucleus). [*Systematic Non-Rotators*]{}; which show some kind of systematic behavior in their velocity field, but do not appear to be in rotation. [*Undefined*]{}; which do not show any clear pattern in their velocity fields. Initially we have made use of the mean velocity dispersion ($\bar{\sigma}$) and $\Delta v = (v_{\rm max}-v_{\rm min})/2$, as estimators of the global parameters within some physical scale of the peak in emission line flux (see Table 1). 
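The two global estimators just defined are simple to compute from the spatially resolved measurements; the following is a minimal sketch (our own helper and made-up sample values, purely illustrative):

```python
import numpy as np

def global_estimators(v, sigma):
    """Delta v = (v_max - v_min) / 2 and the mean dispersion sigma-bar,
    from emission-line velocity / dispersion values along the slits."""
    v, sigma = np.asarray(v), np.asarray(sigma)
    return (v.max() - v.min()) / 2.0, sigma.mean()

# hypothetical radial velocities and dispersions (km/s) near the flux peak
dv, sbar = global_estimators([-180.0, -60.0, 40.0, 170.0],
                             [230.0, 250.0, 260.0, 240.0])
assert dv == 175.0 and sbar == 245.0
```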
  Class & $\Delta v$ $({\rm km\ s^{-1}})$ & $\bar{\sigma}$ $({\rm km\ s^{-1}})$ & $\Delta v$ $({\rm km\ s^{-1}})$ & $\bar{\sigma}$ $({\rm km\ s^{-1}})$\
  Rotators (11) & $192 \pm 137$ & $246 \pm 104$ & $253 \pm 130$ & $211 \pm 72$\
  Sys. NR (3) & $100 \pm 19$ & $259 \pm 134$ & $133 \pm 31$ & $225 \pm 96$\
  Undefined (5) & $132 \pm 70$ & $244 \pm 129$ & $156 \pm 50$ & $229 \pm 115$\

The similarity in velocity dispersion across the categories suggests that they represent systems that are kinematically alike, and the failure to detect rotation in some cases may simply be due to adverse slit placement, the presence of dust masking part of the rotation curve, or projection effects. We fail to detect rotation in galaxies that have an axis ratio of their central light distribution $b/a \ga 0.5$ (with the exception of NGC 383), i.e. the members of the sample with more nearly face-on central morphologies. Bearing this in mind, we cannot rule out the possibility that all of the sample galaxies harbor gas systems of the same type viewed from a range of orientations through different obscurations.

Rotating systems
================

In sample members where we have been able to identify systematic rotation in the nucleus, we have made estimates of the total mass enclosed in the central region by using the maximum and minimum velocities observed (not corrected for the inclination) and the radius over which they are separated (see Table 2). Further modeling will allow us to improve our central mass estimates and enable us to identify and characterize the contributions of the various components that we expect, in particular the contributions of stellar populations and supermassive black holes (for example, by building on the work of van der Marel & van den Bosch 1998; Marconi, et al. 2001; Sarzi, et al. 2001; or Barth, et al. 2001). This modeling will also shed light on the relative importance of non-gravitational motions in the gas.
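The tabulated masses are consistent with a simple Keplerian estimate. The sketch below uses our own helper functions and two modelling assumptions: the tabulated velocity spread $\Delta v$ and radius $R$ combine as $M \approx (\Delta v)^2 R/G$ (which reproduces the Table 2 enclosed masses to within a few percent), and the Ferrarese & Merritt (2001) relation is taken in the commonly used form $M_{\bullet} = 1.3\times10^8\,(\sigma_c/200\ {\rm km\ s^{-1}})^{4.72}\ M_{\sun}$:

```python
G_SI   = 6.674e-11          # m^3 kg^-1 s^-2
PARSEC = 3.0857e16          # m
M_SUN  = 1.989e30           # kg

def enclosed_mass(dv_kms, radius_pc):
    """Keplerian mass estimate M ~ v^2 R / G, in solar masses."""
    v = dv_kms * 1.0e3
    return v * v * radius_pc * PARSEC / G_SI / M_SUN

def m_bh(sigma_c):
    """M-sigma prediction (Ferrarese & Merritt 2001 form), solar masses."""
    return 1.3e8 * (sigma_c / 200.0) ** 4.72

m_ngc315 = enclosed_mass(344.8, 25)     # Table 2 lists 7.0e8 M_sun
m_ngc383 = enclosed_mass(420.2, 48)     # Table 2 lists 2.0e9 M_sun
assert 6.5e8 < m_ngc315 < 7.5e8
assert 1.8e9 < m_ngc383 < 2.2e9

assert 7.8e8 < m_bh(295) < 8.6e8        # NGC 315: Table 2 lists 8.2e8
assert 3.8e8 < m_bh(254) < 4.2e8        # NGC 383: Table 2 lists 4.0e8
```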
An estimate of the anticipated black hole mass ($M_{\bullet}$), computed using the relationship found by Ferrarese & Merritt (2001; see also Gebhardt et al. 2000), is provided in Table 2 ($\sigma_c$ is the central velocity dispersion corrected to an $r_e/8$ aperture). We note that, as expected, all of the enclosed masses calculated (which are lower limits) are greater than or comparable to the black hole masses predicted from the ground-based stellar kinematics using this relation.

\begin{tabular}{cccccc}
\hline
Galaxy & $\Delta v$ (Rot'n) & Radius & $M_{\rm Enclosed}^b$ & $\sigma_c$ & $M_{\bullet}$ \\
 & $({\rm km\,s^{-1}})$ & $({\rm pc})$ & $(M_{\sun})$ & $({\rm km\,s^{-1}})$ & $(M_{\sun})$ \\
\hline
NGC 315 & 344.8 & 25 & $7.0\times 10^8$ & 295 & $8.2\times 10^8$ \\
NGC 383 & 420.2 & 48 & $2.0\times 10^9$ & 254 & $4.0\times 10^8$ \\
NGC 741 & 530.3 & 138 & $9.1\times 10^9$ & 265 & $4.9\times 10^8$ \\
UGC 7115 & 413.3 & 44 & $1.8\times 10^9$ & 175 & $6.9\times 10^7$ \\
NGC 4261$^c$ & 174.0 & 73 & $5.1\times 10^8$ & 291 & $7.6\times 10^8$ \\
NGC 4335 & 305.9 & 121 & $2.6\times 10^9$ & & \\
NGC 5127 & 315.3 & 190 & $4.4\times 10^9$ & 178 & $7.5\times 10^7$ \\
NGC 5141 & 471.9 & 87 & $4.5\times 10^9$ & & \\
NGC 7052$^d$ & 531.5 & 54 & $3.6\times 10^9$ & 247 & $3.5\times 10^8$ \\
UGC 12064 & 229.1 & 34 & $4.1\times 10^8$ & 257 & $4.2\times 10^8$ \\
NGC 7626 & 472.6 & 34 & $1.8\times 10^9$ & 248 & $3.6\times 10^8$ \\
\hline
\end{tabular}

References
==========

Barth, A. J., et al. 2001, , in press (astro-ph/0012213)

Baum, S. A., Heckman, T. M., & van Breugel, W. 1992, , 389, 208

Fanaroff, B. L., & Riley, J. M. 1974, , 167, 31P

Ferrarese, L., Ford, H. C., & Jaffe, W. 1996, , 470, 444

Ferrarese, L., & Merritt, D. 2001, , 547, 140

Gebhardt, K., et al. 2000, , 539, 13

Jorgensen, I., Franx, M., & Kjaergaard, P. 1995, , 276, 1341

Kimble, R., et al. 1998, , 492L, 83

Marconi, A., et al. 2001, , 549, 915

Nilson, P. 1973, The Uppsala General Catalog of Galaxies \[UGC\] (Uppsala: Astronomiska Observatorium)

Sarzi, M., et al. 2001, , 550, 65

Urry, C. M., & Padovani, P. 1995, , 107, 803

van der Marel, R. P., & van den Bosch, F. C. 1998, , 116, 2220

Xu, C., Baum, S. A., O'Dea, C. P., Wrobel, J. M., & Condon, J. J. 2000, , 120, 2950